OpenGL for Embedded Systems (OpenGL ES) is a simplified version of OpenGL that eliminates redundant functionality in order to present a programming interface that is easily implemented in mobile hardware. OpenGL ES allows your application to configure a traditional 3D graphics pipeline and submit vertex data to OpenGL ES, where it is transformed and lit, assembled into primitives, and rasterized to create a 2D image.
Currently, there are two distinct versions of OpenGL ES:
OpenGL ES 1.1 implements the standard graphics pipeline as a well-defined fixed-function pipeline. The fixed-function pipeline implements a traditional lighting and rasterization model in which various parts of the pipeline can be enabled and configured to perform specific tasks, or disabled to improve performance.
OpenGL ES 2.0 shares many functions in common with OpenGL ES 1.1, but removes all functions that act on the fixed-function pipeline, replacing it with a general-purpose shader-based pipeline. Shaders allow you to create your own vertex attributes and execute custom vertex and fragment functions directly on the graphics hardware, allowing your application to completely customize the operations applied to each vertex and fragment; a minimal shader pair is sketched below.
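To make the shader-based pipeline concrete, the following is a minimal sketch of a vertex and fragment shader pair, embedded as C string constants the way an iOS application might store them. The attribute and uniform names (a_position, a_color, u_modelViewProjection) are illustrative choices, not names defined by OpenGL ES.

    // A minimal GLSL ES 2.0 shader pair embedded as C strings.
    static const char *kVertexShaderSource =
        "attribute vec4 a_position;\n"
        "attribute vec4 a_color;\n"
        "uniform mat4 u_modelViewProjection;\n"
        "varying vec4 v_color;\n"
        "void main() {\n"
        "    v_color = a_color;\n"
        "    gl_Position = u_modelViewProjection * a_position;\n"
        "}\n";

    static const char *kFragmentShaderSource =
        "precision mediump float;\n"
        "varying vec4 v_color;\n"
        "void main() {\n"
        "    gl_FragColor = v_color;\n"
        "}\n";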
Apple offers hardware that supports both OpenGL ES 1.1 and OpenGL ES 2.0.
The remainder of this chapter gives an overview of the iOS graphics model and OpenGL ES, and explains how the two fit together.
Core Animation is fundamental to the iOS graphics subsystem. Every UIView object in your application is backed by a Core Animation layer. As the various layers update their contents, they are animated and composited by Core Animation and presented to the display. This process is described in detail in “Tuning for Performance and Responsiveness” in the iOS Application Programming Guide.
OpenGL ES, like every other graphics system on iOS, is a client of Core Animation. To use OpenGL ES to draw to the screen, your application creates a UIView class backed by a special Core Animation layer, a CAEAGLLayer object. A CAEAGLLayer object is aware of OpenGL ES and can be used to create rendering targets that act as part of Core Animation. When your application finishes rendering a frame, you present the contents of the CAEAGLLayer object, where they are composited with the data from other views.
The complete discussion of how to create a CAEAGLLayer object and use it to display your rendered images is in “Working with OpenGL ES Contexts and Framebuffers.”
Although your application can compose scenes using both OpenGL ES layers and non–OpenGL ES drawing, in practice you can achieve higher performance by limiting yourself to OpenGL ES. Compositing non–OpenGL ES content with OpenGL ES content is covered in more detail in “Displaying Your Results.”
OpenGL ES provides a procedural API for submitting geometry to a hardware accelerated rendering pipeline. OpenGL ES commands are submitted to a rendering context, where they are consumed to generate images that can be displayed to the user. Most commands in OpenGL ES perform one of the following actions:
Reading the current state of an OpenGL ES context. This is most typically used to determine the capabilities of an OpenGL ES implementation. See “Determining OpenGL ES Capabilities” for more information.
Changing state variables in an OpenGL ES context. This is typically used to configure the pipeline for some future operations. In OpenGL ES 1.1, state variables are used extensively to configure lights, materials, and other values that affect the fixed-function pipeline.
Creating, modifying, or destroying OpenGL ES objects. Both OpenGL ES 1.1 and 2.0 provide a number of objects, defined below.
Submitting geometry to be rendered. Vertex data is submitted to the pipeline, processed, assembled into primitives, and then rasterized to a framebuffer.
The OpenGL ES specifications define the precise behavior for each function.
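The short sketch below shows one representative call for each of the four categories of commands, using OpenGL ES 2.0 names; it assumes a rendering context is already current.

    #import <OpenGLES/ES2/gl.h>

    void ExampleCommandCategories(void)
    {
        // Reading state: query an implementation limit.
        GLint maxTextureSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

        // Changing state: enable depth testing for subsequent drawing.
        glEnable(GL_DEPTH_TEST);

        // Creating an object: generate a buffer object identifier.
        GLuint buffer = 0;
        glGenBuffers(1, &buffer);

        // Submitting geometry: draw the first three vertices of the
        // currently configured vertex arrays as a single triangle.
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }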
An OpenGL ES implementation is allowed to extend the OpenGL ES specification, either by offering limits higher than the minimum required (such as allowing larger textures) or by extending the API through extensions. Apple uses the extensions mechanism to provide a number of critical extensions that help provide great performance in iOS. For example, Apple offers a texture compression extension that makes it easier to fit your textures into the memory available on iOS. Note that the limits and extensions offered by iOS may vary depending on the hardware. Your application must test the capabilities at runtime and alter its behavior to match what is available. For more information on how to do this, see “Determining OpenGL ES Capabilities.”
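As a sketch of such a runtime test, the following checks the extension string for Apple’s PVRTC texture compression extension before relying on it; it assumes a context is current, and the action taken in each branch is up to the application.

    #import <OpenGLES/ES2/gl.h>
    #include <stdbool.h>
    #include <string.h>

    static bool SupportsExtension(const char *name)
    {
        // The extension string is a space-separated list of extension names.
        const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
        return (extensions != NULL) && (strstr(extensions, name) != NULL);
    }

    void ConfigureTextureLoading(void)
    {
        if (SupportsExtension("GL_IMG_texture_compression_pvrtc")) {
            // Safe to upload PVRTC-compressed texture data.
        } else {
            // Fall back to uncompressed textures.
        }
    }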
As described above, OpenGL ES offers a number of objects that can be created and configured to help create your scene. All of these objects are managed for you by OpenGL ES. Some of the most important object types include:
A texture is an image that can be sampled by the graphics pipeline. This is typically used to map a color image onto your geometry but can also be used to map other data onto the geometry (for example, normals or lighting information). “Best Practices for Working with Texture Data” discusses critical topics for using textures on Apple’s OpenGL ES implementation.
A buffer is a block of memory owned by OpenGL ES that your application can write data into or read data from. The most common use for a buffer is to hold vertex data that your application wants to submit to the graphics hardware. Because this buffer is owned by the OpenGL ES implementation, it can optimize the placement and format of the data in this buffer in order to more efficiently process vertices, particularly when the data does not change from frame to frame. Using buffers to manage your vertex data can significantly boost the performance of your application.
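For example, the following sketch uploads a triangle’s vertex positions into a buffer object; the GL_STATIC_DRAW hint tells the implementation the data will rarely change, so it can choose an efficient placement. The vertex values are placeholders.

    #import <OpenGLES/ES2/gl.h>

    void CreateVertexBuffer(void)
    {
        const GLfloat vertices[] = {
             0.0f,  0.5f, 0.0f,
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
        };

        GLuint buffer = 0;
        glGenBuffers(1, &buffer);               // generate an identifier
        glBindBuffer(GL_ARRAY_BUFFER, buffer);  // bind it to the context
        // Copy the vertex data into memory owned by OpenGL ES.
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices,
                     GL_STATIC_DRAW);
    }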
Shaders are also objects. An OpenGL ES 2.0 application creates a shader, compiles and links code into it, and then assigns it to process vertex and fragment data.
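A sketch of that create, compile, and link sequence follows, usable with the shader sources shown earlier. Error handling is reduced to a single link-status check; a real application would also test each compile status and read the info logs.

    #import <OpenGLES/ES2/gl.h>

    GLuint BuildProgram(const char *vertexSource, const char *fragmentSource)
    {
        GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertexShader, 1, &vertexSource, NULL);
        glCompileShader(vertexShader);

        GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
        glCompileShader(fragmentShader);

        GLuint program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);
        glLinkProgram(program);

        GLint linked = 0;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (!linked) {
            glDeleteProgram(program);
            program = 0;
        }

        // The shader objects can be deleted once the program is linked.
        glDeleteShader(vertexShader);
        glDeleteShader(fragmentShader);
        return program;
    }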
A renderbuffer is a simple 2D graphics image in a specified format. This format may be defined as color data, but it could also be depth or stencil information. Renderbuffers are not usually used alone, but are instead collected and used as part of a framebuffer.
Framebuffers are the ultimate destination of the graphics pipeline. A framebuffer object is really just a container that attaches textures and renderbuffers to itself to create a complete destination for rendering. Framebuffer objects are part of the OpenGL ES 2.0 standard, and Apple also implements them on OpenGL ES 1.1 with the OES_framebuffer_object extension. Framebuffers are used extensively on iOS and are described in more detail below. A later chapter, “Working with OpenGL ES Contexts and Framebuffers,” describes strategies for creating and using framebuffers in iOS applications.
Although each object type in OpenGL ES has its own functions to manipulate it, all objects share a standard model, illustrated by the code sketch that follows these steps:
Generate an object identifier.
For each object that your application wants to create, you should generate an identifier. An identifier is analogous to a pointer. Whenever your application wants to operate on an object, you use the identifier to specify which object to work on.
Note that creating the object identifier does not actually allocate an object; it simply allocates a reference to it.
Bind your object to the OpenGL ES context.
Each object type in OpenGL ES has a method to bind an object to the context. You can only work on one object of each type at a time, and you select that object by binding to it. The first time you bind to an object identifier, OpenGL ES allocates memory and initializes that object.
Modify the state of your object.
Commands implicitly operate on the currently bound object. After binding the object, your application makes one or more OpenGL ES calls to configure the object. For example, after binding to a texture, your application makes an additional call to actually load the texture image.
Use your objects for rendering.
Once you’ve created and configured your objects, you can start drawing your geometry. As you submit vertices, the currently bound objects are used to render your output. In the case of shaders, the current shader is used to calculate the final results. Other objects may be involved at various stages of the pipeline.
Delete your objects.
Finally, when you are done with an object, your application deletes it. When an object is deleted, its contents and object identifier are recycled.
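The sketch below walks a texture object through all five steps; the single white texel is a placeholder for real image data.

    #import <OpenGLES/ES2/gl.h>

    void TextureObjectLifecycle(void)
    {
        // 1. Generate an object identifier.
        GLuint texture = 0;
        glGenTextures(1, &texture);

        // 2. Bind the object to the context; the first bind allocates
        //    and initializes it.
        glBindTexture(GL_TEXTURE_2D, texture);

        // 3. Modify its state: configure filtering and load image data.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        const GLubyte whiteTexel[4] = { 255, 255, 255, 255 };
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, whiteTexel);

        // 4. Use the object: drawing commands issued here sample the
        //    currently bound texture.

        // 5. Delete the object, recycling its contents and identifier.
        glDeleteTextures(1, &texture);
    }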
In iOS, OpenGL ES objects are managed by a sharegroup object. Two or more rendering contexts can be configured to use the same sharegroup, in which case they can reference the same data; for example, two contexts can share a single texture object rather than each holding its own copy. Sharegroups are covered in “EAGLSharegroup.”
Framebuffer objects are the target of all rendering commands. Traditionally in OpenGL ES, framebuffers were created using a platform-defined interface. Each platform would provide its own functions to create a framebuffer that could be drawn to the screen. The OES_framebuffer_object extension extended OpenGL ES to provide a standard mechanism to create and configure framebuffers that render to offscreen renderbuffers or to textures.
Apple does not provide a platform interface for creating framebuffer objects. Instead, all framebuffer objects are created using the OES_framebuffer_object extension. In OpenGL ES 2.0, these functions are part of the core specification.
Framebuffer objects provide storage for color, depth, and/or stencil data by attaching images to the framebuffer, as shown in Figure 1-1. The most common image attachment is a renderbuffer. However, a texture can also be attached to the color attachment of a framebuffer, allowing an image to be drawn and then later texture-mapped onto other geometry.
The typical procedure for creating a framebuffer is as follows (a code sketch appears after the steps):
Generate and bind a framebuffer object.
Generate, bind, and configure an image.
Attach the image to the framebuffer.
Repeat steps 2 and 3 for other images.
Test the framebuffer for completeness. The rules for completeness are defined in the specification. These rules ensure the framebuffer and its attachments are well-defined.
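The following sketch performs these five steps for a framebuffer with a single color renderbuffer, using the OpenGL ES 2.0 core names; under OpenGL ES 1.1 the same calls carry an OES suffix (glGenFramebuffersOES, and so on). The 320-by-480 size is an illustrative placeholder.

    #import <OpenGLES/ES2/gl.h>

    void CreateSimpleFramebuffer(void)
    {
        // 1. Generate and bind a framebuffer object.
        GLuint framebuffer = 0;
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        // 2. Generate, bind, and configure an image (here, a color
        //    renderbuffer).
        GLuint colorRenderbuffer = 0;
        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, 320, 480);

        // 3. Attach the image to the framebuffer.
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRenderbuffer);

        // 4. Repeat steps 2 and 3 here for a depth or stencil image,
        //    if one is needed.

        // 5. Test the framebuffer for completeness.
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // The framebuffer is not well-defined; handle the error.
        }
    }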
Apple extends framebuffer objects by allowing the color renderbuffer’s storage to be allocated so that it is shared with a Core Animation layer. When this data is presented, it is combined with other Core Animation content and displayed on the screen. See “Working with OpenGL ES Contexts and Framebuffers” for more information.
All implementations of OpenGL ES require platform-specific code to create a rendering context and to use it to draw to the screen. iOS does this through EAGL, an Objective-C interface. This section highlights the classes and protocols of the EAGL API, which is covered in more detail in “Working with OpenGL ES Contexts and Framebuffers.”
The EAGLContext class defines the rendering context that is the target of all OpenGL ES commands. Your application creates and initializes an EAGLContext object and makes it the current target of commands. When your application makes calls to OpenGL ES, those commands are typically stored in a queue maintained by the context and later executed to render the final image.
The EAGLContext class also provides a method to present images to Core Animation for display.
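A minimal sketch of that flow follows; the elided middle section stands in for framebuffer setup and drawing, and memory management is omitted for brevity.

    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>

    void RenderOneFrame(void)
    {
        // Create a context and make it the target of OpenGL ES commands.
        EAGLContext *context =
            [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:context];

        // ... create framebuffers and issue drawing commands here ...

        // Present the renderbuffer currently bound to GL_RENDERBUFFER
        // to Core Animation for display.
        [context presentRenderbuffer:GL_RENDERBUFFER];
    }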
Every EAGLContext object contains a reference to an EAGLSharegroup object. Whenever an object is allocated by OpenGL ES for that context, the object is actually allocated and maintained by the sharegroup. This division of responsibility is useful because it is possible to create two or more contexts that use the same sharegroup. In this scenario, objects that are allocated by one rendering context can be used by another, as shown in Figure 1-2.
Using a sharegroup allows two or more contexts to share OpenGL ES resources without duplicating that data for each context. Resources on a mobile device are scarcer than those on desktop hardware. By sharing textures, shaders, and other objects, your application makes better use of the available resources.
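A sketch of creating two contexts that share one sharegroup follows; the variable names are illustrative.

    #import <OpenGLES/EAGL.h>

    void CreateSharedContexts(void)
    {
        EAGLContext *firstContext =
            [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

        // Pass the first context's sharegroup to the second context so
        // that both contexts allocate objects from the same sharegroup.
        EAGLContext *secondContext =
            [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                                  sharegroup:firstContext.sharegroup];

        // Textures, buffers, and shaders created on either context are
        // now visible to both.
    }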
An application can also implement a resource-sharing mechanism by using a single context and creating one framebuffer object for each rendering destination. The application switches the current target for rendering commands as needed, without changing the current context, as sketched below.
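With a single context, switching targets is just a matter of rebinding framebuffer objects; in this sketch, fbo1 and fbo2 are assumed to be framebuffers the application created earlier.

    #import <OpenGLES/ES2/gl.h>

    void RenderToBothTargets(GLuint fbo1, GLuint fbo2)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
        // ... draw the content destined for the first target ...

        glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
        // ... draw the content destined for the second target ...
    }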
Your application does not directly implement the EAGLDrawable protocol on any objects. An EAGLContext object recognizes that objects implementing this protocol can allocate storage for a renderbuffer that can later be presented to the user. Only renderbuffers that are allocated using the drawable object can be presented in this way.
In iOS, this protocol is implemented only by the CAEAGLLayer class, to associate OpenGL ES renderbuffers with the Core Animation graphics system.
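As a sketch, allocating a renderbuffer’s storage from a CAEAGLLayer looks like this; it assumes the context is current and a renderbuffer is already bound to GL_RENDERBUFFER.

    #import <QuartzCore/QuartzCore.h>
    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>

    void AllocateStorageFromLayer(EAGLContext *context, CAEAGLLayer *layer)
    {
        // The layer, not OpenGL ES, provides the renderbuffer's storage,
        // which is what later allows the image to be presented through
        // Core Animation.
        [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    }

Because the storage comes from the layer, calling presentRenderbuffer: after rendering hands the finished image directly to Core Animation for compositing.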