OpenGL ES on iOS

OpenGL for Embedded Systems (OpenGL ES) is a simplified version of OpenGL that eliminates redundant functionality in order to present a programming interface that is easily implemented in mobile hardware. OpenGL ES allows your application to configure a traditional 3D graphics pipeline and submit vertex data to OpenGL ES, where it is transformed and lit, assembled into primitives, and rasterized to create a 2D image.

Currently, there are two distinct versions of OpenGL ES: OpenGL ES 1.1, which implements a well-defined fixed-function graphics pipeline, and OpenGL ES 2.0, which replaces most of the fixed-function pipeline with programmable vertex and fragment shaders.

Apple offers hardware that supports both OpenGL ES 1.1 and OpenGL ES 2.0.

The remainder of this chapter gives an overview of the iOS graphics model and OpenGL ES, and explains how the two fit together.

iOS Graphics Overview

Core Animation is fundamental to the iOS graphics subsystem. Every UIView object in your application is backed by a Core Animation layer. As the various layers update their contents, they are animated and composited by Core Animation and presented to the display. This process is described in detail in “Tuning for Performance and Responsiveness” in the iOS Application Programming Guide.

OpenGL ES, like every other graphics system on iOS, is a client of Core Animation. To use OpenGL ES to draw to the screen, your application creates a UIView subclass backed by a special Core Animation layer, a CAEAGLLayer object. A CAEAGLLayer object is aware of OpenGL ES and can be used to create rendering targets that act as part of Core Animation. When your application finishes rendering a frame, you present the contents of the CAEAGLLayer object, where they are composited with the data from other views.
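
For example, a hypothetical view subclass (the name EAGLView is illustrative, not an iOS class) declares that its backing layer is a CAEAGLLayer by overriding the layerClass class method. A minimal sketch:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical view class; the name EAGLView is illustrative.
    @interface EAGLView : UIView
    @end

    @implementation EAGLView

    // Ask UIKit to back this view with a CAEAGLLayer instead of a plain CALayer.
    + (Class)layerClass
    {
        return [CAEAGLLayer class];
    }

    @end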

The complete discussion of how to create a CAEAGLLayer object and use it to display your rendered images is in “Working with OpenGL ES Contexts and Framebuffers.”

Although your application can compose scenes using both OpenGL ES layers and non–OpenGL ES drawing, in practice, you can achieve higher performance by limiting yourself to OpenGL ES. Compositing non–OpenGL ES with OpenGL ES content is covered in more detail in “Displaying Your Results.”

Overview of OpenGL ES

OpenGL ES provides a procedural API for submitting geometry to a hardware-accelerated rendering pipeline. OpenGL ES commands are submitted to a rendering context, where they are consumed to generate images that can be displayed to the user. Most commands in OpenGL ES read the capabilities of the underlying implementation, read or write state defined by the specification, create, modify, or destroy OpenGL ES objects, or submit geometry to be rendered.

The OpenGL ES specifications define the precise behavior for each function.

An OpenGL ES implementation is allowed to extend the OpenGL ES specification, either by offering limits higher than the minimum required (such as allowing larger textures) or by extending the API through extensions. Apple uses the extensions mechanism to provide a number of critical extensions that help provide great performance in iOS. For example, Apple offers a texture compression extension that makes it easier to fit your textures into the memory available on iOS. Note that the limits and extensions offered by iOS may vary depending on the hardware. Your application must test the capabilities at runtime and alter its behavior to match what is available. For more information on how to do this, see “Determining OpenGL ES Capabilities.”
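
As a minimal sketch of such a runtime check, an application might query an implementation limit with glGetIntegerv and search the extension string for a specific extension (here, the PVRTC texture compression extension):

    #import <OpenGLES/ES2/gl.h>
    #include <string.h>
    #include <stdbool.h>

    // Query an implementation-defined limit at runtime.
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

    // Test for an extension with a simple substring search of the
    // extension string returned by the implementation.
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
    bool hasPVRTC = (extensions != NULL) &&
        (strstr(extensions, "GL_IMG_texture_compression_pvrtc") != NULL);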

OpenGL ES Objects

As described above, OpenGL ES offers a number of objects that can be created and configured to help create your scene. All of these objects are managed for you by OpenGL ES. Some of the most important object types include textures, buffer objects, shader and program objects (in OpenGL ES 2.0), renderbuffers, and framebuffers.

Although each object in OpenGL ES has its own functions to manipulate it, the objects all share a standard model (a sketch in code follows these steps):

  1. Generate an object identifier.

    For each object that your application wants to create, you should generate an identifier. An identifier is analogous to a pointer. Whenever your application wants to operate on an object, you use the identifier to specify which object to work on.

    Note that creating the object identifier does not actually allocate an object; it simply allocates a reference to it.

  2. Bind your object to the OpenGL ES context.

    Each object type in OpenGL ES has a method to bind an object to the context. You can only work on one object of each type at a time, and you select that object by binding to it. The first time you bind to an object identifier, OpenGL ES allocates memory and initializes that object.

  3. Modify the state of your object.

    Commands implicitly operate on the currently bound object. After binding the object, your application makes one or more OpenGL ES calls to configure the object. For example, after binding to a texture, your application makes an additional call to actually load the texture image.

  4. Use your objects for rendering.

    Once you’ve created and configured your objects, you can start drawing your geometry. As you submit vertices, the currently bound objects are used to render your output. In the case of shaders, the current shader is used to calculate the final results. Other objects may be involved at various stages of the pipeline.

  5. Delete your objects.

    Finally, when you are done with an object, your application deletes it. When an object is deleted, its contents and object identifier are recycled.
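
The sketch below walks a texture object through these five steps. The dimensions are placeholders, and passing NULL to glTexImage2D simply allocates uninitialized storage:

    #import <OpenGLES/ES2/gl.h>

    GLuint texture = 0;

    // 1. Generate an object identifier.
    glGenTextures(1, &texture);

    // 2. Bind the object to the context; the first bind allocates
    //    and initializes the texture object itself.
    glBindTexture(GL_TEXTURE_2D, texture);

    // 3. Modify the state of the object. Here, allocate a 64 x 64 RGBA
    //    image (NULL leaves the contents uninitialized) and set filtering.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // 4. Use the object for rendering: while the texture is bound,
    //    drawing commands sample from it.

    // 5. Delete the object when it is no longer needed; the identifier
    //    and its contents are recycled.
    glDeleteTextures(1, &texture);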

In iOS, OpenGL ES objects are managed by a sharegroup object. Two or more rendering contexts can be configured to use the same sharegroup; the contexts can then share the same data (for example, a single texture object) rather than each allocating its own copy. Sharegroups are covered in “EAGLSharegroup.”

Framebuffers

Framebuffer objects are the target of all rendering commands. Traditionally in OpenGL ES, framebuffers were created using a platform-defined interface; each platform provided its own functions to create a framebuffer that could be drawn to the screen. The OES_framebuffer_object extension extended OpenGL ES to provide a standard mechanism for creating and configuring framebuffers that render to offscreen renderbuffers or to textures.

Apple does not provide a platform interface for creating framebuffer objects. Instead, all framebuffer objects are created using the OES_framebuffer_object extension. In OpenGL ES 2.0, these functions are part of the core specification.

Framebuffer objects provide storage for color, depth, and stencil data by attaching images to the framebuffer, as shown in Figure 1-1. The most common image attachment is a renderbuffer. However, a texture can also be attached to the color attachment point of a framebuffer, allowing an image to be rendered and then later mapped as a texture onto other geometry.

Figure 1-1  Framebuffer with color, depth, and stencil buffers


The typical procedure for creating a framebuffer is as follows (a sketch in code follows these steps):

  1. Generate and bind a framebuffer object.

  2. Generate, bind, and configure an image.

  3. Attach the image to the framebuffer.

  4. Repeat steps 2 and 3 for other images.

  5. Test the framebuffer for completeness. The rules for completeness are defined in the specification. These rules ensure the framebuffer and its attachments are well-defined.
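
Here is a sketch of these steps using the OpenGL ES 2.0 core names (under OpenGL ES 1.1, the same functions carry the OES suffix, such as glGenFramebuffersOES). The 320 x 480 dimensions are placeholders:

    #import <OpenGLES/ES2/gl.h>

    GLuint framebuffer = 0, colorRenderbuffer = 0;

    // 1. Generate and bind a framebuffer object.
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // 2. Generate, bind, and configure an image; here, a color renderbuffer.
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, 320, 480);

    // 3. Attach the image to the framebuffer's color attachment point.
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // 4. Repeat steps 2 and 3 for depth or stencil images as needed.

    // 5. Test the framebuffer for completeness.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // The framebuffer is not well-defined; report or handle the error.
    }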

Apple extends framebuffer objects by allowing the color renderbuffer’s storage to be allocated so that it is shared with a Core Animation layer. The contents of that renderbuffer can then be presented to Core Animation, where they are combined with other layer data and displayed on the screen. See “Working with OpenGL ES Contexts and Framebuffers” for more information.
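
For example, instead of calling glRenderbufferStorage, an application asks the context to allocate the currently bound color renderbuffer’s storage from a CAEAGLLayer. A sketch, assuming the context and eaglLayer variables already exist:

    // Allocate the bound renderbuffer's storage from the Core Animation
    // layer so that presented frames can be composited with other layers.
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];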

iOS Classes

All implementations of OpenGL ES require platform-specific code to create a rendering context and to use it to draw to the screen. iOS does this through EAGL, an Objective-C interface. This section highlights the classes and protocols of the EAGL API, which is covered in more detail in “Working with OpenGL ES Contexts and Framebuffers.”

EAGLContext

The EAGLContext class defines the rendering context that is the target of all OpenGL ES commands. Your application creates and initializes an EAGLContext object and makes it the current target of commands. When your application makes calls to OpenGL ES, those commands are typically stored in a queue maintained by the context and later executed to render the final image.

The EAGLContext class also provides a method to present images to Core Animation for display.
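
A minimal sketch of that flow, creating an OpenGL ES 2.0 context, making it current, and presenting the renderbuffer bound to GL_RENDERBUFFER:

    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>

    // Create a rendering context that targets the OpenGL ES 2.0 API and
    // make it the current target of OpenGL ES commands on this thread.
    EAGLContext *context =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:context];

    // ... configure framebuffers and render a frame ...

    // Hand the finished frame to Core Animation for display.
    [context presentRenderbuffer:GL_RENDERBUFFER];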

EAGLSharegroup

Every EAGLContext object contains a reference to an EAGLSharegroup object. Whenever an object is allocated by OpenGL ES for that context, the object is actually allocated and maintained by the sharegroup. This division of responsibility is useful because it is possible to create two or more contexts that use the same sharegroup. In this scenario, objects that are allocated by one rendering context can be used by another, as shown in Figure 1-2.

Figure 1-2  Two contexts sharing OpenGL objects


Using a sharegroup allows two or more contexts to share OpenGL resources without duplicating that data for each context. Resources on a mobile device are scarcer than those on desktop hardware. By sharing textures, shaders and other objects, your application makes better use of available resources.
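
As a sketch, a second context joins the first context’s sharegroup at initialization time:

    #import <OpenGLES/EAGL.h>

    // The first context implicitly creates a new sharegroup.
    EAGLContext *firstContext =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    // The second context adopts the first context's sharegroup, so objects
    // allocated through either context are visible to both.
    EAGLContext *secondContext =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                               sharegroup:firstContext.sharegroup];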

An application can also implement a resource-sharing mechanism by using a single context and creating one framebuffer object for each rendering destination. The application switches the current target of rendering commands as needed, without changing the current context.
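
For example, with two framebuffer objects already created in one context (the variable names are illustrative), switching destinations is a single bind call:

    // Render a first pass into an offscreen framebuffer ...
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
    // ... issue drawing commands ...

    // ... then retarget the same context at the onscreen framebuffer.
    glBindFramebuffer(GL_FRAMEBUFFER, onscreenFramebuffer);
    // ... issue drawing commands ...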

EAGLDrawable Protocol

Your application does not directly implement the EAGLDrawable protocol on any of its objects. An EAGLContext object recognizes that objects implementing this protocol can allocate storage for a renderbuffer whose contents can later be presented to the user. Only renderbuffers that are allocated using the drawable object can be presented in this way.

In iOS, this protocol is implemented only by the CAEAGLLayer class, to associate OpenGL ES renderbuffers with the Core Animation graphics system.
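
As a sketch, an application typically configures the layer’s drawable properties before asking the context to allocate the renderbuffer’s storage (the view variable is assumed to be an instance of a CAEAGLLayer-backed view, as sketched earlier):

    #import <QuartzCore/QuartzCore.h>
    #import <OpenGLES/EAGLDrawable.h>

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)view.layer;
    eaglLayer.opaque = YES;

    // Do not retain the backing contents between frames, and request a
    // 32-bit RGBA color format for the drawable's storage.
    eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
        kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
        nil];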




Last updated: 2010-07-09
