Applications built against iPhone SDK 4 and later need to be prepared to run on devices with different screen resolutions. Fortunately, iOS makes supporting multiple screen resolutions easy. Most of the work of handling the different types of screens is done for you by the system frameworks. However, your application still needs to do some work to update raster-based images, and depending on your application you may want to do additional work to take advantage of the extra pixels available to you.
To update your applications for devices with high-resolution screens, you need to do the following:
Provide a high-resolution image for each image resource in your application bundle, as described in “Loading Images into Your Application.”
Provide high-resolution application and document icons, as described in “Updating Your Application’s Icons and Launch Images.”
For vector-based shapes and content, continue using your custom Core Graphics and UIKit drawing code as before. If you want to add extra detail to your drawn content, see “Updating Your Custom Drawing Code” for information on how to do so.
If you use Core Animation layers directly, you may need to adjust the scale factor of your layers prior to drawing, as described in “Accounting for Scale Factors in Core Animation Layers.”
If you use OpenGL ES for drawing, decide whether you want to opt in to high-resolution drawing and set the scale factor of your layer accordingly, as described in “Drawing High-Resolution Content Using OpenGL ES.”
For custom images that you create, modify your image-creation code to take the current scale factor into account, as described in “Creating High-Resolution Bitmap Images Programmatically.”
If your application provides the content for Core Animation layers directly, adjust your code as needed to compensate for scale factors, as described in “Accounting for Scale Factors in Core Animation Layers.”
In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:
One point does not necessarily correspond to one pixel on the screen.
The purpose of using points (and the logical coordinate system) is to provide a fixed frame of reference for drawing. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.
In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways. In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
The drawing technologies in iOS provide a lot of support to help you make your rendered content look good regardless of the resolution of the underlying screen:
Standard UIKit views (text views, buttons, table views, and so on) automatically render correctly at any resolution.
Vector-based content (UIBezierPath, CGPathRef, PDF) automatically takes advantage of any additional pixels to render sharper lines for shapes.
Text is automatically rendered sharper at higher resolutions.
UIKit supports the automatic loading of high-resolution variants (@2x) of your images.
The reason most of your existing drawing code just works is that native drawing technologies such as Core Graphics take the current scale factor into account for you. For example, if one of your views implements a drawRect: method, UIKit automatically sets the scale factor for that view to the screen’s scale factor. In addition, UIKit automatically modifies the current transformation matrix (CTM) of any graphics contexts used during drawing to take into account the view’s scale factor. Thus, any content you draw in your drawRect: method is scaled appropriately for the underlying device’s screen.
If your application uses only native drawing technologies for its rendering, the only thing you need to do is provide high-resolution versions of your images. Applications that use nothing but system views or that rely solely on vector-based content do not need to be modified. But applications that use images need to provide new versions of those images at the higher resolution. Specifically, you must scale your images by a factor of two, resulting in twice as many pixels horizontally and vertically as before and four times as many pixels overall. For more information on updating your image resources, see “Updating Your Image Resource Files.”
Applications running in iOS 4 should now include two separate files for each image resource. One file provides a standard-resolution version of a given image, and the second provides a high-resolution version of the same image. The naming conventions for each pair of image files are as follows:
Standard: <ImageName><device_modifier>.<filename_extension>
High resolution: <ImageName>@2x<device_modifier>.<filename_extension>
The <ImageName> and <filename_extension> portions of each name specify the usual name and extension for the file. The <device_modifier> portion is optional and contains either the string ~ipad or ~iphone. You include one of these modifiers when you want to specify different versions of an image for iPad and iPhone. The inclusion of the @2x modifier for the high-resolution image is new and lets the system know that the image is the high-resolution variant of the standard image.
Important: When creating high-resolution versions of your images, place the new versions in the same location in your application bundle as the original.
The UIImage class handles all of the work needed to load high-resolution images into your application. When creating new image objects, you use the same name to request both the standard and the high-resolution versions of your image. For example, if you have two image files, named Button.png and Button@2x.png, you would use the following code to request your button image:
UIImage* anImage = [UIImage imageNamed:@"Button"];
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If they find one, they load that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
When it loads an image, a UIImage object automatically sets the size and scale properties to appropriate values based on the suffix of the image file. For standard-resolution images, it sets the scale property to 1.0 and sets the size of the image to the image’s pixel dimensions. For images with the @2x suffix in the filename, it sets the scale property to 2.0 and halves the width and height values to compensate for the scale factor. These halved values correspond to the point-based dimensions you need to use in the logical coordinate space to render the image.
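For example, assuming a hypothetical pair of files in which Button.png is 100 x 100 pixels and Button@2x.png is 200 x 200 pixels, the loaded image’s properties on a high-resolution device would look like this:

```objc
UIImage *buttonImage = [UIImage imageNamed:@"Button"];
// On a high-resolution device, the @2x file is loaded:
//   buttonImage.scale == 2.0
//   buttonImage.size  == {100.0, 100.0}  (points, not pixels)
NSLog(@"scale = %.1f, size = %@",
      buttonImage.scale, NSStringFromCGSize(buttonImage.size));
```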
Note: If you use Core Graphics to create an image, remember that Quartz images do not have an explicit scale factor, so their scale factor is assumed to be 1.0. If you want to create a UIImage object from a CGImageRef data type, use the initWithCGImage:scale:orientation: method to do so. That method allows you to associate a specific scale factor with your Quartz image data.
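A minimal sketch of that initializer, assuming quartzImage is a CGImageRef whose pixel data was rendered at 2x resolution:

```objc
// Associate a 2.0 scale factor with existing Quartz image data.
UIImage *wrapped = [[[UIImage alloc] initWithCGImage:quartzImage
                                               scale:2.0
                                         orientation:UIImageOrientationUp] autorelease];
// wrapped.size now reports half the pixel dimensions, in points.
```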
A UIImage object automatically takes its scale factor into account during drawing. Thus, any code you have for rendering images should work the same as long as you provide the correct image resources in your application bundle.
If your application uses a UIImageView object to present images, all of the images you assign to that view must use the same scale factor. You can use an image view to display a single image or to animate several images, and you can also provide a highlight image. Therefore, if you provide a high-resolution version for one of these images, all of them must have high-resolution versions as well.
In addition to updating your application’s custom image resources, you should also provide new high-resolution versions of your application’s icon and launch images. The process for updating these image resources is the same as for all other image resources. Create a new version of the image, add the @2x modifier string to the corresponding image filename, and treat the image as you do the original. For example, for application icons, add the high-resolution image filename to the CFBundleIconFiles key of your application’s Info.plist file.
For information about specifying the icons for your application, see “Application Icons.” For information about specifying launch images, see “Application Launch Images.”
When you do any custom drawing in your application, most of the time you should not need to care about the resolution of the underlying screen. The native drawing technologies automatically ensure that the coordinates you specify in the logical coordinate space map correctly to pixels on the underlying screen. Sometimes, however, you might need to know what the current scale factor is in order to render your content correctly. For those situations, UIKit, Core Animation, and other system frameworks provide the help you need to do your drawing correctly.
If you currently use the UIGraphicsBeginImageContext function to create bitmaps, you may want to adjust your code to take scale factors into account. The UIGraphicsBeginImageContext function always creates images with a scale factor of 1.0. If the underlying device has a high-resolution screen, an image created with this function might not appear as smooth when rendered. To create an image with a scale factor other than 1.0, use the UIGraphicsBeginImageContextWithOptions function instead. The process for using this function is the same as for the UIGraphicsBeginImageContext function:
Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.
Use UIKit or Core Graphics routines to draw the content of the image.
Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap’s contents.
Call UIGraphicsEndImageContext to pop the context from the stack.
For example, the following code snippet creates a bitmap that is 200 x 200 pixels. (The number of pixels is determined by multiplying the size of the image by the scale factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
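Putting the four steps together, the following sketch renders a filled circle into that 100 x 100 point (200 x 200 pixel) bitmap; the drawing content itself is illustrative only:

```objc
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 2.0);

// Step 2: draw with UIKit routines.
[[UIColor redColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:CGRectMake(0.0, 0.0, 100.0, 100.0)] fill];

// Steps 3 and 4: capture the bitmap and pop the context.
UIImage *circleImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// circleImage.scale is 2.0; circleImage.size is {100.0, 100.0} in points.
```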
Note: If you always want the bitmap to be scaled appropriately for the main screen of the device, set the scale factor to 0.0 when calling the UIGraphicsBeginImageContextWithOptions function.
If you want to draw your content differently on high-resolution screens, you can use the current scale factor to modify your drawing code. For example, suppose you have a view that draws a 1-pixel-wide border around its edge. On devices where the scale factor is 2.0, using a UIBezierPath object to draw a line with a width of 1.0 would result in a line that was 2 pixels wide. In this case, you could divide your line width by the scale factor to obtain a proper 1-pixel-wide line.
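As a sketch, inside a view’s drawRect: method you might compute that hairline width as follows (the half-pixel inset keeps the stroke from straddling the view’s edge):

```objc
// Draw a border exactly one pixel wide at any scale factor.
CGFloat scale = [[UIScreen mainScreen] scale];
CGFloat hairline = 1.0 / scale;     // 0.5 points when scale is 2.0
UIBezierPath *border = [UIBezierPath bezierPathWithRect:
        CGRectInset(self.bounds, hairline / 2.0, hairline / 2.0)];
border.lineWidth = hairline;
[[UIColor blackColor] setStroke];
[border stroke];
```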
Of course, changing drawing characteristics based on scale factor may have unexpected consequences. A 1-pixel-wide line might look nice on some devices but on a high-resolution device might be so thin that it is difficult to see clearly. It is up to you to determine whether to make such a change.
Applications that use Core Animation layers directly to provide content may need to adjust their drawing code to account for scale factors. Normally, when you draw in your view’s drawRect: method, or in the drawLayer:inContext: method of the layer’s delegate, the system automatically adjusts the graphics context to account for scale factors. However, knowing or changing that scale factor might still be necessary when your view does one of the following:
Creates additional Core Animation layers with different scale factors and composites them into its own content
Sets the contents property of a Core Animation layer directly
Core Animation’s compositing engine looks at the contentsScale property of each layer to determine whether the contents of that layer need to be scaled during compositing. If your application creates layers without an associated view, each new layer object’s scale factor is set to 1.0 initially. If you do not change that scale factor, and if you subsequently draw the layer on a high-resolution screen, the layer’s contents are scaled automatically to compensate for the difference in scale factors. If you do not want the contents to be scaled, you can change the layer’s scale factor to 2.0, but if you do so without providing high-resolution content, your existing content may appear smaller than you were expecting. To fix that problem, you need to provide higher-resolution content for your layer.
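A sketch of that opt-in for a standalone (view-less) layer; myDelegate is a hypothetical object that implements drawLayer:inContext: to supply the content:

```objc
CALayer *layer = [CALayer layer];
layer.bounds = CGRectMake(0.0, 0.0, 100.0, 100.0);   // points
layer.contentsScale = [[UIScreen mainScreen] scale]; // avoid compositor scaling
layer.delegate = myDelegate;
[layer setNeedsDisplay]; // delegate's drawLayer:inContext: provides the content
```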
Important: The contentsGravity property of the layer plays a role in determining whether standard-resolution layer content is scaled on a high-resolution screen. This property is set to the value kCAGravityResize by default, which causes the layer content to be scaled to fit the layer’s bounds. Changing the gravity to a nonresizing option eliminates the automatic scaling that would otherwise occur. In such a situation, you may need to adjust your content or the scale factor accordingly.
Adjusting the content of your layer to accommodate different scale factors is most appropriate when you set the contents property of a layer directly. Quartz images have no notion of scale factors and work directly with pixels, so before creating the CGImageRef object you plan to use for the layer’s contents, check the scale factor and adjust the size of your image accordingly. Specifically, load an appropriately sized image from your application bundle or use the UIGraphicsBeginImageContextWithOptions function to create an image whose scale factor matches the scale factor of your layer. If you do not create a high-resolution bitmap, the existing bitmap may be scaled as discussed previously.
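For example, one way to produce a scale-matched bitmap for a layer’s contents property (the solid fill stands in for your real drawing):

```objc
// Render the layer's backing image at the layer's own scale factor.
UIGraphicsBeginImageContextWithOptions(myLayer.bounds.size, NO,
                                       myLayer.contentsScale);
[[UIColor blueColor] setFill];
UIRectFill(CGRectMake(0.0, 0.0,
                      myLayer.bounds.size.width, myLayer.bounds.size.height));
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
myLayer.contents = (id)[rendered CGImage]; // pixel size matches the layer
```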
For information on how to specify and load high-resolution images, see “Updating Your Image Resource Files.” For information about how to create high-resolution images, see “Creating High-Resolution Bitmap Images Programmatically.”
If your application uses OpenGL ES for rendering, your existing drawing code should continue to work without any changes. When drawn on a high-resolution screen, though, your content is scaled accordingly and appears blockier. The reason for the blocky appearance is that the default behavior of the CAEAGLLayer class, which you use to back your OpenGL ES renderbuffers, is the same as for other Core Animation layer objects. In other words, its scale factor is set to 1.0 initially, which causes the Core Animation compositor to scale the contents of the layer on high-resolution screens. To avoid this blocky appearance, you need to increase the size of your OpenGL ES renderbuffers to match the size of the screen. (With more pixels, you can then increase the amount of detail you provide for your content.) Because adding more pixels to your renderbuffers has performance implications, though, you must explicitly opt in to support high-resolution screens.
To enable high-resolution drawing, you must change the scale factor of the view you use to present your OpenGL ES content. Changing the contentScaleFactor property of your view from 1.0 to 2.0 triggers a matching change to the scale factor of the underlying CAEAGLLayer object. The renderbufferStorage:fromDrawable: method, which you use to bind the layer object to your renderbuffers, calculates the size of the renderbuffer by multiplying the layer’s bounds by its scale factor. Thus, doubling the scale factor doubles the width and height of the resulting renderbuffer, giving you more pixels for your content. After that, it is up to you to provide the content for those additional pixels.
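The opt-in itself is a one-line change; the sketch below assumes glView is your CAEAGLLayer-backed view and runs before the renderbuffer is created or recreated:

```objc
// Opt in to high-resolution OpenGL ES rendering (iOS 4 and later).
if ([glView respondsToSelector:@selector(setContentScaleFactor:)]) {
    glView.contentScaleFactor = [[UIScreen mainScreen] scale];
}
// A subsequent renderbufferStorage:fromDrawable: call now allocates a
// buffer whose pixel dimensions are the layer bounds times the scale factor.
```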
Listing 5-1 shows the proper way to bind your layer object to your renderbuffers and retrieve the resulting size information. If you used the OpenGL ES application template to create your code, then this step is already done for you, and the only thing you need to do is set the scale factor of your view appropriately. If you did not use the OpenGL ES application template, you should use code similar to this to retrieve the renderbuffer size. You should never assume that the renderbuffer size is fixed for a given type of device.
Listing 5-1 Initializing a renderbuffer’s storage and retrieving its actual dimensions
GLuint colorRenderbuffer;
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[myContext renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:myEAGLLayer];
// Get the renderbuffer size.
GLint width;
GLint height;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);
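Having retrieved the buffer’s true dimensions, a common follow-up step (not shown in Listing 5-1) is to size the viewport to match, so that your drawing covers every pixel of the renderbuffer:

```objc
// Use the queried dimensions rather than assuming a fixed size.
glViewport(0, 0, width, height);
```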
Important: A view that is backed by a CAEAGLLayer object should not implement a custom drawRect: method. Implementing a drawRect: method causes the system to change the default scale factor of the view so that it matches the scale factor of the screen. If your drawing code is not expecting this behavior, your application content will not be rendered correctly.
If you do opt in to high-resolution drawing, you also need to adjust the model and texture assets of your application accordingly. For example, when running on an iPad or a high-resolution device, you might want to choose larger models and more detailed textures to take advantage of the increased number of pixels. Conversely, on a standard-resolution iPhone, you can continue to use smaller models and textures.
An important factor when determining whether to support high-resolution content is performance. The quadrupling of pixels that occurs when you change the scale factor of your layer from 1.0 to 2.0 puts additional pressure on the fragment processor. If your application performs many per-fragment calculations, the increase in pixels may reduce your application’s frame rate. If you find your application runs significantly slower at the higher scale factor, consider one of the following options:
Optimize your fragment shader’s performance using the performance-tuning guidelines found in OpenGL ES Programming Guide for iOS.
Choose a simpler algorithm to implement in your fragment shader. By doing so, you are reducing the quality of each individual pixel to render the overall image at a higher resolution.
Use a fractional scale factor between 1.0 and 2.0. A scale factor of 1.5 provides better quality than a scale factor of 1.0 but needs to fill fewer pixels than an image scaled to 2.0.
Take advantage of the multisampling support that OpenGL ES offers in iOS 4 and later. Even at a smaller scale factor (including 1.0), multisampling improves the quality of your rendered output. An added advantage is that this technique also provides higher quality on devices that do not support high-resolution displays.
The best solution depends on the needs of your OpenGL ES application; you should test more than one of these options and choose the approach that provides the best balance between performance and image quality.
Last updated: 2010-06-30