This chapter provides an introduction to the iPad family of devices, orienting you to the basic features available on the devices and what it takes to develop applications for them. If you have already written an iPhone application, writing an iPad application will feel very familiar. Most of the basic features and behaviors are the same. However, iOS 3.2 includes features specific to iPad devices that you will want to use in your applications.
With iPad devices, you now have an opportunity to create Multi-Touch applications on a larger display than was previously available. The 1024 x 768 pixel screen provides much more room to display content, or to present your existing content in greater detail. And the addition of new interface elements in iOS 3.2 enables an entirely new breed of applications.
The size and capabilities of iPad mean that it is now possible to create a new class of applications for a portable device. The increased screen size gives you the space you need to present almost any kind of content. The Multi-Touch interface and support for physical keyboards enables diverse modes of interaction, ranging from simple gesture-driven interactions to content creation and substantial text input.
The increased screen size also makes it possible to create a new class of immersive applications that replicate real-world objects in a digital form. For example, the Contacts and Calendar applications on iPad look more like the paper-based address book and calendar you might have on your desk at home. These digital metaphors for real-life objects provide a more natural and familiar experience for the user and can make your applications more compelling to use. But because they are digital, you can go beyond the limitations of the physical objects themselves and create applications that enable greater productivity and convenience.
If you are already familiar with the process for creating iPhone applications, then the process for creating iPad applications will feel very familiar. For the most part, the high-level process is the same. All iPhone and iPad devices run iOS and use the same underlying technologies and design techniques. Where the two devices differ most is in screen size, which in turn may affect the type of interface you create for each. Of course, there are other subtle differences between the two as well, so the following sections provide an overview of some key system features for iPad devices, along with information about where those features differ from iPhone devices.
With only minor exceptions, the core architecture of iPad applications is the same as it is for iPhone applications. At the system level:
Only one application runs at a time and that application’s window fills the entire screen.
Applications are expected to launch and exit quickly.
For security purposes, each application executes inside a sandbox environment. The sandbox includes space for application-specific files and preferences, which are backed up to the user’s computer. Interactions with other applications on a device are through system-provided interfaces only.
Each application runs in its own virtual memory space but the amount of usable virtual memory is constrained by the amount of physical memory. In other words, memory is not paged to and from the disk.
Custom plug-ins and frameworks are not supported.
Inside an application, the following behaviors apply:
(New) An application’s interface should support all landscape and portrait orientations. This behavior differs slightly from the iPhone, where running in both portrait and landscape modes is not required. For more information, see “Designing for Multiple Orientations.”
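In concrete terms, supporting every orientation amounts to returning YES from the standard UIViewController rotation callback. A minimal sketch (the class name is a placeholder):

```objc
#import <UIKit/UIKit.h>

@interface MyViewController : UIViewController
@end

@implementation MyViewController
// Agree to rotate to any orientation the user chooses; the system
// then rotates the interface automatically to match the device.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
    return YES;
}
@end
```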
Applications are written in Objective-C primarily but C and C++ may be used as well.
All of the classes available for use in iPhone applications are also available in iPad applications. (Classes introduced in iOS 3.2 are not available for use in iPhone applications.)
Memory is managed using a retain/release model.
Applications may spawn additional threads as needed. However, view-based operations and many graphics operations must always be performed on the application’s main thread.
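For example, a background thread that finishes loading data should hand the resulting view update back to the main thread. A sketch using the standard NSObject API (the class and method names are hypothetical):

```objc
#import <UIKit/UIKit.h>

@interface FeedController : UIViewController
- (void)backgroundLoadDidFinish:(NSData *)data;
- (void)updateViewsWithData:(NSData *)data;
@end

@implementation FeedController
// Called on a secondary thread when loading completes.
- (void)backgroundLoadDidFinish:(NSData *)data {
    // Never touch views here; forward the work to the main thread.
    [self performSelectorOnMainThread:@selector(updateViewsWithData:)
                           withObject:data
                        waitUntilDone:NO];
}

- (void)updateViewsWithData:(NSData *)data {
    // Runs on the main thread, so view manipulation is safe.
}
@end
```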
All of the fundamental design patterns that you are already familiar with for iPhone applications also apply to iPad applications. Patterns such as delegation and protocols, Model-View-Controller, target-action, notifications, and declared properties are all commonly used in iPad applications.
If you are unfamiliar with the basics of developing iPhone applications, you should read iOS Application Programming Guide before continuing. For additional information about the fundamental design patterns used in all Cocoa Touch applications, see Cocoa Fundamentals Guide.
Just as they are for iPhone applications, view controllers are a crucial piece of infrastructure for managing and presenting the user interface of your iPad application. A view controller is responsible for a single view. Most of the time, a view controller’s view is expected to fill the entire span of the application window. In some cases, though, a view controller may be embedded inside another view controller (known as a container view controller) and presented along with other content. Navigation and tab bar controllers are examples of container view controllers. They present a mixture of custom views and views from their embedded view controllers to implement complex navigation interfaces.
In iPad applications, navigation and tab bar controllers are still supported and perfectly acceptable to use but their importance in creating polished interfaces is somewhat diminished. For simpler data sets, you may be able to replace your navigation and tab bar controllers with a new type of view controller called a split view controller. Even for more complex data sets, navigation and tab bar controllers often play only a secondary role in your user interface, providing lower-level navigation support only.
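A split view controller takes an array of two view controllers, one for the narrow master pane and one for the detail area. A sketch of setting one up at launch (the master and detail classes, and the window property, stand in for your own):

```objc
// In the application delegate; MasterListController and
// DetailController are hypothetical view controller classes.
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    UIViewController *master = [[[MasterListController alloc] init] autorelease];
    UIViewController *detail = [[[DetailController alloc] init] autorelease];

    UISplitViewController *splitController =
        [[UISplitViewController alloc] init];
    splitController.viewControllers =
        [NSArray arrayWithObjects:master, detail, nil];

    // In iOS 3.2, install the controller's view directly in the window.
    [self.window addSubview:splitController.view];
    [self.window makeKeyAndVisible];
}
```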
For specific information about new view controller-related behaviors in iOS 3.2, see “Views and View Controllers.”
All of the graphics and media technologies you use in your iPhone applications are also available to iPad applications. This includes native 2D drawing technologies such as Core Graphics, UIKit, and Core Animation. You can also use OpenGL ES 2.0 or OpenGL ES 1.1 for drawing 2D and 3D content.
Using OpenGL ES on iPad is identical to using OpenGL ES on other iOS devices. An iPad is a PowerVR SGX device and supports the same basic capabilities as other SGX devices. However, because the processor, memory architecture, and screen dimensions are different for iPad, you should always test your code on an iPad device before shipping to ensure performance meets your requirements.
All of the same audio technologies you have used in iOS previously are also available in your iPad applications. You can use technologies such as Core Audio, AV Foundation, and OpenAL to play high-quality audio through the built-in speaker or headphone jack. You can also play tracks from the user’s iPod library using the classes of the Media Player framework.
If you want to incorporate video playback into your application, you use the classes in the Media Player framework. In iOS 3.2, the interface for playing back video has changed significantly, providing much more flexibility. Rather than always playing in full-screen mode, you now receive a view that you can incorporate into your user interface at any size. There is also more direct programmatic control over playback, including the ability to seek forwards and backwards in the track, set the start and stop points of the track, and even generate thumbnail images of video frames.
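A sketch of the view-based playback style (the URL, frame, and parent view names are examples, not part of the framework):

```objc
#import <MediaPlayer/MediaPlayer.h>

// Embed playback in part of the interface rather than taking over
// the whole screen; movieURL and contentView are hypothetical.
MPMoviePlayerController *player =
    [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
player.view.frame = CGRectMake(20.0, 20.0, 480.0, 320.0);
[contentView addSubview:player.view];
[player play];
```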
For information on how to port existing Media Player code to use the new interfaces, see “Important Porting Tip for Using the Media Player Framework.” For more information on the hardware capabilities of OpenGL ES, along with how to use it in iOS applications, see OpenGL ES Programming Guide for iOS.
The Multi-Touch technology is fundamental to both iPhone and iPad applications. Like iPhone applications, the event-handling model for iPad applications is based on receiving one or more touch events in the views of your application. Your views are then responsible for translating those touch events into actions that modify or manipulate your application’s content.
Although the process for receiving and handling touch events is unchanged for iPad applications, iOS 3.2 now provides support for detecting gestures in a uniform manner. Gesture recognizers simplify the interface for detecting swipe, pinch, and rotation gestures, among others, and using those gestures to trigger additional behavior. You can also extend the basic set of gesture recognizer classes to add support for custom gestures your application uses.
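For example, attaching a pinch recognizer takes only a few lines; the action method then reads the recognizer's state and scale. A sketch inside a view controller (the contentView property is a placeholder):

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinch = [[[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePinch:)] autorelease];
    [self.view addGestureRecognizer:pinch];
}

- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    // scale is relative to the distance between the two touches
    // when the gesture began.
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        self.contentView.transform =
            CGAffineTransformMakeScale(recognizer.scale, recognizer.scale);
    }
}
```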
For more information about how to use gesture recognizers, see “Gesture Recognizers.”
Many of the distinguishing features of iPhone are also available on iPad. Specifically, you can incorporate support for the following features into your iPad applications:
Accelerometers
Core Location
Maps (using the MapKit framework)
Preferences (either in app or presented from the Settings application)
Address Book contacts
External hardware accessories
Peer-to-peer Bluetooth connectivity (using the Game Kit framework)
Although iPad devices do not include a camera, you can still use them to access the user’s photos. The image picker interface supports selecting images from the photo library already on the device.
Although there are many similarities between iPhone and iPad applications, there are new features available for iPad devices that make it possible to create dramatically different types of applications too. These new features may warrant a rethinking of your existing iPhone applications during the porting process. The advantage of using these new features is that your application will look more at home on an iPad device.
The biggest change between an iPhone application and an iPad application is the amount of screen space available for presenting content. The screen size of an iPad device measures 1024 by 768 pixels. How you adapt your application to support this larger screen will depend largely on the current implementation of your existing iPhone application.
For immersive applications such as games where the application’s content already fills the screen, scaling your application is a good strategy. When scaling a game, you can use the extra pixels to increase the amount of detail for your game environment and the objects within it. With extra space available, you should also consider adding new controls or status displays to the game environment. If you factor your code properly, you might be able to use the same code for both types of device and simply increase the amount of detail when rendering on iPad.
For productivity applications that use standard system controls to present information, you are almost certainly going to want to replace your existing views with new ones designed to take advantage of iPad devices. Use this opportunity to rethink your design. For example, if your application uses a navigation controller to help the user navigate a large data set, you might be able to take advantage of some of the new user interface elements to present that data more efficiently.
To support the increased screen space and new capabilities offered by iPad, iOS 3.2 includes some new classes and interfaces:
Split views are a way to present two custom views side-by-side. They are a good supplement for navigation-based interfaces and other types of master-detail interfaces.
Popovers layer content temporarily on top of your existing views. You can use them to implement tool palettes and options menus, and to present other kinds of information without distracting the user from the main content of your application.
Modally presented controllers now support a configurable presentation style, which determines whether all or only part of the window is covered by the modal view.
Toolbars can now be positioned at the top and bottom of a view. The increased screen size also makes it possible to include more items on a toolbar.
Responder objects now support custom input views. A custom input view is a view that slides up from the bottom of the screen when the object becomes the first responder. Previously, only text fields and text views supported an input view (the keyboard) and that view was not changeable. Now, you can associate an input view with any custom views you create. For information about specifying a custom input view, see “Input Views and Input Accessory Views.”
Responders can also have a custom input accessory view. An input accessory view attaches itself to the top of a responder’s input view and slides in with the input view when the object becomes first responder. The most common use for this feature is to attach custom toolbars or other views to the top of the keyboard. For information about specifying a custom input accessory view, see “Input Views and Input Accessory Views.”
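As a sketch of the accessory-view mechanism, the following attaches a toolbar to the top of the keyboard for a text view (the textView reference and the dismissKeyboard: action are hypothetical names):

```objc
// Attach a toolbar above the keyboard; it slides in and out
// together with the keyboard itself.
UIToolbar *accessory = [[[UIToolbar alloc]
    initWithFrame:CGRectMake(0.0, 0.0, 768.0, 44.0)] autorelease];
UIBarButtonItem *done = [[[UIBarButtonItem alloc]
    initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                         target:self
                         action:@selector(dismissKeyboard:)] autorelease];
accessory.items = [NSArray arrayWithObject:done];
textView.inputAccessoryView = accessory;
```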
As you think about the interface for your iPad application, consider incorporating the new elements whenever appropriate. Several of these elements offer a more natural way to present your content. For example, split views are often a good replacement (or supplement) to a navigation interface. Others allow you to take advantage of new features and to extend the capabilities of your application.
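A sketch of presenting content in a popover anchored to a toolbar button (the content controller and button names are hypothetical):

```objc
// Popovers wrap an ordinary view controller; keep a reference to
// the popover controller so you can dismiss it later.
UIPopoverController *popover = [[UIPopoverController alloc]
    initWithContentViewController:optionsController];
[popover presentPopoverFromBarButtonItem:optionsButton
                permittedArrowDirections:UIPopoverArrowDirectionAny
                                animated:YES];
```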
For detailed information on how to use split views, popovers, and the new modal presentation styles, see “Views and View Controllers.” For information on input views and input accessory views, see “Custom Text Processing and Input.” For guidance on how to design your overall user interface, see iPad Human Interface Guidelines.
In earlier versions of iOS, text support was optimized for simple text entry and presentation. Now, the larger screen of iPad makes more sophisticated text editing and presentation possible. In addition, the ability to connect a physical keyboard to an iPad device makes extended text entry practical. To support enhanced text entry and presentation, iOS 3.2 includes several new features that you can use in your applications:
The Core Text framework provides support for sophisticated text rendering and layout.
The UIKit framework includes several enhancements to support text, including:
New protocols that allow your own custom views to receive input from the system keyboard
A new UITextChecker class to manage spell checking
Support for adding custom commands to the editing menu that is managed by the UIMenuController class
Core Animation now includes the CATextLayer class, which you can use to display text in a layer.
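As an example of the spell-checking support, UITextChecker can locate a misspelled word and suggest replacements:

```objc
#import <UIKit/UIKit.h>

UITextChecker *checker = [[[UITextChecker alloc] init] autorelease];
NSString *text = @"Ths sentence contains a typo.";

// Find the first misspelled word, searching from the start.
NSRange badRange = [checker rangeOfMisspelledWordInString:text
                                                    range:NSMakeRange(0, [text length])
                                               startingAt:0
                                                     wrap:NO
                                                 language:@"en_US"];
if (badRange.location != NSNotFound) {
    // Ask for replacement guesses to offer to the user.
    NSArray *guesses = [checker guessesForWordRange:badRange
                                           inString:text
                                           language:@"en_US"];
}
```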
These features give you the ability to create everything from simple text entry controls to sophisticated text editing applications. For example, the ability to interact with the system keyboard now makes it possible for you to create custom text views that handle everything from basic input to complex text selection and editing behaviors. And to draw that text, you now have access to the Core Text framework, which you can use to present your text using custom layouts, multiple fonts, multiple colors, and other style attributes.
For more information about how you use these technologies to handle text in your applications, see “Custom Text Processing and Input.”
An iPad can now be connected to an external display through a supported cable. Applications can use this connection to present content in addition to the content shown on the device’s main screen. Depending on the cable, you can output content at up to 720p (1280 x 720) resolution. A resolution of 1024 by 768 may also be available if you prefer that aspect ratio.
To display content on an external display, do the following:
Use the screens class method of the UIScreen class to determine if an external display is available.
If an external screen is available, get the screen object and look at the values in its availableModes property. This property contains the configurations supported by the screen.
Select the UIScreenMode object corresponding to the desired resolution and assign it to the currentMode property of the screen object.
Create a new window object (UIWindow) to display your content.
Assign the screen object to the screen property of your new window.
Configure the window (by adding views or setting up your OpenGL ES rendering context).
Show the window.
Important: You should always assign a screen object to your window before you show that window. Although you can change the screen while a window is already visible, doing so is an expensive operation and not recommended.
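The steps above can be sketched as follows (the content view is a placeholder, and a real application would select a screen mode deliberately rather than taking the first one):

```objc
// Runs at launch or in response to a connection notification.
if ([[UIScreen screens] count] > 1) {
    UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];

    // Choose a mode; here we simply take the first available one.
    externalScreen.currentMode =
        [externalScreen.availableModes objectAtIndex:0];

    UIWindow *externalWindow =
        [[UIWindow alloc] initWithFrame:[externalScreen bounds]];
    externalWindow.screen = externalScreen; // assign before showing
    [externalWindow addSubview:someContentView]; // hypothetical view
    externalWindow.hidden = NO; // show the window
}
```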
Screen mode objects identify a specific resolution supported by the screen. Many screens support multiple resolutions, some of which may include different pixel aspect ratios. The decision for which screen mode to use should be based on performance and which resolution best meets the needs of your user interface. When you are ready to start drawing, use the bounds provided by the UIScreen object to get the proper size for rendering your content. The screen’s bounds take into account any aspect ratio data so that you can focus on drawing your content.
If you want to detect when screens are connected and disconnected, you can register to receive screen connection and disconnection notifications. For more information about screens and screen notifications, see UIScreen Class Reference. For information about screen modes, see UIScreenMode Class Reference.
To support the ability to create productivity applications, iOS 3.2 includes several new features aimed at supporting the creation and handling of documents and files:
Applications can now register themselves as being able to open specific types of files. This support allows applications that receive files (such as email programs) to pass those files to other applications for opening.
The UIKit framework now provides the UIDocumentInteractionController class for interacting with files of unknown types. You can use this class to preview files, copy their contents to the pasteboard, or pass them to another application for opening.
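A sketch of previewing a file from a view controller that adopts UIDocumentInteractionControllerDelegate (the file URL and the retained docController property are hypothetical):

```objc
- (void)previewFileAtURL:(NSURL *)fileURL {
    // The returned controller is autoreleased; retain it (here via a
    // retained property) for as long as the preview is on screen.
    self.docController = [UIDocumentInteractionController
        interactionControllerWithURL:fileURL];
    self.docController.delegate = self;
    [self.docController presentPreviewAnimated:YES];
}

// Delegate method: supply the view controller that hosts the preview.
- (UIViewController *)documentInteractionControllerViewControllerForPreview:
        (UIDocumentInteractionController *)controller {
    return self;
}
```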
Of course, it is important to remember that although you can manipulate files in your iPad applications, files should never be the focus of your application. There are no Open and Save panels in iOS for a very good reason. A Save panel in particular implies that it is the user’s responsibility to save all data, and this is not a model that iOS applications should ever use. Instead, applications should save data incrementally to prevent the loss of that data when the application quits or is interrupted by the system. To do this, your application must take responsibility for creating and saving the user’s content at appropriate times.
For more information on how to interact with documents and files, see “The Core Application Design.”
In iOS 3.2, UIKit introduces support for creating PDF content from your application. You can use this support to create PDF files in your application’s home directory or data objects that you can incorporate into your application’s content. Creation of the PDF content is simple because it takes advantage of the same native drawing technologies that are already available. After preparing the PDF canvas, you can use UIKit, Core Graphics, and Core Text to draw the text and graphics you need. You can also use the PDF creation functions to embed links in your PDF content.
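A minimal sketch of the flow, producing a one-page PDF in the application’s Documents directory:

```objc
#import <UIKit/UIKit.h>

NSString *path = [NSHomeDirectory()
    stringByAppendingPathComponent:@"Documents/sample.pdf"];

// 612 x 792 points is a US Letter page.
UIGraphicsBeginPDFContextToFile(path, CGRectMake(0.0, 0.0, 612.0, 792.0), nil);
UIGraphicsBeginPDFPage();

// Any UIKit or Core Graphics drawing now lands in the PDF page.
[@"Hello, PDF" drawAtPoint:CGPointMake(72.0, 72.0)
                  withFont:[UIFont systemFontOfSize:24.0]];

UIGraphicsEndPDFContext();
```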
For more information about how to use the new PDF creation functions, see “Generating PDF Content.”
Last updated: 2010-04-13