An iPhone, iPad, or iPod touch device has several pieces of hardware that generate streams of input data an application can access. The Multi-Touch technology enables the direct manipulation of views, including the virtual keyboard. Three accelerometers measure acceleration along the three spatial axes. A gyroscope (on some device models only) measures the rate of rotation around the three axes. The Global Positioning System (GPS) and compass provide measurements of location and orientation. As these hardware systems detect touches, device movements, and location changes, each produces raw data that is passed to system frameworks. The frameworks package the data and deliver it as events to an application for processing.
The following sections identify these frameworks and describe how events are packaged and delivered to applications for handling.
Note: This document describes touch events, motion events, and remote control events only. For information on handling GPS and magnetometer (compass) data, see Location Awareness Programming Guide.
An event is an object that represents a user action detected by hardware on the device and conveyed to iOS—for example, a finger touching the screen or a hand shaking the device. Many events are instances of the UIEvent class of the UIKit framework. A UIEvent object may encapsulate state related to the user event, such as the associated touches. It also records the moment the event was generated. As a user action takes place—for example, as fingers touch the screen and move across its surface—the operating system continually sends event objects to an application for handling.
UIKit currently recognizes three types of events: touch events, “shaking” motion events, and remote-control events. The UIEvent class declares the enum constants shown in Listing 1-1.
Listing 1-1 Event-type and event-subtype constants
typedef enum {
    UIEventTypeTouches,
    UIEventTypeMotion,
    UIEventTypeRemoteControl,
} UIEventType;

typedef enum {
    UIEventSubtypeNone                              = 0,
    UIEventSubtypeMotionShake                       = 1,
    UIEventSubtypeRemoteControlPlay                 = 100,
    UIEventSubtypeRemoteControlPause                = 101,
    UIEventSubtypeRemoteControlStop                 = 102,
    UIEventSubtypeRemoteControlTogglePlayPause      = 103,
    UIEventSubtypeRemoteControlNextTrack            = 104,
    UIEventSubtypeRemoteControlPreviousTrack        = 105,
    UIEventSubtypeRemoteControlBeginSeekingBackward = 106,
    UIEventSubtypeRemoteControlEndSeekingBackward   = 107,
    UIEventSubtypeRemoteControlBeginSeekingForward  = 108,
    UIEventSubtypeRemoteControlEndSeekingForward    = 109,
} UIEventSubtype;
Each event has one of these event-type and event-subtype constants associated with it, which you can access through the type and subtype properties of UIEvent. The event types cover touch events, motion events, and remote-control events. In iOS 3.0, there is a shake-motion subtype (UIEventSubtypeMotionShake) and many remote-control subtypes; touch events always have a subtype of UIEventSubtypeNone.
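For example, a responder can inspect these two properties when an event arrives. This is a minimal sketch; `event` is assumed to be a UIEvent delivered to one of the responder's event-handling methods:

```objc
// Sketch: distinguishing a delivered event by its type and subtype.
switch (event.type) {
    case UIEventTypeTouches:
        // Touch events always carry UIEventSubtypeNone.
        break;
    case UIEventTypeMotion:
        if (event.subtype == UIEventSubtypeMotionShake) {
            // The user shook the device.
        }
        break;
    case UIEventTypeRemoteControl:
        // Examine event.subtype for the specific transport command.
        break;
}
```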
A remote-control event originates as a command from the system transport controls or from an external accessory that conforms to an Apple-provided specification, such as a headset. Remote-control events let users control multimedia content using those controls and accessories. They are new with iOS 4.0 and are described in detail in “Remote Control of Multimedia.”
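A first responder receives these events through the remoteControlReceivedWithEvent: method of UIResponder. A minimal sketch, with application-specific behavior left as comments:

```objc
// Sketch: handling remote-control events in a first responder (iOS 4.0+).
- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    if (event.type != UIEventTypeRemoteControl) {
        return;
    }
    switch (event.subtype) {
        case UIEventSubtypeRemoteControlTogglePlayPause:
            // Toggle playback (application-specific).
            break;
        case UIEventSubtypeRemoteControlNextTrack:
            // Advance to the next track (application-specific).
            break;
        default:
            break;
    }
}
```

Note that the application must also register for these events (for example, by calling beginReceivingRemoteControlEvents on the shared UIApplication object), as described in “Remote Control of Multimedia.”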
You should never retain a UIEvent object in your code. If you need to preserve the current state of an event object for later evaluation, copy and store the relevant bits of state in an appropriate manner (using an instance variable or a dictionary object, for example).
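For instance, rather than keeping a reference to the event, a view might record just the values it needs. In this sketch, `lastTouchPoint` is a hypothetical instance variable of the view:

```objc
// Sketch: preserving touch state without retaining the UIEvent.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Copy the value out of the event; do not store the event itself.
    lastTouchPoint = [touch locationInView:self];
}
```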
A device running iOS can send other types of events, broadly considered, to an application for handling. These events are not UIEvent objects, but they still encapsulate measurements of hardware-generated values. “Motion Event Types” discusses these other events.
The delivery of an event to an object for handling occurs along a specific path. As described in “Preparing Your Application for Remote-Control Events,” when users touch the screen of a device, iOS recognizes the set of touches and packages them in a UIEvent object that it places in the active application’s event queue. If the system interprets the shaking of the device as a motion event, an event object representing that event is also placed in the application’s event queue. The singleton UIApplication object managing the application takes an event from the top of the queue and dispatches it for handling. Typically, it sends the event to the application’s key window—the window that is currently the focus for user events—and the window object representing that window sends the event to an initial object for handling. That object is different for touch events and motion events.
Touch events. The window object uses hit-testing and the responder chain to find the view to receive the touch event. In hit-testing, a window calls hitTest:withEvent: on the top-most view of the view hierarchy; this method proceeds by recursively calling pointInside:withEvent: on each view in the view hierarchy that returns YES, proceeding down the hierarchy until it finds the subview within whose bounds the touch took place. That view becomes the hit-test view.
If the hit-test view cannot handle the event, the event travels up the responder chain as described in “Responder Objects and the Responder Chain” until the system finds a view that can handle it. A touch object (described in “Events and Touches”) is associated with its hit-test view for its lifetime, even if the touch represented by the object subsequently moves outside the view. “Hit-Testing” discusses some of the programmatic implications of hit-testing.
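The recursion described above can be illustrated with a simplified sketch of the algorithm. This is not the actual UIKit implementation—the real hitTest:withEvent: also excludes hidden views, views with userInteractionEnabled set to NO, and nearly transparent views—but it shows the shape of the traversal:

```objc
// Sketch: a simplified illustration of the hit-testing recursion.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if (![self pointInside:point withEvent:event]) {
        return nil;  // the point is outside this view's bounds
    }
    // Check subviews front-to-back (last subview is frontmost).
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint converted = [self convertPoint:point toView:subview];
        UIView *hit = [subview hitTest:converted withEvent:event];
        if (hit) {
            return hit;
        }
    }
    return self;  // no subview claimed the point; this is the hit-test view
}
```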
Motion and remote-control events. The window object sends each shaking-motion or remote-control event to the first responder for handling. (The first responder is described in “Responder Objects and the Responder Chain.”)
Although the hit-test view and the first responder are often the same view object, they do not have to be the same.
The UIApplication object and each UIWindow object dispatch events in the sendEvent: method. (Both classes declare a method with the same signature.) Because these methods are funnel points for events coming into an application, you can subclass UIApplication or UIWindow and override sendEvent: to monitor events (which is something few applications would need to do). If you override this method, be sure to call the superclass implementation (that is, [super sendEvent:theEvent]); never tamper with the distribution of events.
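A minimal sketch of such an override follows; MyWindow is a hypothetical subclass name:

```objc
// Sketch: monitoring events by overriding sendEvent: in a UIWindow subclass.
@interface MyWindow : UIWindow
@end

@implementation MyWindow
- (void)sendEvent:(UIEvent *)event {
    // Inspect or log the event here, but do not alter its delivery.
    [super sendEvent:event];  // always call the superclass implementation
}
@end
```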
The preceding discussion mentions the concept of responders. What is a responder object and how does it fit into the architecture for event delivery?
A responder object is an object that can respond to events and handle them. UIResponder is the base class for all responder objects, also known simply as responders. It defines the programmatic interface not only for event handling but for common responder behavior. UIApplication, UIView, and all UIKit classes that descend from UIView (including UIWindow) inherit directly or indirectly from UIResponder, and thus their instances are responder objects.
The first responder is the responder object in an application (usually a UIView object) that is designated to be the first recipient of events other than touch events. A UIWindow object sends the first responder these events in messages, giving it the first shot at handling them. To receive these messages, the responder object must implement canBecomeFirstResponder to return YES; it must also receive a becomeFirstResponder message (which it can invoke on itself). The first responder is the first view in a window to receive the following types of events and messages:
Motion events—via calls to the UIResponder motion-handling methods described in “Shaking-Motion Events”
Remote-control events—via calls to the UIResponder method remoteControlReceivedWithEvent:
Action messages—sent when the user manipulates a control (such as a button or slider) and no target is specified for the action message
Editing-menu messages—sent when users tap the commands of the editing menu (described in Device Features Programming Guide)
The first responder also plays a role in text editing. A text view or text field that is the focus of editing is made the first responder, which causes the virtual keyboard to appear.
Note: Applications must explicitly set a first responder to handle motion events, action messages, and editing-menu messages; UIKit automatically sets the text field or text view a user taps to be the first responder.
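The two requirements described above—implementing canBecomeFirstResponder and receiving a becomeFirstResponder message—can be sketched as follows; `myView` is a hypothetical instance of the custom view:

```objc
// Sketch: a view that opts in to first-responder status so it can
// receive motion events, remote-control events, and action messages.
- (BOOL)canBecomeFirstResponder {
    return YES;
}
```

```objc
// Sketch: designating the view as first responder, for example in the
// owning view controller's viewDidAppear: method.
[myView becomeFirstResponder];
```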
If the first responder or the hit-test view doesn’t handle an event, it may pass the event (via message) to the next responder in the responder chain to see if it can handle it.
The responder chain is a linked series of responder objects along which an event, action message, or editing-menu message is passed. It allows responder objects to transfer responsibility for handling an event to other, higher-level objects. An event proceeds up the responder chain as the application looks for an object capable of handling the event. Because the hit-test view is also a responder object, an application may also take advantage of the responder chain when handling touch events. The responder chain consists of a series of next responders in the sequence depicted in Figure 1-1.
When the system delivers a touch event, it first sends it to a specific view. For touch events, that view is the one returned by hitTest:withEvent:; for “shaking”-motion events, remote-control events, action messages, and editing-menu messages, that view is the first responder. If the initial view doesn’t handle the event, it travels up the responder chain along a particular path:
The hit-test view or first responder passes the event or message to its view controller if it has one; if the view doesn’t have a view controller, it passes the event or message to its superview.
If a view or its view controller cannot handle the event or message, it passes it to the superview of the view.
Each subsequent superview in the hierarchy follows the pattern described in the first two steps if it cannot handle the event or message.
The topmost view in the view hierarchy, if it doesn’t handle the event or message, passes it to the window object for handling.
The UIWindow
object, if it doesn’t handle the event or message, passes it to the singleton application object.
If the application object cannot handle the event or message, it discards it.
If you implement a custom view to handle “shaking”-motion events, remote-control events, action messages, or editing-menu messages, do not forward the event or message to nextResponder directly to send it up the responder chain. Instead, invoke the superclass implementation of the current event-handling method and let UIKit handle the traversal of the responder chain.
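The pattern looks like this for a shaking-motion event; anything the view does not handle is deferred to the superclass so UIKit can continue along the responder chain:

```objc
// Sketch: handling a shake in a custom view and deferring other
// motion events to the superclass implementation.
- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        // Handle the shake gesture here (application-specific).
    } else {
        [super motionEnded:motion withEvent:event];
    }
}
```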
Motion events come from two hardware sources on a device: the three accelerometers and the gyroscope, which is available only on some devices. An accelerometer measures changes in velocity over time along a given linear path. The combination of accelerometers lets you detect movement of the device in any direction. You can use this data to track both sudden movements in the device and the device’s current orientation relative to gravity. A gyroscope measures the rate of rotation around each of the three axes. (Although there are three accelerometers, one for each axis, the remainder of this document refers to them as a single entity.)
The Core Motion framework is primarily responsible for accessing raw accelerometer and gyroscope data and feeding that data to an application for handling. In addition, Core Motion processes combined accelerometer and gyroscope data using special algorithms and presents that refined motion data to applications. Motion events from Core Motion are represented by three data objects, each encapsulating one or more measurements:
A CMAccelerometerData object encapsulates a structure that captures the acceleration along each of the spatial axes.
A CMGyroData object encapsulates a structure that captures the rate of rotation around each of the three spatial axes.
A CMDeviceMotion object encapsulates several different measurements, including attitude and more useful measurements of rotation rate and acceleration.
Core Motion is separate from UIKit architectures and conventions. It has no connection with the UIEvent model and no notion of a first responder or responder chain. It delivers motion events directly to applications that request them.
The CMMotionManager class is the central access point for Core Motion. You create an instance of the class, specify an update interval (either explicitly or implicitly), request that updates start, and handle the motion events as they are delivered. “Core Motion” describes this procedure in full detail.
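A minimal sketch of that procedure for accelerometer data follows. It assumes iOS 4.0 or later and that the application keeps a strong reference to the motion manager for as long as updates are needed:

```objc
// Sketch: requesting accelerometer updates from Core Motion.
#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.accelerometerUpdateInterval = 1.0 / 60.0;  // 60 Hz
[motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
    withHandler:^(CMAccelerometerData *data, NSError *error) {
        // data.acceleration holds the x, y, and z values, in g's.
    }];
```

When the application no longer needs the data, it should call stopAccelerometerUpdates on the same instance.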
An alternative to Core Motion, at least for accessing accelerometer data, is the UIAccelerometer class of the UIKit framework. When you use this class, accelerometer events are delivered as UIAcceleration objects. Although UIAccelerometer is part of UIKit, it is also separate from the UIEvent and responder-chain architectures. See “Accessing Accelerometer Events Using UIAccelerometer” for information on using the UIKit facilities.
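A minimal sketch of the UIKit approach follows; it assumes `self` adopts the UIAccelerometerDelegate protocol:

```objc
// Sketch: receiving accelerometer data via the shared UIAccelerometer.
- (void)setupAccelerometer {
    UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
    accelerometer.updateInterval = 1.0 / 60.0;  // 60 Hz
    accelerometer.delegate = self;
}

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration {
    // acceleration.x, .y, and .z are in g's.
}
```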
Note: The UIAccelerometer and UIAcceleration classes will be deprecated in a future release, so if your application handles accelerometer events, it should transition to the Core Motion API.
In iOS 3.0 and later, if you are trying to detect specific types of motion as gestures—specifically shaking motions—you should consider handling motion events (UIEventTypeMotion) instead of using the accelerometer interfaces. If you want to receive and handle high-rate, continuous motion data, you should instead use the Core Motion accelerometer API. Motion events are described in “Shaking-Motion Events.”
Last updated: 2010-08-03