
About Audio Unit Access

To use one of the system-supplied audio units, your application must connect to it at runtime. Establishing this connection entails a specific series of steps that you learn about in this chapter.

Before diving into the subject of audio unit access, though, take a moment to look at audio units themselves.

About Audio Units

This quick survey of audio units may be all you need if you’ve already used them. For a complete explanation, see Audio Unit Programming Guide in the Mac Dev Center.

An audio unit (often abbreviated as AU in header files and elsewhere) is a plug-in component that you use to enhance your iOS application. For example, the Voice Processing I/O unit connects to input and output hardware, performs sample rate conversions between the hardware and your application, and provides acoustic echo cancellation for two-way chat. When you connect to this audio unit, your application acquires these features.

Audio units use a simple and well-defined API that your application exercises at runtime. The API is factored in terms of properties and parameters. Audio unit properties are configuration settings that typically do not change over time, such as audio format. Audio unit parameters are user-adjustable settings, such as volume. The properties and parameters for each system-supplied audio unit are described in Audio Unit Framework Reference.
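
For illustration, here is a minimal sketch showing the difference between setting a property and setting a parameter. It assumes an already-instantiated Multichannel Mixer unit named mixerUnit; the particular property, parameter, and values are illustrative:

    #include <AudioUnit/AudioUnit.h>

    // Sketch: contrast a property (configuration) with a parameter (user-adjustable value).
    // Assumes mixerUnit is an already-instantiated Multichannel Mixer unit.
    OSStatus ConfigureMixer(AudioUnit mixerUnit) {
        // Property: typically set during setup, while audio is not running.
        UInt32 maxFrames = 4096;
        OSStatus result = AudioUnitSetProperty(mixerUnit,
                                               kAudioUnitProperty_MaximumFramesPerSlice,
                                               kAudioUnitScope_Global,
                                               0,                 // element (bus)
                                               &maxFrames,
                                               sizeof(maxFrames));
        if (result != noErr) return result;

        // Parameter: can be adjusted at any time, typically in response to user action.
        return AudioUnitSetParameter(mixerUnit,
                                     kMultiChannelMixerParam_Volume,
                                     kAudioUnitScope_Input,
                                     0,                            // input bus 0
                                     0.8,                          // volume value
                                     0);                           // buffer offset in frames
    }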

To understand how to set the value for an audio unit property, you need a basic grasp of audio unit architecture—because each property applies to a specific part of an audio unit. An illustration can help here; see Figure 1-1.

Figure 1-1  Audio unit architecture for an effect unit

In the figure, you see that an audio unit has three scopes: input, output, and global. A scope is a discrete, nonnestable programmatic context. When you set a property value, you apply it to a particular scope.

The input and output scopes each have one or more elements—commonly referred to as buses because they are somewhat analogous to real-world signal buses in analog audio equipment. An audio unit bus is a programmatic context nested within a scope, most often used for audio input or output. Buses are identified by zero-based integer indexes. Each bus has one audio stream format, which includes a specification of how many channels of audio the bus carries.
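
For instance, you can ask a bus for its stream format to find out how many channels it carries. A minimal sketch, assuming an audio unit instance named unit (the bus number, scope, and function name are illustrative):

    #include <AudioUnit/AudioUnit.h>
    #include <stdio.h>

    // Sketch: read the stream format on input bus 0 of 'unit' to learn its channel count.
    OSStatus PrintInputChannelCount(AudioUnit unit) {
        AudioStreamBasicDescription format = {0};
        UInt32 size = sizeof(format);
        OSStatus result = AudioUnitGetProperty(unit,
                                               kAudioUnitProperty_StreamFormat,
                                               kAudioUnitScope_Input,
                                               0,              // bus 0
                                               &format,
                                               &size);
        if (result == noErr) {
            printf("Input bus 0 carries %u channel(s) at %.0f Hz\n",
                   (unsigned int)format.mChannelsPerFrame, format.mSampleRate);
        }
        return result;
    }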

You identify a signal connection to or from an audio unit according to its scope and bus. Looking at Figure 1-1, for example, you could uniquely specify a signal connection by specifying “bus 0 of the input scope.”

A signal connection is like any other aspect of audio unit configuration: you establish it by setting a property value. Later in this document you’ll see code examples of this.
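
As a brief preview, one way to express a connection in code is the kAudioUnitProperty_MakeConnection property, which takes an AudioUnitConnection structure naming the source unit, its output bus, and the destination's input bus. A minimal sketch, assuming two already-instantiated units:

    #include <AudioUnit/AudioUnit.h>

    // Sketch: connect output bus 0 of 'sourceUnit' to input bus 0 of 'destUnit'.
    OSStatus ConnectUnits(AudioUnit sourceUnit, AudioUnit destUnit) {
        AudioUnitConnection connection;
        connection.sourceAudioUnit    = sourceUnit;
        connection.sourceOutputNumber = 0;     // source output bus
        connection.destInputNumber    = 0;     // destination input bus

        // A connection is a property of the destination unit's input scope.
        return AudioUnitSetProperty(destUnit,
                                    kAudioUnitProperty_MakeConnection,
                                    kAudioUnitScope_Input,
                                    0,                       // destination input bus
                                    &connection,
                                    sizeof(connection));
    }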

Audio data moves through an audio unit as requested by the audio signal’s destination, where the destination is whatever is downstream of the audio unit. The destination could be another audio unit, your application, or system output that goes to hardware. From the audio unit’s perspective it is all the same. It gets a periodic request for more processed audio—represented in the figure by the “RENDER” block pointing at the audio unit.

The request takes the form of invoking a render callback on the audio unit. The audio unit responds by invoking a render callback on its upstream neighbor (whatever that may be) to get more raw audio to process. Having acquired fresh data, the audio unit processes it and passes it on to the destination. In Core Audio, the use of requests from downstream to invoke upstream processing is called the pull model.
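
When your application itself is the upstream neighbor, it supplies the audio by registering a render callback of its own. The following sketch shows the shape of such a callback and how to attach it with the kAudioUnitProperty_SetRenderCallback property; the function names are illustrative, and the silence-filling body is only a placeholder for real sample data:

    #include <AudioUnit/AudioUnit.h>
    #include <string.h>

    // Sketch: a render callback that the downstream audio unit pulls for fresh audio.
    // This placeholder supplies silence; a real callback fills ioData with samples.
    static OSStatus MyRenderCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList            *ioData) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        return noErr;
    }

    // Attach the callback to input bus 0 of 'unit', making your code the upstream source.
    OSStatus AttachRenderCallback(AudioUnit unit) {
        AURenderCallbackStruct callback;
        callback.inputProc       = MyRenderCallback;
        callback.inputProcRefCon = NULL;       // pointer to your own state, if needed

        return AudioUnitSetProperty(unit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input,
                                    0,                       // input bus 0
                                    &callback,
                                    sizeof(callback));
    }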

With the basics of audio unit architecture in hand, you’re ready to learn about accessing system audio units.

Establishing a Connection to an Audio Unit

To connect your application to a system audio unit, you perform the following steps in order:

  1. Ask the system for a reference to the audio unit.

  2. Instantiate the audio unit.

  3. Configure the audio unit instance so that it can communicate with your application.

  4. Initialize the instance so you can start using it.

The system automatically registers the built-in audio units, and so obtaining a reference to one is just a matter of asking for it—essentially by name.
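
In outline, the four steps correspond to a handful of Audio Unit framework calls. Here is a minimal sketch using the Remote I/O unit; the configuration step is reduced to a comment because it varies by application and is covered in the sections that follow:

    #include <AudioUnit/AudioUnit.h>

    // Sketch: the four access steps, shown for the Remote I/O unit.
    OSStatus AccessRemoteIOUnit(AudioUnit *outUnit) {
        // 1. Ask the system for a reference to the audio unit.
        AudioComponentDescription description = {0};
        description.componentType         = kAudioUnitType_Output;
        description.componentSubType      = kAudioUnitSubType_RemoteIO;
        description.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponent component = AudioComponentFindNext(NULL, &description);
        if (component == NULL) return -1;      // no matching audio unit found

        // 2. Instantiate the audio unit.
        AudioUnit unit;
        OSStatus result = AudioComponentInstanceNew(component, &unit);
        if (result != noErr) return result;

        // 3. Configure the instance by setting properties (stream format, render
        //    callbacks, and so on), as described later in this chapter.

        // 4. Initialize the instance so you can start using it.
        result = AudioUnitInitialize(unit);
        if (result == noErr) *outUnit = unit;
        return result;
    }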

Identifying an Audio Unit

Any plug-in architecture relies on a system of unique identification that allows running applications to reliably find the plug-ins they are looking for.

In the file system, an audio unit’s loadable code is contained in a bundle. Each such bundle is uniquely identified by a triplet of four-char codes. The type code programmatically identifies what the audio unit is for—such as mixing or audio format conversion—and indirectly specifies the audio unit’s API. The subtype code contributes to the bundle’s identification and indicates more specifically what the audio unit does. For instance, the subtype of a mixer type of audio unit might indicate that it is a multichannel mixer.

A third piece of bundle identification specifies the manufacturer—the company that developed the audio unit. All system-supplied audio units name Apple as the manufacturer.
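
In code, the three codes come together in an AudioComponentDescription structure. A minimal sketch describing the Voice Processing I/O unit mentioned earlier (the two flags fields are simply zeroed):

    #include <AudioUnit/AudioUnit.h>

    // Sketch: the unique description of the Voice Processing I/O unit.
    AudioComponentDescription description = {
        .componentType         = kAudioUnitType_Output,               // what it is for
        .componentSubType      = kAudioUnitSubType_VoiceProcessingIO, // what, specifically, it does
        .componentManufacturer = kAudioUnitManufacturer_Apple,        // who developed it
        .componentFlags        = 0,
        .componentFlagsMask    = 0
    };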

For a list of the system audio units in iOS, along with their unique identifiers, see “System-Supplied Audio Units in iOS.” For descriptions of all audio unit types and subtypes specified by Apple, see Audio Unit Component Services Reference.

Finding and Instantiating an Audio Unit

You perform two steps to find an audio unit. First, you configure a particular data structure so that its fields contain the audio unit’s type, subtype, and manufacturer codes—that is, the audio unit’s unique description. Second, you pass that structure to a function that finds and returns a reference to the audio unit.

Using the reference, you instantiate the audio unit. Then, as described in the next section, you configure it.
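
A minimal sketch of these two steps, taking a filled-out description such as the one shown above (the function name is illustrative):

    #include <AudioUnit/AudioUnit.h>

    // Sketch: find the audio unit matching 'description' and create an instance of it.
    OSStatus InstantiateAudioUnit(AudioComponentDescription description,
                                  AudioUnit *outInstance) {
        // Step 1: obtain a reference to the audio unit.
        AudioComponent component = AudioComponentFindNext(NULL, &description);
        if (component == NULL) return -1;      // no matching audio unit found

        // Step 2: instantiate it.
        return AudioComponentInstanceNew(component, outInstance);
    }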

Note: If you’re not already comfortable with how plug-ins work, it’s good to keep in mind the difference between an audio unit and an audio unit instance. An audio unit is a code library. An audio unit instance is a live object, defined by that library, that your application can modify and use. However, for the sake of reading flow, this document sometimes uses “audio unit” to mean “audio unit instance.” Context indicates what is being talked about. For example, the heading “Configuring an Audio Unit,” to be strictly correct, would be “Configuring an Audio Unit Instance.”

Configuring an Audio Unit

A freshly instantiated audio unit is, in some ways, a blank slate. It doesn’t yet know the audio channel count or the audio format that your application is using. A new I/O unit—which can take on three different roles—doesn’t yet know if you want it for input, output, or both. Configuring an audio unit molds it to the particular needs of your application.
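
For example, a newly instantiated Remote I/O unit has output to hardware enabled and input from hardware disabled; you choose its role with the kAudioOutputUnitProperty_EnableIO property. A minimal sketch enabling both input and output, assuming an I/O unit instance named ioUnit:

    #include <AudioUnit/AudioUnit.h>

    // Sketch: configure an I/O unit instance for simultaneous input and output.
    // On the I/O units, input from hardware arrives on element 1; output to hardware leaves on element 0.
    OSStatus EnableInputAndOutput(AudioUnit ioUnit) {
        UInt32 enable = 1;

        // Enable input on the hardware side of the input element (bus 1).
        OSStatus result = AudioUnitSetProperty(ioUnit,
                                               kAudioOutputUnitProperty_EnableIO,
                                               kAudioUnitScope_Input,
                                               1,               // input element
                                               &enable,
                                               sizeof(enable));
        if (result != noErr) return result;

        // Enable output on the hardware side of the output element (bus 0).
        return AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Output,
                                    0,                          // output element
                                    &enable,
                                    sizeof(enable));
    }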

The most common sort of configuration is to assign the audio format that your application uses. Most audio units take the same format for input and output. Format converter units, by definition, use different input and output formats.

An I/O unit gets special treatment because one side is connected to hardware; that side always uses a hardware-based audio format.
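
The following sketch illustrates format assignment for this case. It applies an application format (illustratively, 16-bit stereo linear PCM at 44.1 kHz) to the application side of an I/O unit's input element; the hardware-facing side of that element keeps its hardware-based format:

    #include <AudioUnit/AudioUnit.h>

    // Sketch: assign the application's audio format to the application side of an I/O unit.
    // The format values here are illustrative: 16-bit, stereo, interleaved linear PCM at 44.1 kHz.
    OSStatus SetApplicationFormat(AudioUnit ioUnit) {
        AudioStreamBasicDescription format = {0};
        format.mSampleRate       = 44100.0;
        format.mFormatID         = kAudioFormatLinearPCM;
        format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        format.mChannelsPerFrame = 2;
        format.mBitsPerChannel   = 16;
        format.mBytesPerFrame    = 4;      // 2 channels x 2 bytes, interleaved
        format.mFramesPerPacket  = 1;
        format.mBytesPerPacket   = 4;

        // Apply the format to the application side of the input element (output scope, bus 1).
        // The hardware side of that element (input scope, bus 1) keeps the hardware format.
        return AudioUnitSetProperty(ioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output,
                                    1,                     // input element (bus 1)
                                    &format,
                                    sizeof(format));
    }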

All configuration—from audio stream format to render callbacks—is done using the mechanism of setting properties on an audio unit. The next chapter, “Accessing Audio Units,” explains how to set properties.

Creating an Audio Processing Graph

Core Audio’s audio processing graph facility provides another way to access audio units. An audio processing graph is an interconnected set of audio units that you manage as a group. This facility can simplify your code, even if the graph consists of only two units. When you initialize a graph, for example, the graph takes care of initializing all of its constituent audio units.

If you want to use more than one audio unit in sequence, you would typically create a graph. To use a single audio unit directly, you access and connect to it alone.
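
A minimal sketch of the graph approach, building a two-unit chain in which a Multichannel Mixer unit feeds the Remote I/O unit; the calls shown are the core AUGraph workflow, and the function name is illustrative:

    #include <AudioToolbox/AudioToolbox.h>
    #include <AudioUnit/AudioUnit.h>

    // Sketch: build a two-unit graph in which a Multichannel Mixer feeds the Remote I/O unit.
    OSStatus BuildPlaybackGraph(AUGraph *outGraph) {
        AUGraph graph;
        OSStatus result = NewAUGraph(&graph);
        if (result != noErr) return result;

        AudioComponentDescription ioDescription = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponentDescription mixerDescription = {
            .componentType         = kAudioUnitType_Mixer,
            .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };

        AUNode ioNode, mixerNode;
        AUGraphAddNode(graph, &ioDescription, &ioNode);
        AUGraphAddNode(graph, &mixerDescription, &mixerNode);

        // Opening the graph instantiates the audio unit for each node.
        result = AUGraphOpen(graph);
        if (result != noErr) return result;

        // Connect mixer output bus 0 to Remote I/O input bus 0.
        result = AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
        if (result != noErr) return result;

        // Initializing the graph initializes all of its constituent audio units.
        result = AUGraphInitialize(graph);
        if (result == noErr) *outGraph = graph;
        return result;
    }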




Last updated: 2010-01-20
