Use the DSound Audio Renderer


This section describes and demonstrates how to support DirectSound®, the audio component of Microsoft® DirectX™, in Microsoft DirectShow™ applications. DirectSound interfaces and classes support application control of a sound's origin relative to the user, which enables the programmer to simulate a three-dimensional audio environment. The DSound Audio Renderer filter supports the IAMDirectSound interface; use this interface to retrieve the filter's IDirectSound interface and sound buffers.

This section contains the following topics.

Using the DSound Audio Renderer in an Application
Configuring the Primary Sound Buffer

Programmers who want to use DirectSound objects and the DSound Audio Renderer filter in their applications should be familiar with how to initialize and create DirectShow objects, as well as with the DirectSound objects and interfaces. You should also be familiar with the basics of DirectShow and know how to create filter graphs and add filters to a filter graph. For more information about DirectShow, see the Getting Started section of the DirectShow SDK. The Microsoft DirectX SDK documentation covers DirectSound and its associated objects.

Using the DSound Audio Renderer in an Application

To support DirectSound, you first must instantiate the DSound Audio Renderer filter and add it to an existing filter graph. When you add a File Source (Async) or File Source (URL) filter to the graph and call the IGraphBuilder::Render method on its output pin, the filter graph automatically uses the DSound Audio Renderer for sound playback, except on Windows NT. On Windows NT, you will not get the DSound Audio Renderer filter unless you ask for it explicitly. When the filter graph is complete, you can use the IAMDirectSound interface methods to retrieve the DSound Audio Renderer filter's IDirectSound pointer and its primary and secondary sound buffers. The sound buffers must be in stereo to simulate three-dimensional playback; a monaural signal cannot produce this kind of sound placement.

To provide access to DirectSound correctly, you must follow these guidelines:

Add the DSound Audio Renderer filter to the filter graph before you render the source filter's output pin, so that the graph uses it as the audio renderer.
Make sure the primary sound buffer uses a stereo format; three-dimensional effects require stereo output.
Release the interfaces in the correct order: release the DirectSound interface and sound buffers first, the IAMDirectSound interface second to last, and the filter graph last.

The following steps and code fragments demonstrate how to add the DSound Audio Renderer filter to a filter graph and retrieve its DirectSound interface and sound buffers. Your application doesn't have to use an implementation identical to this example; it is meant to be as succinct and straightforward as possible. For the sake of brevity, the example has no error checking.

1) Include the amaudio.h header file; it contains the IAMDirectSound interface definition.

#include <amaudio.h>

2) Declare your function. This example function takes three parameters: the address of a DirectSound interface pointer and the addresses of the primary and secondary sound buffer pointers, all of which the function fills in.

HRESULT Setup(LPDIRECTSOUND *pDSound,
	LPDIRECTSOUNDBUFFER *pDSoundPrimary,
	LPDIRECTSOUNDBUFFER *pDSoundSecondary)

3) Declare pointers for the filter graph, the filter, and the IAMDirectSound interface, and then create the filter graph and the DSound Audio Renderer filter.


{
	IGraphBuilder	*Fg;
	HRESULT		hr;
	IBaseFilter	*pDSWaveRender;
	IAMDirectSound	*pAMDirectSound;

	// Initialize COM and create the filter graph manager
	CoInitialize(NULL);
	hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC,
			IID_IGraphBuilder, (LPVOID *)&Fg);

	// Create the DSound Audio Renderer filter
	hr = CoCreateInstance(CLSID_DSoundRender, NULL, CLSCTX_INPROC,
			IID_IBaseFilter, (void **)&pDSWaveRender);

4) Add the filter to the filter graph. The filter graph now holds its own reference to the filter, so you can release your local pointer as soon as you have finished using it (see the next step).


	hr = Fg->AddFilter( pDSWaveRender, NULL );

5) Retrieve the filter's IAMDirectSound interface. After the query succeeds, you no longer need the local filter pointer, so release it; the filter remains alive because the filter graph still holds a reference to it.


	hr = pDSWaveRender->QueryInterface( IID_IAMDirectSound,
						(void **)&pAMDirectSound );

	pDSWaveRender->Release();  // The filter graph still holds a reference to the filter

6) Add the source filter and render its output pin. As long as the source file contains audio data, the filter graph will connect it through to the DSound Audio Renderer you added in step 4. One way to perform this step is sketched below.
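The following fragment is only a sketch of this step; the file name C:\Example.avi is a placeholder, and your application supplies its own media file and error handling. It adds the source with IGraphBuilder::AddSourceFilter, retrieves the source filter's output pin, and renders that pin with IGraphBuilder::Render.


	// Sketch only: add a source filter for a placeholder file name
	IBaseFilter	*pSource;
	IEnumPins	*pEnumPins;
	IPin		*pPin;

	hr = Fg->AddSourceFilter(L"C:\\Example.avi", L"Source", &pSource);

	// Retrieve the source filter's first (output) pin
	hr = pSource->EnumPins(&pEnumPins);
	hr = pEnumPins->Next(1, &pPin, NULL);

	// Render the pin; the graph connects it through to the DSound Audio Renderer
	hr = Fg->Render(pPin);

	pPin->Release();
	pEnumPins->Release();
	pSource->Release();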

7) Retrieve the IDirectSound interface and the filter's primary and secondary sound buffers.


	hr = pAMDirectSound->GetDirectSoundInterface(pDSound);
	hr = pAMDirectSound->GetPrimaryBufferInterface(pDSoundPrimary);
	hr = pAMDirectSound->GetSecondaryBufferInterface(pDSoundSecondary);

8) Now that you have the interface pointers, you can use them as desired in your application. When your application no longer needs the interface pointers, release them in the correct order, as demonstrated in the following code fragment; for brevity, this example performs the cleanup at the end of the same function.


	(*pDSound)->Release();
	(*pDSoundPrimary)->Release();
	(*pDSoundSecondary)->Release();
	pAMDirectSound->Release();  // MUST BE SECOND TO LAST
	Fg->Release();		    // MUST BE LAST

	return NOERROR;
}

The preceding steps detail only how to retrieve the applicable DirectSound interfaces; they do not address positioning the sound itself. For documentation on these interfaces, their functionality, and how to use them in your programs, consult the DirectX documentation.

Configuring the Primary Sound Buffer

The format of the primary sound buffer determines whether you can use DirectSound three-dimensional audio effects; to do so, it must be stereo-capable. The following code fragment shows how to obtain a pointer to a DirectSound primary sound buffer and configure it correctly.


	WAVEFORMATEX  wfx;
	DWORD dw;
	IDirectSoundBuffer *pDSBPrimary;

	// Get the primary sound buffer interface
	HRESULT hr = pAMDirectSound->GetPrimaryBufferInterface(&pDSBPrimary);

	// Retrieve the buffer's format
	pDSBPrimary->GetFormat(&wfx, sizeof(wfx), &dw);

	// If the current signal is monaural (1 channel), you need to change it to stereo
	if (wfx.nChannels == 1) {
		  wfx.nChannels = 2;
		  wfx.nBlockAlign *= 2;
		  wfx.nAvgBytesPerSec *= 2;
		  pDSBPrimary->SetFormat(&wfx);
	}

You can now control the buffer and manipulate the sound playback as desired. For additional information on DirectSound, sound buffers, and creating three-dimensional sound effects, consult the DirectX SDK documentation.
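If the secondary buffer supports three-dimensional control, you can position its sound source through the IDirectSound3DBuffer interface. The following fragment is only a sketch: it assumes a hypothetical pDSSecondary pointer holding the secondary buffer retrieved earlier with IAMDirectSound::GetSecondaryBufferInterface, and it works only if that buffer was created with the DSBCAPS_CTRL3D capability, which may or may not be the case for the renderer's buffers.


	// Sketch: query the secondary buffer for its 3-D control interface.
	// pDSSecondary is assumed to hold the secondary buffer retrieved earlier;
	// the query succeeds only if the buffer has the DSBCAPS_CTRL3D capability.
	IDirectSound3DBuffer *pDS3DBuffer;
	HRESULT hr = pDSSecondary->QueryInterface(IID_IDirectSound3DBuffer,
					(void **)&pDS3DBuffer);
	if (SUCCEEDED(hr)) {
		// Place the sound source two units to the listener's right
		pDS3DBuffer->SetPosition(2.0f, 0.0f, 0.0f, DS3D_IMMEDIATE);
		pDS3DBuffer->Release();
	}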

© 1997 Microsoft Corporation. All rights reserved.