openAL sound on the iPhone

Hey all,

Now that the NDA is lifted and we can talk about iPhone code out in the open, I thought it might be nice to go over some of the problems I have encountered in my forays into the iPhone world and how I went about fixing them.

Currently I am working on an iPhone game. It is all openGLES based and uses openAL for sound. I think I am gonna talk about openAL today.

For now I am only going to be talking about sounds that are less than 30 seconds, so sound effects and short loops. Before you can think about playing sounds on the iPhone, they need to be in the right format. (Or rather, they should be: many of the audio toolbox methods will handle multiple formats, but if you put the sound in the right format to start with, the iPhone won't have to convert it at play time.)

So, pop open terminal and type this:

/usr/bin/afconvert -f caff -d LEI16@44100 inputSoundFile.aiff outputSoundFile.caf

what the hell does that do? you ask. It converts the file to a nice little-endian, 16-bit, 44,100 Hz sample rate format (generally saved with a .caf extension).

OK! now we have a nice .caf file in the proper format, we are ready to do something.

There are lots of ways to play sound on the iPhone: there is the 'easy' way, and then there are a few 'hard' ways. I am gonna touch on the easy way quickly and then move on to the openAL 'hard' way.

the quickest (and easiest) way to make the iPhone spit out some sound is to use the system sound services:

NSString* path = [[NSBundle mainBundle] pathForResource:@"soundEffect1" ofType:@"caf"];
NSURL * afUrl = [NSURL fileURLWithPath:path];
SystemSoundID soundID; // SystemSoundID is just a UInt32
AudioServicesCreateSystemSoundID((CFURLRef)afUrl, &soundID);
AudioServicesPlaySystemSound (soundID);

this works well for making your interface buttons click and simple UI interaction stuff. However, it is absolutely shite for anything more complicated than that (think: a game). It doesn't always play right away, and if you are trying to match up specific frames of your game with specific sound effects, then this method is basically useless. (I actually implemented my whole sound engine using the above style of code; then I got onto the phone, and every time a sound played it was either late by many frames or the whole thing would pause and wait for the audio toolbox to load the sound into the buffer. It sucked.)

For better control of the sound, you will need openAL, audio units, or an audio queue.

I decided to go with openAL so that my sound code could be kinda sorta portable; by learning openAL I would be able to use those skills on other platforms besides the iPhone. (And since I am a code-mercenary, I figured that openAL experience was more marketable than audioQueue experience. That, and I already have familiarity with openGL, openAL is very similar, and the audio unit and audio queue code is kinda ugly.)

So, this will be a super quick tutorial on openAL and the absolute bare minimum you need to do to accomplish static sound generated from openAL.

OpenAL is really quite straightforward. There are 3 main entities: the Listener, the Source, and the Buffer.

The Listener is you. Any sound the listener can 'hear' comes out the speakers. openAL allows you to specify where the listener is in relation to the sources, but for this example we don't care: we are going for bare-minimum static sound. Just keep in mind that there is a concept of a 'listener', and that you could move this object around if you wanted to do more complicated stuff, but I won't go into it in this post.

The Source: basically this is analogous to a speaker. It generates sound which the listener can 'hear'. Like the listener, you can move the sources around and get groovy positional effects. However, for this example we won't be doing that.

The Buffer: basically this is the sound that will be played. The buffer holds the raw audio data.

there are two other very important objects: the device and the context.
the device is the actual bit of hardware that will be playing the sound, and the context is the current 'session' that all these sounds are going to be played in. (You can think of it as the room that all the sources and the listener are in. Or it is the air that the sound travels through, or whatever. It is the context.)

How does this all work: (this is the bare minimum)

1) get the device
2) make a context with the device
3) put some data into a buffer
4) attach the buffer to a source
5) play the source

that is it! The above presumes that your implementation of openAL has decent defaults for the listener, so if you don't specify any listener or source positions then this will all work dandy. (It works just dandy on the iPhone in any case.)

so, let's look at some code:

// define these somewhere, like in your .h file
ALCcontext* mContext;
ALCdevice* mDevice;

// start up openAL
-(void)initOpenAL
{
	// Initialization 
	mDevice = alcOpenDevice(NULL); // select the "preferred device"  
	if (mDevice) { 
		// use the device to make a context
		mContext=alcCreateContext(mDevice,NULL); 
		// set my context to the currently active one
		alcMakeContextCurrent(mContext);  
	} 
}

Pretty straightforward really: get the 'default' device, then use it to build a context. Done.

Next: put data into a buffer, this is a bit more complicated:

First: you need to open the file in a nice audio-friendly way

// get the full path of the file
NSString* fileName = [[NSBundle mainBundle] pathForResource:@"neatoEffect" ofType:@"caf"];
// first, open the file
AudioFileID fileID = [self openAudioFile:fileName];

wait! what is that: openAudioFile: method?
here it is:

// open the audio file
// returns a big audio ID struct
-(AudioFileID)openAudioFile:(NSString*)filePath
{
	AudioFileID outAFID;
	// use an NSURL instead of a CFURLRef cuz it is easier
	NSURL * afUrl = [NSURL fileURLWithPath:filePath];
	
	// do some platform specific stuff.. 
#if TARGET_OS_IPHONE
	OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, kAudioFileReadPermission, 0, &outAFID);
#else
	OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, fsRdPerm, 0, &outAFID);
#endif		
	if (result != 0) NSLog(@"cannot open file: %@",filePath);
	return outAFID;
}

this is pretty simple: we get the file path from the main bundle, then send it off to this handy method, which checks the platform and uses the audio toolbox function AudioFileOpenURL() to generate an AudioFileID.

What’s next? Oh yes: get the actual audio data out of the file. To do this we need to figure out how much data is in the file:

// find out how big the actual audio data is
UInt32 fileSize = [self audioFileSize:fileID];

another handy method is needed:

// find the audio portion of the file
// return the size in bytes
-(UInt32)audioFileSize:(AudioFileID)fileDescriptor
{
	UInt64 outDataSize = 0;
	UInt32 thePropSize = sizeof(UInt64);
	OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
	if(result != 0) NSLog(@"cannot find file size");
	return (UInt32)outDataSize;
}

This uses the somewhat esoteric AudioFileGetProperty() to figure out how much sound data there is in the file and jams the answer into the outDataSize variable. Groovy, next!

Now we are ready to copy the data from the file into an openAL buffer:


// this is where the audio data will live for the moment
unsigned char * outData = malloc(fileSize);

// this where we actually get the bytes from the file and put them 
// into the data buffer
OSStatus result = noErr;
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
AudioFileClose(fileID); //close the file

if (result != 0) NSLog(@"cannot load effect: %@",fileName);

ALuint bufferID; // openAL wants an ALuint here, which is what alGenBuffers expects
// grab a buffer ID from openAL
alGenBuffers(1, &bufferID);
	
// jam the audio data into the new buffer
alBufferData(bufferID,AL_FORMAT_STEREO16,outData,fileSize,44100); 

// save the buffer so I can release it later
[bufferStorageArray addObject:[NSNumber numberWithUnsignedInteger:bufferID]];

OK, lots went on here (well, not really). We made some room for the data, then used the AudioFileReadBytes() function from the audio toolbox to read the bytes from the file into the awaiting block of memory. The next bit is slightly more interesting: we call alGenBuffers() to make us a valid bufferID, then we call alBufferData() to load the awaiting data blob into the openAL buffer.

Here I have just hardcoded the format and the frequency. If you use the afconvert command at the top of the post to generate your audio files, then you will know what their format and sample rate are. However, if you want to be able to handle any audio format or frequency, then you will need to build some methods similar to audioFileSize: but using kAudioFilePropertyDataFormat to get the format, then convert it to the proper AL_FORMAT, and something even more byzantine to figure out the frequency. I am lazy, so I just make sure my files are formatted properly.

Next I put the buffer ID into a nice NSMutableArray for later reference. You can do with that ID whatever you want.

OK, now we have a buffer! neato. Time to hook it to the source.

ALuint sourceID; // again, ALuint is what the openAL calls expect

// grab a source ID from openAL
alGenSources(1, &sourceID); 

// attach the buffer to the source
alSourcei(sourceID, AL_BUFFER, bufferID); 
// set some basic source prefs
alSourcef(sourceID, AL_PITCH, 1.0f);
alSourcef(sourceID, AL_GAIN, 1.0f);
if (loops) alSourcei(sourceID, AL_LOOPING, AL_TRUE); // 'loops' is a BOOL you supply

// store this for future use
[soundDictionary setObject:[NSNumber numberWithUnsignedInt:sourceID] forKey:@"neatoSound"];	

// clean up the buffer
if (outData)
{
	free(outData);
	outData = NULL;
}

Much like the buffer, we need to get a valid sourceID from openAL. Once we have that, we can connect the source and the buffer. Finally we throw in a few basic source settings just to make sure it is all set up right. If we want it to loop, then we need to set AL_LOOPING to true; if not, the default is not to loop, so ignore it. Then I store the ID in a nice dictionary so I can call it out by name.

lastly, clean up our temporary memory.

So close now! everything is all ready to go, now we just need to play the damn thing:

// the main method: grab the sound ID from the library
// and start the source playing
- (void)playSound:(NSString*)soundKey
{ 
	NSNumber * numVal = [soundDictionary objectForKey:soundKey];
	if (numVal == nil) return;
	NSUInteger sourceID = [numVal unsignedIntValue];	
	alSourcePlay(sourceID);	
} 

that's it. alSourcePlay(). Easy. If the sound doesn't loop, it will stop of its own accord when it is all done. If it is looping, or you want to stop it early:

- (void)stopSound:(NSString*)soundKey
{ 
	NSNumber * numVal = [soundDictionary objectForKey:soundKey];
	if (numVal == nil) return;
	NSUInteger sourceID = [numVal unsignedIntValue];	
	alSourceStop(sourceID);	
} 

That is basically the quickest and simplest way to get sound out of the iPhone using openAL. (that I can figure out anyway).

Lastly, when you are done with everything, be nice and clean up:

-(void)cleanUpOpenAL:(id)sender
{
	// delete the sources
	for (NSNumber * sourceNumber in [soundDictionary allValues]) {
		ALuint sourceID = [sourceNumber unsignedIntValue];
		alDeleteSources(1, &sourceID);
	}
	[soundDictionary removeAllObjects];
	
	// delete the buffers
	for (NSNumber * bufferNumber in bufferStorageArray) {
		ALuint bufferID = [bufferNumber unsignedIntValue];
		alDeleteBuffers(1, &bufferID);
	}
	[bufferStorageArray removeAllObjects];
	
	// destroy the context
	alcDestroyContext(mContext);
	// close the device
	alcCloseDevice(mDevice);
}

One note: in a real implementation you will probably have more than one source. (I have a source for each buffer, but I only have about 8 sounds, so this is not a problem.) There is an upper limit on the number of sources you can have. I don't know the actual number on the iPhone, but it is probably something like 16 or 32. The way to deal with this is to load all your buffers, then dynamically assign those buffers to the next available source that isn't already playing something else.

Groovy, hopefully this will be helpful to someone. I had a bit of a hard time finding a good basic sample to get myself started, so I made this one by going through the openAL programmer's guide and just doing the very minimum.

Cheers!
-b

EDIT: clever reader Nathan points out that I forgot to include:
AudioFileClose(fileID);
in my sample code! Whoops! good catch, it is now fixed in the tutorial :-)

EDIT: this page continues to be the most visited page on my site, so: yay! Unfortunately, if you get here via google, then it can be hard to find other good articles about OpenAL on my site. So, to be servicey, if you read this article, you might also like:

This entry was posted in code, iPhone, openAL. Bookmark the permalink.

55 Responses to openAL sound on the iPhone

  1. Pingback: More openAL tidbits for iPhone at benbritten.com

  2. Pingback: restarting openAL after application interruption on the iPhone at benbritten.com

  3. Pingback: links for 2009-02-27 | /dev/random

  4. frankmail007 says:

    Hi Ben,

    I read your 2 articles about OpenAL. Thanks for your efforts, I learned something from them. I'm writing a program similar to FingerPiano which plays some music notes. I have 32 sample files, one for each note. What I did is load these 32 wave files into buffers and attach them to different sources. Each sample file is less than 2s long. To mimic an instrument, I'm trying to add a "release" effect by adjusting the gain value through a timer as each note is about to stop. My problem is I can hear some pop/clicking noise if I play 2 notes close together. The wave files should have no problem. In your expert opinion, what could be the reasons?

    By the way, according to your article, the upper limit for the number of sources is about 16 to 32. I have reached the upper limit already. So if I change my program to dynamically create sources in real time for note playing, would that be a performance problem? In other words, is creating a source and attaching a buffer expensive?

    Thank you so much,

    Frank

  5. Ben says:

    Hey Frank,

    A few things:

    I am not sure if I mention it in my newer post about openAL or not, but I did find out that the iPhone openAL implementation allows for a maximum of 32 concurrent sources (ie you can play up to 32 sounds all at one time). I built a 'drumkit' style app that could be hammered on, and some of the notes were multiple-second samples, so it was pretty easy to get to the 32 limit. Anyhow, I did not have any trouble with clicking or popping with my sounds (even when they were played close together, and even while adjusting the gain and pitch during playback).

    In your code, at startup, create 32 sources, and create however many buffers you need. Then at play time (ie right after someone has hit a button to play something) you can do a quick search through the list of 32 sources and find the next one that is not playing something already and attach the desired buffer to it and play.

    This is how I did my ‘drumkit’ style app. You should avoid at all costs creating new buffers and new sources during playback. (creating new sources is cheaper than new buffers, and assigning a buffer to a source is very cheap, so you should have all your buffers loaded up front)

    If that doesn't help, then you should start to decrease the sample rate of your sound files.

    are they all 44.1khz? (if you are trying to play 32 samples all at once and they are all high sample rates then you might run into mixer issues, not to mention memory issues and data throughput issues)

    Here is what I would do: use the afconvert command line program to convert all of your sound files into .caf format and make them all identical. (I would go with something like mono 16kHz; that way you have pretty decent quality, but the files won't be too huge. If 16kHz isn't high enough quality for your ears, then go up to 22kHz, and so on.)

    The last thing you can do is to set your mixer rate to the same rate as your sound samples.

    // set the output rate value to match my sound sample rate
    alcMacOSXMixerOutputRateProc(OPENAL_MIXER_OUTPUT_RATE);

    you will need to define that proc earlier in the code with something like: (this is basically ripped directly from apple’s sound engine)

    typedef ALvoid AL_APIENTRY (*alcMacOSXMixerOutputRateProcPtr) (const ALdouble value);
    ALvoid alcMacOSXMixerOutputRateProc(const ALdouble value)
    {
        static alcMacOSXMixerOutputRateProcPtr proc = NULL;

        if (proc == NULL) {
            proc = (alcMacOSXMixerOutputRateProcPtr) alcGetProcAddress(NULL, (const ALCchar*) "alcMacOSXMixerOutputRate");
        }

        if (proc)
            proc(value);

        return;
    }

    cheers!
    -b

  6. frankmail007 says:

    Hi Ben,

    Thanks for your prompt reply. I'll try your suggestion and rewrite my sound engine. Actually I'm using more than 32 sources, because some other sound effects are used besides the notes. Sometimes I encountered crashes for unknown reasons; that could be because I used too many sources. I play at most 10 sources at the same time, so I'll create 10 sources at the beginning and assign buffers to whichever one is not in use at run time.

    Thanks again,

    Frank

  7. rpstro02 says:

    Hey Ben,
    Thanks for posting this. It’s really helping me to understand openAL. However, I’m still having trouble getting sound to play even though I think I’ve followed your instructions exactly. My code is below. Am I missing something obvious?

    I’m just trying to press a button to play a sound. Most of my setup is done in viewDidLoad:

    #import “ExperimentWithOpenALSimplifiedViewController.h”
    #import
    #import

    @implementation ExperimentWithOpenALSimplifiedViewController
    @synthesize soundDictionary;
    @synthesize bufferStorageArray;

    -(IBAction)playSample:(id)sender{
    NSLog(@”buttonpressed”);
    [self playSound:@"neatoSound" ];
    }

    - (void)viewDidLoad {

    NSMutableArray *bufferStorageArray = [NSMutableArray arrayWithCapacity:16];
    NSMutableDictionary *soundDictionary = [NSMutableDictionary dictionaryWithCapacity:16];

    [self initOpenAL];

    // get the full path of the file
    NSString* fileName = [[NSBundle mainBundle] pathForResource:@”NewCymbal” ofType:@”caf”];
    // first, open the file
    AudioFileID fileID = [self openAudioFile:fileName];

    // find out how big the actual audio data is
    UInt32 fileSize = [self audioFileSize:fileID];

    // this is where the audio data will live for the moment
    unsigned char * outData = malloc(fileSize);

    // this where we actually get the bytes from the file and put them
    // into the data buffer
    OSStatus result = noErr;
    result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);

    if (result != 0) NSLog(@”cannot load effect: %@”,fileName);

    NSUInteger bufferID;
    // grab a buffer ID from openAL
    alGenBuffers(1, &bufferID);
    if((alGetError()) != AL_NO_ERROR) {
    printf(“Error!”);
    }

    // jam the audio data into the new buffer
    alBufferData(bufferID,AL_FORMAT_MONO16,outData,fileSize,44100);
    if((alGetError()) != AL_NO_ERROR) {
    printf(“Error!”);
    }

    // save the buffer so I can release it later
    [bufferStorageArray addObject:[NSNumber numberWithUnsignedInteger:bufferID]];

    NSUInteger sourceID;

    // grab a source ID from openAL
    alGenSources(1, &sourceID);

    // attach the buffer to the source
    alSourcei(sourceID, AL_BUFFER, bufferID);
    // set some basic source prefs
    //alSourcef(sourceID, AL_PITCH, 1.0f);
    alSourcef(sourceID, AL_GAIN, 1.0f);

    //if (loops) alSourcei(sourceID, AL_LOOPING, AL_TRUE);

    // store this for future use
    [soundDictionary setObject:[NSNumber numberWithUnsignedInt:sourceID] forKey:@”neatoSound”];

    // clean up the buffer
    if (outData)
    {
    free(outData);
    outData = NULL;
    }

    [super viewDidLoad];
    }

    -(void)initOpenAL
    {
    // Initialization
    mDevice = alcOpenDevice(NULL); // select the “preferred device”
    if (mDevice) {
    // use the device to make a context
    mContext=alcCreateContext(mDevice,NULL);
    // set my context to the currently active one
    alcMakeContextCurrent(mContext);
    }
    }

    - (void)playSound:(NSString*)soundKey
    {
    NSLog(@”Play Sound!”);
    NSNumber * numVal = [soundDictionary objectForKey:soundKey];
    if (numVal == nil) return;
    NSUInteger sourceID = [numVal unsignedIntValue];
    alSourcePlay(sourceID);
    if((alGetError()) != AL_NO_ERROR) {
    printf(“Error!”);
    }
    }

    - (void)stopSound:(NSString*)soundKey
    {
    NSNumber * numVal = [soundDictionary objectForKey:soundKey];
    if (numVal == nil) return;
    NSUInteger sourceID = [numVal unsignedIntValue];
    alSourceStop(sourceID);
    }

    -(AudioFileID)openAudioFile:(NSString*)filePath
    {
    AudioFileID outAFID;
    // use the NSURl instead of a cfurlref cuz it is easier
    NSURL * afUrl = [NSURL fileURLWithPath:filePath];

    // do some platform specific stuff..
    #if TARGET_OS_IPHONE
    OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, kAudioFileReadPermission, 0, &outAFID);
    #else
    OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, fsRdPerm, 0, &outAFID);
    #endif
    if (result != 0) NSLog(@”cannot openf file: %@”,filePath);
    return outAFID;
    }

    // find the audio portion of the file
    // return the size in bytes
    -(UInt32)audioFileSize:(AudioFileID)fileDescriptor
    {
    UInt64 outDataSize = 0;
    UInt32 thePropSize = sizeof(UInt64);
    OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
    if(result != 0) NSLog(@”cannot find file size”);
    return (UInt32)outDataSize;
    }

    -(void)cleanUpOpenAL:(id)sender
    {
    // delete the sources
    for (NSNumber * sourceNumber in [soundDictionary allValues]) {
    NSUInteger sourceID = [sourceNumber unsignedIntegerValue];
    alDeleteSources(1, &sourceID);
    }
    [soundDictionary removeAllObjects];

    // delete the buffers
    for (NSNumber * bufferNumber in bufferStorageArray) {
    NSUInteger bufferID = [bufferNumber unsignedIntegerValue];
    alDeleteBuffers(1, &bufferID);
    }
    [bufferStorageArray removeAllObjects];

    // destroy the context
    alcDestroyContext(mContext);
    // close the device
    alcCloseDevice(mDevice);
    }

    ———-
    And here’s my .h file.

    #import
    #import
    #import

    @interface ExperimentWithOpenALSimplifiedViewController : UIViewController {
    ALCcontext* mContext;
    ALCdevice* mDevice;
    NSMutableDictionary *soundDictionary;
    NSMutableArray *bufferStorageArray;
    }
    @property (nonatomic, retain) NSMutableDictionary *soundDictionary;
    @property (nonatomic, retain) NSMutableArray *bufferStorageArray;

    -(IBAction)playSample:(id)sender;
    @end

  8. rpstro02 says:

    My imports didn’t make it in my first post. Here they are:

    .m file:

    #import “AudioToolbox/AudioToolbox.h”
    #import “CoreAudio/CoreAudioTypes.h”

    .h file
    #import “UIKit/UIKit.h”
    #import “OpenAL/al.h”
    #import “OpenAL/alc.h”

    Anyway, everything compiles, but when I press the button I get no sound.

  9. Ben says:

    Hey rpstro02,

    While I am generally happy to give free (and sometimes very detailed) advice, you are going to have to try to narrow it down for me a bit. In other words, I don't have the time to read through your code, load it into xcode, and debug it for you (unless you are paying me by the hour :-).

    All that said, if everything seems to be working (ie no openAL errors and whatnot) then a few big things to check:

    make sure that your sound file is converted to the proper format that you are specifying to openAL. (ie read up on afconvert and make sure that you are using it right)

    make sure you are running on an actual device (with the volume up). There are some who claim to have openAL running on the simulator, but I have never gotten it to work.

    make sure the buffer has actual data in it. (ie dump the nsdata to the console to see if there is anything there etc…)

    Good Luck!
    -b

  10. rpstro02 says:

    Hey Ben,
    Thanks for your quick reply, and I understand if you don’t want to scan through my code. Actually I was finally able to debug it. Basically a stupid error on my part.

    Anyway, I’ve been trying to trigger a sound every 0.1 seconds. I’ve never been able to play the sound exactly on time every time. There’s a slight delay every few triggers that is noticeable. Even with your openAL bare minimum code. I was just going to ask if you knew of any ways to optimize it even further? If not, I think I’m going to have to try remoteIO.

  11. Ben says:

    Hey rpstro02,

    I am glad you found your bug :-) (see, you so totally didn’t need me :-)

    anyhow, as far as the 0.1 seconds thing, openAL should be able to do that no problem, however….

    make sure that you are preloading all your sounds, and re-using your buffers and sources. this is the biggest thing. if you are generating new buffers or sources at play time, then you will definitely see performance issues.

    remember you only get 32 concurrent sounds, so if your sounds are more than 3.2 seconds long, then you will get a blip when you run out of sources.

    if you are playing sounds 10 times a second, presumably to generate a hum or a tone, maybe you should be using openAL streaming instead (and filling up the buffer manually)?

    also you might try reducing the sample rate of your sounds. (Looks like from your code you are using 44.1k mono. (I did actually scan it briefly :-)) If you are playing these sounds 10 times a second, you could probably get away with much lower sample rates (like 8k or 16k). It is possible that with the 44.1k sample rate sounds, you are simply overwhelming openAL with too much data trying to play them at that rate.

    and lastly, make sure your timer is firing at the right times. Is there something else in the app that might be causing a delay?

    Good luck!
    -b

  12. chillidesign says:

    Hi Ben,

    Thanks for this great post – I *was* one of those that just plonked the SoundEngine code in and didn’t work through it. I’m now cleaning up my code and implementing just the stuff I need.

    I'm having trouble with iPod audio playback in the App though. I can happily play back iPod audio (using AudioSessionCategory_AmbientSound) and play sound effects in my App, but the volume of the iPod audio is too loud compared to my sound effects. Is there any way to lower the iPod audio slightly relative to my sound effects (as the global hardware volume adjusts both)? Any pointers would be gratefully received.

    Thanks, Craig

  13. Ben says:

    Hey Craig,

    there is no way (that I am aware of) to affect the iPod music volume. However, you can turn the gain up on your sounds. (using AL_GAIN on a source), and you can go as big as you want (well, until it starts to clip)

    Most likely your sound samples are just recorded at a lower volume, so give them a bit of gain and you should be good :-)

    Cheers!
    -B

  14. Ben says:

    I've been plagiarized! (I must be bigtime now!)

    http://www.gehacktes.net/2009/02/iphone-programming-part-5-audio/

    is nearly word-for-word of much of this post (edited down, because I can go on and on :-)

    And not even a courtesy link! hehe, oh well, I am glad that this information is getting out there :-)

    Cheers!
    -B

  15. brutaldeath says:

    Hi all!

    First of all, I'd like to thank Ben for his post. I started developing for the iPhone only a week ago, and I need to work with openAL, but I'm completely lost.

    rpstro02, can you say what your error was? It was impossible for me to find it… And I haven't yet found a complete example of openAL running on the iPhone simulator.

    Thanks in advance.

  16. Ben says:

    Hey brutal,

    I wouldn’t spend lots of time getting openAL to run properly on the simulator. I have heard that it is possible, but I never got it to work, and ultimately the code will be different than the code on the iPhone, so you are better off just testing your sound stuff on the device directly.

    Cheers!
    -B

  17. brutaldeath says:

    Hi Ben,

    Thanks for your quick answer. I'm programming a small application right now using openAL. I have tested a couple of simple things in the simulator (playing a sound, moving the listener and so on) and it's working.

    See you!

  18. Tirex says:

    Hi Ben.
    Thanks for the good lesson!
    I am using your sample and the oalTouch sample for working with OpenAL, but both have small memory leaks when alcMakeContextCurrent(mContext) is called. Tested on the 2.1/2.2.1/3.0b5 SDKs.
    Is it a bug in my code?

  19. Hi Ben,

    After struggling with AVAudioPlayer I completely re-wrote the sound in my app following your instructions. Works a treat! Also works fine in the simulator (which AVAudioPlayer never did).

    Many thanks

    Gavin

  20. Ben says:

    Hey Gavin!

    Glad to hear it! Yeah, I had a similar experience. AVAudioPlayer is great if you can get it working, but I find that openAL is just all-around easier to understand and to code. And easier means fewer bugs (well, usually :-) so I use openAL for all my stuff these days.

    Cheers!
    -B

  21. MarsMan says:

    Thanks for sharing your openAL wisdom with us Ben!

    Is there an easy method of stopping all currently playing sounds at once?

  22. Ben says:

    Hey MarsMan,

    the best way (or at least the way that I would probably do it) would just be to step through all my active sources and stop them one at a time.

    However, if you need to pause the whole OpenAL context for some reason (like for an interruption) then you can use this:

    alcSuspendContext(mContext); // this will ‘pause’ the entire context

    then to restart:

    alcProcessContext(mContext); // this will ‘unpause’ the entire context

    Cheers!
    -B

  23. bluenight says:

    Hi Ben,

    I ‘d like to ask you about sound effect.
    In openAL, we can do the “pitch shift” effect easily:
    alSourcef(sourceID, AL_PITCH, 1.0f);

    As I understand it, in sound processing there are 2 main parts to a sound effect: pitch shift and timbre (timbre covers strength & shift).
    Does openAL support timbre effects?
    If openAL doesn't support timbre already, what is the algorithm for changing the timbre of audio?

  24. Ben says:

    Hey Bluenight,

    It would be great if OpenAL offered more built-in sound processing like timbre, but it only really provides volume and pitch (both of which you can affect directly, or by moving the sounds around relative to the listener). (And I guess speed: you could theoretically play back the sounds slower or faster than their sample rate.)

    If you need to be able to modify the timbre of a sound, then you will have to process the buffer directly with your own algorithms. As far as HOW to change the timbre, I am not entirely sure. Timbre is a fairly complex concept, and I imagine there are many different ways to affect the timbre of a sound, so I would suggest having a look around, or trying some stuff out.

    Cheers!
    -Ben

  25. bluenight says:

    Hey Ben,
    Thank you very much for your help. It is really useful.

  26. aliparr says:

    Hi Ben,

    Firstly thank you for this post, it’s taken me from being totally in the dark about OpenAL on iPhone to being only mildly in the shade :)

    If you don’t mind, I have a couple of questions for you.

    In my current project (a game), all asset data is loaded into memory in one (lovely asynchronous) go. Textures etc are then constructed by referencing the raw chunks of memory. This obviously has massive benefits for runtime performance, as everything is pre-processed upfront and ready to go. I see in the comments etc this is also something you recommend.

    So now we move onto audio, I obviously want to follow the same mechanism.

    I guess that I can pass my raw audio data to alBufferData and achieve the same thing. The question is how to extract the appropriate arguments out of the .caf (most importantly the offset to the start of the raw audio data, skipping the header, since the other values like format can be hard-coded). My tools implementation is on Windows (don’t ask!), so parsing the .caf files using AudioFileOpenURL etc. isn’t really open to me in any sensible way.

    What I’m really asking, therefore, is: do you know of, or have to share, any sensible C/C++ code that can take a .caf and extract the appropriate arguments I need to pass to alBufferData?

    Also, at this point, is .caf even the best format for the job? If I’m more likely to find a robust .wav header parser, will that do the job exactly the same? My understanding is that .caf is just a container, in this instance for what is essentially .wav-style audio data anyway.

    Am I even talking sense?

    Anyway, many thanks again, you’ve saved me bags of time on this already!

    Cheers, Alistair

  27. Ben says:

    Alistair,

    You say that your tools are all on the PC; are you developing for the iPhone? (And if so, why can’t you use AudioFileOpenURL?)

    If you are not developing for the iPhone and this is just a general OpenAL question, then I can only answer parts of your question with any authority:

    .caf is a good format, but as you say it is basically just a wrapper, similar to .wav. As far as OpenAL is concerned, it just wants buffers full of sound data, so if you can find a nice wav parser, then I say go for it. You can also find some decent ogg parsers (that happen to run on the iPhone as well), save your sounds as ogg files, and decompress them into OpenAL buffers, and that works just fine.

    You can get libogg, which is just a few little src files and compile it right into your engine pretty simply. (start here: http://www.xiph.org/ ).

    If you are doing iPhone stuff, I would recommend using the built-in audio system just for performance reasons, but if for some reason you are unable to do that, then I would say go with libogg, as it is all nice and open and pretty simple to get running. (And it works fine with OpenAL, and works fine on the iPhone.)

    Cheers!
    -Ben
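
On the parser question above: OpenAL only needs the channel count, bit depth, sample rate, and a pointer and length for the raw samples, so a canonical PCM .wav can be picked apart with a few dozen lines of portable C. The sketch below is an editorial illustration, not code from the post (the names `parseWav` and `WavInfo` are made up); it walks the RIFF chunks and assumes uncompressed little-endian PCM:

```c
#include <stdint.h>
#include <string.h>

/* The values alBufferData wants: format info, a pointer into the
 * file image where the samples begin, and the sample byte count. */
typedef struct {
    uint16_t channels;
    uint16_t bitsPerSample;
    uint32_t sampleRate;
    const uint8_t *data;   /* points into the caller's buffer */
    uint32_t dataSize;
} WavInfo;

static uint32_t readLE32(const uint8_t *p) {
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
static uint16_t readLE16(const uint8_t *p) {
    return (uint16_t)(p[0] | (p[1] << 8));
}

/* Walk the RIFF chunks; returns 0 on success, -1 on a malformed file. */
int parseWav(const uint8_t *buf, uint32_t len, WavInfo *out) {
    if (len < 12 || memcmp(buf, "RIFF", 4) || memcmp(buf + 8, "WAVE", 4))
        return -1;
    uint32_t pos = 12;
    int haveFmt = 0;
    while (pos + 8 <= len) {
        uint32_t chunkSize = readLE32(buf + pos + 4);
        if (pos + 8 + chunkSize > len) return -1;  /* truncated chunk */
        if (!memcmp(buf + pos, "fmt ", 4) && chunkSize >= 16) {
            out->channels      = readLE16(buf + pos + 10);
            out->sampleRate    = readLE32(buf + pos + 12);
            out->bitsPerSample = readLE16(buf + pos + 22);
            haveFmt = 1;
        } else if (!memcmp(buf + pos, "data", 4)) {
            if (!haveFmt) return -1;
            out->data = buf + pos + 8;
            out->dataSize = chunkSize;
            return 0;
        }
        pos += 8 + chunkSize + (chunkSize & 1); /* chunks are word-aligned */
    }
    return -1;
}
```

A file parsed this way with 1 channel and 16 bits per sample would map to AL_FORMAT_MONO16, with `info.data`, `info.dataSize`, and `info.sampleRate` supplying the remaining alBufferData arguments.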

  28. aliparr says:

    Ben,

    Wow, thanks for the prompt reply and thanks for attempting to make sense of my stream of random thoughts :)

    Yes, I am developing for the iPhone, but yes, my tools are on PC. There are historical reasons behind this, mainly that I’m new to Apple development but comfortable with Windows (same with my co-developer). Plus we’re attempting to keep our ‘engine’ as cross-platform as possible; in fact this current project started life as a Nintendo DS homebrew.

    I’ll try and clarify. All our asset data is bundled up into a single ‘package’. All textures, meshes, scripts, font pages (and now hopefully audio) related to a single screen or level are bundled into a single file that is loaded in one go at runtime. The .caf is not available as a standalone file in the iPhone application bundle, so I don’t have anything I can point AudioFileOpenURL to. The data is just in memory.

    Thank you for confirming my point that .caf is just a container for audio data. By the time we get to OpenAL it just wants the raw sound buffer, so I guess .wav, with a nice Windows-side parser for the arguments, is the way to go.

    What is iPhone performance like playing a large, background music type file using OpenAL and ogg? I’m keen to keep things as fast as possible (our game is quite processor intensive), so if playing compressed audio is better done elsewhere using another API, no doubt with another set of headaches, then so be it…

    Thanks again. I’m buying SnowFerno!

  29. Ben says:

    Hey Alistair,

    I would be interested to know what your work pipeline is like to get code from your PC tools onto a phone.

    As far as big sound files vs. little sound files: for the big background-music kind of thing, I usually just go MP3 and play it via the built-in hardware decoder with AVAudioPlayer.

    But for the smaller sound effects I use uncompressed cafs and play them via OpenAL.

    In the rare circumstances that I need to play multiple big files all at once, I use OpenAL and streaming buffers.

    Cheers!
    -B

  30. bluenight says:

    Hi Ben,

    I’ve tried to use afconvert for converting .caf to .mp3.
    It is successful:
    /usr/bin/afconvert -f caff -d LEI16@44100 test.caf test.mp3
    But as you mentioned, it runs in the Terminal.
    Are there any functions to convert audio file types in the iPhone SDK?

    Cheers! Bluenight.

  31. Ben says:

    Hey Bluenight,

    There are not any native methods in the SDK to compress or uncompress audio, and to be honest I would rarely recommend doing any sort of real-time decompression in your app (with the exception of the hardware decompressor, of course). (This after I just told Alistair to do it with libogg, but I think he is in a bit of a unique situation; I would normally just recommend using either straight MP3s played via the hardware decoder or uncompressed cafs played via OpenAL.)

    Using the phone to decompress audio is a slow, slow process. It takes quite a bit of processor time to decompress most file formats on the fly, and your performance will suffer.

    Cheers!
    -B

  32. bluenight says:

    Hi Ben,
    Thank you for your answers. You are right, I have used the method you recommended. It is really much simpler. :)

    There is one trouble about openAL, could you help me :(
    We know there is a function in OpenAL to change the pitch, but it doesn’t actually make the change on the audio data.
    I mean, after this function
    alSourcef(sourceID, AL_PITCH, 1.0f);
    how could we get the audio data with the pitch changed?
    Is that supported by OpenAL?

    Thank Ben.

  33. Ben says:

    Hey Bluenight,

    Both the volume and the pitch only affect the sound of the played buffers, not the actual buffer data.

    I am not sure if OpenAL supports data transcoding like you are talking about, but pitch shifting is not a terribly complicated algorithm, so you could probably find some methods to pitch shift your data buffers with a quick google search.

    Cheers!
    -B
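
Expanding on the reply above: AL_PITCH is effectively a resampler (a pitch of 2.0 reads the buffer twice as fast), so the same effect can be baked into the data offline by resampling it yourself. Below is a minimal linear-interpolation sketch for mono 16-bit data; the helper name is illustrative, and note that, like AL_PITCH, this changes duration as well as pitch (duration-preserving pitch shifting needs a more involved algorithm):

```c
#include <stdint.h>
#include <stdlib.h>

/* Resample a mono 16-bit buffer by `pitch` (2.0 = up an octave,
 * 0.5 = down an octave).  Like AL_PITCH, this changes duration too:
 * the output holds inCount / pitch samples.  Caller frees the result. */
int16_t *pitchShiftResample(const int16_t *in, size_t inCount,
                            float pitch, size_t *outCount) {
    size_t n = (size_t)(inCount / pitch);
    int16_t *out = malloc(n * sizeof *out);
    if (!out) return NULL;
    for (size_t i = 0; i < n; i++) {
        float srcPos = i * pitch;               /* fractional read position */
        size_t i0 = (size_t)srcPos;
        size_t i1 = (i0 + 1 < inCount) ? i0 + 1 : inCount - 1;
        float frac = srcPos - (float)i0;
        /* linear interpolation between neighbouring samples */
        out[i] = (int16_t)(in[i0] * (1.0f - frac) + in[i1] * frac);
    }
    *outCount = n;
    return out;
}
```

The shifted buffer can then be handed to alBufferData like any other sample data.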

  34. bluenight says:

    Hi Ben,

    Thank you so much.

  35. pubudu says:

    Hi Ben,
    Your tutorial was really helpful in understanding openal, thanks for that.
    I’m trying to play synthesized sounds on the iPhone but couldn’t figure out how. For example, I want to play a simple tone (a sinusoid), so I can make an array of numbers (the samples of the sinusoid). Do you know how I can play it on the iPhone?
    All the tutorials talk about playing sounds that are in sound files like .caf, .aif etc., but none talks about playing synthesized sounds like that.
    Any help is greatly appreciated!!!

    P

  36. hanstutschku says:

    Hi Ben, I learned a lot from your tutorial.
    I’m trying now to use OpenAL to play back long sound files – never more than 2 at a time. As I need pitch control I can’t use the AVAudioPlayer, and I can’t load the entire files into memory either.
    I searched the web without success for an example of audio streaming in OpenAL. You mentioned that you use streaming for longer files. Have you come across some good tutorials for that, or could you provide an example? Thanks a lot, Hans

  37. Ben says:

    Hey Hans,

    I finally got it done (streaming tutorial): you can check it out here:

    http://benbritten.com/2010/05/04/streaming-in-openal/

    Cheers!
    -Ben

  38. Pingback: coders» Blog Archive » iPhone Programming Part 5 : Audio

  39. Pingback: Tweets that mention openAL sound on the iPhone | benbritten.com -- Topsy.com

  40. Pingback: iPhone Programming Part 4 : Constructor / Destructor | Anima Entertainment GmbH

  41. Pingback: links for 2010-07-08 « andrewskinner.name

  42. Pingback: iPhone Programming Part 4 : Audio | Anima Entertainment GmbH

  43. Pingback: Bear Code » Blog Archive » Libraries which I use on a regular basis

  44. Pingback: iOS Development Link Roundup: Part 1 | iOS/Web Developer's Life in Beta

  45. Pingback: iOS SDK 4.3 OpenAL alGenSources results in AL_INVALID_OPERATION | taking a bite into Apple

  46. Pingback: Sounding Things Out » Claymore Games

  47. Pingback: Why are my audio sounds not playing on time? - Programmers Goodies

  48. Pingback: Why are my audio sounds not playing on time? | taking a bite into Apple

  49. Priyanka says:

    Can we do something similar in Android?

