Streaming in OpenAL

I have written a few posts on OpenAL that have been very popular:

One thing that people keep asking me is how to play back gigantic files in OpenAL. Things like songs or audio books. (This post has been slowly growing over the past few months, and I am finally going to put the final bits in and post it!)

The answer to this problem (in case you didn’t read the title of the post) is streaming.

First off, let's define what I really mean when I say streaming. In all of the above code samples, the sound buffers were fully resident in memory. This works great for smaller sounds like button clicks and sword clangs for games and whatnot. However, for big files like full-quality songs, and really anything over about a dozen seconds long, loading it all into memory (especially on the iPhone) becomes a bit of a problem.

To solve this we read the data off the disk in chunks, filling small buffers one at a time and then playing them back one after the other. All the while, in the background, we are removing the used buffers and refilling them with new data, so that we only ever have a few seconds of sound in memory at any one time.

Here is a crappy diagram that I spent way too much time on, and it is still crappy:

OK, at the top we have the 'standard' way to do things (the way described in all those posts linked above). Basically:

  • Step 1: Load the sound data from a file into an OpenAL buffer
  • Step 2: Connect the OpenAL buffer to an OpenAL source with something like: alSourcei(sourceID, AL_BUFFER, bufferID);
  • Step 3: Start the source playing with alSourcePlay(sourceID);

Pretty simple.

Streaming is a bit more complicated, but still pretty simple (in concept). The first three steps are really very similar to the ‘standard’ way above.

  • Step A: Load in a few chunks of the big file into all the waiting OpenAL buffers.
  • Step B: Queue up all the newly filled buffers into a single OpenAL source via alSourceQueueBuffers(sourceID, 1, &bufferID);
  • Step C: Start the source playing with alSourcePlay(sourceID); This will begin to consume the queued up buffers.
  • Step D: In a background thread, check to see if there are any used buffers, and if so, load the next chunk of the sound file into the used buffer
  • Step E: Queue up this newly filled buffer on the source with alSourceQueueBuffers(sourceID, 1, &bufferID); As long as there are still buffers queued up, the source will continue to play them

So there it is, five easy steps. Let's look at them one at a time:

Step A: Load in a few chunks of the big file into all the waiting OpenAL buffers.

OK, the first thing we need to do is prepare a place for all this state data to live. This will be kind of like preloading the streaming file, except that we don't load any sound into memory just yet; we are just getting ready to do that.

// this queues up the specified file for streaming
-(NSMutableDictionary*)initializeStreamFromFile:(NSString*)fileName format:(ALenum)format freq:(ALsizei)freq
{
	// first, open the file
	AudioFileID fileID = [self _openAudioFile:fileName];
	
	// find out how big the actual audio data is
	UInt32 fileSize = [self _audioFileSize:fileID];
	

First thing, we open the file. This doesn't load it into memory; it just gives us a handle to some data that we need, specifically the file size. We will need this to calculate the number of chunks it takes to play the whole file.


	UInt32 bufferSize = OPENAL_STREAMING_BUFFER_SIZE;
	UInt32 bufferIndex = 0;
	
	// ok, now we build a data record for this streaming file
	// before, with straight sounds this is just a soundID
	// but with the streaming sound, we need more info
	NSMutableDictionary * record = [NSMutableDictionary dictionary];
	[record setObject:fileName forKey:@"fileName"];
	[record setObject:[NSNumber numberWithUnsignedInteger:fileSize] forKey:@"fileSize"];
	[record setObject:[NSNumber numberWithUnsignedInteger:bufferSize] forKey:@"bufferSize"];
	[record setObject:[NSNumber numberWithUnsignedInteger:bufferIndex] forKey:@"bufferIndex"];
	[record setObject:[NSNumber numberWithInteger:format] forKey:@"format"];
	[record setObject:[NSNumber numberWithInteger:freq] forKey:@"freq"];
	[record setObject:[NSNumber numberWithBool:NO] forKey:@"isPlaying"];

Next we are going to make a streaming sound record. This dictionary will hold all of the state for this stream.

The OPENAL_STREAMING_BUFFER_SIZE is how many bytes of data we want in each buffer chunk. I have had good luck with 48000; that is about a second of audio at high quality, so I have plenty of time to grab the next chunk.

The bufferIndex tracks which chunk of the file we will read next. For now that is zero, of course.

We will use format and freq when we eventually set up our source, and isPlaying will let us know whether we need to be refilling the buffers or not.

	
	// this will hold our buffer IDs
	NSMutableArray * bufferList = [NSMutableArray array];
	int i;
	for (i = 0; i < 3; i++) {
		// alGenBuffers wants a pointer to an ALuint, not an NSUInteger
		ALuint bufferID;
		// grab a buffer ID from openAL
		alGenBuffers(1, &bufferID);
		
		[bufferList addObject:[NSNumber numberWithUnsignedInt:bufferID]];
	}
	
	[record setObject:bufferList forKey:@"bufferList"];

This is where the actual buffer references will live. In theory you can get away with just two buffers: one to play and one to fill while the other is playing. However, if you get any hitch in your background thread, or do any heavy lifting on the processor that delays the refill, you will get some nasty skipping. Instead I tend to use three buffers. This takes up more memory, but affords me a bit more robust playback.

	
	// close the file
	AudioFileClose(fileID);
	
	return record;	
}

Finally we close our audio file and return the record. Take the record returned from the above method and store it in a big NSMutableDictionary called soundLibrary under some key, like the name of the sound. (Or in whatever data structure you want; that is how I do it, and that is how the sample code works.)

We are still in Step A (well, pre step A even). We haven't loaded any buffers, but we are ready when that time comes.

Once you are ready to play your sound, we move to:

Steps A, B and C in one fell swoop

OK, let's define a play-streaming-sound method that does A, B and C for us (and kicks off Step D).

We call this method when we want to actually begin playing the streaming sound; the key is whatever key you used to store the record into the sound library.

This method returns the sourceID so that the calling object can use it to stop the sound.

- (NSUInteger)playStream:(NSString*)soundKey gain:(ALfloat)gain pitch:(ALfloat)pitch loops:(BOOL)loops
{ 
	// if we are not active, then dont do anything
	if (!active) return 0;
	
	ALenum err = alGetError(); // clear error code 

Just some housekeeping. I have a big boolean called 'active' that shuts off all sounds; makes it nice and easy. Second, we clear out the OpenAL error state so that anything in there will be a result of what we do in this method.

	// generally the 'play sound' method would be called for all sounds
	// however if someone did call this one in error, it is nice to be able to handle it
	if ([[soundLibrary objectForKey:soundKey] isKindOfClass:[NSNumber class]]) {
		return [self playSound:soundKey gain:gain pitch:pitch loops:loops];
	}

If you followed along with the other tutorials, you would know that previously I simply stored the NSNumber value for the buffer in the soundLibrary. This is a way to handle the streaming method getting called with the wrong key (i.e. a non-streaming sound): it just punts to the standard playSound method. Similarly, in my playSound method I check to see if the record is an NSDictionary, and if so it calls this method. That way you can just use the playSound method with either streaming sounds or regular sounds and it all works dandy.


	// get our keyed sound record
	NSMutableDictionary * record = [soundLibrary objectForKey:soundKey];
	
	// first off, check to see if this sound is already playing
	if ([[record objectForKey:@"isPlaying"] boolValue]) return 0;
	

Ok, we grab the record and start to go through the state. If the sound is already playing then we get out early.


	// first, find the buffer we want to play
	NSArray * bufferList = [record objectForKey:@"bufferList"];
	
	// now find an available source
	NSUInteger sourceID = [self nextAvailableSource];	
	alSourcei(sourceID, AL_BUFFER, 0);
	
	// reset the buffer index to 0
	[record setObject:[NSNumber numberWithUnsignedInteger:0] forKey:@"bufferIndex"];

Now we are going to move into Step A proper. Grab the buffer list, and get an available source. The method nextAvailableSource simply goes through a big list of premade sources and finds one that is not currently being used. I think I went over that in the 'lots of sounds' tutorial linked above.

Then we reset the bufferIndex to 0. This is basically setting the playhead to the beginning of the sound. Next, we fill the buffers:

	// queue up the first 3 buffers on the source
	for (NSNumber * bufferNumber in bufferList) {
		ALuint bufferID = [bufferNumber unsignedIntValue];
		[self loadNextStreamingBufferForSound:soundKey intoBuffer:bufferID];		
		alSourceQueueBuffers(sourceID, 1, &bufferID);
		err = alGetError(); 
		if (err != 0) [self _error:err note:@"Error alSourceQueueBuffers!"];		
	}

OK, this is pretty simple looking, but there is the one magic method: loadNextStreamingBufferForSound:intoBuffer:. I will get to it in a minute, but basically it grabs a chunk of the audio file based on the bufferIndex and loads it into the buffer, then it increments the bufferIndex so that the next time I call this method I get the next chunk.
We load a chunk into every buffer in the buffer list (which in our case is three buffers).
And here is the important part (this would be the Step B part of the diagram):

alSourceQueueBuffers(sourceID, 1, &bufferID);

This is the magic OpenAL function call that makes this source a streaming source instead of a single buffer source. Basically it will continue to play as long as there are buffers queued up.

	// set the pitch and gain of the source
	alSourcef(sourceID, AL_PITCH, pitch);
	err = alGetError(); 
	if (err != 0) [self _error:err note:@"Error AL_PITCH!"];		
	alSourcef(sourceID, AL_GAIN, gain);
	err = alGetError(); 
	if (err != 0) [self _error:err note:@"Error AL_GAIN!"];		
	// streams should not be looping
	// we will handle that in the buffer refill code
	alSourcei(sourceID, AL_LOOPING, AL_FALSE);				
	err = alGetError(); 
	if (err != 0) [self _error:err note:@"Error AL_LOOPING!"];		

With our buffers loaded and queued on the source, we just need to set up the source with all the properties that were passed in. This is exactly like you would do it for a single buffer sound.

	
	// everything is queued, start the buffer playing
	alSourcePlay(sourceID);	
	// check to see if there are any errors
	err = alGetError(); 
	if (err != 0) {
		[self _error:err note:@"Error Playing Stream!"];		
		return 0;
	}

OK, finally we move to Step C and start the source playing. From this point on, we are on the clock to keep the buffers filled.

	// set up some state
	[record setObject:[NSNumber numberWithBool:YES] forKey:@"isPlaying"];
	[record setObject:[NSNumber numberWithBool:loops] forKey:@"loops"];
	[record setObject:[NSNumber numberWithUnsignedInteger:sourceID] forKey:@"sourceID"];
	
	// kick off the refill methods
	[NSThread detachNewThreadSelector:@selector(rotateBufferThread:) toTarget:self withObject:soundKey];
	return sourceID;
} 

This last bit sets up the state we need to track playback, and kicks off a new thread to run in the background and keep our buffers full.

OK, before we move on to Step D, let's have a look at our buffer loader method:

loadNextStreamingBufferForSound: intoBuffer:

This is roughly equivalent to the method you would use to load an entire file into a buffer for standard sound playback, except here we only grab a small piece of the file. Luckily for us this is a pretty common thing to want to do, so mostly all we have to worry about is keeping the state set properly.

// this takes the stream record, figures out where we are in the file
// and loads the next chunk into the specified buffer
-(BOOL)loadNextStreamingBufferForSound:(NSString*)key intoBuffer:(NSUInteger)bufferID
{
	// check some escape conditions
	if ([soundLibrary objectForKey:key] == nil) return NO;
	if (![[soundLibrary objectForKey:key] isKindOfClass:[NSDictionary class]]) return NO;
	

First off, just some simple checks to make sure I am not trying to load a non-existent sound file, or a non-streaming sound.

	// get the record
	NSMutableDictionary * record = [soundLibrary objectForKey:key];
	
	// open the file
	AudioFileID fileID = [self _openAudioFile:[record objectForKey:@"fileName"]];
	
	// now we need to calculate where we are in the file
	UInt32 fileSize = [[record objectForKey:@"fileSize"] unsignedIntegerValue];	
	UInt32 bufferSize = [[record objectForKey:@"bufferSize"] unsignedIntegerValue];
	UInt32 bufferIndex = [[record objectForKey:@"bufferIndex"] unsignedIntegerValue];
	

Grab the record that has all of our state information, and set up all of our variables.


	// how many chunks does the file have total?
	NSInteger totalChunks = fileSize/bufferSize;
	
	// are we past the end? if so get out
	if (bufferIndex > totalChunks) return NO;
	
	// this is where we need to start reading from the file
	NSUInteger startOffset = bufferIndex * bufferSize;
	
	// are we in the last chunk? it might not be the same size as all the others
	if (bufferIndex == totalChunks) {
		NSInteger leftOverBytes = fileSize - (bufferSize * totalChunks);		
		bufferSize = leftOverBytes;
	}

Here we are just using our state info to figure out where in the file to look and how big a chunk to take. The last chunk may not be full sized, so we have to take that into account and adjust our data size.

	// this is where the audio data will live for the moment
	unsigned char * outData = malloc(bufferSize);
	
	// this is where we actually get the bytes from the file and put them 
	// into the data buffer
	UInt32 bytesToRead = bufferSize;
	OSStatus result = noErr;
	result = AudioFileReadBytes(fileID, false, startOffset, &bytesToRead, outData);
	if (result != noErr) NSLog(@"cannot load stream: %@",[record objectForKey:@"fileName"]);

	// if we are past the end, and no bytes were read, then no need to Q a buffer
	// this should not happen if the math above is correct, but to be safe we will add it
	if (bytesToRead == 0) {
		free(outData);
		return NO; // no more file!
	}

OK, here we do the actual meat of the method. We allocate some memory, and then use the AudioFileReadBytes() function to grab our desired slice of data from the big file. This loads our chunk of sound data into outData. From this point we proceed exactly as we would with a single-buffer sound.

	
	ALsizei freq = [[record objectForKey:@"freq"] intValue];
	ALenum format = [[record objectForKey:@"format"] intValue];
	
	// jam the audio data into the supplied buffer
	alBufferData(bufferID,format,outData,bytesToRead,freq); 

Load our sound data into the OpenAL buffer. Easy.

	// clean up the buffer
	if (outData)
	{
		free(outData);
		outData = NULL;
	}
	
	AudioFileClose(fileID);	

Do some cleanup.

	// increment the index so that next time we get the next chunk
	bufferIndex++;
	// are we looping? if so then flip back to 0
	if ((bufferIndex > totalChunks) && ([[record objectForKey:@"loops"] boolValue])) {
		bufferIndex = 0;
	}
	[record setObject:[NSNumber numberWithUnsignedInteger:bufferIndex] forKey:@"bufferIndex"];
	return YES;
}

Finally we increment the bufferIndex so that the next time we call this method we get the next chunk of data in the sequence. If we are looping, we reset the index to 0 at the end.

So, that is steps A, B, C, and the beginning of D.

Let's look more closely at our background thread now.

Step D (and E): The background thread to refill our buffers

OK, you may recall, like ten pages ago, that we kicked off a thread to refill the buffers in the background. Let's look at that:

-(void)rotateBufferThread:(NSString*)soundKey
{
	NSAutoreleasePool * apool = [[NSAutoreleasePool alloc] init];
	BOOL stillPlaying = YES;
	while (stillPlaying) {
		stillPlaying = [self rotateBufferForStreamingSound:soundKey];	
		if (interrupted) 	{
			// slow down our thread during interruptions
			[NSThread sleepForTimeInterval:kBufferRefreshDelay * 3];			
		} else {
			// normal thread delay
			[NSThread sleepForTimeInterval:kBufferRefreshDelay];			
		}
	}
	[apool release];
}

This is a pretty simple method. Remember we are in a new thread, so we need to set up our own autorelease pool. We basically make one call, to rotateBufferForStreamingSound:, and once we are no longer playing, the thread ends. Finally, if we are interrupted then we don't need to be refilling, but our thread will still run (until we are terminated, if that happens); to be good citizens we check less often during an interruption.

Otherwise we just come back in kBufferRefreshDelay seconds. Setting this number can be a bit tricky. If you set it too close to the actual time it takes to play an individual buffer (as you might think to), then if it lags at all you will fall behind and never be able to catch up. You want it to fire more than once during every chunk, in case you need to load more than one chunk because of a slow thread. However, run it too often and you are wasting cycles. I have mine set to 0.25 seconds based on the 48000-byte buffer. I don't even remember how I came to these numbers and they might be terrible, but they do work. Feel free to tune them to your heart's desire.

Next up, the actual buffer rotator:

// this checks to see if there is a buffer that has been used up.
// if it finds one then it loads the next bit of the sound into that buffer
// and puts it into the back of the queue
-(BOOL)rotateBufferForStreamingSound:(NSString*)soundKey
{
	// make sure we arent trying to stream a normal sound
	if (![[soundLibrary objectForKey:soundKey] isKindOfClass:[NSDictionary class]]) return NO;	
	if (interrupted) return YES; // we are still 'playing' but we arent loading new buffers

	// get the keyed record
	NSMutableDictionary * record = [soundLibrary objectForKey:soundKey];
	NSUInteger sourceID = [[record objectForKey:@"sourceID"] unsignedIntegerValue];	
	

First, some defensive programming: if we are being called with the wrong key, get out; if we are interrupted, we are not loading any new buffers, so get out (but return YES because we want to keep the thread alive).
Then we grab our ubiquitous record and start to fill in some variables.

	// check to see if we are stopped
	ALint sourceState;
	alGetSourcei(sourceID, AL_SOURCE_STATE, &sourceState);
	if (sourceState != AL_PLAYING) {
		[record setObject:[NSNumber numberWithBool:NO] forKey:@"isPlaying"];
		return NO; // we are stopped, do not load any more buffers
	}

First up: check to see if the source we are meant to be loading buffers into is stopped. If it is, then we don't need to load any new buffers, and we want to return NO so that the thread finishes as well.


	// get the processed buffer count
	ALint buffersProcessed = 0;
	alGetSourcei(sourceID, AL_BUFFERS_PROCESSED, &buffersProcessed);
	
	// check to see if we have a buffer to deQ
	if (buffersProcessed > 0) {
		// great! deQ a buffer and re-fill it
		ALuint bufferID;
		// remove the buffer from the source
		alSourceUnqueueBuffers(sourceID, 1, &bufferID);
		// fill the buffer up and reQ! 
		// if we can't fill it up then we are finished
		// in which case we don't need to re-Q
		// return NO if we don't have more buffers to Q
		if (![self loadNextStreamingBufferForSound:soundKey intoBuffer:bufferID]) return NO;
		// Q the loaded buffer
		alSourceQueueBuffers(sourceID, 1, &bufferID);
	}

Next up, the big event: we see how many buffers the source has processed since our last check. For every buffer that has been processed, we should fill a new one and queue it up.
Before we can queue up a new buffer, we need to dequeue the old one. We do this with a call to alSourceUnqueueBuffers(). This fills in our bufferID variable; we can now use this buffer for whatever we want. In this case we want to fill it with the next chunk of sound data and then put it at the end of the queue.

We call our loadNextStreamingBufferForSound:intoBuffer: method, and if it returns NO then we are all done and we can get out.
If it returns YES then we queue up the newly filled buffer.

	return YES;
}

Finally, if we have made it this far then it was a successful buffer load and we return a YES to keep our loop going.

That is about it. The way this is all built, once you call alSourceStop() on your streaming sound's source, everything sorta cleans itself up: the buffers stop rotating, and the thread stops. You still have buffers in memory though, so if you want to clean those up too, be sure to add that code.

To Sum Up

OK, so there are the five-ish steps to streaming sound glory with OpenAL.

Obviously I have left out all the other code you need to get this running; have a look at the articles linked at the top, as they have most of the rest of it.

Also, I should mention that all of this code comes from an earlier, simpler version of what I generally use as my 'sound engine' nowadays. (Basically, I have been working on finishing this tutorial for a while now, and my working code has evolved since then. :-)
This is not to say the sample code here is necessarily inferior; in fact my current code does almost exactly the same things, only with more optimised state handling and some other things that make it faster/easier, but not as easy to explain in an already very long post. So feel free to take this code and make it better in your own way :-) (For instance, instead of using dictionaries for state, I now have some proper sound classes that hold all that state for me, etc.)

Also, I should say that if this code crashes your machine, or bricks your phone, or makes your cat lose all its hair, it is not my fault. You have been warned.

Cheers!
-B

This entry was posted in code, iPhone, openAL.

20 Responses to Streaming in OpenAL

  1. hanstutschku says:

    Hi Ben, this is very helpful. Still trying to work my head around it.
    Just a quick question. I’m getting an error on the last line of this snippet. “bufferID undeclared”.
    Should this line of code be inside the preceeding for-loop?

    // queue up the first 3 buffers on the source
    for (NSNumber * bufferNumber in bufferList) {
    NSUInteger bufferID = [bufferNumber unsignedIntegerValue];
    [self loadNextStreamingBufferForSound:soundKey intoBuffer:bufferID];
    alSourceQueueBuffers(sourceID, 1, &bufferID);
    err = alGetError();
    if (err != 0) [self _error:err note:@"Error alSourceQueueBuffers!"];
    }

    Ok, this is pretty simple looking but there is the one magic method: loadNextStreamingBufferForSound: intoBuffer: I will get to this in a minute, but basically it grabs a chunk of the audio file based on the bufferIndex and loads it into the buffer. then it increments the bufferIndex so that the next time I call this method I will get the next chunk.
    We load a chunk into every buffer in the buffer list (which in our case will be three buffers)
    And here is the important part: (this would be the Step B part of the diagram)

    alSourceQueueBuffers(sourceID, 1, &bufferID);

  2. hanstutschku says:

    never mind my above post, I figured it out

  3. hanstutschku says:

    Hi Ben,

    I finally got it working. This has been indeed invaluable help from your end.

    Just a minor little detail:
    I needed to add this line

    [record setObject:[NSNumber numberWithBool:NO] forKey:@"isPlaying"];

    into the following block of code.

    Otherwise the “isPlaying” flag was never set to NO

    The code seemed to exit in that block of code
    (and not in the place where you are setting the NO, which is
    in this block

    if (sourceState != AL_PLAYING) {
    NSLog(@"stopped playing");
    [record setObject:[NSNumber numberWithBool:NO] forKey:@"isPlaying"];
    return NO; // we are stopped, do not load any more buffers
    }

    But this again might be just a difference in your actual implementation versus these
    excerpts for the tutorial.

    Again – MANY THANKS for sharing your knowledge.

    here is my adaptation which made it work for me

    // check to see if we have a buffer to deQ
    if (buffersProcessed > 0) {
    NSLog(@"refill");

    // great! deQ a buffer and re-fill it
    NSUInteger bufferID;
    // remove the buffer form the source
    alSourceUnqueueBuffers(sourceID, 1, &bufferID);
    // fill the buffer up and reQ!
    // if we cant fill it up then we are finished
    // in which case we dont need to re-Q
    // return NO if we dont have mroe buffers to Q
    if (![self loadNextStreamingBufferForSound:soundKey intoBuffer:bufferID])
    {
    [record setObject:[NSNumber numberWithBool:NO] forKey:@"isPlaying"];
    return NO;
    }
    // Q the loaded buffer
    alSourceQueueBuffers(sourceID, 1, &bufferID);
    }

    If anybody is interested, I could also share the xcode project.

  4. goat says:

    This example looks like what I’m looking for, any chance of sharing a Xcode project?

  5. Ben says:

    Hey Goat,

    possibly, one day, when I get the time to clean it up and take out all the bits I can’t publish at the moment. But probably not very soon :-) however, if you go back through all the various openAL posts I have done, all the code is there, you just have to put it together :-) (and that is half the fun)

    Alternatively, grab a copy of this: http://benbritten.com/2010/05/14/beginning-iphone-games-development/ it has a great set of chapters about OpenAL and has the sample code to go along with it.

    Cheers!
    -B

  6. goat says:

    Hi Ben,

    I actually stopped being lazy and did exactly as you suggested and got everything running as I wanted. Thanks for your excellent tutorials.

    My next problem is applying some basic DSP (filters etc) to the stream. Do you have any pointers on that?

    Thanks again!

  7. Ben says:

    Hey Goat,

    with OpenAl you are pretty much on your own as far as filtering goes. You have to do all the bit twiddling yourself.
    With the streaming buffers your best bet is to apply the filters as you load in the buffers (ie load a buffer, run your filter algo on it, queue it up, rinse repeat.) Many DSP filters rely on a window of data, so you may need to keep the previous and post buffers around just for data continuity etc..

    if your filters are changing in near real time, then you will want to make your buffer size smaller so that you can be more responsive to the filter changes. (but if you make them really small, then you may underrun, so you might need more buffers etc.. )

    Good luck!

    Cheers!
    -B

  8. goat says:

    Ugh. Sounds messy. I have everything else set up how I want it, except for basic (real time changing) filtering.

    I’ve seen a lot of stuff around about using Audio Units for DSP. I’m not really interested in re-writing my sound engine to lower level code just to get a filter – but the filter is vital to my app.

    Do you think I’m better off re-writing the sound engine to use Audio Units (major pain) or just attempting to filter using OpenAl/buffers?

    Thanks – Goat

  9. Ben says:

    Hey Goat,

    I guess it depends on what kinds of filters you want. There are not many built-in AUs on the iphone and if you already know how to do the DSP enough to write your own AU, then you might as well just do it in OpenAL, it is much the same. If the AU you want is already available, then it is may be worth the effort to just go to CoreAudio.

    Cheers!
    -B

  10. phetsana says:

    Hi Ben,

    Thank you for this very useful report on OpenAL streaming ;)

    You use for your example uncompressed format but can you tell me if it is possible to use OpenAL streaming with compressed format like mp3 or aac in simply way ?

    Thanks,

    Phetsana.

  11. Ben says:

    Hey Phetsana,

    in the not so distant past I would probably just tell you to totally avoid any sort of software decompression on the iphone since doing that generally totally screws your performance. However, with the advent of the iPhone 4, we now have three very popular devices (ie 3gs,iPad,iphone4) that are probably more than capable of doing some real-time software decompression.

    As for how you implement it: it is exactly like doing it for uncompressed, only you have to uncompress the mp3 before playing :-) Now decompressing mp3s(or aacs) is kinda a pain in the ass, you need to find a decent open source decompression lib, (or go with OGG/libogg/libvorbis they are fairly decent) and, if you are streaming then you need to decompress the file in small buffer-sized chunks, making the decompression that much more complicated….

    you know what, I still stand by my original thought: just dont do it. (or just go get FMOD and forget OpenAL)

    If you need to play long music files (which will generally be mp3 or aac or some other compressed format) then just play those through the hardware decompressor via the AudioToolkit.

    What I generally do is break my sounds up into two categories: sounds that will be pretty small when uncompressed and sounds that will be fooking gargantuan when uncompressed. The latter category gets converted to mp3 or mp4 and played back via audio toolkit, the uncompressed ones get played back via OpenAL. (cuz OpenAL is soooo much more responsive)

    If you absolutely must have streaming mp3s and you absolutely must have access to the buffer data for some reason, then you are stuck doing the long yards to implement a proper software decompression system. (but I am lazy and would never do that, i would either change my requirements or get FMOD)

    Cheers!
    -B

  12. john.mihirayan says:

    Hi, Ben.

    I have implemented a C++ based OpenAL wrapper learning from your invaluable post on the topic. Thank you for the great post.

    I am further extending it to support IMA4 format. I will write the decompressor loader for it. But are their any concerns for performance?

    cheers,
    john mihirayan

  13. Ben says:

    Hey John,

    There will always be a performance hit when you are decompressing in software (which is why that should be avoided). Also, software decompression just adds complexity that is often unjustified. However, there are perfectly valid reasons to do it, so just keep an eye on your buffers to make sure you are far enough ahead to keep them full and you will be fine.

    Cheers!
    -Ben

  14. Lopdo says:

    Hi Ben!

    I was able to create OpenAL sound engine using your tutorials but I have strangest problem when streaming music. My engine is more or less exact copy (as exact as I could get it as it is scattered across multiple pages :) ) and I am using uncompressed sound. It plays just fine but after a while (random duration) it stops playing without throwing any error (I am checking after every openal function call). It stops because source changes from AL_PLAYING to AL_STOPPED. This change is what is bothering me. I don’t change it, I don’t anything cause my music to stop playing and there are no errors. I tried to google something about when source can stop by itself but I wasn’t successful…

    Do you have any idea where problem could be? I am trying to fix this for few days now and I am going crazy. Any help will be greatly appreciated!

    Thanks,
    Lope

  15. Ben says:

    Hey Lopdo,

    You are probably getting buffer underruns. If the buffers are not getting filled fast enough, and OpenAL runs out, it will stop (thinking that it is all done) so Just make sure that you have tuned your refill thread and buffer sizes so that you can refill them quick enough.

    Cheers!
    -Ben

  16. Lopdo says:

    Hey Ben!

    Thanks a lot for your reply, looks like you were right. I increased buffer size and it works. I thought that this could be problem, but for some reason I thought that this is handled somehow and I would get error message when it happened so I ignored this solution. Turns out I was mistaking it for source overruns (how stupid of me!). Thanks again, you saved me :)

    Lope

  17. rbshadel says:

    Hey Ben,
    Thanks a lot for putting this all together, it’s helped tremendously.
    I do have a question about real time filtering. I need to convolve each buffer with an impulse respone, and my thought process is that after AudioFileReadBytes is called, outData contains the audio samples which need to be processed. However, the characters returned by outData[0], outData[1], etc. are not at all what I’d expect for an audio stream. I assume I need to perform some conversion from the values in outData to get meaningful audio samples, but I haven’t been able to figure it out yet. If you have a free minute, I’d really appreciate it if you could let me know if I’m missing something simple and/or point me in the right direction.
    To sum up: I know how to perform the convolution, I just need to get at the audio samples.
    Thanks!
    Brent

  18. rodrigoreis22 says:

    Do you guys have the xcodeproj for this? It´s possible to share with me? Please send an email to rodrigo.reis@invit.com.br .
    Thanks!!

  19. vinutaprabhu says:

    Hi Ben,

    This is very helpful. I have a small question. For a media player on iOS, for playing out audio, what would be the best choice? Is it Audio Units or OpenAL?

    Thanks!!

  20. hsgaoshou8 says:

    Hey Ben!

    I met a problem, have not been able to solve. help me!
    The following code:
    /*
    UInt32 setting=alcGetEnumValue(NULL, "ALC_IPHONE_SPATIAL_RENDERING_QUALITY_HEADPHONES");
    alcMacOSXRenderingQualityProc ((const Alint)setting);
    */
    This code is run in the iphone,OPENAL no sound.Why?
