
Callback function of output unit of audio graph - can't read data

I am trying to capture the input data stream to the output unit in an audio graph so I can write it to a file. I have registered an input callback function for the output unit (default output) after creating the graph like so:

AudioComponent comp = AudioComponentFindNext(NULL, &cd);
if (comp == NULL) {
    printf("can't get output unit");
    exit(-1);
}
CheckError(AudioComponentInstanceNew(comp, &player->outputUnit),
           "Couldn't open component for outputUnit"); // outputUnit is of type AudioUnit

// register render callback
AURenderCallbackStruct input;
input.inputProc = MyRenderProc;
input.inputProcRefCon = player;
CheckError(AudioUnitSetProperty(player->outputUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0,
                                &input,
                                sizeof(input)),
           "AudioUnitSetProperty failed");

// initialize unit
CheckError (AudioUnitInitialize(player->outputUnit),"Couldn't initialize output unit");

The callback function gets called, but when I try to read the input data stream from ioData->mBuffers[0].mData, all I get is zeros. Here is the callback function:

OSStatus MyRenderProc(void *inRefCon, // PLAYER CODE
                      AudioUnitRenderActionFlags *ioActionFlags,
                      const AudioTimeStamp *inTimeStamp,
                      UInt32 inBusNumber,
                      UInt32 inNumberFrames,
                      AudioBufferList *ioData) {

    int frame = 0;
    Float32 leftFloat = 0;
    Float32 rightFloat = 0;
    NSNumber *leftNumber;
    NSNumber *rightNumber;
    for (frame = 0; frame < inNumberFrames; ++frame) {
        Float32 *data = (Float32 *)ioData->mBuffers[0].mData;
        leftFloat = data[frame];
        leftNumber = [NSNumber numberWithDouble:leftFloat];
        data[frame] = 0;

        // copy to right channel too
        data = (Float32 *)ioData->mBuffers[1].mData;
        rightFloat = data[frame];
        rightNumber = [NSNumber numberWithDouble:rightFloat];
        data[frame] = 0;

        [leftData addObject:leftNumber];
        [rightData addObject:rightNumber];
    }

    return noErr;
}

Furthermore, if I don't zero out the data, I hear noise during playback, which tells me I am misinterpreting the function of the mBuffers. What am I doing wrong here?


If capturing input data from an AUGraph is the task, the critical part of the code boils down, more or less, to this simple one-channel demo example:

OSStatus MyRenderProc(void *inRefCon,
                      AudioUnitRenderActionFlags *ioActionFlags,
                      const AudioTimeStamp *inTimeStamp,
                      UInt32 inBusNumber,
                      UInt32 inNumberFrames,
                      AudioBufferList *ioData) {

    Float32 buf[inNumberFrames]; // just for one channel!

    MyMIDIPlayer *player = (MyMIDIPlayer *)inRefCon;

    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        static int TEMP_kAudioUnitRenderAction_PostRenderError = (1 << 8);
        if (!(*ioActionFlags & TEMP_kAudioUnitRenderAction_PostRenderError)) {

            Float32 *data = (Float32 *)ioData->mBuffers[0].mData; // just for one channel!

            memcpy(buf, data, inNumberFrames * sizeof(Float32));

            // do something with buf - there are nice examples of ExtAudioFileWriteAsync()
        }
    }
    return noErr;
}

In setupAUGraph() this callback can be set up in the following way:

void setupAUGraph(MyMIDIPlayer *player) {
    // the beginning follows the textbook example setup pattern
    {…  …  …}

    // this is the specific part
    AURenderCallbackStruct input = {0};
    input.inputProc = MyRenderProc;
    input.inputProcRefCon = player; // the callback casts this back to MyMIDIPlayer *

    // tap the instrument unit's output with a render notification
    CheckError(AudioUnitAddRenderNotify(player->instrumentUnit,
                                        input.inputProc,
                                        input.inputProcRefCon),
               "AudioUnitAddRenderNotify Failed");

    // now initialize the graph (causes resources to be allocated)
    CheckError(AUGraphInitialize(player->graph),
               "AUGraphInitialize failed");
}

Please note that the render callback "taps" the connection between the output of the instrument node and the input of the output node, capturing what comes from upstream. The callback simply copies ioData into some other buffer, which can then be saved. AFAIK, this is the simplest way of accessing ioData that I know works without breaking the API.

Please also note that there are very efficient plain-C methods for testing whether this works for a specific implementation - no need for Objective-C methods inside the callback. Fiddling with NSArrays, adding objects, etc. inside a plain-C real-time callback introduces a risk of priority issues which may later become difficult to debug. The CoreAudio API is written in plain C for a reason. At the heart of the Obj-C runtime there is much that can't take place on a real-time thread without risking glitches (locks, memory management, etc.), so it is safer to keep Obj-C off the real-time thread.
Hope this can help.