I have a Multichannel Mixer audio unit playing back audio files in an iOS app, and I need to figure out how to update the app's UI and perform a reset when the render callback hits the end of the longest audio file (which is set up to run on bus 0). As my code below shows, I am trying to use KVO to achieve this, using the boolean variable tapesUnderway (the NSAutoreleasePool is necessary because this Objective-C code is running outside of its normal domain; see http://www.cocoabuilder.com/archive/cocoa/57412-nscfnumber-no-pool-in-place-just-leaking.html).
static OSStatus tapesRenderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
    UInt32 bufferFrames = sndbuf[inBusNumber].numFrames;
    AudioUnitSampleType *in = sndbuf[inBusNumber].data;

    // These mBuffers are the output buffers and are empty; these two lines are just setting the references to them (via outA and outB)
    AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    UInt32 sample = sndbuf[inBusNumber].sampleNum;

    // --------------------------------------------------------------
    // Set the start time here
    if(inBusNumber == 0 && !tapesFirstRenderPast)
    {
        printf("Tapes first render past\n");
        tapesStartSample = inTimeStamp->mSampleTime;
        tapesFirstRenderPast = YES; // MAKE SURE TO RESET THIS ON SONG RESTART
        firstPauseSample = tapesStartSample;
    }

    // --------------------------------------------------------------
    // Now process the samples
    for(UInt32 i = 0; i < inNumberFrames; ++i)
    {
        if(inBusNumber == 0)
        {
            // ------------------------------------------------------
            // Bus 0 is the backing track, and is always playing back
            outA[i] = in[sample++];
            outB[i] = in[sample++]; // For stereo set desc.SetAUCanonical to (2, true) and increment samples in both output calls

            lastSample = inTimeStamp->mSampleTime + (Float64)i; // Set the last played sample in order to compensate for pauses

            // ------------------------------------------------------
            // Use this logic to mark end of tune
            if(sample >= (bufferFrames * 2) && !tapesEndPast)
            {
                // USE KVO TO NOTIFY METHOD OF VALUE CHANGE
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                FuturesEPMedia *futuresMedia = [FuturesEPMedia sharedFuturesEPMedia];
                NSNumber *boolNo = [[NSNumber alloc] initWithBool: NO];
                [futuresMedia setValue: boolNo forKey: @"tapesUnderway"];
                [boolNo release];
                [pool release];
                tapesEndPast = YES;
            }
        }
        else
        {
            // ------------------------------------------------------
            // The other buses are the open sections, and are synched through the tapesSectionTimes array
            Float64 sectionTime = tapesSectionTimes[inBusNumber] * kGraphSampleRate; // Section time in samples
            Float64 currentSample = inTimeStamp->mSampleTime + (Float64)i;

            if(!isPaused && !playFirstRenderPast)
            {
                pauseGap += currentSample - firstPauseSample;
                playFirstRenderPast = YES;
                pauseFirstRenderPast = NO;
            }

            if(currentSample > (tapesStartSample + sectionTime + pauseGap) && sample < (bufferFrames * 2))
            {
                outA[i] = in[sample++];
                outB[i] = in[sample++];
            }
            else
            {
                outA[i] = 0;
                outB[i] = 0;
            }
        }
    }

    sndbuf[inBusNumber].sampleNum = sample;

    return noErr;
}
At the moment, changing this variable triggers a method in self, but it executes with an unacceptable delay (20-30 seconds) when fired from this render callback (presumably because it is Objective-C code running in the high-priority audio thread?). How do I trigger such a change without the delay? (The trigger will change a pause button back to a play button and call a reset method to prepare for the next play.)
Thanks
Yes. Don't use Objective-C code in the render thread, since it runs at high priority. Instead, store the state in plain memory (a pointer or a struct) and have a timer on the main thread poll those values. The timer doesn't need to be anywhere near as fast as the render callback, and it will still be very accurate.
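For the polling side, a minimal sketch of that idea (the flag gPlaybackFinished and the method audioDidComplete are placeholder names, not from the project):

// Plain C flag: written by the render callback, read by the main thread
static volatile BOOL gPlaybackFinished = NO;

// In the render callback, instead of the KVO block, just set the flag:
//     gPlaybackFinished = YES;

// On the main thread, start a repeating timer when playback begins
- (void)startCompletionPolling
{
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(checkPlaybackFinished:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)checkPlaybackFinished:(NSTimer *)timer
{
    if (gPlaybackFinished)
    {
        gPlaybackFinished = NO;
        [timer invalidate];
        [self audioDidComplete]; // swap the pause button for play, run the reset, etc.
    }
}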
Try this.
Global:

BOOL FlgTotalSampleTimeCollected = NO;
Float64 HigestSampleTime = 0;
Float64 TotalSampleTime = 0;
In -(OSStatus)setUpAUFilePlayer:

AudioStreamBasicDescription fileASBD;
// Get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
                                &propSize, &fileASBD),
           "couldn't get file's data format");

UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
                                &propsize, &nPackets),
           "AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");

// Length of this file in samples; remember the longest one
Float64 sTime = nPackets * fileASBD.mFramesPerPacket;
if (HigestSampleTime < sTime)
{
    HigestSampleTime = sTime;
}
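CheckError isn't defined in this snippet; if you don't already have it, a minimal version along the lines of the usual Core Audio sample utility (print the failing operation and the OSStatus, then exit) could look like this:

// Needs <stdio.h>, <stdlib.h>, <ctype.h> and CoreFoundation (for CFSwapInt32HostToBig)
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    // If the status looks like a four-character code, print it as such
    char errorString[20];
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4]))
    {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    }
    else
    {
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}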
In the render callback:
if (*actionFlags & kAudioUnitRenderAction_PreRender)
{
    if (!THIS->FlgTotalSampleTimeCollected)
    {
        [THIS setFlgTotalSampleTimeCollected:TRUE];
        [THIS setTotalSampleTime:(inTimeStamp->mSampleTime + THIS->HigestSampleTime)];
    }
}
else if (*actionFlags & kAudioUnitRenderAction_PostRender)
{
    if (inTimeStamp->mSampleTime > THIS->TotalSampleTime)
    {
        NSLog(@"inTimeStamp->mSampleTime: %f", inTimeStamp->mSampleTime);
        NSLog(@"audio completed");
        [THIS callAudioCompletedMethodHere];
    }
}
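For the pre-render and post-render flags to show up, the callback above has to be registered as a render notification on the audio unit you are watching; a sketch, assuming the proc is called renderNotifyCallback and the unit variable is mixerUnit (both placeholder names):

// Somewhere in the graph setup, after the units are configured:
// the proc is then invoked with kAudioUnitRenderAction_PreRender
// and kAudioUnitRenderAction_PostRender around every render cycle.
CheckError(AudioUnitAddRenderNotify(mixerUnit,
                                    &renderNotifyCallback,
                                    (void *)self),  // arrives as THIS in the callback
           "AudioUnitAddRenderNotify failed");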
This worked for me. Test it on a device.