I hope this question is not too vague. I'm trying to take info from an audio buffer in this Xcode project and use it to do some DSP.
frameBuffer points to an array of values that I would like to pass to a function, loop through, and finally plug back into the original buffer. The method would act like a sound filter or effect.
To keep my question as clear as possible: could someone give an example of a subroutine that adds 0.25 to each sample in the buffer?
Here's the code so far:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    EAGLView *remoteIOplayer = (EAGLView *)inRefCon;
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        // get the buffer to be filled
        AudioBuffer buffer = ioData->mBuffers[i];
        short *frameBuffer = (short *)buffer.mData;
        for (int j = 0; j < inNumberFrames; j++) {
            // getNextPacket returns a 32-bit value, one frame.
            frameBuffer[j] = [[remoteIOplayer inMemoryAudioFile] getNextPacket];
        }
        EAGLView *thisView = [[EAGLView alloc] init];
        [thisView DoStuffWithTheRecordedAudio:ioData];
        [thisView release];
    }
    return noErr;
}
Trying to do UI or OpenGL work inside an audio callback is a bad idea on iOS devices. You need to decouple the audio callback from UI execution, using queues, FIFOs, and the like.
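A minimal sketch of that decoupling, assuming a single-producer/single-consumer setup where the audio callback writes samples and the main thread later reads them (RingBuffer, rb_write, rb_read, and RB_CAPACITY are made-up names, not part of any Apple API):

#include <stdatomic.h>

#define RB_CAPACITY 4096  // must be a power of two

// Hypothetical single-producer/single-consumer FIFO: the audio
// callback writes, the UI/main thread reads. Zero-initialize the
// struct before use.
typedef struct {
    short buffer[RB_CAPACITY];
    atomic_uint head;  // write index, advanced only by the callback
    atomic_uint tail;  // read index, advanced only by the UI thread
} RingBuffer;

// Called from the audio callback; never blocks, locks, or allocates.
static unsigned rb_write(RingBuffer *rb, const short *src, unsigned count) {
    unsigned head  = atomic_load_explicit(&rb->head, memory_order_relaxed);
    unsigned tail  = atomic_load_explicit(&rb->tail, memory_order_acquire);
    unsigned space = RB_CAPACITY - (head - tail);
    if (count > space) count = space;  // drop what doesn't fit
    for (unsigned i = 0; i < count; i++)
        rb->buffer[(head + i) & (RB_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&rb->head, head + count, memory_order_release);
    return count;
}

// Called from the UI/main thread, e.g. on a timer.
static unsigned rb_read(RingBuffer *rb, short *dst, unsigned count) {
    unsigned tail  = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    unsigned head  = atomic_load_explicit(&rb->head, memory_order_acquire);
    unsigned avail = head - tail;
    if (count > avail) count = avail;
    for (unsigned i = 0; i < count; i++)
        dst[i] = rb->buffer[(tail + i) & (RB_CAPACITY - 1)];
    atomic_store_explicit(&rb->tail, tail + count, memory_order_release);
    return count;
}

The callback only touches two atomic indices and a plain array, so it stays safe for the real-time thread; the UI or OpenGL work happens on whichever thread calls rb_read.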
Trying to do Objective-C messaging inside the inner loop of real-time audio may also be a very bad idea in terms of device performance. Sticking to plain C/C++ works far better in performance-critical inner loops.
Also, adding a constant to audio data will likely just result in an inaudible DC offset.
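As for the example requested in the question, a sketch of such a subroutine in plain C follows (processBuffer is a made-up name, not an Apple API). Since frameBuffer holds 16-bit integer samples, "adding 0.25" is interpreted here as 0.25 of full scale in normalized float terms, and a gain stage is included because, per the DC-offset caveat above, the offset by itself won't be audible:

// Hypothetical filter/effect hook, plain C so it is cheap to call
// from the real-time callback. UInt32 is the same Core Audio type
// used in the callback signature above.
static void processBuffer(short *samples, UInt32 numFrames) {
    const float kOffset = 0.25f;  // the constant from the question, in [-1, 1] terms
    const float kGain   = 0.5f;   // an audible effect: halve the volume

    for (UInt32 j = 0; j < numFrames; j++) {
        float x = samples[j] / 32768.0f;     // 16-bit int -> normalized float
        x = x * kGain + kOffset;             // apply the "effect"
        if (x >  1.0f) x =  1.0f;            // clip so the cast can't wrap around
        if (x < -1.0f) x = -1.0f;
        samples[j] = (short)(x * 32767.0f);  // back to 16-bit int
    }
}

You would call processBuffer(frameBuffer, inNumberFrames); once per AudioBuffer, after the loop that fills it.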