Is kAudioFormatFlagIsFloat supported on iPhoneOS?

I am writing an iPhone app that records and plays audio simultaneously using the I/O audio unit as per Apple's recommendations.

I want to apply some sound effects (reverb, etc) on the recorded audio before playing it back. For these effects to work well, I need the samples to be floating point numbers, rather than integers. It seems this should be possible, by creating an AudioStreamBasicDescription with kAudioFormatFlagIsFloat set on mFormatFlags. This is what my code looks like:

AudioStreamBasicDescription streamDescription;

streamDescription.mSampleRate = 44100.0;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat;
streamDescription.mBitsPerChannel = 32;
streamDescription.mBytesPerFrame = 4;
streamDescription.mBytesPerPacket = 4;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket = 1;
streamDescription.mReserved = 0;

OSStatus status;

status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr)
  fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned status %ld\n", (long)status);

status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription));
if (status != noErr)
  fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned status %ld\n", (long)status);

However, when I run this (on an iPhone 3GS running iPhoneOS 3.1.3), I get this:

AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned error -10868
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned error -10868

(-10868 is the value of kAudioUnitErr_FormatNotSupported)

I didn't find anything of value in Apple's documentation, apart from a recommendation to stick to 16-bit little-endian integers. However, the aurioTouch example project contains at least some support code related to kAudioFormatFlagIsFloat.

So, is my stream description incorrect, or is kAudioFormatFlagIsFloat simply not supported on iPhoneOS?


It's not supported, as far as I know. You can pretty easily convert to floats, though, using AudioConverter. I do this conversion (both ways) in real time to use the Accelerate framework with iOS audio. (Note: this code is copied and pasted from more modular code, so there may be some minor typos.)

First, you'll need the AudioStreamBasicDescription for the input. Say:

AudioStreamBasicDescription aBasicDescription = {0};
aBasicDescription.mSampleRate       = self.samplerate;
aBasicDescription.mFormatID         = kAudioFormatLinearPCM;
aBasicDescription.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
aBasicDescription.mFramesPerPacket  = 1;
aBasicDescription.mChannelsPerFrame = 1;
aBasicDescription.mBitsPerChannel   = 8 * sizeof(SInt16);
aBasicDescription.mBytesPerPacket   = sizeof(SInt16) * aBasicDescription.mFramesPerPacket;
aBasicDescription.mBytesPerFrame    = sizeof(SInt16) * aBasicDescription.mChannelsPerFrame;

Then, generate a corresponding AudioStreamBasicDescription for float.

AudioStreamBasicDescription floatDesc = {0};
floatDesc.mSampleRate       = [controller samplerate];
floatDesc.mFormatID         = kAudioFormatLinearPCM;
floatDesc.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
floatDesc.mBitsPerChannel   = 8 * sizeof(float);
floatDesc.mFramesPerPacket  = 1;
floatDesc.mChannelsPerFrame = 1;
floatDesc.mBytesPerPacket   = sizeof(float) * floatDesc.mFramesPerPacket;
floatDesc.mBytesPerFrame    = sizeof(float) * floatDesc.mChannelsPerFrame;

Make some buffers.

UInt32 intSize   = inNumberFrames * sizeof(SInt16);
UInt32 floatSize = inNumberFrames * sizeof(float);
float *dataBuffer = (float *)calloc(inNumberFrames, sizeof(float));

Then convert. (ioData is your AudioBufferList containing the int audio)

AudioConverterRef converter;
OSStatus err = noErr;
err = AudioConverterNew(&aBasicDescription, &floatDesc, &converter);
//check for error here in "real" code
err = AudioConverterConvertBuffer(converter, intSize, ioData->mBuffers[0].mData, &floatSize, dataBuffer);
//check for error here in "real" code
//do stuff to dataBuffer, which now contains floats
//convert the floats back by running the conversion the other way
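
For completeness, the trip back (float to SInt16) is the same call with the two format descriptions swapped. This is only a sketch reusing the names above (aBasicDescription, floatDesc, ioData, dataBuffer, intSize, floatSize); in real code you would create both converters once at setup time rather than inside the render callback, and dispose of them when tearing things down.

AudioConverterRef backConverter;
UInt32 outIntSize = intSize;
err = AudioConverterNew(&floatDesc, &aBasicDescription, &backConverter);
//check for error here in "real" code
err = AudioConverterConvertBuffer(backConverter, floatSize, dataBuffer, &outIntSize, ioData->mBuffers[0].mData);
//check for error here in "real" code

//clean up once you no longer need the converters or the scratch buffer
AudioConverterDispose(converter);
AudioConverterDispose(backConverter);
free(dataBuffer);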


I'm doing something unrelated to AudioUnits, but I am using AudioStreamBasicDescription on iOS. I was able to use float samples by specifying:

dstFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;

The book Learning Core Audio: A Hands-on Guide to Audio Programming for Mac and iOS was helpful for this.
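
For reference, a complete, self-consistent ASBD built around those flags might look like the sketch below. Mono and 44.1 kHz are my assumptions for illustration, not something this answer specified; note that for non-interleaved data the per-frame and per-packet sizes describe a single channel.

AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate       = 44100.0;    // assumed sample rate
dstFormat.mFormatID         = kAudioFormatLinearPCM;
dstFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;
dstFormat.mBitsPerChannel   = 8 * sizeof(float);
dstFormat.mChannelsPerFrame = 1;          // assumed mono
dstFormat.mFramesPerPacket  = 1;
dstFormat.mBytesPerFrame    = sizeof(float);   // per channel, since non-interleaved
dstFormat.mBytesPerPacket   = sizeof(float);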


It is supported.

The problem is you must also set kAudioFormatFlagIsNonInterleaved on mFormatFlags. If you don't do this when setting kAudioFormatFlagIsFloat, you will get a format error.

So, you want to do something like this when preparing your AudioStreamBasicDescription:

streamDescription.mFormatFlags = kAudioFormatFlagIsFloat | 
                                 kAudioFormatFlagIsNonInterleaved;

As for why iOS requires this, I'm not sure; I only stumbled across it via trial and error.
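
Applied to the stream description from the question, a full setup could look like the sketch below (mono 32-bit floats; the packed and native-endian flags are my assumption of a typically working combination, not something stated in this answer):

streamDescription.mSampleRate       = 44100.0;
streamDescription.mFormatID         = kAudioFormatLinearPCM;
streamDescription.mFormatFlags      = kAudioFormatFlagIsFloat |
                                      kAudioFormatFlagIsNonInterleaved |
                                      kAudioFormatFlagsNativeEndian |
                                      kAudioFormatFlagIsPacked;
streamDescription.mBitsPerChannel   = 8 * sizeof(Float32);
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket  = 1;
streamDescription.mBytesPerFrame    = sizeof(Float32);   // per channel, since non-interleaved
streamDescription.mBytesPerPacket   = sizeof(Float32);
streamDescription.mReserved         = 0;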


From the Core Audio docs:

kAudioFormatFlagIsFloat
  Set for floating point, clear for integer.
  Available in iPhone OS 2.0 and later.
  Declared in CoreAudioTypes.h.

I don't know enough about your stream to comment on its [in]correctness.


You can obtain an interleaved float RemoteIO with the following ASBD setup:

// STEREO_CHANNEL = 2, defaultSampleRate = 44100
AudioStreamBasicDescription const audioDescription = {
                    .mSampleRate        = defaultSampleRate,
                    .mFormatID          = kAudioFormatLinearPCM,
                    .mFormatFlags       = kAudioFormatFlagIsFloat,
                    .mBytesPerPacket    = STEREO_CHANNEL * sizeof(float),
                    .mFramesPerPacket   = 1,
                    .mBytesPerFrame     = STEREO_CHANNEL * sizeof(float),
                    .mChannelsPerFrame  = STEREO_CHANNEL,
                    .mBitsPerChannel    = 8 * sizeof(float),
                    .mReserved          = 0
                };

This worked for me.
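
To attach it to the remote I/O unit, you would then set the description on both application-facing sides, the same way the question does. A minimal sketch, assuming audioUnit is an already-created RemoteIO instance:

OSStatus status;

// Format of the audio your app supplies for playback
// (input scope of the output element, element 0).
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input, 0,
                              &audioDescription, sizeof(audioDescription));

// Format of the audio your app receives from the microphone
// (output scope of the input element, element 1).
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 1,
                              &audioDescription, sizeof(audioDescription));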
