Example of using Audio Queue Services

I am seeking an example of using Audio Queue Services.

I would like to create a sound using a mathematical equation and then hear it.


Here's my code for generating sound from a function. I'm assuming you know how to use Audio Queue Services, set up an AudioSession, and properly start and stop an audio output queue.
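
In case it helps, here is a minimal sketch of the pieces this answer assumes you already have: the one-time AudioSession setup and the teardown of the output queue. It uses the same deprecated AudioSession C API as the sample-rate query below, error handling is trimmed, and outputQueue / isPlaying are the same instance variables used in the rest of the code.

// One-time audio session setup, e.g. in your init code:
OSStatus err = AudioSessionInitialize (NULL, NULL, NULL, NULL);
UInt32 category = kAudioSessionCategory_MediaPlayback;
err = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory, sizeof(category), &category);
err = AudioSessionSetActive (true);

// Stopping and tearing down the output queue when you are done playing:
isPlaying = NO;
err = AudioQueueStop (outputQueue, true);       // true = stop immediately instead of draining
err = AudioQueueDispose (outputQueue, true);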

Here's a snippet for setting up and starting an output AudioQueue:

// Get the preferred sample rate (8,000 Hz on iPhone, 44,100 Hz on iPod touch)
size = sizeof(sampleRate);
err = AudioSessionGetProperty (kAudioSessionProperty_CurrentHardwareSampleRate, &size, &sampleRate);
if (err != noErr) NSLog(@"AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate) error: %d", err); 
//NSLog (@"Current hardware sample rate: %1.0f", sampleRate);

BOOL isHighSampleRate = (sampleRate > 16000);
int bufferByteSize;
AudioQueueBufferRef buffer;

// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;

// New output queue ---- PLAYBACK ----
if (isPlaying == NO) {
    err = AudioQueueNewOutput (&streamFormat, AudioEngineOutputBufferCallback, self, NULL, NULL, 0, &outputQueue);
    if (err != noErr) NSLog(@"AudioQueueNewOutput() error: %d", err);

    // Enqueue buffers
    //outputFrequency = 0.0;
    outputBuffersToRewrite = 3;
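    // Choose a buffer size big enough to hold at least one full cycle of the
    // lowest frequency we want to render: 2176 bytes = 1088 samples ≈ 40.5 Hz
    // at 44.1 kHz, and 512 bytes = 256 samples = 31.25 Hz at 8 kHz.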
    bufferByteSize = isHighSampleRate ? 2176 : 512;
    for (int i = 0; i < 3; i++) {
        err = AudioQueueAllocateBuffer (outputQueue, bufferByteSize, &buffer); 
        if (err == noErr) {
            [self generateTone: buffer];
            err = AudioQueueEnqueueBuffer (outputQueue, buffer, 0, NULL);
            if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", err);
        } else {
            NSLog(@"AudioQueueAllocateBuffer() error: %d", err); 
            return;
        }
    }

    // Start playback
    isPlaying = YES;
    err = AudioQueueStart(outputQueue, NULL);
    if (err != noErr) { NSLog(@"AudioQueueStart() error: %d", err); isPlaying = NO; return; }
} else {
    NSLog (@"Error: audio is already playing back.");
}

Here's the part that generates the tone:

// AudioQueue output queue callback.
void AudioEngineOutputBufferCallback (void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AudioEngine *engine = (AudioEngine*) inUserData;
    [engine processOutputBuffer:inBuffer queue:inAQ];
}

- (void) processOutputBuffer: (AudioQueueBufferRef) buffer queue:(AudioQueueRef) queue {
    OSStatus err;
    if (isPlaying == YES) {
        [outputLock lock];
        if (outputBuffersToRewrite > 0) {
            outputBuffersToRewrite--;
            [self generateTone:buffer];
        }
        err = AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
        if (err == 560030580) { // '!act' (kAudioSessionNotActiveError): the audio session is no longer active, e.g. because the Music app took over
            isPlaying = NO;
        } else if (err != noErr) {
            NSLog(@"AudioQueueEnqueueBuffer() error %d", err);
        }
        [outputLock unlock];
    } else {
        err = AudioQueueStop (queue, NO);
        if (err != noErr) NSLog(@"AudioQueueStop() error: %d", err);
    }
}

-(void) generateTone: (AudioQueueBufferRef) buffer {
    if (outputFrequency == 0.0) {
        memset(buffer->mAudioData, 0, buffer->mAudioDataBytesCapacity);
        buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity;
    } else {
        // Make the buffer length a multiple of the wavelength for the output frequency.
        int sampleCount = buffer->mAudioDataBytesCapacity / sizeof (SInt16);
        double bufferLength = sampleCount;
        double wavelength = sampleRate / outputFrequency;
        double repetitions = floor (bufferLength / wavelength);
        if (repetitions > 0.0) {
            sampleCount = round (wavelength * repetitions);
        }

        double      x, y;
        double      sd = 1.0 / sampleRate;
        double      amp = 0.9;
        double      max16bit = SHRT_MAX;
        int i;
        SInt16 *p = buffer->mAudioData;

        for (i = 0; i < sampleCount; i++) {
            x = i * sd * outputFrequency;
            switch (outputWaveform) {
                case kSine: 
                    y = sin (x * 2.0 * M_PI);
                    break;
                case kTriangle:
                    x = fmod (x, 1.0);
                    if (x < 0.25)
                        y = x * 4.0; // up 0.0 to 1.0
                    else if (x < 0.75)
                        y = (1.0 - x) * 4.0 - 2.0; // down 1.0 to -1.0
                    else 
                        y = (x - 1.0) * 4.0; // up -1.0 to 0.0
                    break;
                case kSawtooth:
                    y  = 0.8 - fmod (x, 1.0) * 1.8;
                    break;
                case kSquare:
                    y = (fmod(x, 1.0) < 0.5)? 0.7: -0.7;
                    break;
                default: y = 0; break;
            }
            p[i] = y * max16bit * amp;
        }

        buffer->mAudioDataByteSize = sampleCount * sizeof (SInt16);
    }
}

Something to watch out for is that your callback will be called on a non-main thread, so you have to take care of thread safety with locks, mutexes, or other techniques.
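
For example, here is a minimal sketch of how the outputLock used above could be set up and used (assuming it is an NSLock instance variable created once, e.g. in init, and that the main thread takes the same lock whenever it changes the tone parameters):

outputLock = [[NSLock alloc] init];   // in init

// On the main thread, when the user changes the tone:
[outputLock lock];
outputFrequency = 440.0;
outputWaveform = kSine;
outputBuffersToRewrite = 3;   // ask the callback to refill all three buffers
[outputLock unlock];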


This is a C# (MonoTouch) version of the same sample from @lucius:

    unsafe void SetupAudio ()
    {
        AudioSession.Initialize ();
        AudioSession.Category = AudioSessionCategory.MediaPlayback;

        sampleRate = AudioSession.CurrentHardwareSampleRate;
        var format = new AudioStreamBasicDescription () {
            SampleRate = sampleRate,
            Format = AudioFormatType.LinearPCM,
            FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
            BitsPerChannel = 16,
            ChannelsPerFrame = 1,
            BytesPerFrame = 2,
            BytesPerPacket = 2, 
            FramesPerPacket = 1,
        };

        var queue = new OutputAudioQueue (format);
        var bufferByteSize = (sampleRate > 16000)? 2176 : 512; // 40.5 Hz : 31.25 Hz 
        var buffers = new AudioQueueBuffer* [numBuffers];
        for (int i = 0; i < numBuffers; i++){
            queue.AllocateBuffer (bufferByteSize, out buffers [i]);
            GenerateTone (buffers [i]);
            queue.EnqueueBuffer (buffers [i], null);
        }
        queue.OutputCompleted += (object sender, OutputCompletedEventArgs e) => {
            queue.EnqueueBuffer (e.UnsafeBuffer, null);
        };

        queue.Start ();
    }

This is the tone generator:

    unsafe void GenerateTone (AudioQueueBuffer *buffer)
    {
        // Make the buffer length a multiple of the wavelength for the output frequency.
        uint sampleCount = buffer->AudioDataBytesCapacity / 2;
        double bufferLength = sampleCount;
        double wavelength = sampleRate / outputFrequency;
        double repetitions = Math.Floor (bufferLength / wavelength);
        if (repetitions > 0) 
            sampleCount = (uint)Math.Round (wavelength * repetitions);

        double      x, y;
        double      sd = 1.0 / sampleRate;
        double      amp = 0.9;
        double      max16bit = Int16.MaxValue;
        int i;
        short *p = (short *) buffer->AudioData;

        for (i = 0; i < sampleCount; i++) {
            x = i * sd * outputFrequency;
            switch (outputWaveForm) {
                case WaveForm.Sine: 
                    y = Math.Sin (x * 2.0 * Math.PI);
                    break;
                case WaveForm.Triangle:
                    x = x % 1.0;
                    if (x < 0.25)
                        y = x * 4.0; // up 0.0 to 1.0
                    else if (x < 0.75)
                        y = (1.0 - x) * 4.0 - 2.0; // down 1.0 to -1.0
                    else 
                        y = (x - 1.0) * 4.0; // up -1.0 to 0.0
                    break;
                case WaveForm.Sawtooth:
                    y  = 0.8 - (x % 1.0) * 1.8;
                    break;
                case WaveForm.Square:
                    y = ((x % 1.0) < 0.5)? 0.7: -0.7;
                    break;
                default: y = 0; break;
            }
            p[i] = (short)(y * max16bit * amp);
        }
        buffer->AudioDataByteSize = sampleCount * 2;
    }

You also want these definitions:

    enum WaveForm {
        Sine, Triangle, Sawtooth, Square
    }
    WaveForm outputWaveForm;
    const float outputFrequency = 220;
    const int numBuffers = 3;
    double sampleRate;


High level: use AVAudioPlayer (https://github.com/hollance/AVBufferPlayer); see the sketch after this list.

Mid level: audio queues. trailsinthesand.com/exploring-iphone-audio-part-1/ used to get you going nicely. Note: I removed the http:// so the old link could stay here, but the domain now redirects to an unrelated site, so it has apparently changed hands.

Low level: alternatively, you can drop down a level and do it with audio units: http://cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
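
To illustrate the high-level route mentioned above (a rough sketch, not taken from the linked AVBufferPlayer project; the helper name ToneWAVData is made up): generate the samples yourself, wrap them in a WAV header in memory, and hand the resulting NSData to AVAudioPlayer.

#import <AVFoundation/AVFoundation.h>

// Build an in-memory WAV file containing the requested duration of a 16-bit mono sine tone.
static NSData *ToneWAVData (double frequency, double sampleRate, double seconds) {
    uint32_t sampleCount = (uint32_t)(sampleRate * seconds);
    uint32_t dataSize = sampleCount * 2;                      // 16-bit mono samples
    uint32_t chunkSize = 36 + dataSize, fmtSize = 16;
    uint32_t rate = (uint32_t)sampleRate, byteRate = rate * 2;
    uint16_t pcm = 1, channels = 1, blockAlign = 2, bitsPerSample = 16;

    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + dataSize];

    // 44-byte RIFF/WAVE header (little-endian, linear PCM, mono, 16-bit)
    [wav appendBytes:"RIFF" length:4];  [wav appendBytes:&chunkSize length:4];
    [wav appendBytes:"WAVE" length:4];  [wav appendBytes:"fmt " length:4];
    [wav appendBytes:&fmtSize length:4];
    [wav appendBytes:&pcm length:2];        [wav appendBytes:&channels length:2];
    [wav appendBytes:&rate length:4];       [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2]; [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];  [wav appendBytes:&dataSize length:4];

    // Sample data: y = sin(2 * pi * f * t), scaled to 16 bits with a little headroom
    for (uint32_t i = 0; i < sampleCount; i++) {
        SInt16 s = (SInt16)(sin (2.0 * M_PI * frequency * i / sampleRate) * 0.9 * SHRT_MAX);
        [wav appendBytes:&s length:2];
    }
    return wav;
}

// Usage (keep a strong reference to the player so it isn't deallocated while playing):
// NSError *error = nil;
// AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:ToneWAVData (440.0, 44100.0, 1.0)
//                                                       error:&error];
// [player play];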
