Audio processing for iPhone: which layer to use?


I want to apply an audio filter to the user's voice on iPhone. The filter is quite heavy and needs many audio samples to achieve the desired quality. I do not want to apply the filter in real time, but I do want near-realtime performance: the processing should happen in parallel with the recording, as the necessary samples are collected, so that when the user stops recording they can hear the distorted sound after only a few seconds. My questions are:

1. Which is the right technology layer for this task, e.g. audio units?
2. Which steps are involved?
3. Which are the key concepts and API methods to use?
4. I want to capture the user's voice. Which are the right recording settings for this? If my filter alters the frequency, should I use a wider range?
5. How can I collect the necessary samples for my filter, and how should I handle the audio data? I mean, depending on the recording settings, how is the data packed?
6. How can I write the final audio recording to a file?

Thanks in advance!


If you find a delay of over a hundred milliseconds acceptable, you can use the Audio Queue API, which is a bit simpler than the RemoteIO Audio Unit, for both capture and playback. You can process the buffers in your own NSOperationQueue as they come in from the audio queue, and either save the processed results to a file or just keep them in memory if there is room.
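A minimal sketch of that flow in Swift, assuming AudioToolbox for capture and an OperationQueue for the filter work (the buffer sizes, format, and the applyFilter placeholder are illustrative assumptions, not part of the original answer):

```swift
import AudioToolbox
import Foundation

// Serial background queue that runs the heavy filter. It is a global because
// the C callback below cannot capture local context.
let processingQueue = OperationQueue()

// Called by the audio queue each time a capture buffer fills.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    // Copy the samples out before handing the buffer back to the queue.
    let samples = Data(bytes: buffer.pointee.mAudioData,
                       count: Int(buffer.pointee.mAudioDataByteSize))
    processingQueue.addOperation {
        // applyFilter(samples) -- placeholder for the expensive filter pass
    }
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

// 16-bit mono linear PCM at 44.1 kHz; see the note on question 4 below.
var format = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
    mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

// Keep a few buffers in flight so capture never starves while one is copied.
for _ in 0..<3 {
    var buffer: AudioQueueBufferRef?
    AudioQueueAllocateBuffer(queue!, 44100, &buffer)  // 44100 bytes ≈ 0.5 s
    AudioQueueEnqueueBuffer(queue!, buffer!, 0, nil)
}
AudioQueueStart(queue!, nil)
```

On older SDKs the same structure would typically be written in C or Objective-C, but the shape is identical: capture queue feeds buffers, a worker queue runs the filter, and the results are accumulated for playback or saving.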

For Question 4: If your audio filter is linear, then you won't need any wider frequency range. If you are doing non-linear filtering, all bets are off.
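For writing the final recording out (question 6), one option is AVAudioFile from AVFoundation. This is my own suggestion rather than something the answer above specifies, and the file name, format, and fill step are illustrative placeholders:

```swift
import AVFoundation

// Write filtered 16-bit mono PCM to a CAF file in the temp directory.
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("processed.caf")
let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                           sampleRate: 44100, channels: 1, interleaved: true)!
let file = try AVAudioFile(forWriting: url, settings: format.settings,
                           commonFormat: .pcmFormatInt16, interleaved: true)

let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 44100)!
// ... copy the filtered samples into buffer.int16ChannelData![0] ...
buffer.frameLength = buffer.frameCapacity
try file.write(from: buffer)  // append one buffer per processed chunk
```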
