I was playing with JACK, and I noticed that the default audio type JACK_DEFAULT_AUDIO_TYPE
is set to "32 bit float mono audio".
I'm a bit confused: IEEE 754 gives a 32-bit C float a range of approximately ±3.4E+38, and I was wondering what the maximum and minimum "undistorted" amplitudes are that a jack_default_audio_sample_t
can hold with that audio type. For example, if some DSP algorithm gives me samples in the range [0,1], how can I correctly convert between them and JACK's format?
It's pretty common to do signal processing operations in floating point, then scale and cast the results to 16-bit or 24-bit integers before sending them to the DAC. Implementing an IIR filter, for instance, in floating point means you can reduce your sensitivity to coefficient quantization. Or if you're doing FFTs, you get greater dynamic range with floating point calculations.
The usual way of converting is to do x_float = x_int * (1.0/SHRT_MAX)
when the data comes in from the ADC, and y_int = y_float * SHRT_MAX
when sending to the DAC, for 16-bit codecs. For 24-bit codecs, use ADC_MAX = (1 << 23) - 1
in place of SHRT_MAX (the maximum of a signed 24-bit sample, not (1 << 24) - 1).
In the case of JACK, I believe the framework takes care of this conversion for you, so your process callback should see floating-point values in the range ±1.0, and you should feed it back values in the same range.