
Rendering an audio stream (WASAPI / WinAPI)


I'm currently reading the MSDN documentation on rendering a stream to an audio renderer; in other words, I want to play back the data I captured from my microphone.

http://msdn.microsoft.com/en-us/library/dd316756%28v=vs.85%29.aspx

That page provides an example program.

My problem is that I can't really follow the project flow. I currently have a separate class storing the parameters below, which I obtained from the capture process. These parameters are continuously rewritten as the program captures streaming audio data from the microphone.

BYTE *pData;                // pointer to the captured audio frames (a single BYTE cannot hold a packet)
UINT32 bufferFrameCount;    // number of frames in the captured packet
DWORD flags;                // buffer-status flags from IAudioCaptureClient::GetBuffer()
WAVEFORMATEX *pwfx;         // format of the captured stream

My question is: how does the loadData() function really work? Is it supposed to grab the parameters I'm writing from the capture process? How does the program send the data to the audio renderer and play it through my speakers?


The loadData() function fills the buffer pointed to by pData with audio. The example abstracts the audio source, so it could be anything from a .wav file to the microphone audio you have already captured.
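
To make the flow concrete, here is a condensed sketch of the playback loop from that sample (COM setup, error handling, and the computed wait are omitted; Sleep(10) stands in for the sample's half-buffer-period sleep, and MyAudioSource is the abstract source class, sketched further below):

#include <windows.h>
#include <audioclient.h>

// Condensed from the sample's PlayAudioStream(): each pass asks WASAPI
// for the free region of the endpoint buffer, lets the source fill it,
// then releases it so the engine plays those frames.
void RenderLoop(IAudioClient *pAudioClient,
                IAudioRenderClient *pRenderClient,
                MyAudioSource *pMySource)
{
    UINT32 bufferFrameCount;
    pAudioClient->GetBufferSize(&bufferFrameCount);   // size of the whole endpoint buffer

    DWORD flags = 0;
    while (flags != AUDCLNT_BUFFERFLAGS_SILENT)
    {
        Sleep(10);   // the sample computes this wait from the buffer duration

        UINT32 padding;
        pAudioClient->GetCurrentPadding(&padding);    // frames still queued for playback
        UINT32 numFramesAvailable = bufferFrameCount - padding;

        BYTE *pData;
        // Grab the empty region of the shared buffer...
        pRenderClient->GetBuffer(numFramesAvailable, &pData);

        // ...have the source fill it (this is where loadData() runs)...
        pMySource->LoadData(numFramesAvailable, pData, &flags);

        // ...and hand the frames back to the audio engine for playback.
        pRenderClient->ReleaseBuffer(numFramesAvailable, flags);
    }
}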

So, if you are trying to build from that example, I would implement the MyAudioSource class and have it simply read PCM or float samples from a file whenever loadData() is called. If you then run the program, it should play the audio from the file through the speaker.
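
Here is a rough sketch of what that could look like. The class and method names follow the sample's abstract interface; the raw PCM file name "capture.pcm" and the file-based approach are just placeholder assumptions, and the file's samples must already match the format passed to SetFormat():

#include <windows.h>
#include <audioclient.h>   // AUDCLNT_BUFFERFLAGS_SILENT
#include <cstdio>
#include <cstring>

class MyAudioSource
{
    FILE         *m_file = nullptr;   // hypothetical raw PCM source
    WAVEFORMATEX *m_pwfx = nullptr;   // format handed to us by the caller

public:
    // PlayAudioStream() calls this once with the device's mix format.
    HRESULT SetFormat(WAVEFORMATEX *pwfx)
    {
        m_pwfx = pwfx;
        return (fopen_s(&m_file, "capture.pcm", "rb") == 0) ? S_OK : E_FAIL;
    }

    // Called on every pass of the render loop: fill pData with
    // bufferFrameCount frames, or flag the end of the stream.
    HRESULT LoadData(UINT32 bufferFrameCount, BYTE *pData, DWORD *flags)
    {
        const size_t bytesWanted = (size_t)bufferFrameCount * m_pwfx->nBlockAlign;
        const size_t bytesRead   = fread(pData, 1, bytesWanted, m_file);
        if (bytesRead < bytesWanted)
        {
            // Out of data: pad the tail with silence and tell the loop to stop.
            memset(pData + bytesRead, 0, bytesWanted - bytesRead);
            *flags = AUDCLNT_BUFFERFLAGS_SILENT;
        }
        else
        {
            *flags = 0;
        }
        return S_OK;
    }

    ~MyAudioSource() { if (m_file) fclose(m_file); }
};

To play your live captured data instead of a file, loadData() would read from a shared, thread-safe buffer (for example a ring buffer) that your capture loop writes into.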

