Hi All,
I am currently studying the media data path in WebRTC, and I found that the data paths for audio and video are different.
For video, the capturer module calls VideoCaptureImpl::IncomingFrame, and the captured data is sent to the encoder and the transport layer. I defined a class derived from webrtc::VideoRendererInterface and implemented its RenderFrame(), and the video track delivers the captured video frames to this function. A rough sketch of what I do is shown below.
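This is only a minimal sketch of my renderer; I am assuming the older webrtc::VideoRendererInterface declared in mediastreaminterface.h (with SetSize() and RenderFrame(const cricket::VideoFrame*)), and the header path and exact signature may differ in other revisions:

```cpp
// Sketch of my video sink. Assumes the old VideoRendererInterface API;
// newer trees use rtc::VideoSinkInterface<webrtc::VideoFrame> instead.
#include "talk/app/webrtc/mediastreaminterface.h"  // path may be webrtc/api/... in newer trees

class MyVideoRenderer : public webrtc::VideoRendererInterface {
 public:
  // Some revisions also declare SetSize(); kept here for compatibility.
  virtual void SetSize(int width, int height) {}

  // Called by the video track for every frame it delivers.
  virtual void RenderFrame(const cricket::VideoFrame* frame) {
    // Hand the frame to my own rendering / processing code here.
  }
};

// Attached to the video track obtained from the stream:
//   video_track->AddRenderer(&my_renderer);
```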
But for audio (taking Linux as an example), the audio device module contains two threads: PlayThreadProcess() and RecThreadProcess().
My understanding is that RecThreadProcess() gets the raw data captured by the microphone and passes it to the encoder and the transport layer to be sent out. On the receiving side, I get the remote media stream via OnAddStream() and retrieve the audio track from the stream. I defined a class derived from AudioTrackSinkInterface, implemented its OnData() function, and called AudioTrackInterface::AddSink() to register my sink implementation (sketch below). However, after the call is set up I do not receive any audio data in OnData(), unlike the video frames I receive in RenderFrame().
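Here is roughly what my audio sink looks like; I am assuming the OnData() signature declared in mediastreaminterface.h of the revision I am using:

```cpp
// Sketch of my audio sink. OnData() parameters assumed from the
// AudioTrackSinkInterface declaration in mediastreaminterface.h.
#include "talk/app/webrtc/mediastreaminterface.h"  // path may be webrtc/api/... in newer trees

class MyAudioSink : public webrtc::AudioTrackSinkInterface {
 public:
  void OnData(const void* audio_data,
              int bits_per_sample,
              int sample_rate,
              size_t number_of_channels,
              size_t number_of_frames) override {
    // I expected to get the decoded PCM of the remote track here,
    // but this callback is never invoked after the call is set up.
  }
};

// Registered on the track taken from the stream in OnAddStream():
//   audio_track->AddSink(&my_sink);
```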
Is there anything wrong with my understanding?
Also, the video callback RenderFrame() gives me a video frame object, but the OnData() interface only defines a raw byte array. Why is it not an AudioFrame?
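For what it's worth, this is how I planned to consume that raw buffer once OnData() fires. It assumes the data is interleaved 16-bit PCM (bits_per_sample == 16); I have not verified that WebRTC always delivers it in that format:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical helper for walking the buffer handed to OnData(),
// assuming interleaved 16-bit PCM samples.
void ProcessPcm16(const void* audio_data,
                  size_t number_of_channels,
                  size_t number_of_frames) {
  const int16_t* samples = static_cast<const int16_t*>(audio_data);
  const size_t total = number_of_frames * number_of_channels;
  for (size_t i = 0; i < total; ++i) {
    // samples[i] is one 16-bit sample; channels are interleaved.
  }
}
```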
Thanks in advance for your guidance.
BRs/Gavin