I am successfully able to get the remote audio stream data (via AudioTrackSinkInterface::OnData), and I'd like to pass this audio data to another audio framework (OpenAL) to play back the remote audio stream plus some other stereo audio on iOS. My problem is that WebRTC -- upon establishing a peer connection -- seems to reconfigure the native audio layer (Core Audio?) to output in mono.
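For reference, here is a simplified sketch of the sink side that is already working for me (class names and the OpenAL plumbing are just illustrative, error handling and threading details are omitted, and the WebRTC header path may differ depending on the checkout):

```cpp
#include <cstdint>
#include <vector>

#include <OpenAL/al.h>

#include "api/media_stream_interface.h"  // AudioTrackSinkInterface (path may differ per checkout)

class RemoteAudioSink : public webrtc::AudioTrackSinkInterface {
 public:
  explicit RemoteAudioSink(ALuint al_source) : al_source_(al_source) {}

  // WebRTC calls this with decoded remote PCM.
  void OnData(const void* audio_data,
              int bits_per_sample,
              int sample_rate,
              size_t number_of_channels,
              size_t number_of_frames) override {
    if (bits_per_sample != 16) return;  // only handling 16-bit PCM here

    const ALenum format =
        number_of_channels == 1 ? AL_FORMAT_MONO16 : AL_FORMAT_STEREO16;
    const ALsizei size = static_cast<ALsizei>(
        number_of_frames * number_of_channels * sizeof(int16_t));

    // Recycle buffers OpenAL has finished with, then queue the new chunk.
    ALint processed = 0;
    alGetSourcei(al_source_, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
      ALuint done = 0;
      alSourceUnqueueBuffers(al_source_, 1, &done);
      free_buffers_.push_back(done);
    }

    ALuint buffer;
    if (!free_buffers_.empty()) {
      buffer = free_buffers_.back();
      free_buffers_.pop_back();
    } else {
      alGenBuffers(1, &buffer);
    }
    alBufferData(buffer, format, audio_data, size, sample_rate);
    alSourceQueueBuffers(al_source_, 1, &buffer);

    ALint state = 0;
    alGetSourcei(al_source_, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING) alSourcePlay(al_source_);
  }

 private:
  ALuint al_source_;
  std::vector<ALuint> free_buffers_;
};

// Attached to the remote track I get from the peer connection, e.g.:
//   remote_audio_track->AddSink(&remote_audio_sink_);
```

This part works fine; the data arrives as expected until I try to stop WebRTC from touching the audio session (see below).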
I know WebRTC cannot play back stereo audio, and that's fine. I don't want WebRTC to output any audio at all -- more specifically, what I'd like is for WebRTC not to configure/tamper with the native audio output (AVAudioSession/Core Audio) and switch it to mono. I tried simply setting isAudioEnabled to NO on the RTCAudioSession, but the catch is that AudioTrackSinkInterface::OnData no longer gets called.
So to summarize: is there a way for me to grab the remote audio data while keeping WebRTC from playing audio / setting Core Audio to mono output?
Any pointers on how/where I should go about modifying the source would be appreciated. Thanks!