Hi.
In our iOS app, we need to access the video and audio directly while also sending the A/V out over WebRTC. For the video side of things, I can create a custom implementation of RTCVideoCapturer that lets us tap into the AVCaptureSession's video output.
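Roughly, the video tap looks like this (a simplified sketch only; the capture session setup and error handling are omitted, and the class and variable names are just placeholders):

import AVFoundation
import WebRTC

// Custom capturer that receives frames from our AVCaptureSession's video
// data output, lets us process them directly, and forwards the same frames
// into WebRTC via the RTCVideoCapturerDelegate.
class TappedVideoCapturer: RTCVideoCapturer, AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // ... our own direct access to the raw frame happens here ...

        // Wrap the same pixel buffer and hand it on to WebRTC.
        let rtcBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let timeStampNs = Int64(CMTimeGetSeconds(pts) * Double(NSEC_PER_SEC))
        let frame = RTCVideoFrame(buffer: rtcBuffer,
                                  rotation: ._0,
                                  timeStampNs: timeStampNs)
        delegate?.capturer(self, didCapture: frame)
    }
}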
Is there a way to do the same thing for audio? From looking at the SDK source, I can see that the low-level Audio Toolbox APIs are used deep inside the C++ code, but it's not clear how to tap into the flow of audio buffers.
Ideally, I'd like to accomplish this using the Objective-C/Swift interface, but I can dive down into the native C++ if that's the only option.
Thanks.
Eddie Sullivan