I'm grabbing the raw audio data from the WebRTC remote audio track (via the AudioTrackSinkInterface::OnData callback) and passing it to my own audio engine (separate from WebRTC) to process and render. Since my audio engine handles rendering, I don't want WebRTC outputting any audio, so I mute the remote track by calling set_enabled(false) on it.
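For context, here's roughly what my capture side looks like. This is a self-contained sketch: the AudioTrackSinkInterface stub below mirrors the OnData signature from WebRTC's api/media_stream_interface.h (the real class is webrtc::AudioTrackSinkInterface), and EngineForwardingSink / the internal buffer are my own names, not WebRTC's:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for webrtc::AudioTrackSinkInterface (api/media_stream_interface.h);
// the OnData signature matches the real interface.
class AudioTrackSinkInterface {
 public:
  virtual ~AudioTrackSinkInterface() = default;
  virtual void OnData(const void* audio_data,
                      int bits_per_sample,
                      int sample_rate,
                      size_t number_of_channels,
                      size_t number_of_frames) = 0;
};

// Sink that forwards the remote track's PCM into my external audio engine.
// Here the "engine" is just a vector; in my real code it's an ingest queue.
class EngineForwardingSink : public AudioTrackSinkInterface {
 public:
  void OnData(const void* audio_data, int /*bits_per_sample*/, int sample_rate,
              size_t number_of_channels, size_t number_of_frames) override {
    // WebRTC delivers interleaved 16-bit PCM in 10 ms chunks.
    const size_t samples = number_of_frames * number_of_channels;
    const int16_t* pcm = static_cast<const int16_t*>(audio_data);
    buffer_.insert(buffer_.end(), pcm, pcm + samples);
    last_sample_rate_ = sample_rate;
  }
  const std::vector<int16_t>& buffer() const { return buffer_; }
  int last_sample_rate() const { return last_sample_rate_; }

 private:
  std::vector<int16_t> buffer_;
  int last_sample_rate_ = 0;
};
```

In the real code the sink is registered on the remote track with AddSink(), and the track itself is then disabled as described above.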
Because the track is disabled, it outputs silence, so AEC3 effectively has nothing to cancel against (obviously). My question: is it somehow possible to
use WebRTC's AEC3 but apply it to the rendered output of my audio engine instead? That is, could I write my engine's rendered audio back to the remote audio track somehow, so that it serves as the render (far-end) input to the AEC3 algorithm?
I suppose this would also mean I could no longer disable the remote audio track (since I'd be writing data to it), so I'd have to let WebRTC do the audio output and silence my audio engine instead (which is fine, provided AEC3 can work on the 48 kHz stereo data my engine currently outputs). Unless there's a way for me to have my cake and eat it too: write audio data to the WebRTC remote audio track, but still silence/mute WebRTC's audio output somehow?
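One practical detail I'd have to deal with if I feed the engine output into the audio processing path myself: WebRTC's AudioProcessing consumes audio in 10 ms frames, while my engine renders in arbitrary block sizes. Below is a small self-contained sketch of the re-chunking I'd need before each far-end submission; ChunkForApm and the constants are my own names (my engine's 48 kHz stereo format), not WebRTC API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// My engine's output format (48 kHz stereo, interleaved 16-bit PCM).
constexpr int kRate = 48000;
constexpr size_t kChannels = 2;
constexpr size_t kFramesPer10ms = kRate / 100;                  // 480 frames
constexpr size_t kSamplesPer10ms = kFramesPer10ms * kChannels;  // 960 samples

// Splits interleaved engine output into complete 10 ms chunks, each of which
// would be handed to the far-end/render side of audio processing. Any
// leftover partial chunk stays in `pending` to be prepended to the next
// render callback's output.
std::vector<std::vector<int16_t>> ChunkForApm(std::vector<int16_t>& pending) {
  std::vector<std::vector<int16_t>> chunks;
  size_t offset = 0;
  while (pending.size() - offset >= kSamplesPer10ms) {
    chunks.emplace_back(pending.begin() + offset,
                        pending.begin() + offset + kSamplesPer10ms);
    offset += kSamplesPer10ms;
  }
  pending.erase(pending.begin(), pending.begin() + offset);
  return chunks;
}
```

So after each engine render I'd append the rendered samples to `pending`, call ChunkForApm, and submit each resulting chunk as one far-end frame.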
Hope this makes sense; any insight would be appreciated. Thanks!