I'm encountering a specific echo cancellation issue in my native macOS application using the M125 branch of WebRTC. I'm hoping to get some guidance on how to debug it further.
- Client A establishes a PeerConnection with Client B. The call is stable. There is no audible echo for either Client A or Client B.
- While the A-B call is active, Client A establishes a second, separate PeerConnection with Client C. Client A hears the mixed audio from B and C. Both B and C hear A's audio. The AEC works correctly; there is no echo for any of the three parties.
- Client A closes the PeerConnection with Client C. The original call between A and B remains active.
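For reference, both PeerConnections on Client A are created from the same PeerConnectionFactory, so (as far as I understand the default wiring) they share one AudioDeviceModule, one AudioMixer, and one AudioProcessing instance. A condensed sketch of the setup, with observers, error handling, and lifetime management trimmed (names are illustrative, not an exact excerpt of my code):

```cpp
#include "api/audio_codecs/builtin_audio_decoder_factory.h"
#include "api/audio_codecs/builtin_audio_encoder_factory.h"
#include "api/create_peerconnection_factory.h"
#include "api/peer_connection_interface.h"
#include "api/video_codecs/builtin_video_decoder_factory.h"
#include "api/video_codecs/builtin_video_encoder_factory.h"
#include "modules/audio_mixer/audio_mixer_impl.h"
#include "modules/audio_processing/include/audio_processing.h"
#include "rtc_base/thread.h"

void SetUpClientA(webrtc::PeerConnectionObserver* observer_b,
                  webrtc::PeerConnectionObserver* observer_c) {
  // In the real app the threads, factory, and connections outlive this scope;
  // they are shown inline only to keep the sketch short.
  auto network_thread = rtc::Thread::CreateWithSocketServer();
  auto worker_thread = rtc::Thread::Create();
  auto signaling_thread = rtc::Thread::Create();
  network_thread->Start();
  worker_thread->Start();
  signaling_thread->Start();

  rtc::scoped_refptr<webrtc::AudioProcessing> apm =
      webrtc::AudioProcessingBuilder().Create();

  // One factory -> one ADM, one AudioMixer, one APM shared by both calls.
  rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory =
      webrtc::CreatePeerConnectionFactory(
          network_thread.get(), worker_thread.get(), signaling_thread.get(),
          /*default_adm=*/nullptr,  // default macOS AudioDeviceModule
          webrtc::CreateBuiltinAudioEncoderFactory(),
          webrtc::CreateBuiltinAudioDecoderFactory(),
          webrtc::CreateBuiltinVideoEncoderFactory(),
          webrtc::CreateBuiltinVideoDecoderFactory(),
          webrtc::AudioMixerImpl::Create(),  // mixes the remote streams from B and C
          apm);

  webrtc::PeerConnectionInterface::RTCConfiguration config;

  // Call 1: A <-> B.
  auto pc_b = factory->CreatePeerConnectionOrError(
      config, webrtc::PeerConnectionDependencies(observer_b));

  // While A <-> B is active, call 2: A <-> C.
  auto pc_c = factory->CreatePeerConnectionOrError(
      config, webrtc::PeerConnectionDependencies(observer_c));

  // Later: tear down only the second call; A <-> B stays up.
  pc_c.value()->Close();
}
```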
On Client A, during the three-party call, I can confirm that the remote audio streams from B and C are being correctly combined by the AudioMixer. The mixed frame is then sent on for playout, and a copy is fed to the AEC as the render (reverse) reference via ProcessReverseAudioFrame().
The relevant code path appears to be the playout path described above, where the mixer output is fed to the reverse stream; a simplified sketch of my understanding follows.
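This is only an illustration of how I read the default wiring (roughly what audio/audio_transport_impl.cc does on the render side); the real code also handles resampling, channel layout, and so on:

```cpp
#include "api/audio/audio_frame.h"
#include "api/audio/audio_mixer.h"
#include "modules/audio_processing/include/audio_frame_proxies.h"
#include "modules/audio_processing/include/audio_processing.h"

// Simplified picture of one 10 ms playout tick on Client A.
void RenderTenMilliseconds(webrtc::AudioMixer* mixer,
                           webrtc::AudioProcessing* apm,
                           size_t num_channels,
                           webrtc::AudioFrame* frame_for_device) {
  // 1) The mixer pulls audio from all connected sources (B and C while the
  //    second call is up, only B after it is closed) and writes the mix.
  mixer->Mix(num_channels, frame_for_device);

  // 2) The mixed frame is handed to the APM as the render ("reverse")
  //    reference that the AEC cancels against. ProcessReverseAudioFrame() is
  //    the helper from audio_frame_proxies.h.
  if (apm) {
    webrtc::ProcessReverseAudioFrame(apm, frame_for_device);
  }

  // 3) The same frame is then returned to the AudioDeviceModule for playout.
}
```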
What are the recommended tools or logging methods for inspecting the internal state of the AudioProcessing module, specifically the AEC component, when a remote stream is removed from the mixer? Is there a specific function I should be calling on the AudioProcessing module to signal this change in the reverse stream's composition?
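In case it clarifies what I'm after: the only introspection I've found so far is AudioProcessing::GetStatistics(), which I was planning to log around the teardown of the second call, roughly like this (field names taken from audio_processing_statistics.h as I read it; the tagging and thresholds are just mine):

```cpp
#include "modules/audio_processing/include/audio_processing.h"
#include "rtc_base/logging.h"

// Log a snapshot of the AEC-related stats from the shared APM instance.
void LogAecState(webrtc::AudioProcessing* apm, const char* tag) {
  const webrtc::AudioProcessingStats stats = apm->GetStatistics();
  RTC_LOG(LS_INFO) << "[aec-debug " << tag << "]"
                   << " erl=" << stats.echo_return_loss.value_or(-999.0)
                   << " erle=" << stats.echo_return_loss_enhancement.value_or(-999.0)
                   << " delay_ms=" << stats.delay_ms.value_or(-1)
                   << " delay_median_ms=" << stats.delay_median_ms.value_or(-1)
                   << " residual_echo_likelihood="
                   << stats.residual_echo_likelihood.value_or(-1.0);
}

// Intended usage: call once per second from a timer, plus immediately before
// and after closing the PeerConnection to C:
//   LogAecState(apm.get(), "before-close-C");
//   pc_c->Close();
//   LogAecState(apm.get(), "after-close-C");
```

I also noticed AttachAecDump()/DetachAecDump() on AudioProcessing and what look like offline tools under modules/audio_processing/test/ (unpack_aecdump, audioproc_f). Is capturing an aecdump across the B+C to B-only transition the recommended way to see what the AEC is doing here, or is there something more direct?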
Does Chrome's multi-party implementation use this same flow?
Any guidance on how to proceed with debugging this would be greatly appreciated.
BRs
Kiran