Echo in native macOS application with M125


kiranr...@gmail.com

Sep 5, 2025, 5:31:55 AM
to discuss-webrtc
Hi All,

I'm encountering a specific echo cancellation issue in my native macOS application using the M125 branch of WebRTC. I'm hoping to get some guidance on how to debug it further.


Environment
  • WebRTC Branch: refs/branch-heads/6422 (M125)
  • Platform: native macOS application
  • Topology: Client A (a native application) manages two separate PeerConnection objects to establish concurrent calls with two remote peers (Client B and Client C, both web clients).
Scenario & Steps to Reproduce
  1. Client A establishes a PeerConnection with Client B. The call is stable, and there is no audible echo for either Client A or Client B.

  2. While the A-B call is active, Client A establishes a second, separate PeerConnection with Client C. Client A hears the mixed audio from B and C, and both B and C hear A's audio. The AEC works correctly; there is no echo for any of the three parties.

  3. Client A closes the PeerConnection with Client C. The original call between A and B remains active.

    • Actual Result: Client B begins to hear an echo of their own voice. Client A does not hear an echo.
    • Expected Result: The call between A and B should revert to its initial state, with no echo for either party.

On Client A, during the three-party call, I can confirm that the remote audio streams from B and C are being correctly combined by the AudioMixer. This mixed audio is then passed to the audio processing module for playout, and a copy is sent to the AEC via ProcessReverseAudioFrame().
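For context, the mixing step conceptually sums the per-source 10 ms frames sample-by-sample and saturates to the int16 range, so the reverse-stream signal the AEC sees is B+C during the three-party call and B alone after C leaves. A minimal self-contained sketch of that idea (toy code, not the actual webrtc::AudioMixerImpl, which additionally ramps, limits, and selects sources):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy illustration of what a mixer conceptually does: sum each source's
// frame sample-by-sample, clamping the result to the int16_t range.
// It only shows why the mixed frame fed to ProcessReverseAudioFrame()
// changes in composition when a source is removed.
std::vector<int16_t> MixFrames(
    const std::vector<std::vector<int16_t>>& sources) {
  if (sources.empty()) return {};
  std::vector<int16_t> mixed(sources[0].size(), 0);
  for (size_t i = 0; i < mixed.size(); ++i) {
    int32_t sum = 0;
    for (const auto& src : sources) sum += src[i];  // accumulate in 32 bits
    mixed[i] = static_cast<int16_t>(
        std::clamp<int32_t>(sum, INT16_MIN, INT16_MAX));  // saturate
  }
  return mixed;
}
```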

The relevant code path appears to be AudioTransportImpl::NeedMorePlayData() in
https://source.chromium.org/chromium/chromium/src/+/refs/tags/125.0.6422.0:third_party/webrtc/audio/audio_transport_impl.cc:


int32_t AudioTransportImpl::NeedMorePlayData(const size_t nSamples,
                                             const size_t nBytesPerSample,
                                             const size_t nChannels,
                                             const uint32_t samplesPerSec,
                                             void* audioSamples,
                                             size_t& nSamplesOut,
                                             int64_t* elapsed_time_ms,
                                             int64_t* ntp_time_ms) {
  TRACE_EVENT0("webrtc", "AudioTransportImpl::SendProcessedData");
  RTC_DCHECK_EQ(sizeof(int16_t) * nChannels, nBytesPerSample);
  RTC_DCHECK_GE(nChannels, 1);
  RTC_DCHECK_LE(nChannels, 2);
  RTC_DCHECK_GE(
      samplesPerSec,
      static_cast<uint32_t>(AudioProcessing::NativeRate::kSampleRate8kHz));

  // 100 = 1 second / data duration (10 ms).
  RTC_DCHECK_EQ(nSamples * 100, samplesPerSec);
  RTC_DCHECK_LE(nBytesPerSample * nSamples * nChannels,
                AudioFrame::kMaxDataSizeBytes);

  mixer_->Mix(nChannels, &mixed_frame_);
  *elapsed_time_ms = mixed_frame_.elapsed_time_ms_;
  *ntp_time_ms = mixed_frame_.ntp_time_ms_;

  if (audio_processing_) {
    const auto error =
        ProcessReverseAudioFrame(audio_processing_, &mixed_frame_);
    RTC_DCHECK_EQ(error, AudioProcessing::kNoError);
  }

  nSamplesOut = Resample(mixed_frame_, samplesPerSec, &render_resampler_,
                         static_cast<int16_t*>(audioSamples));
  RTC_DCHECK_EQ(nSamplesOut, nChannels * nSamples);
  return 0;
}
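As a side note on the RTC_DCHECKs above: they encode the 10 ms frame convention, i.e. the per-channel sample count times 100 must equal the sample rate, and the 16-bit payload must fit in the AudioFrame buffer. A small self-contained check of those invariants (the kMaxDataSizeBytes constant here is an assumed value mirroring AudioFrame::kMaxDataSizeBytes, used purely for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Assumed stand-in for AudioFrame::kMaxDataSizeBytes (2 bytes * 3840 samples).
constexpr size_t kMaxDataSizeBytes = 7680;

// Mirrors the frame-size invariants asserted in NeedMorePlayData():
// one callback = one 10 ms frame, so n_samples * 100 == sample rate,
// and the payload (as checked by the original DCHECK) must fit the buffer.
bool FrameInvariantsHold(size_t n_samples, size_t n_channels,
                         uint32_t samples_per_sec) {
  const size_t n_bytes_per_sample = sizeof(int16_t) * n_channels;
  return n_samples * 100 == samples_per_sec &&
         n_bytes_per_sample * n_samples * n_channels <= kMaxDataSizeBytes;
}
```

For example, 48 kHz stereo gives 480 samples per channel per 10 ms callback, which satisfies both checks.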


What are the recommended tools or logging methods for inspecting the internal state of the AudioProcessing module, specifically the AEC component, when a remote stream is removed from the mixer? Is there a specific function I should be calling on the AudioProcessing module to signal this change in the reverse stream's composition?
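One cheap instrumentation step I am considering in the meantime: log the properties of mixed_frame_ immediately before the ProcessReverseAudioFrame() call and watch for a discontinuity (sample rate, channel count, muted flag) at the moment Client C disconnects, since the AEC's adaptation could be sensitive to such a change. A hedged, self-contained sketch, where FrameProps is a hypothetical stand-in for the relevant AudioFrame fields (sample_rate_hz_, num_channels_, muted()):

```cpp
#include <cstdio>

// Hypothetical stand-in for the AudioFrame fields of interest; in the real
// code path the values would be read from mixed_frame_ instead.
struct FrameProps {
  int sample_rate_hz;
  size_t num_channels;
  bool muted;
  bool operator==(const FrameProps& o) const {
    return sample_rate_hz == o.sample_rate_hz &&
           num_channels == o.num_channels && muted == o.muted;
  }
};

// Returns true (and logs) when the reverse frame's properties changed
// relative to the previous callback, i.e. when the reverse-stream
// composition seen by the AEC is discontinuous.
bool ReverseFrameChanged(const FrameProps& current, FrameProps* previous) {
  const bool changed = !(current == *previous);
  if (changed) {
    std::printf("reverse stream changed: %d Hz, %zu ch, muted=%d\n",
                current.sample_rate_hz, current.num_channels,
                static_cast<int>(current.muted));
  }
  *previous = current;
  return changed;
}
```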

Also, does Chrome's multi-party implementation use this same flow?

Any guidance on how to proceed with debugging this would be greatly appreciated.


BRs
Kiran 
