Hello All,
Let me begin by saying that I am a total newbie to WebRTC, so if I say anything half-witted, please bear with me.
The app reads WAV files and feeds the samples to the AEC module, and a WAV writer saves the output of the echo cancellation.
I have two inputs:
1) Speaker input (rendered / far-end signal)
2) Mic input (captured / near-end signal)
And one output:
1) Mic output, which is the result of the echo cancellation.
Now, with the Speex AEC module, I see well-behaved results. Please have a look at the attached file; it does a good job of cancelling the rendered signal from the captured signal.
However, when I pass the same files through WebRTC AEC3, I get a completely flat output signal (the result of AEC3 is also attached). It seems to be cancelling out the original mic signal as well.
I am using the following parameters (extracted from the WAV file reader):
Sample rate = 8000
Channel = 1
Bits/Sample = 16
NumOfSamples = 270399
Samples fed to AEC at a time = (10 * SampleRate) / 1000 = 80 (one 10 ms block)
This is the initialization:
m_streamConfig.set_sample_rate_hz(sampleRate);
m_streamConfig.set_num_channels(CHANNEL_COUNT);
// Create a temporary buffer to convert our RTOP input audio data into the webRTC required AudioBuffer.
m_tempBuffer[0] = static_cast<float*> (malloc(sizeof(float) * m_samplesPerBlock));
// Create AEC3.
m_echoCanceller3.reset(new EchoCanceller3(m_echoCanceller3Config, sampleRate, true)); //use high pass filter is true
// Create noise suppression.
m_noiseSuppression.reset(new NoiseSuppressionImpl(&m_criticalSection));
m_noiseSuppression->Initialize(CHANNEL_COUNT, sampleRate);

And this is how I am calling the APIs:

//auto capturedAudioBuffer = CreateAudioBuffer(&micSamples[index]);
auto renderAudioBuffer = CreateAudioBuffer(spkSamples);
auto capturedAudioBuffer = CreateAudioBuffer(micSamples);
// Analyze capture buffer
m_echoCanceller3->AnalyzeCapture(capturedAudioBuffer.get());
// Analyze render buffer
m_echoCanceller3->AnalyzeRender(renderAudioBuffer.get());
// Cancel echo
m_echoCanceller3->ProcessCapture(capturedAudioBuffer.get(), false);
// false = assuming the analog level has not changed. To detect level changes,
// we would need the gain controller and the previously rendered audio's analog level.
// Copy the Captured audio out
capturedAudioBuffer->CopyTo(m_streamConfig, m_tempBuffer);
//arrayCopy_32f(m_tempBuffer[0], &micOut[index], m_samplesPerBlock);
arrayCopy_32f(m_tempBuffer[0], micOut, m_samplesPerBlock);
Also, for the parameters (delay, echo model, reverb, noise floor, etc.) I am using all default values.
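If you do want to move away from the defaults, the usual starting point is the delay group in EchoCanceller3Config. A hedged sketch only; the field names below are from one WebRTC revision and may differ in yours, so verify against api/audio/echo_canceller3_config.h before using:

```cpp
// Sketch only: adjust the config before constructing EchoCanceller3.
// Field names vary between WebRTC revisions; verify against
// api/audio/echo_canceller3_config.h in your checkout.
webrtc::EchoCanceller3Config cfg;
cfg.delay.default_delay = 5;  // initial render-to-capture delay estimate, in blocks
// Other groups (filter, echo_model, suppressor, ...) have their own
// sub-fields; leave them at defaults unless measurements suggest otherwise.
m_echoCanceller3.reset(new webrtc::EchoCanceller3(cfg, sampleRate, true));
```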
Can anyone tell me what I am doing wrong, or how I can improve the result by adjusting the appropriate parameters?
Thanks.
-Kazi