I would like to run the WebRTC AEC at a 32 kHz sample rate. After studying the source code, it looks like the microphone data has to be split into a low band (0-8 kHz) and a high band (8-16 kHz) using QMF analysis.
So I'm calling the AEC like this:
WebRtcAec_Process(aecInst, &aecInput[0], 2, &aecOutput[0], 160, 0, 0);
where aecInput[0] is a pointer to 160 samples of low-band data and aecInput[1] is a pointer to 160 samples of high-band data.
The same principle applies to aecOutput: the AEC writes the lower band to aecOutput[0] and the higher band to aecOutput[1]. As a final step, QMF synthesis is used to reconstruct the AEC output signal at a sample rate of 32 kHz.
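For reference, here is a rough sketch of the complete near-end path as I picture it. I am assuming the band split/merge is done with WebRtcSpl_AnalysisQMF and WebRtcSpl_SynthesisQMF from the signal processing library, and that the float AEC API takes samples in int16 range; the function name, buffer names and include paths below are just my own illustration, not code taken from WebRTC:

/* Sketch: one 10 ms near-end frame at 32 kHz (320 samples -> 2 x 160).
 * Include paths may need adjusting for your WebRTC checkout. */
#include <stdint.h>
#include "webrtc/common_audio/signal_processing/include/signal_processing_library.h"
#include "webrtc/modules/audio_processing/aec/include/echo_cancellation.h"

#define FRAME_32K  320   /* samples per 10 ms at 32 kHz */
#define FRAME_BAND 160   /* samples per 10 ms per band  */

/* QMF filter states (persist across frames, zero-initialized).
 * Size 6 matches the splitting filter state in WebRTC; adjust if your version differs. */
static int32_t mic_ana_state1[6], mic_ana_state2[6];
static int32_t syn_state1[6], syn_state2[6];

void ProcessNearEnd(void* aecInst, const int16_t* mic32k, int16_t* out32k) {
  int16_t lo16[FRAME_BAND], hi16[FRAME_BAND];
  float lo[FRAME_BAND], hi[FRAME_BAND];
  float outLo[FRAME_BAND], outHi[FRAME_BAND];
  const float* aecInput[2] = { lo, hi };
  float* aecOutput[2] = { outLo, outHi };
  int16_t outLo16[FRAME_BAND], outHi16[FRAME_BAND];
  size_t i;

  /* Split the 32 kHz mic frame into a 0-8 kHz and an 8-16 kHz band. */
  WebRtcSpl_AnalysisQMF(mic32k, FRAME_32K, lo16, hi16,
                        mic_ana_state1, mic_ana_state2);

  /* The float AEC API expects float samples (still in int16 range). */
  for (i = 0; i < FRAME_BAND; ++i) {
    lo[i] = (float)lo16[i];
    hi[i] = (float)hi16[i];
  }

  /* 2 bands, 160 samples per band; sound card delay and skew set to 0 here. */
  WebRtcAec_Process(aecInst, aecInput, 2, aecOutput, FRAME_BAND, 0, 0);

  for (i = 0; i < FRAME_BAND; ++i) {
    outLo16[i] = (int16_t)outLo[i];
    outHi16[i] = (int16_t)outHi[i];
  }

  /* Recombine the two processed bands into a 32 kHz output frame. */
  WebRtcSpl_SynthesisQMF(outLo16, outHi16, FRAME_BAND, out32k,
                         syn_state1, syn_state2);
}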
My question concerns the far-end (speaker) signal, which I am buffering like this:

WebRtcAec_BufferFarend(aecInst, speakerDataLowBand, 160);

where speakerDataLowBand is the lower band (0-8 kHz) of the original speaker signal, again obtained using QMF analysis.
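A corresponding sketch of the far-end path as I picture it (again purely illustrative; the function name and state arrays are placeholders, and my understanding is that only the low band is handed to the AEC here):

/* Sketch: one 10 ms far-end (speaker) frame at 32 kHz. */
static int32_t spk_ana_state1[6], spk_ana_state2[6];  /* separate QMF state for the speaker path */

void BufferFarEnd(void* aecInst, const int16_t* speaker32k) {
  int16_t lo16[FRAME_BAND], hi16[FRAME_BAND];
  float speakerDataLowBand[FRAME_BAND];
  size_t i;

  /* Same QMF analysis as on the mic side, but with its own filter state. */
  WebRtcSpl_AnalysisQMF(speaker32k, FRAME_32K, lo16, hi16,
                        spk_ana_state1, spk_ana_state2);

  for (i = 0; i < FRAME_BAND; ++i)
    speakerDataLowBand[i] = (float)lo16[i];

  /* WebRtcAec_BufferFarend takes a single buffer, so only the 0-8 kHz band
   * of the speaker signal is buffered; the high band is discarded here. */
  WebRtcAec_BufferFarend(aecInst, speakerDataLowBand, FRAME_BAND);
}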
Is the above correct?
Thank you.