We are having an audio quality problem, and I am wondering whether any of you already has experience with this or has ideas about it.
We are using KMS 6.13.0 (with our own memory leak fix). We still have to run our experiments against KMS 6.14.0 and/or the nightly build.
What we are trying to do is this:
* our own RTSP tool, generating Opus-encoded 48 kHz stereo audio
* an RTSP Player endpoint consuming that audio; because we want low latency and predictable resource consumption, we configure it with networkCache=0 and useEncodedMedia=true
* a media pipeline
* a WebRTC endpoint
* a Firefox or Chrome browser
With this setup, the sound in the browser is very choppy and robotic.
If we configure the Player endpoint with useEncodedMedia=false, allowing Kurento to transcode, we get good audio quality in the browser, but delay gradually builds up.
If we record the output of our RTSP tool directly (with an ffmpeg command line), we get good audio quality in the recording.
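For reference, a direct recording like that can be made with a command along these lines (the RTSP URL and duration are placeholders; the point is that `-c:a copy` keeps the Opus stream untouched, so any quality problem heard here would come from the source itself):

```shell
# Record the RTSP Opus stream without re-encoding (URL is a placeholder).
# -c:a copy muxes the Opus packets as-is into an Ogg container.
ffmpeg -rtsp_transport tcp -i rtsp://example.local/stream -c:a copy -t 30 capture.opus
```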
If we create a Player endpoint with an Opus audio file (instead of RTSP), we get good audio quality in the browser.
If we create a packet capture of the WebRTC stream, decrypt it, and extract the Opus data from the capture into a file (using the opusrtp command-line tool), we get good audio quality in the extracted Opus file.
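The extraction step can be sketched like this, assuming the capture has already been decrypted (file names are placeholders):

```shell
# Extract the Opus payload from a decrypted RTP capture.
# opusrtp (from opus-tools) writes the extracted stream to rtpdump.opus by default.
opusrtp --extract decrypted.pcap

# Decode the result to WAV for listening (opusdec is also part of opus-tools).
opusdec rtpdump.opus extracted.wav
```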
This is quite baffling, and we are looking for ideas on how to tackle it.
One thing we noticed in the Chrome webrtc-internals dump is a lot of jitter: on average a higher jitter buffer delay, and many more concealedSamples and concealmentEvents, in the useEncodedMedia=true case than when Kurento is allowed to transcode.
Any insights, ideas, remarks, questions are welcome!
Thanks in advance,