Hi All,
Here at ZipDX we have a WebRTC-based softphone that runs in Chrome. We have a particular use case (simultaneous interpretation) that requires one person on a call to listen and speak at the same time. They hear English (for example) and speak in Spanish.
Echo cancellation in Chrome historically mangles audio when there's double-talk. The situation described above means that this participant is in constant double-talk.
No problem. We make the interpreter wear a headset, and we disable echo cancellation for their instance of Chrome. Problem solved.
That arrangement works great on Windows, Linux and Chromebooks. However, on OSX it doesn't work at all. Interpreters using Chrome on OSX always introduce massive echo into the call.
We use the exact same audio constraints when setting up an interpreter, regardless of platform. Why would this fail only on a Mac? Where can I look to gain some insight into this?
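For reference, here is a sketch of the kind of constraints we pass to getUserMedia for interpreter clients. The exact values in our production code may differ, and the helper name is just for illustration, but the key point is that echoCancellation (and the other processing flags) are explicitly set to false:

```javascript
// Hypothetical sketch of our interpreter-side audio constraints.
// Echo cancellation is disabled because the interpreter wears a headset.
const interpreterAudioConstraints = {
  audio: {
    echoCancellation: false, // avoid double-talk mangling
    noiseSuppression: false,
    autoGainControl: false
  },
  video: false
};

// In the browser we would then do something like:
// navigator.mediaDevices.getUserMedia(interpreterAudioConstraints)
//   .then(stream => attachStreamToPeerConnection(stream)); // illustrative helper
```

These same constraints go to Chrome on every platform; only on OSX does echo come back.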
Apple, of course, is no help at all: as soon as they learn it's a web app that won't run in Safari, they won't even discuss it.
Many thanks,
Michael Graves