Hi all,
I am trying to implement a custom audio source, i.e. the ability to inject custom raw PCM audio data (obtained or generated in an opaque way) as the source for one or more audio tracks. The goal is to be able to select this custom audio source per track: it should not replace the default audio source (the microphone) but be an alternative to it, since other tracks might still want to use the microphone in parallel, or even use multiple microphones.
Looking around, it seems the only documented approach so far is to create a custom audio device module (ADM) and inject it into the peer connection factory.
Unfortunately that solution doesn't work for us: injecting a custom ADM replaces the default microphone-based ADM, and therefore makes ALL tracks use it instead of the default one. We need to be able to select, per track, which source to use.
I had originally thought that webrtc::AudioSourceInterface would be the way to go, in the same way that we currently implement custom video sources via webrtc::VideoTrackSourceInterface (custom video sources work well). However, despite the documentation saying webrtc::AudioSourceInterface can be shared among multiple tracks, in practice there is no audio data manipulation in that interface. In fact, I am wondering what the purpose of that interface is in the first place, if tracks ignore it and simply pull their audio data from the (unique) ADM configured on the peer connection factory?
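To make the comparison concrete, here is roughly the push model we rely on for video, transposed to audio. This is a minimal, self-contained sketch: `AudioSink`, `PushAudioSource`, and `PushFrame` are hypothetical stand-ins (loosely modeled on AudioTrackSinkInterface::OnData()), not real WebRTC API. It only illustrates the per-source fan-out that, as far as I can tell, webrtc::AudioSourceInterface does not perform for locally captured audio.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a per-track audio consumer; the signature
// mirrors the shape of AudioTrackSinkInterface::OnData() (10 ms of
// interleaved PCM), but this is illustration only.
class AudioSink {
 public:
  virtual ~AudioSink() = default;
  virtual void OnData(const int16_t* audio, size_t samples_per_channel,
                      int sample_rate_hz, size_t num_channels) = 0;
};

// A push-model source in the spirit of VideoTrackSourceInterface:
// the application produces frames and fans them out to attached sinks,
// independently of any global ADM.
class PushAudioSource {
 public:
  void AddSink(AudioSink* sink) { sinks_.push_back(sink); }
  void RemoveSink(AudioSink* sink) {
    sinks_.erase(std::remove(sinks_.begin(), sinks_.end(), sink),
                 sinks_.end());
  }
  // The application injects one PCM frame; every attached sink gets it.
  void PushFrame(const std::vector<int16_t>& pcm, int sample_rate_hz,
                 size_t num_channels) {
    const size_t samples_per_channel = pcm.size() / num_channels;
    for (AudioSink* sink : sinks_) {
      sink->OnData(pcm.data(), samples_per_channel, sample_rate_hz,
                   num_channels);
    }
  }

 private:
  std::vector<AudioSink*> sinks_;
};
```

With something like this, each track could simply attach to whichever source it wants, which is exactly the selection granularity we are missing on the audio side.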
So, is it currently impossible to have two different audio sources and have tracks use both at the same time (some tracks using one source, some the other)?
The only way out I can see is to implement a custom "dispatch" ADM that knows about all tracks and can dynamically connect each one to the correct source. That seems like a lot of work, and I am surprised there is no simpler way to achieve this.
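To show what I mean by a "dispatch" ADM, here is a minimal, self-contained sketch of the routing core. All names (`DispatchAudioSource`, `PcmProvider`, `PullFrame`, the track ids) are hypothetical and only illustrate the idea: the factory would see a single module, which internally pulls each track's PCM from a per-track provider and falls back to the default (microphone) provider otherwise. The actual ADM plumbing (threading, device callbacks, 10 ms cadence) is omitted.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A provider returns the next `samples` PCM samples for one track.
using PcmProvider = std::function<std::vector<int16_t>(size_t samples)>;

// Hypothetical core of a "dispatch" module: one object registered as
// the single audio source, internally routing per track.
class DispatchAudioSource {
 public:
  explicit DispatchAudioSource(PcmProvider default_provider)
      : default_provider_(std::move(default_provider)) {}

  // Route a specific track to a custom provider (e.g. generated PCM).
  void SetProvider(const std::string& track_id, PcmProvider provider) {
    providers_[track_id] = std::move(provider);
  }

  // Called from the capture path: pull the next frame for a track.
  std::vector<int16_t> PullFrame(const std::string& track_id,
                                 size_t samples) {
    auto it = providers_.find(track_id);
    if (it != providers_.end()) return it->second(samples);
    return default_provider_(samples);  // fall back to the microphone
  }

 private:
  PcmProvider default_provider_;
  std::map<std::string, PcmProvider> providers_;
};
```

The routing itself is trivial; the work I am worried about is everything around it (making the module aware of track identities and wiring it into the capture pipeline), which is why I am asking whether a simpler mechanism already exists.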
Thanks