Multiple Audio tracks in peerconnection


Michael Cooley

Nov 21, 2017, 3:25:41 AM
to discuss-webrtc
Hi,

I have the scenario below that I was hoping someone could shed some light on....

Scenario:
I am sending multiple audio tracks within a stream over a WebRTC peer connection. When I receive these tracks in the other browser, I want to play each track out of a different speaker.

What I have tried:
Currently, I take each received track, create a new MediaStream, and addTrack() one of the individual tracks to the new stream. I repeat this process for each track, so if I receive 3 tracks, I create three different MediaStreams and attach a single track to each. I then assign each stream to its own Audio element and set the sinkId of each element to a different speaker. However, all 3 tracks/streams seem to follow the last stream's sinkId (they are all played out of the same speaker).
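Roughly what I'm doing, as a sketch (the deviceIds would come from enumerateDevices(); names here are illustrative):

```javascript
// Split each remote audio track into its own MediaStream and route each
// one to a different output device. deviceIds are placeholders for values
// obtained from navigator.mediaDevices.enumerateDevices().
function playTracksOnSeparateSinks(tracks, deviceIds) {
  return tracks.map((track, i) => {
    const stream = new MediaStream();
    stream.addTrack(track);               // one track per stream
    const audioEl = new Audio();
    audioEl.srcObject = stream;
    audioEl.autoplay = true;
    // setSinkId() is async; it resolves once the element is rerouted.
    return audioEl.setSinkId(deviceIds[i]).then(() => audioEl);
  });
}
```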

I did a similar test, but instead of using the peer connection tracks, I created 3 AudioContexts, attached oscillators to them, and then attached them to Audio elements. When I set the sinkIds of the Audio elements, each one properly goes to a different speaker.
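For reference, the control test looked something like this (again a sketch; the frequency and deviceId are placeholders):

```javascript
// Control test: one oscillator per AudioContext, each routed through a
// MediaStreamAudioDestinationNode into its own <audio> element, whose
// sinkId is then set to a different output device. This routes correctly.
function playToneOnSink(frequency, deviceId) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.frequency.value = frequency;
  const dest = ctx.createMediaStreamDestination();
  osc.connect(dest);
  osc.start();
  const audioEl = new Audio();
  audioEl.srcObject = dest.stream;
  audioEl.autoplay = true;
  return audioEl.setSinkId(deviceId).then(() => audioEl);
}
```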

Question:
Is there something that is tying the peerconnection tracks together?
Any thoughts on how to break them apart?     I tried "cloning" the track as I assigned it to the new stream, but to no avail.

Thanks,
Mike

Michael Cooley

Nov 28, 2017, 8:16:49 AM
to discuss-webrtc
I'm coming to the conclusion that Chrome simply treats all remote streams as a single entity (perhaps routing them through a single mixer?). I attempted to create multiple peer connections between the same source and destination and route the streams to different speakers, but they all appear to go to the same speaker (the last sinkId set).

Thus, I'm wondering whether this is actually a WebRTC issue or a Chrome / Web Audio API issue. Any opinions?

Tommi

Nov 28, 2017, 8:41:42 AM
to discuss...@googlegroups.com
> Is there something that is tying the peerconnection tracks together?

- Yes. WebRTC internally mixes the audio tracks, and the audio for all of the tracks is actually delivered to Chrome as a single audio stream. Changes to properties such as volume are sent into WebRTC to be applied before mixing. So, that unfortunately means that the output from that mixer can only be directed to a single audio device.

The above applies to <audio> and <video> media elements.

However, there is a workaround available in Chrome, which involves rendering via WebAudio. For this to work, audio still needs to be "pulled" from the mixed stream, even if it isn't actually routed to a particular device or is muted.
Then you can follow this example to clone the remote tracks and render them via WebAudio.

What happens behind the scenes is that the per-track audio gets sent directly to web audio before it gets mixed, so WebAudio gets its own copy.

Then "all you have to do" is to use WebAudio to render the audio to separate devices :D
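In code, the workaround is something along these lines (a sketch based on the description above; the muted element does the "pulling", and the helper name is made up):

```javascript
// Workaround sketch: keep a muted element attached to the remote stream so
// audio keeps being "pulled" from WebRTC's mixer, then render a clone of
// each track through WebAudio and route it to its own output device.
function renderTrackViaWebAudio(remoteStream, track, deviceId) {
  // Muted element that pulls audio from the mixed remote stream.
  const puller = new Audio();
  puller.srcObject = remoteStream;
  puller.muted = true;
  puller.autoplay = true;

  // Clone the track and feed it into WebAudio, which receives its own
  // per-track copy before mixing.
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(new MediaStream([track.clone()]));
  const dest = ctx.createMediaStreamDestination();
  source.connect(dest);

  // Render WebAudio's output through an element whose sink we control.
  const out = new Audio();
  out.srcObject = dest.stream;
  out.autoplay = true;
  return out.setSinkId(deviceId).then(() => out);
}
```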

Hope that helps,
Tommi


Michael Cooley

Jan 23, 2018, 3:08:21 PM
to discuss-webrtc
Just wanted to follow up on your suggestion and say that it worked great. Thanks for your help!

Iñaki Baz Castillo

Jan 23, 2018, 3:14:07 PM
to discuss...@googlegroups.com
On 28 November 2017 at 14:41, Tommi <to...@webrtc.org> wrote:
> - Yes. WebRTC internally mixes the audio tracks [...] that unfortunately
> means that the output from that mixer can only be directed to a single
> audio device.
>
> The above applies to <audio> and <video> media elements.

I assume this is a temporary behavior and that, in the future, the WebRTC
stack won't assume how remote tracks will be rendered by the app. Am I
right?

--
Iñaki Baz Castillo
<i...@aliax.net>

Brandon Mathis

Sep 10, 2022, 8:20:36 AM
to discuss-webrtc
I would be very interested to hear how, in particular, you managed to get multiple audio tracks to multiple devices. I am attempting to create a system that listens to WebRTC calls and transmits each participant's audio to a virtual device, which then streams each track out to a separate system that performs independent speech-to-text transcription of their audio.

Any pointers would be much appreciated.  

guest271314

Sep 10, 2022, 2:07:13 PM
to discuss-webrtc

Sounds interesting.

What issues are you having with the implementation?

You can use a data channel to receive text and send back audio data.
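A minimal sketch of that data-channel direction (the channel label, message shapes, and helper name are all made up for illustration):

```javascript
// Hypothetical sketch: a data channel that receives transcription text in
// one direction and sends raw audio bytes back in the other.
function setupTranscriptionChannel(peerConnection, onTranscript) {
  const channel = peerConnection.createDataChannel('transcription');
  channel.binaryType = 'arraybuffer';
  channel.onmessage = (event) => {
    // String messages are assumed to be JSON, e.g. { speaker, text }.
    if (typeof event.data === 'string') {
      onTranscript(JSON.parse(event.data));
    }
  };
  return {
    // Send back a chunk of audio data (e.g. an encoded frame).
    sendAudioChunk(buffer) {
      if (channel.readyState === 'open') channel.send(buffer);
    },
  };
}
```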
