Lip sync and multiple streams.


Robert Jongbloed

May 13, 2015, 7:48:35 AM5/13/15
to discuss...@googlegroups.com

We here at Blackboard have been working on a virtual classroom based on WebRTC clients in Chrome. It is really working well, except for lip sync, which is usually very good but all too often goes off, sometimes by seconds.

We use a technique similar to Jitsi's: multiple SSRC values (aka multi-stream or multi-track) in each session (audio & video), with the SDP indicating via msid values which pairs belong together and thus should be connected for the purposes of lip sync. So, after going over and over our own code, it came time to try and figure out what WebRTC wanted, as we were sure we were "compliant" with the RFCs.
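For illustration, here is a minimal made-up SDP fragment of the kind described: two audio and two video SSRCs in one session each, where the msid stream ids pair SSRC 1111 with 3333 and 2222 with 4444 for sync purposes (all identifiers here are invented for the example):

```
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=ssrc:1111 cname:alice@example
a=ssrc:1111 msid:streamA audioTrackA
a=ssrc:2222 cname:bob@example
a=ssrc:2222 msid:streamB audioTrackB
m=video 9 UDP/TLS/RTP/SAVPF 100
a=ssrc:3333 cname:alice@example
a=ssrc:3333 msid:streamA videoTrackA
a=ssrc:4444 cname:bob@example
a=ssrc:4444 msid:streamB videoTrackB
```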

So, digging into the source, I am convinced that WebRTC does use the "normal" RFC 3550 technique for lip sync, utilising the RTCP sender reports' RTP timestamp and NTP time information to adjust relative delays in the audio and video streams. And, if we limit the peer connection to a single stream each for audio and video, it in fact does work exactly as expected. Various custom added log output has confirmed this.
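As a rough sketch of that RFC 3550 mechanism (illustrative code only, not WebRTC's actual implementation): each Sender Report pairs an RTP timestamp with an NTP wallclock time, so the receiver can map any packet's RTP timestamp onto the sender's NTP clock and compare audio and video capture times on a common timeline.

```cpp
#include <cstdint>

// Illustrative only, not WebRTC's actual code. An RTCP Sender Report
// carries an NTP wallclock time and the RTP timestamp sampled at the
// same instant, for one SSRC.
struct SenderReport {
  uint64_t ntp_time_us;    // NTP time from the SR, in microseconds
  uint32_t rtp_timestamp;  // RTP timestamp at that same instant
};

// Map a packet's RTP timestamp to the sender's NTP clock using the most
// recent Sender Report for that stream. Doing this for one audio and one
// video stream puts both on a common clock; the difference between
// capture times and arrival times gives the delay to compensate.
uint64_t RtpToNtpUs(const SenderReport& sr, uint32_t rtp_ts,
                    int clock_rate_hz) {
  // Signed 32-bit difference handles RTP timestamp wraparound.
  int32_t diff = static_cast<int32_t>(rtp_ts - sr.rtp_timestamp);
  return sr.ntp_time_us +
         static_cast<int64_t>(diff) * 1000000 / clock_rate_hz;
}
```

For example, with an 8 kHz audio stream whose SR maps RTP timestamp 8000 to NTP time T, a packet stamped 8160 maps to T + 20 ms. The crucial point in this thread is that the computation only helps if the receiver knows which audio SSRC to pair with which video SSRC.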

However, when multiple SSRCs are present, what I discovered is that the information was not getting from the RTCP packet handlers (e.g. RTCPReceiver::HandleSenderReceiverReport) to where it is needed in the ViESyncModule class. Further trawling through the code led me to ViEBaseImpl::ConnectAudioChannel, and where it was called from revealed the following comment:

// Connect the voice channel, if there is one.
// TODO(perkj): The A/V is synched by the receiving channel. So we need to
// know the SSRC of the remote audio channel in order to fetch the correct
// webrtc VoiceEngine channel. For now- only sync the default channel used
// in 1-1 calls. 

Which is really not good for us. :-(


Does anyone know the status of this? Is a fix imminent? Is no one even aware of the problem? Well, at least perkj was aware of it!


Thanks for any info.

Robert.


Ben Weekes

May 15, 2015, 4:09:33 AM5/15/15
Hi All,

I would like to star this issue please.
There definitely seems to be a platform weakness in this area. The same problem can be reproduced on Hangouts as well.
Should we create a bug for it? 
Can somebody ask Perkj what he thinks please?

Thanks

Ben

Robert Jongbloed

May 15, 2015, 4:37:20 AM5/15/15
I have been doing further research into the WebRTC code on this. The basic problem remains; it is just that the code containing perkj's comment is never executed! There is another part of the code, which definitely is executed, with a very similar comment:

  // Set up A/V sync if there is a VoiceChannel.
  // TODO(pbos): The A/V is synched by the receiving channel. So we need to know
  // the SSRC of the remote audio channel in order to sync the correct webrtc
  // VoiceEngine channel. For now sync the first channel in non-conference to
  // match existing behavior in WebRtcVideoEngine.

So it's pbos now.


Robert.


Peter Boström

May 15, 2015, 4:45:42 AM5/15/15
As the owner of that second TODO I can assure you I was copying existing behavior. :)

I think the issue is that we simply don't have the signals in that part of the code to know which voice channel corresponds to which video stream, so we "just guess", which is easier in the one-to-one case.

I can't really guess how much work it'd be to get that mapping there; I don't even know if the mapping between A/V is even signaled through SDP.


Robert Jongbloed

May 15, 2015, 5:10:30 AM5/15/15

I have managed to work a few things out. The information is available at the SDP level: it's in the "sync_label" field of the cricket::StreamParams structure. I have seen this passed around a lot, and I think it reaches where we need it to be, namely where voice_channel_id_ is set in WebRtcVideoChannel2.
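To make the idea concrete, here is a hypothetical sketch (invented names, not actual WebRTC code; only sync_label comes from the source above) of the kind of lookup this would enable: pick the audio channel whose stream shares the video stream's sync_label, instead of always defaulting to the first channel:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch, not actual WebRTC code: all names other than
// sync_label itself are invented for illustration.
struct StreamInfo {
  uint32_t ssrc;
  std::string sync_label;  // MediaStream id carried in the SDP msid
};

// Return the id of the audio channel whose stream shares the video
// stream's sync_label, or -1 if there is none (no A/V sync possible).
int FindAudioChannelForVideo(const StreamInfo& video_stream,
                             const std::map<int, StreamInfo>& audio_channels) {
  for (const auto& entry : audio_channels) {
    if (entry.second.sync_label == video_stream.sync_label)
      return entry.first;  // entry.first is the audio channel id
  }
  return -1;
}
```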

Still digging, but any assistance would be good.

Robert.

Peter Boström

May 18, 2015, 4:46:44 AM5/18/15
To anyone interested this is tracked here: https://code.google.com/p/webrtc/issues/detail?id=4667

pablo platt

May 27, 2015, 8:39:38 AM5/27/15
Is there a workaround for this issue?
Are all services and MCUs like Hangouts, Jitsi, Janus and Kurento affected by this issue?

Will I have lip-sync issues if I have one client broadcasting audio+video and several clients broadcasting only audio?
Is the order in which SSRCs are added to the peer connection important?

Ben Weekes

Jun 9, 2015, 10:39:53 AM6/9/15


Yes, this is a major issue which affects all the services that use multiple SSRCs.
A bug is open to track it: https://code.google.com/p/webrtc/issues/detail?id=4667

Peter Boström

Jul 15, 2015, 11:15:15 AM7/15/15
This should be fixed in tip of tree; feel free to test it in maybe two days, once this lands in Canary. Scheduled for M46 unless I accidentally broke it. :)
