experience with WebRTC Scalable MCU (multipoint control unit) in production


Henry Stewart

Feb 5, 2015, 7:41:40 PM
to discuss...@googlegroups.com
Our goal is to find an MCU server that can ingest a single WebRTC stream & then broadcast it out to many users. For our purposes we need an MCU that supports transcoding & video recording. Real-time transcoding is helpful for adapting a stream for mobile/HTML5.

I've looked into several libraries that offer MCU support for WebRTC broadcasting (one to many).

The tools I've tried so far are Kurento & the Jitsi Videobridge. The reason we're looking at these libraries & not a simple TURN server is because we haven't found a TURN server that supports video recording & can stream to mobile.

Has anyone had experience with an MCU in production using Kurento, Jitsi Videobridge, Lynckia/Licode, or Intel's WebRTC MCU? Please mention any other tools not listed.

Best,

Peter Saint-Andre - &yet

Feb 5, 2015, 8:16:31 PM
to discuss...@googlegroups.com
> <https://software.intel.com/sites/landingpage/webrtc/>? Please mention
> any other tools not listed.

For Talky we use the Jitsi Videobridge, which IMHO is great. Note that
it is not an MCU, it's a selective forwarding unit (SFU), which is more
scalable anyway. But if you need transcoding you might truly need an MCU.

Peter

--
Peter Saint-Andre
https://andyet.com/

Alexandre GOUAILLARD

Feb 5, 2015, 8:24:38 PM
to discuss...@googlegroups.com
A TURN server cannot do this because it cannot, and should not, decrypt the streams; it just relays, and only 1-to-1. That's what makes TURN transparent, and it also removes the snooping risk (the server doesn't have the keys to decrypt the streams anyway).

for your "MCU" list, you could add:
open source
- Medooze
- Janus
closed source/commercial
- Dialogic's PowerMedia
- Radisys
- Oracle?
bundled with a platform offer
- TokBox's Mantis
- Temasys

I think you should investigate whether you really want a full MCU (with transcoding) or just an SFU. The problem of adjusting the resolution to the target device can in many cases be solved with simpler mechanisms such as simulcast or SVC: the former is possible in Chrome, and the latter is already possible with VP8 (temporal scalability). In that case an SFU is not only simpler, it also offers a better user experience than compositing the streams on the server side.
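To make the SFU-plus-simulcast idea concrete, here is a minimal sketch of per-subscriber layer selection; the layer names and bitrates are invented for illustration, and a real SFU would take them from the publisher's encodings and from receiver feedback (REMB/TWCC):

```python
# Hypothetical simulcast ladder: (layer name, bitrate in bps), lowest first.
SIMULCAST_LAYERS = [
    ("low", 150_000),
    ("medium", 500_000),
    ("high", 1_500_000),
]

def pick_layer(estimated_bandwidth_bps: int) -> str:
    """Forward the highest simulcast layer that fits the subscriber's
    bandwidth estimate; fall back to the lowest layer if nothing fits."""
    chosen = SIMULCAST_LAYERS[0][0]
    for name, bitrate in SIMULCAST_LAYERS:
        if bitrate <= estimated_bandwidth_bps:
            chosen = name
    return chosen

print(pick_layer(2_000_000))  # high
print(pick_layer(600_000))    # medium
print(pick_layer(100_000))    # low
```

The key point is that no transcoding happens: the server only chooses which of the publisher's own encodings to relay to each viewer.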

HTH.

alex.





--

--- You received this message because you are subscribed to the Google Groups "discuss-webrtc" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss-webrtc+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Alex. Gouaillard, PhD, PhD, MBA
------------------------------------------------------------------------------------
CTO - Temasys Communications, S'pore / Mountain View
President - CoSMo Software, Cambridge, MA
------------------------------------------------------------------------------------

Luis Lopez

Feb 6, 2015, 2:38:07 AM
to discuss...@googlegroups.com
Alex,

IMHO, simulcast is not a suitable option for broadcasting scenarios. An SFU with simulcast can probably serve media to 1,000 viewers. But if the number of viewers goes up (10,000, 100,000), chaining SFUs is not an option, because a key-frame must be generated whenever a new viewer joins. With 100,000 viewers and churn, the presenter will need to generate key-frames constantly and, as a result, the QoE of viewers may degrade significantly. In that case, combining an SFU with transcoding barriers at the media server can significantly improve the QoE for everybody, at the cost of more computing resources in the infrastructure, of course.
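To put rough numbers on this argument (all figures below are hypothetical, chosen only to illustrate the scale of the problem):

```python
# Back-of-envelope: keyframes forced by viewer churn in a naive SFU chain
# where every new viewer triggers a keyframe request to the presenter.
viewers = 100_000
churn_per_minute = 0.05  # assume 5% of the audience joins/leaves per minute
joins_per_second = viewers * churn_per_minute / 60
print(f"{joins_per_second:.0f} keyframe requests per second")  # ~83

# At an assumed 30 kB per keyframe, the forced keyframes alone would need:
keyframe_bytes = 30_000
forced_bitrate_bps = joins_per_second * keyframe_bytes * 8
print(f"{forced_bitrate_bps / 1e6:.0f} Mbit/s of keyframes")  # ~20
```

That is far beyond any sane encoder budget, hence the need for either request throttling or transcoding barriers that re-originate the stream closer to the viewers.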


Luis Lopez
Kurento.org Project Coordinator
tel +34 914 888 713 · lu...@kurento.com · prof.luis.lopez



Sergio Garcia Murillo

Feb 6, 2015, 5:26:33 AM
to discuss...@googlegroups.com
Hi,

I am obviously biased, but I think transcoding MCUs still make sense in some scenarios; you should check carefully whether you need a mixer or an SFU, and accept the implications.

Regarding Medooze, we have just integrated the Google Compute Engine API, and the MCU creates/destroys mixer instances based on actual usage, allowing us to auto-scale and reduce OPEX when it is not used. Given that the cost of the highest-CPU instance is quite low ($0.640 per hour), it is easy to translate that cost into a business plan if you have a way of charging your final customers per usage.

Best regards
Sergio




Alexandre GOUAILLARD

Feb 6, 2015, 8:55:58 AM
to discuss...@googlegroups.com
guys, 

sergio, emil, luis (chad for Dialogic, anyone for Licode/Lynckia?),

Every time there is a question about media servers, it ends up being a super long thread with multiple cases, with everybody pushing one way or another. Nobody is actually wrong; we are just speaking about different use cases without knowing it. It is also almost impossible to list all the use cases every time a question comes up. I do not think it has to be like that: all the products listed above have something special to them, be it programming language (C, C++, Java, ...), features, architecture, etc. They simply do not really compete.

So far my policy has been: provide a high-level list and let the users (who know their use case better than we do) choose, while pointing out the questions that should be asked (but not answered) by us. In my e-mail I did not advocate one way or the other; I just wanted to make sure a choice was not made too early.

Now, this is not the first thread like this, and last time I had a thought that I did not follow up on for lack of time, but I will this time. Do you guys want to write an as-simple-to-understand-as-possible, "10 questions to know which type of media server to choose" guide? It would include a real-world, reproducible benchmark, with the benchmark code put online so everybody can reproduce the results on their side. Whatever the final format (document, blog post at webrtchacks, separate website), if we all collaborated on writing the guide and all agreed on its content, I see multiple advantages:
- we stop confusing people in discuss-webrtc threads, who otherwise leave with more questions than they came with, in a state of fear, uncertainty and doubt,
- we do not have to repeat ourselves many times; we can just point to the material,
- if otherwise-competing members of the same ecosystem reach a consensus on a subject, users can be confident it is the right answer, and we can move on to tackling other problems.

Note that Temasys does not sell a media server, even though we have that capacity in our platform. We are kind of neutral here.

I'm happy to put my money where my mouth is: not only to start the first draft from a buyer's point of view, but also to provide the basis for a common benchmarking test suite.

What do you think? 

Alex,

Lorenzo Miniero

Feb 6, 2015, 9:34:17 AM
to discuss...@googlegroups.com
Good points, I'd be glad to help (although Janus was never mentioned here :-) )

L.

Alexandre GOUAILLARD

Feb 6, 2015, 10:16:06 AM
to discuss...@googlegroups.com
I mentioned Janus:

""
for your "MCU" list, you could add:
open source
- Medooze
- Janus
""

Philipp Hancke

Feb 6, 2015, 11:11:10 AM
to discuss...@googlegroups.com, chad...@dialogic.com
> Do you guys want to write a "as-simple-to-understand-as-possible", "10-questions-to-know-which-type-of-media-server-to-choose" guide?

http://networkfuel.dialogic.com/webrtc-whitepaper shows seven reasons to use a server. Chad: maybe you can make the content available on webrtchacks?

+1 otherwise

Alexandre GOUAILLARD

Feb 6, 2015, 11:16:58 AM
to discuss...@googlegroups.com, chad...@dialogic.com
I'm surfing with Chad tomorrow morning in Japan (well, in a few hours really); I'll ask him.


Alexandre GOUAILLARD

Feb 6, 2015, 11:19:03 AM
to discuss...@googlegroups.com, chad...@dialogic.com
In any case, I think we need a global (the same applied to all), reproducible and unbiased (source code available, and every vendor can tune their own installation if they want) benchmark, covering several scalability metrics.

Emil Ivov

Feb 6, 2015, 3:47:01 PM
to discuss...@googlegroups.com
Hey Alex,

On Fri, Feb 6, 2015 at 2:55 PM, Alexandre GOUAILLARD
<agoua...@gmail.com> wrote:
> guys,
>
> sergio, emil, luis, (chad for dialogic, anyone for licode/lynckya?)
>
> Every time there is a question about media servers, it ends up being a super
> long thread with multiple cases, with everybody pushing one way or another.
> Nobody is actually wrong, we just speak about different use case, without
> knowing.

Oh, I think we do know. We just don't care ;)
Sounds great, indeed! I can already provide some benchmark results
that match your requirements above and I assume everyone else can as
well, so this shouldn't be that much work.

Should we do a Google Doc?

Emil
https://jitsi.org

Jonathan Ekwempu

Feb 6, 2015, 10:30:24 PM
to discuss...@googlegroups.com
Emil,

Any plans to support large-scale broadcasting (very few speakers, many participants) in videobridge? Also, apart from Jitsi Meet, is there a simple WebRTC application that illustrates how to integrate with videobridge?

Thanks,
Jonathan

Chad Hart

Feb 8, 2015, 1:17:31 AM
to discuss...@googlegroups.com
Dialogic sponsored the "Seven Reasons for WebRTC Server-Side Media Processing" paper by Tsahi, so unfortunately I cannot republish it on webrtcHacks. I can provide the direct link for anyone who does not want to enter their info in Dialogic's form: goo.gl/qEZp8Z
[Note: please use the dialogic.com link if you want to be part of Dialogic's marketing workflow on this topic: http://networkfuel.dialogic.com/webrtc-whitepaper]

We asked Tsahi to put a framework around the use cases for a media server. SFU vs. MCU considerations for "multipoint" is only covered briefly.

I agree that more detail on multipoint models would be valuable to the community, especially if it were backed by real benchmarks. Perhaps an existing consortium, someone like the IMTC (http://www.imtc.org/), could pick this topic up. I think it will be tough to have an unbiased test set that many vendors can participate in without a chartered, neutral group driving it.

Emil Ivov

Feb 8, 2015, 4:52:15 AM
to discuss...@googlegroups.com
Hey Jonathan,


On Saturday, February 7, 2015, Jonathan Ekwempu <onyi...@gmail.com> wrote:
Emil,

Any plans to support large scale broadcasting (very few speakers, several participants) in videobridge? Moreover, apart from Jitsi meet, is there a simple WebRTC application that illustrates how to integrate with videobridge?

These are actually questions best asked on our mailing lists; however, the short answers are: 1) yes, we are considering two possible approaches to this, and 2) not currently, but we would be happy to help you build one and provide it as an example to the community. Let me know if that's of interest.

Cheers,
Emil




--
--sent from my mobile

Pedro Rodriguez

Feb 9, 2015, 5:37:17 AM
to discuss...@googlegroups.com
Hi all,

Pedro from Licode/Lynckia here. We would also be happy to contribute. I agree that it would be really helpful for people trying to find a solution that suits their needs.

Cheers
--
Pedro Rodriguez






Alexandre GOUAILLARD

Feb 9, 2015, 11:00:05 AM
to discuss...@googlegroups.com
Hi guys,

I just got back from a WebRTC conference in Japan; I will follow up with all of you very soon, outside of this thread, and get the ball rolling.

cheers.

Alex.

Tsahi Levent-Levi

Feb 19, 2015, 2:02:27 AM
to discuss...@googlegroups.com
Hi all,

I am late to this thread (subscribed, but I don't follow the discussions here closely).

I'll be happy to write something targeted at SFU vs. MCU, but I think you are digressing here.

SFUs and MCUs are devices used for real-time multi-directional interaction (for lack of a better term). This means that everyone in a session is an active participant.
Scaling to thousands or millions makes 99.9% of the session unidirectional in nature, where broadcasting makes a lot more sense.
Broadcasting over IP is usually done using streaming servers, CDNs (that can handle live streams) or the new P2P solutions such as Peer5, Viblast and Streamroot. These architectures are very different from the SFU/MCU ones, with different technologies and protocols being used (MPEG-DASH & HLS, anyone?).

My 2 cents here: an SFU/MCU play for broadcast won't cut it. The resulting architecture will need much more than we as an industry have on offer at the moment. The only thing I know of that comes close (at least on paper; I've never tried it myself or heard from anyone who has) is Flashphoner.

Tsahi

Emil Ivov

Feb 19, 2015, 2:08:12 AM
to discuss...@googlegroups.com
True indeed. "Streaming to 1000s", however, does not automatically imply that all those 1000s are participants in the same event. There are many more use cases where those 1000s are members of hundreds of separate conferences of 2 to 12 members each. This is where server scalability comes into play.

Emil

Luis Lopez

Feb 19, 2015, 2:27:21 AM
to discuss...@googlegroups.com
Hi,

Indeed, file-based CDNs supporting MPEG-DASH or HLS are the solution when you only want to broadcast media in one direction (from a presenter to a number of viewers), but there are a lot of additional use cases, involving occasional bi-directional communication, where such a solution is not appropriate due to latency. File-based CDNs tend to have latencies on the order of tens of seconds. This is fine if you want a "TV-like" broadcast service. However, there is increasing interest in use cases where the broadcast is combined with occasional feedback. Imagine the following scenarios:
- A teacher broadcasting to 10,000 students, occasionally letting a student ask questions about the lesson in real time (i.e. the student enters the session as a presenter for a few seconds and then leaves). Here, a latency of 40 seconds is incompatible with the use case, because by the time a question arrives the teacher may be talking about something completely different.
- An interview with a celebrity, broadcast to 100,000 fans, where fans can join in real time and ask questions: the same latency problem.

Hence, a broadcasting system with sub-second latencies enables new use cases that are beyond the reach of state-of-the-art CDNs, and this may be an interesting opportunity for WebRTC to fill that space. IMO, enabling WebRTC broadcasting services, and feedback channels for them, is a very interesting topic which may open new use and business cases, and we're working on providing a solution for it.

Just my 2 cents.

Luis Lopez
Kurento.org Project Coordinator



Alexandre GOUAILLARD

Feb 19, 2015, 8:18:59 AM
to discuss...@googlegroups.com
Conceptually, IMHO, Emil and Luis are spot on.

Now, developing a **reliable** POC for this use case is tough. We have a POC that streams to 2.4M right now in the lab, but I'm not happy with the resulting quality. When I am, I'll demo it at WebRTC World.

Happy to coordinate a session with luis and emil (and others) if they want.


Sergio Garcia Murillo

Feb 19, 2015, 8:46:43 AM
to discuss...@googlegroups.com
Agree,

We are back to my original point: it all depends on the specific requirements of your use case.

On a side note (bit of advertisement to follow), nothing forces you to follow a "pure" architecture; for example, our MCU can publish a Flash stream to a traditional CDN (put your favorite name here) or FMS to scale to any number of viewers.

From my point of view, the broadcasting/multiconference world is not a trivial thing, so trying to provide a catch-all solution is a bit naive.

Best regards
Sergio

Luis Lopez

Feb 19, 2015, 8:52:02 AM
to discuss...@googlegroups.com
+1

Jonathan Ekwempu

Feb 22, 2015, 8:58:40 PM
to discuss...@googlegroups.com
A good example would be great.

Thanks,
Jonathan

Ben Weekes

Feb 24, 2015, 8:34:24 PM
to discuss...@googlegroups.com

Very interesting topic for me as we are polishing this use case at Blackboard Inc right now.

Typically we might have 6 presenters (the people who are visible) and 1000+ viewers of those 6 presenters.
However, at any time one of the viewers might instantly become a presenter, and one of the presenters be demoted to a viewer, based on who is talking (or who has not talked for the longest).
Effectively this switches the media on an existing SSRC (efficient, but tricky for lip sync).
Chaining SFUs together is possible (and necessary) as long as you throttle the keyframe (I-frame) requests to ensure a key frame interval of no less than X seconds.
Our SFU is also capable of transcoding, for recording and for sending a Flash (H.264 over UDP/TCP) fallback to non-WebRTC browsers.
CDNs/HLS are no good because of the latency.
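The keyframe-request throttling mentioned above can be sketched roughly as follows; this is a toy illustration, not Blackboard's implementation, and the 2-second minimum interval is only an assumed stand-in for the X above:

```python
import time
from typing import Optional

class KeyframeThrottle:
    """Coalesce keyframe requests (PLI/FIR) from viewers so the encoder is
    asked for a keyframe at most once every `min_interval` seconds.
    Requests arriving inside the window are absorbed: the next keyframe
    serves those viewers too."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self.last_sent = float("-inf")

    def request(self, now: Optional[float] = None) -> bool:
        """Return True if the request should actually be forwarded upstream."""
        if now is None:
            now = time.monotonic()
        if now - self.last_sent >= self.min_interval:
            self.last_sent = now
            return True
        return False

# Several viewers join within a second; only the first request goes upstream.
throttle = KeyframeThrottle(min_interval=2.0)
print([throttle.request(now=t) for t in (0.0, 0.4, 0.9, 2.5)])
# [True, False, False, True]
```

The absorbed requests are still satisfied: the keyframe produced for the first request also resynchronizes the viewers who asked slightly later.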

Lots of other exciting things coming out soon.

B

Alexandre GOUAILLARD

Feb 25, 2015, 11:14:28 AM
to discuss...@googlegroups.com
Requestec's 12 years of experience at work ;-) Looking forward to it.

The switching must be tricky; we haven't dared go there yet :-D


Gustavo Garcia

Feb 25, 2015, 7:16:14 PM
to discuss...@googlegroups.com
Not only PLIs: you also need to be careful with the RRs/REMBs to make sure a single viewer doesn't degrade the quality for everybody else. And unless you are transcoding, it is also recommended to combine this with simulcast or another stream-scalability solution.

Ben, could you elaborate on the lip sync issue? We are switching SSRCs in some cases (not really for broadcasting) and not having problems with lip sync as long as the RTCP is properly generated/switched.

FWIW this is a very relevant use case for us too; you can find some customer success stories on our website if you are curious.

Ben Weekes

Feb 26, 2015, 8:50:40 AM
to discuss...@googlegroups.com
Hi,

I agree simulcast is required, rather than feeding the REMB/RR back to the publisher. The subscriber's REMB just selects which simulcast layer to forward.

Lip sync is tricky because you need continuous timestamps for the media on the existing SSRC (or Chrome freezes) while maintaining the skew between audio and video. We are going to dig into this more next week, as we still have some lip sync issues. Are you saying the timestamps can jump if the RTCP is adjusted as well?

Does simulcast work for you with screen share, or just webcams?
Can you control the actual properties of the low-quality stream, e.g. max bitrate and fps?

Thanks

Ben

Gustavo Garcia

Feb 26, 2015, 11:42:37 PM
to discuss...@googlegroups.com
Yep, the safest way is to rewrite the timestamps in the RTP/RTCP packets to ensure continuity. Just adjusting the RTCP should work well enough for some use cases (a small freeze when switching), but the last time I tested it was a year ago; I would need to retest.
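A minimal sketch of that rewriting, assuming a 90 kHz video clock and a made-up timestamp step of one frame at 30 fps; real code must also handle reordering, RTCP SR alignment, and wrap-around more carefully:

```python
class RtpRewriter:
    """Keep RTP sequence numbers and timestamps continuous on one outgoing
    SSRC while the source feeding it changes. Offsets are recomputed at each
    switch so the receiver sees a single uninterrupted stream."""

    def __init__(self):
        self.seq_offset = 0
        self.ts_offset = 0
        self.last_seq = None
        self.last_ts = None

    def switch_source(self, first_seq: int, first_ts: int,
                      ts_step: int = 3000) -> None:
        """Align the new source's first packet right after the previous
        source's last one (ts_step = one frame at 30 fps on a 90 kHz clock)."""
        if self.last_seq is not None:
            self.seq_offset = (self.last_seq + 1 - first_seq) & 0xFFFF
            self.ts_offset = (self.last_ts + ts_step - first_ts) & 0xFFFFFFFF

    def rewrite(self, seq: int, ts: int) -> tuple:
        out_seq = (seq + self.seq_offset) & 0xFFFF
        out_ts = (ts + self.ts_offset) & 0xFFFFFFFF
        self.last_seq, self.last_ts = out_seq, out_ts
        return out_seq, out_ts

rw = RtpRewriter()
print(rw.rewrite(100, 0))         # (100, 0)
print(rw.rewrite(101, 3000))      # (101, 3000)
rw.switch_source(first_seq=5000, first_ts=999_999)
print(rw.rewrite(5000, 999_999))  # (102, 6000): continuous for the receiver
```

The receiver never learns that the underlying source changed, which is exactly why the Chrome-side freeze is avoided when timestamps stay continuous.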

I haven't been able to use simulcast with screen sharing, only webcams. Even when the Chrome logs say simulcast is enabled, it isn't in my tests. This is a quick and dirty sample page I use to test simulcast with screen sharing, in case you want to do more research:

I don't know of any way to change the quality of the simulcast streams independently, but for fps you have VP8 temporal scalability if you want to use it.
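As an illustration of the temporal-scalability option: with a two-layer VP8 temporal pattern (TID 0 on even frames, TID 1 on odd frames; an assumed layout), a server can halve the frame rate per viewer simply by not forwarding the TID-1 frames, with no transcoding:

```python
def filter_temporal(frames, max_tid: int):
    """Keep only frames whose temporal layer id (tid) is <= max_tid.
    `frames` is a list of (frame_no, tid) pairs; in a real SFU the tid
    would be read from the VP8 RTP payload descriptor."""
    return [frame for frame, tid in frames if tid <= max_tid]

# A 30 fps input with an alternating TID 0/1 pattern:
frames = [(n, n % 2) for n in range(8)]
print(filter_temporal(frames, max_tid=1))  # all frames -> full rate
print(filter_temporal(frames, max_tid=0))  # [0, 2, 4, 6] -> half rate
```

This works because higher temporal layers are never used as references by lower ones, so dropping them leaves a decodable stream.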

Regards,

Eric M

Apr 28, 2016, 5:16:56 AM
to discuss-webrtc
Hi guys, you could just try www.anyrtc.io; it's built 100% on the WebRTC library and supports all WebRTC features.

On Friday, February 6, 2015 at 8:41:40 AM UTC+8, Henry Stewart wrote:
Our goal is to find an MCU server that can ingest a single WebRTC stream & then broadcast it out to many users. For our purposes we need an MCU that supports transcoding & video recording. Real-time transcoding is helpful for adapting a stream for mobile/HTML5.

I've looked into several libraries that offer MCU support for WebRTC broadcasting (one to many).

The tools I've tried so far are Kurento & the Jitsi Videobridge. The reason we're looking at these libraries & not a simple TURN server is because we haven't found a TURN server that supports video recording & can stream to mobile.

Has anyone had experience with an MCU in production using Kurento, Jitsi Videobridge, Lynckia/Licode, or Intel's WebRTC MCU? Please mention any other tools not listed.

Best,
