Can I merge several incoming streams into one stream and send it out?


Bo Xu

Dec 1, 2016, 2:58:28 PM
to discuss-webrtc
Hi,

I am trying to use WebRTC to develop a conferencing central point (MCU). For example, if there are 5 users in the room, the MCU has 5 incoming streams; can I merge all the packets of these 5 streams into 1 stream and send it back to the users within the same peer connection? I read some documents about "WebRTC Bundle", but that seems to be about putting audio/video into one bundle and sending it out on one port. Can I "bundle" 5 incoming streams into 1 output stream? Is there any sample code for this in the WebRTC native source code? Thanks!

Regards,
Bo Xu

xiang liu

Dec 2, 2016, 1:58:15 AM
to discuss-webrtc
no


On Friday, December 2, 2016 at 3:58:28 AM UTC+8, Bo Xu wrote:

Bo Xu

Dec 9, 2016, 4:35:47 PM
to discuss-webrtc
Hi,

I have built the WebRTC native library successfully on Ubuntu 16.04 LTS, and the sample peerconnection_client/server applications work well:

Now I am trying to develop a WebRTC MCU based on the above. At the moment I am using gedit to edit the code and running the ninja command in a terminal to build WebRTC; could you recommend an IDE in which I can navigate the WebRTC source code and run ninja from inside the IDE? Thanks!

Regards,
Bo Xu


Sergey Grigoryev

Dec 11, 2016, 1:38:45 AM
to discuss-webrtc
I use "sublime text" for code navigation and editing.
I'm not sure about ninja integrations, but as far as I remember, it is possible to run scripts from sublime...


Br,
Sergey

On Saturday, December 10, 2016 at 0:35:47 UTC+3, Bo Xu wrote:

Harald Alvestrand

Dec 11, 2016, 2:37:12 AM
to WebRTC-discuss
I think you're not talking about bundling, you're talking about mixing.

If you want to do that in a browser, I think you want to look at WebAudio.




Bo Xu

Dec 14, 2016, 10:03:33 AM
to discuss-webrtc
Hi,

In the WebRTC MCU, suppose I am sending stream_A to participant_A and, at that moment, participant_B joins the room; I want to clone stream_A and send it to B. I once did something similar with JavaScript in Chrome, as follows:
...
peerConnection_B.addStream(stream_A);
...

Could you tell me where I can find the source code of addStream (JavaScript) so I can learn how it works? Does it call the same native WebRTC API? Is it inside the WebRTC code or inside the JavaScript engine (V8) source code?

Thanks!
Bo Xu

Bo Xu

Dec 14, 2016, 10:08:21 AM
to discuss-webrtc

Yes, I tried the WebAudio API you mentioned and it works very well for mixing audio inside Chrome. Thanks!
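For anyone searching the archive later, the mixing part looks roughly like this (a minimal sketch only; remoteStream1/remoteStream2 stand for the incoming MediaStreams and peerConnection for the outgoing connection, so those names are placeholders):

var audioCtx = new AudioContext();
var mixed = audioCtx.createMediaStreamDestination();

// remoteStream1 / remoteStream2 are placeholders for the incoming streams
[remoteStream1, remoteStream2].forEach(function(stream) {
  audioCtx.createMediaStreamSource(stream).connect(mixed);
});

// mixed.stream is a new MediaStream carrying the mixed audio
peerConnection.addStream(mixed.stream);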

Regards,
Bo Xu

Tian Yong

Dec 14, 2016, 8:03:06 PM
to discuss-webrtc
Maybe you need the clone method, `MediaStream.clone`: https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/clone
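A minimal sketch in browser JavaScript, reusing the names from the earlier post:

var streamForB = stream_A.clone();       // independent MediaStream wrapping clones of the same tracks
peerConnection_B.addStream(streamForB);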

Bo Xu

Dec 15, 2016, 11:31:28 AM
to discuss-webrtc

In examples/peerconnection_client/conductor.cc, a VideoTrack is created from the local camera as follows:
...
  rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track(
      peer_connection_factory_->CreateVideoTrack(
          kVideoLabel,
          peer_connection_factory_->CreateVideoSource(OpenVideoCaptureDevice(), NULL)));
...

In api/peerconnectioninterface.h, this video source is created from a cricket::VideoCapturer as follows:
...
  virtual rtc::scoped_refptr<VideoTrackSourceInterface> CreateVideoSource(
      cricket::VideoCapturer* capturer,
      const MediaConstraintsInterface* constraints) = 0;
...

So if I want to create a VideoTrack from a local MP4 file, a series of screen-capture images, or a clone of another input stream, do I also need to wrap them as a cricket::VideoCapturer? Does cricket::VideoCapturer represent every kind of "captured data source" when creating a VideoTrack?

Thanks!
Bo Xu

Bo Xu

Dec 15, 2016, 5:01:02 PM
to discuss-webrtc

I found the customized video capture (VideoCapturerTrackSource):

Thanks!
Bo

 

Bo Xu

Dec 20, 2016, 4:24:09 PM
to discuss-webrtc
Hi,

I learned from the samples in the following (with AndroidVideoTrackSource and VideoRenderer : public rtc::VideoSinkInterface<webrtc::VideoFrame>):

For testing, the MCU now clones the video data from the camera, uses these clones to make video tracks, and sends them to different clients (Chrome). Because the MCU does 1080p encoding for every clone, this costs a lot of CPU on my Ubuntu 16.04 LTS machine, so I am now trying to clone the encoded video and pass it to the packetizer directly. Could you tell me from which encoder class I can get the encoded video data out to make clones, and into which packetizer class I can feed the cloned encoded data?

Thanks!
Bo Xu

Bo Xu

Dec 29, 2016, 3:26:28 PM
to discuss-webrtc

Hi,

If a Chrome is already connected to my MCU and a VP8 encoder has already started encoding video, and at that point a second Chrome joins the conference, can I ask the VP8 encoder to send an IDR frame so that I can clone the first stream and send it to the second Chrome? Could you tell me through which API I can ask the VP8 encoder to send an IDR frame (without restarting the VP8 encoder)?

Thanks!
Bo Xu

Bo Xu

Dec 29, 2016, 5:00:24 PM
to discuss-webrtc
Hi,

I just found the keyframe-related code in vp8_impl.cc, starting as follows:
...
bool send_key_frame = false;
...

Thanks!
Bo Xu

Bo Xu

Jan 10, 2017, 3:03:17 PM
to discuss-webrtc
Hi,

I am trying to send out a FIR when a new participant joins the conference. I found the following in rtcp_sender.cc:
...
std::unique_ptr<rtcp::RtcpPacket> RTCPSender::BuildFIR(const RtcpContext& ctx) {
...

but I am not sure how to call it. Could you tell me from where I can call this method and send out the resulting FIR RTCP packet?

Thanks!
Bo Xu

Bo Xu

Jan 17, 2017, 9:39:58 AM
to discuss-webrtc
Hi,

Normally the SDP from Chrome shows UDP/TLS/RTP/SAVPF (DTLS-SRTP). If I disable DTLS (by setting the DtlsSrtpKeyAgreement constraint to false), the SDP from Chrome shows RTP/SAVPF instead, so:
does RTP/SAVPF mean SDES-SRTP?
can I set some parameter so that Chrome will send out plain RTP (unsecured), or will Chrome only ever send SRTP (DTLS-SRTP by default, or SDES-SRTP)?
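For reference, this is roughly how I pass that legacy constraint when creating the peer connection (a sketch only; the configuration object here is just a placeholder):

var pc = new webkitRTCPeerConnection(
    { iceServers: [] },                                 // placeholder configuration
    { optional: [{ DtlsSrtpKeyAgreement: false }] });   // legacy constraint that disables DTLS-SRTP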

Thanks!
Bo Xu

Bo Xu

Jan 25, 2017, 3:54:13 PM
to discuss-webrtc
Hi,

In the WebRTC statistics I found the following items:

googCurrentDelayMs
googTargetDelayMs
googMinPlayoutDelayMs
googRenderDelayMs
googJitterBufferMs

Could you tell me where I can find the details/explanations of these items?
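For context, these names also show up when dumping the legacy callback-based getStats() from JavaScript; a rough sketch, with pc an existing RTCPeerConnection:

pc.getStats(function(response) {
  response.result().forEach(function(report) {
    if (report.type === 'ssrc') {      // per-SSRC reports carry the goog* values
      ['googCurrentDelayMs', 'googTargetDelayMs', 'googMinPlayoutDelayMs',
       'googRenderDelayMs', 'googJitterBufferMs'].forEach(function(name) {
        console.log(name, report.stat(name));
      });
    }
  });
});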

Regards,
Bo Xu


Bo Xu

Feb 1, 2017, 1:12:00 PM
to discuss-webrtc
Hi,

If I set my coTURN server to support TCP relay only (no UDP relay), can Chrome (version 55) use TCP relay? I searched the Internet and some posts said that (older versions of) Chrome don't support TCP relay; could you tell me whether the newest Chrome supports TCP relay or not?
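For reference, this is roughly the client-side configuration I have in mind (a sketch; the hostname and credentials are placeholders):

var pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.com:3478?transport=tcp',   // placeholder coTURN host; ?transport=tcp selects TCP between client and TURN server
    username: 'user',                                   // placeholder credentials
    credential: 'pass'
  }]
  // optionally add iceTransportPolicy: 'relay' to restrict candidates to the relay
});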

Thanks!
Bo Xu

Bo Xu

Feb 1, 2017, 4:39:07 PM
to discuss-webrtc
Hi,

I tested this and Chrome (I am using v55) can handle TCP relay with the coTURN server. The one issue I found in my testing is that Chrome's TCP relay works well, but the webrtc-internals debug window still shows it as a UDP relay (and also shows a UDP port which is not actually in use).

Regards,
Bo Xu
 

Bo Xu

Mar 22, 2017, 10:19:08 AM
to discuss-webrtc
Hi,

I have a question about WebRTC congestion control: several advanced algorithms such as pre-filtering, the arrival-time filter, and the over-use detector are used in WebRTC. My question is: if I do P2P video from Chrome to my own terminal (supporting H.264 + DTLS + ICE), does my own terminal also need to implement the congestion-control algorithms used by Chrome? If my own terminal doesn't support them, can it still do P2P video with Chrome correctly, or will there be a problem?

The reason I ask is that when my own terminal does P2P video with Chrome, the video connection stops after about 2 minutes, so I wonder whether I need to implement more RTCP-based algorithms.


Thanks!
Bo Xu

Iñaki Baz Castillo

Mar 22, 2017, 10:44:53 AM
to discuss...@googlegroups.com
2017-03-22 15:19 GMT+01:00 Bo Xu <boxus...@gmail.com>:
> I just have a question about webRTC Congestion Control: there are several
> advanced algorithms like Pre-filtering/arrive-time filter/over-use detector
> are using in webRTC , my question is: if I do P2P video from chrome to my
> own terminal(supporting H.264+DTLS+ICE), does my own terminal also need to
> implement those webRTC Congestion Control algorithms using by chrome? if my
> own terminal doesn't support them, can it still do P2P video to chrome
> correctly? or there will be a problem there?

Your endpoint needs to implement whatever it announces in its SDP in the a=rtcp-fb lines.

Currently Chrome implements REMB and Transport-CC as congestion
control protocols:

https://tools.ietf.org/html/draft-alvestrand-rmcat-remb-03
https://tools.ietf.org/html/draft-holmer-rmcat-transport-wide-cc-extensions-01

Those are announced in the SDP via "goog-remb" and "transport-cc" in a=rtcp-fb lines.
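For example, a Chrome video m-section typically carries feedback lines like these (payload type 96 is just illustrative; the real number comes from the offer):

m=video 9 UDP/TLS/RTP/SAVPF 96
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 goog-remb
a=rtcp-fb:96 transport-cc
a=rtcp-fb:96 ccm fir
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli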


> the reason I ask this question is: when my own terminal does P2P video to a
> chrome, the video connection will stopped after about 2 minutes, so I just
> wonder if I need to implement more algorithms based on RTCP message.

Even without a congestion-control protocol, an endpoint should "adapt" its sending bitrate based on the packet loss indicated in received RTCP Receiver Reports. It may also do so if it receives a lot of NACKs and PLI/FIR messages.

You may want to check whether your endpoint implements NACK and
PLI/FIR for sending and receiving.


--
Iñaki Baz Castillo
<i...@aliax.net>

Bo Xu

Mar 29, 2017, 10:00:18 AM
to discuss-webrtc

Thanks for the info!

I have a question about SVC: are the current temporal layers in VP8 part of the SVC standard? For example, if I have a Linux native WebRTC endpoint sending multi-temporal-layer VP8 video to an SFU/MCU, can the SFU/MCU pick different layers out of this incoming stream and send them to different WebRTC endpoints with different bandwidths (for example, CIF video to Android Chrome and 720p video to Windows Chrome)?

Thanks!
Bo Xu


Iñaki Baz Castillo

Mar 29, 2017, 10:14:16 AM
to discuss...@googlegroups.com
2017-03-29 16:00 GMT+02:00 Bo Xu <boxus...@gmail.com>:
> I have a question for SVC: is the current Temporal Layers in VP8 part of
> the SVC-standard?

Even though VP8 has temporal scalability, the way in which Chrome signals it in the SDP (that is, not at all) does not follow any related standard. So to be clear, simulcast/SVC is not just about sending different streams/layers, but also about properly signaling them.


> for example, if I have a linux-native-webrtc-endpoint
> sending Multiple Temporal Layers VP8 video to SFU/MCU, can SFU/MCU pickup
> different layers from this incoming stream and send them to different
> webrtc-endpoint with different bandwidth(for example, some CIF-video to
> Android-chrome and send 720P-video to Windows-chrome)? Thanks!

Simulcast and SVC will be done in 2.0.0, not before. And yes, it will
satisfy the use case you ask for.

Sergio Garcia Murillo

Mar 30, 2017, 3:55:11 AM
to discuss...@googlegroups.com

Please note that sending several resolutions of the same image in SVC is called spatial scalability; temporal scalability means sending the same video resolution at different frame rates. You can find more info about SVC in VP9 here:

https://webrtchacks.com/chrome-vp9-svc/

As you say, it is possible to enable temporal scalability in Chrome, but in order to send several image sizes you will need to enable simulcast. More info here:

https://webrtchacks.com/wireshark-debug-vp8/
https://webrtchacks.com/sfu-simulcast/
http://www.rtcbits.com/2014/09/using-native-webrtc-simulcast-support.html

Best regards
Sergio

Sergio Garcia Murillo

Mar 30, 2017, 3:56:16 AM
to discuss...@googlegroups.com
On 29/03/2017 16:13, Iñaki Baz Castillo wrote:
> 2017-03-29 16:00 GMT+02:00 Bo Xu <boxus...@gmail.com>:
>> I have a question for SVC: is the current Temporal Layers in VP8 part of
>> the SVC-standard?
> Regardless VP8 has temporal escalability, the way in which Chrome
> signals it into the SDP (this si, no way at all) does not follow any
> related standard. So to be clear, simulcast/SVC is not just about
> sending different streams/layers, but also about properly signaling
> them.

FWIW, SVC is not signaled ;)

Best regards
Sergio

Iñaki Baz Castillo

Mar 30, 2017, 5:12:36 AM
to discuss...@googlegroups.com
2017-03-30 9:56 GMT+02:00 Sergio Garcia Murillo
<sergio.gar...@gmail.com>:
> FWIW, SVC is not signaled ;)

If you mean Chrome then yes, I agree (Chrome does not even properly
signal simulcast or multi-stream).

If you mean IETF specs then I don't agree:

https://tools.ietf.org/html/rfc5583

Obviously there must be a way for both the offerer and the answerer to agree on how many spatial/temporal layers to send/receive.

Sergio Garcia Murillo

Mar 30, 2017, 5:41:11 AM
to discuss...@googlegroups.com
That RFC assumes that you send the SVC stream on different payloads/streams, which is not the case for VP9 SVC.

There is no need for the offerer/receiver to negotiate anything, as the receiver has to do nothing different to decode VP9 SVC versus non-SVC VP9; in fact, you will not be able to tell the difference unless you dig deep into the encoded stream.

For SFUs, you will have to parse the VP9 payload descriptor in order to be aware of the scalability structure (even if you don't fully need that data), or wait for Frame Marking to be available.

Best regards
Sergio

Iñaki Baz Castillo

Mar 30, 2017, 5:48:44 AM
to discuss...@googlegroups.com
2017-03-30 11:41 GMT+02:00 Sergio Garcia Murillo
<sergio.gar...@gmail.com>:
> That rfc assumes that you send the SVC stream on different payload/streams
> which is not the case on VP9 SVC.

Agreed, but H264-SVC also exists :)


> There is no need for the offerer/receiver to negotiate anything, as the
> receiver has to do nothing different to decode an VP9 SVC or VP9 non-SVC, in
> fact, you will not be able to tell the difference if you do not deep dive in
> the encoded stream.

Following the same rationale/behavior as in simulcast [*], the receiver (be it an SFU or whatever) may wish to tell the sender which layers it wants to receive, which ones to pause at any time, etc. IMHO an IETF spec may eventually define that within SDP.



[*] https://tools.ietf.org/html/draft-ietf-mmusic-sdp-simulcast

Sergio Garcia Murillo

Mar 30, 2017, 6:11:25 AM
to discuss...@googlegroups.com
On 30/03/2017 11:48, Iñaki Baz Castillo wrote:
> 2017-03-30 11:41 GMT+02:00 Sergio Garcia Murillo
> <sergio.gar...@gmail.com>:
>> That rfc assumes that you send the SVC stream on different payload/streams
>> which is not the case on VP9 SVC.
> Agreed, but H264-SVC also exists :)
Not in webrtc.. :)

>> There is no need for the offerer/receiver to negotiate anything, as the
>> receiver has to do nothing different to decode an VP9 SVC or VP9 non-SVC, in
>> fact, you will not be able to tell the difference if you do not deep dive in
>> the encoded stream.
> Following the same rationale/behavior as in simulcast [*], the
> receiver (let it be a SFU or whatever) may wish to tell the sender
> which layers it wants to receive, which ones to pause at any time,
> etc. IMHO an IETF spec may eventually define that within SDP.
>
Please don't add more bloat to SDP; if you need that, do it at the app signaling level and expose JS APIs to control it easily.

Best regards
Sergio


Iñaki Baz Castillo

Mar 30, 2017, 6:16:36 AM
to discuss...@googlegroups.com
2017-03-30 12:11 GMT+02:00 Sergio Garcia Murillo
<sergio.gar...@gmail.com>:
> Please, don't add more bloat stuff into SDP, if you need to do that do it at
> app signaling level and expose JS apis to control that easily.

You know that's what I want. But it's not possible to just "forget SDP" since, in the end, you must communicate with your browser by providing it with an SDP, and you need to know how the browser produces such an SDP in order to extract the information you want from it.

Bo Xu

May 3, 2017, 4:35:05 PM
to discuss-webrtc

Hi,

Thanks for the info! I have a question about VP9 SVC in WebRTC. In this article:

it says that if we start Chrome with the following parameter:
chrome --force-fieldtrials=WebRTC-SupportVP9SVC/EnabledByFlag_2SL3TL 

then Chrome will SVC-encode every VP9 stream it sends. With the above setting, the VP9 encoder will produce 2 spatial layers (at full size and size/2) and 3 temporal layers (at FPS, FPS/2 and FPS/4), and no quality layers (so, for example, a 720p 30 fps capture would give 1280x720 and 640x360 layers at 30, 15 and 7.5 fps).

My question: if I am using native WebRTC on Linux (for example, on Ubuntu 16.04), can I make my VP9 encoder output the same SVC VP9 stream? If so, could you tell me how to set the parameters for that? And on the receiver side, how can I extract each sub-stream from that SVC VP9? Or is SVC VP9 only available in Chrome?

Thanks!
Bo Xu

 