Strategies for probing connection


Michal Śledź

Oct 6, 2022, 2:19:32 PM
to discuss-webrtc
Hi,
we are implementing simulcast in our SFU and we reached a point where we need to probe the connection to know when to move to the higher simulcast layer.

At the moment we always generate 200kb/s of additional traffic by injecting RTP padding packets.

While this approach works fine and we are able to move between layers, it is pretty slow.
We don't have temporal scalability implemented, so we basically have three layers with the following limits (in kb/s): 150, 500, 1500.

We would like to make our probing a little more aggressive, but probing for as much bandwidth as the next layer needs seems too aggressive. In our case, to move from medium to high we would have to generate 1 Mb/s of additional traffic. If we assume that we probe the connection for ~20 seconds, that might overwhelm the network too much for too long.

Maybe adding temporal scalability solves this as we would have more layers.

I am just curious if you have any thoughts you could share with me.

Philipp Hancke

Oct 7, 2022, 7:07:16 AM
to discuss...@googlegroups.com
This is a tricky problem to solve for the SFU (less so for the sending client which can lower the encoding bitrate a bit while ramping up, see the minimum bitrates in the simulcast table).

The estimate at a certain bitrate (150, 500, ...) is typically already a bit higher than that bitrate, so you need less probing than you think.
Temporal layer dropping helps greatly, since you can get to ~70% of the target bitrate with just one layer.

Probing is sadly one of the less specified (or documented) areas of WebRTC...



Sean DuBois

Oct 7, 2022, 10:33:10 AM
to discuss...@googlegroups.com
Hi Michal,

I am interested in this and have been trying to solve it as well. The approach I am exploring is to run Google Congestion Control[0] and then forward the highest simulcast layer that is under the estimate.
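The layer-selection step described above can be sketched in a few lines. This is a hypothetical illustration, not code from any real SFU; the layer names and the bitrates (taken from the numbers earlier in this thread) are assumptions:

```python
# Hypothetical sketch: pick the highest simulcast layer whose target
# bitrate fits under the congestion controller's current estimate.
# Layer bitrates (kb/s) follow the numbers quoted in this thread.
LAYER_BITRATES_KBPS = {"low": 150, "medium": 500, "high": 1500}

def select_layer(estimate_kbps: float):
    """Return the highest layer that fits under the estimate, or None."""
    best = None
    for name, bitrate in sorted(LAYER_BITRATES_KBPS.items(), key=lambda kv: kv[1]):
        if bitrate <= estimate_kbps:
            best = name
    return best

print(select_layer(600))  # with these numbers, "medium" fits under 600 kb/s
```

In practice you would also want hysteresis (e.g. only switch up after the estimate has stayed above the next layer's bitrate for a while) to avoid flapping between layers.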

In the future, instead of RTP padding, I want to look at using FEC. It would be nice for the probing traffic to be useful.

Re: specified/documented areas, I would love to write a post / build a playground for people to explore and learn these things. It would be really cool
if we could have a bandwidth shaper built in. People could actively watch the active layer changing `low -> med -> low -> med -> high`, the current suggested bitrate, etc.


Michal Śledź

Oct 11, 2022, 6:38:01 AM
to discuss-webrtc
Thanks a lot for the responses.

The idea to use FEC sounds very interesting.

One more thing we observed is that receiving a lot of small packets can impact a client's performance. We started asking ourselves the following questions:
* would it be possible to send more than 255 bytes of padding?
* is there some value of `packetsReceived/s` that we definitely shouldn't exceed? E.g. probing should fit into 1000 packets/s
* what about probing using RTP header extensions? We could generate a padding packet plus some useless data put into non-existent header extensions. Not sure if browsers would accept such packets, but on the other hand we just want to probe the connection, so who cares
* have you tried implementing temporal scalability for H264?

Regarding the example app, I think we could create something like you described. We already show how layers switch, and we also allow setting the target layer you want to receive. The only things to add would be letting users set how much to probe and showing the estimated bandwidth.

Philipp Hancke

Oct 11, 2022, 6:44:25 AM
to discuss...@googlegroups.com
The 255-byte limit is imposed by the requirement that the amount of padding is specified by the last octet of the packet when the "P" bit is set.
Using RTX (i.e. packets already transmitted, but now with a different TWCC sequence number) for probing is what libwebrtc does, since that gets around this limit and *may* provide some extra reliability against losses.
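The single-octet padding-length rule above is easiest to see by building a padding-only packet by hand. A minimal sketch (not a full RTP stack; the function name and defaults are illustrative):

```python
import struct

def make_padding_packet(ssrc: int, seq: int, timestamp: int,
                        pad_len: int, payload_type: int = 96) -> bytes:
    """Build a padding-only RTP packet (a sketch, not a full RTP stack).

    With the P bit set, the last octet of the packet carries the padding
    length. It is a single byte, which is why padding tops out at 255 bytes.
    """
    if not 1 <= pad_len <= 255:
        raise ValueError("padding length must fit in one octet (1..255)")
    first_byte = 0x80 | 0x20  # version 2, P bit set, no CSRCs/extension
    header = struct.pack("!BBHII", first_byte, payload_type, seq, timestamp, ssrc)
    padding = bytes(pad_len - 1) + bytes([pad_len])  # zeros, then the count
    return header + padding

pkt = make_padding_packet(ssrc=0x1234, seq=1, timestamp=0, pad_len=255)
print(len(pkt))  # 12-byte header + 255 bytes of padding = 267
```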

Michal Śledź

Oct 11, 2022, 9:00:12 AM
to discuss...@googlegroups.com
Thanks, that's really helpful

Brian Baldino

Oct 12, 2022, 8:12:37 PM
to discuss...@googlegroups.com
In Jitsi we pull packets from a retransmission cache (used to handle NACKs) and send them via RTX. If there's a bit of room 'left over' (not enough to retransmit another full packet), we'll use padding to fill the remainder.
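The scheme described above (spend the probe budget on cached packets via RTX, then top up with padding) could be sketched roughly like this. All names here are hypothetical, and this is not Jitsi's actual code:

```python
# Hypothetical sketch of the Jitsi-style probing described above: spend the
# probe budget on packets pulled from a retransmission cache (resent via RTX),
# then fill any leftover room with a padding-only packet.
def build_probe_batch(budget_bytes: int, rtx_cache: list,
                      max_padding: int = 255) -> list:
    batch = []
    for pkt in rtx_cache:
        if len(pkt) <= budget_bytes:
            batch.append(pkt)           # resend this cached packet over RTX
            budget_bytes -= len(pkt)
    if budget_bytes > 0:
        # not enough room for another full packet: pad out the remainder,
        # capped at the 255-byte RTP padding limit
        batch.append(bytes(min(budget_bytes, max_padding)))
    return batch
```

A real implementation would also rewrite sequence numbers for the RTX stream and track the TWCC feedback for these packets.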

Michal Śledź

Oct 13, 2022, 3:59:11 AM
to discuss-webrtc
We did an experiment:
1. probe the connection by sending 2 Mb/s of padding packets (up to 255 bytes + header) + 100 kb/s of video in 320x160 + 53 kb/s of audio
2. probe the connection by sending the highest possible layer, i.e. 1.5 Mb/s of video in 1280x720 + 53 kb/s of audio

The difference is massive. When sending 1.5 Mb/s of video, our estimate grows by ~300 kb/s per step; in ~25 seconds it reaches ~1.5 Mb/s.
When sending 2 Mb/s of padding packets, our estimate grows by at most ~100 kb/s per step.
We get an estimate every 5 seconds.

Michal Śledź

Oct 17, 2022, 12:36:54 PM
to discuss-webrtc
Just wanted to say that we finally debugged the problem, and it turned out we had a bug in our GCC implementation.

We were calculating the exponential moving average over all r_hats, whereas we should only use r_hats from the decrease state.
Because of that we were starting in multiplicative mode but moving pretty quickly into additive mode, and the whole estimation slowed down.
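The fix can be sketched as follows. This is a simplified, hypothetical illustration of the relevant piece of GCC's rate control (class, method, and threshold names are assumptions, not the actual implementation): the moving average of the incoming rate r_hat feeds the multiplicative-to-additive switch, so it must only be updated while in the decrease state.

```python
# Sketch of the fix described above: the exponential moving average of the
# incoming rate r_hat is compared against new samples to decide between
# multiplicative and additive increase, so it must only track r_hats
# observed in the Decrease state (the bug was updating it in every state).
class RateControl:
    def __init__(self, alpha: float = 0.95):
        self.alpha = alpha
        self.avg_decrease_r_hat = None  # EMA of r_hat seen while decreasing

    def on_r_hat(self, r_hat: float, state: str) -> str:
        if state == "decrease":
            if self.avg_decrease_r_hat is None:
                self.avg_decrease_r_hat = r_hat
            else:
                self.avg_decrease_r_hat = (self.alpha * self.avg_decrease_r_hat
                                           + (1 - self.alpha) * r_hat)
        # grow multiplicatively until r_hat nears the rate at which we last
        # had to decrease; after that, switch to cautious additive increase
        if self.avg_decrease_r_hat is None or r_hat < 0.95 * self.avg_decrease_r_hat:
            return "multiplicative"
        return "additive"
```

With the bug, ordinary (non-decrease) r_hats inflated the average's relevance early, so the controller left multiplicative mode almost immediately, which matches the slow ramp-up described above.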

Thanks a lot for the help!
