Receiver Bandwidth Estimation and timestamps (and offsets) in rtp-hdrext


Oscar Divorra

Jun 6, 2014, 10:37:52 AM6/6/14
to discuss...@googlegroups.com
Hi

By checking the default generated Chrome SDP, we can observe:

    a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
    a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time


These enable carrying extra timestamp data in RTP header extensions (aka hdrext) in order to increase the accuracy of per-packet timestamp marking and, for instance, improve the precision of any timestamp-based computation on the receiver.


More specifically, looking at Receiver Bandwidth Estimation, we see the received timestamp offset (a=extmap:2 urn:ietf:params:rtp-hdrext:toffset) being used in:


    void RemoteBitrateEstimatorSingleStream::IncomingPacket(
        int64_t arrival_time_ms,
        int payload_size,
        const RTPHeader& header) {
      uint32_t ssrc = header.ssrc;
      uint32_t rtp_timestamp = header.timestamp +
          header.extension.transmissionTimeOffset;
      ...


Its use for BWE is also cited in:


We tried to understand in which practical network situations these offsets would make a real difference, and in which they would not. For this, we tested different network scenarios with and without limited bandwidth and losses, always ensuring some reasonable network delay, using p2p WebRTC sessions (Chrome 37) where the hdrext SDP lines cited above were included, and other tests where they were not in the SDPs. At first sight, we did not see any relevant difference in BWE reaction (as observed from webrtc-internals).

Can anyone advise on reproducible situations where having the extra timestamp-offset data carried in the hdrext would make a perceivable difference for the currently implemented Receiver Bandwidth Estimation? What improvements should we expect to see?

Thank you in advance.

Best,

Òscar





Sergio Garcia Murillo

Jun 6, 2014, 10:59:49 AM6/6/14
to discuss...@googlegroups.com

The current algorithm (or at least the last time I checked) presupposes that all packets with the same RTP timestamp were sent at the same time, because it works by comparing the reception time delta between two packets against the sending delta.

Also, IIRC, that implied the algorithm could only be run on a per-frame basis. With the extensions you can use much more accurate measurements on a per-packet basis, and apply RTP smoothing instead of sending all the packets of a video frame simultaneously.

Best regards
Sergio


Oscar Divorra

Jun 7, 2014, 5:02:32 PM6/7/14
to discuss...@googlegroups.com
Right, that's clear to me: in theory, using the hdrext with toffset, accuracy should be higher and better suited for RTP smoothing.

My question is more related to practical assessments: is there an objective assessment showing a clear increase in BWE performance thanks to the use of RTP header extensions?
Or what test could we reproduce that would help us validate a substantial performance increase?

(if anyone knows).

Best,

Òscar


On 06/06/14 16.59, "Sergio Garcia Murillo" <sergio.gar...@gmail.com> wrote:

Current algorithm (or at least last time I checked) presupposes that all the packets with same rtp timestamp were sent at the same time because it works by comparing the reception time delta between two packets vs sending delta.

Justin Uberti

Jun 7, 2014, 11:44:28 PM6/7/14
to discuss-webrtc
Yes. There are nontrivial delays that can occur between capture and socket that affect BWE accuracy.

Oscar Divorra Escoda

Jun 8, 2014, 5:26:28 AM6/8/14
to discuss...@googlegroups.com
Thanks Justin.
Do you guys have any specific data?


Justin Uberti

Jun 8, 2014, 9:39:30 PM6/8/14
to discuss-webrtc
What question are you trying to answer?

Oscar Divorra

Jun 12, 2014, 8:06:54 AM6/12/14
to discuss...@googlegroups.com
Hi Justin,

I am particularly trying to answer:

Which test conditions/scenarios would allow us to see that using the hdrext timestamp offset (toffset) clearly improves the behavior of congestion control vs. the case where hdrext toffset is disabled (e.g. via SDP)?
What improves in that case? Is the average BWE higher? Is BWE more resilient to sudden drops? Does BWE ramp up more quickly? What is the overall expected improvement?

Thanks.

Òscar

Justin Uberti

Jun 12, 2014, 8:24:34 PM6/12/14
to discuss...@googlegroups.com
It avoids false congestion signals caused by local machine load that would cause the BWE to drop suddenly.

It also is required for BWE to work with packet pacing, which is the default starting in M37.

Basically, if you want to use REMB, you will want to use the header extension. It's a critical part of the mechanism.

Oscar Divorra

Jun 13, 2014, 5:29:52 AM6/13/14
to discuss...@googlegroups.com
Thank you Justin,

We'll double-check the machine load test conditions. That would make sense.
I am certainly using REMB, but without specific machine load conditions (just network load/QoS conditions), webrtc-internals did not show much difference in the numbers and statistics when the extension was used vs. when it was disabled.
But some machine load conditions could change things.

Best.


Stefan Holmer

Jun 13, 2014, 5:41:53 AM6/13/14
to discuss...@googlegroups.com
Justin is right. It can also be used to decouple two network paths if the media is being relayed through an MCU, so that any delay introduced on the path from the sender to the MCU doesn't affect the estimate on the path from the MCU to the receiver.