I found some reasons posted in the WebRTC issues about why BBR performs badly in WebRTC:

"To measure the bottleneck bandwidth, we need to build network queues. To accurately measure the RTT, all network queues must be completely drained, which requires sending almost nothing for at least one RTT. These variations in target send rate do not play nice with encoders and jitter buffers."

The discussion link is here. And I'm wondering what kind of "third-party tool" you are using? When I use clumsy to apply a bandwidth limit, there is no buffer queue in it, so over-sending causes packet loss. Is it because I am not using it correctly?

On Tuesday, 23 February 2021 at 14:23:56 UTC+8 aery...@gmail.com wrote:

In the first figure, the red line is the bandwidth measurement given by BBR. The blue line is the corresponding suggested encoding bitrate (given by the adaptive encoding module). The green histogram-like lines are the actual outbound bandwidth. The horizontal dotted line is the ground-truth bandwidth. The second and third figures show the corresponding RTT and pacing queue size; together they determine the lag between server and client.

By the way, I start the server and client on the same machine and only apply a bandwidth constraint to both applications using a third-party tool, so the RTprop is actually small. Does this scenario not being a "long fat network" influence the performance?

On Tuesday, 23 February 2021 at 14:17:39 UTC+8, Aerys Nan wrote:

On Tuesday, 23 February 2021 at 14:14:50 UTC+8, Aerys Nan wrote:

Hello,

Recently I have been studying congestion control algorithms for video streaming applications. I noticed that WebRTC has implemented GCC, BBR, and PCC for congestion control. However, BBR was deprecated due to some "performance issues" and removed from the codebase. I ran the BBR code and found the performance unsatisfactory.

According to my experiment results (see the enclosed figure), when BBR overestimates the bandwidth, the adaptive encoding module increases the target bitrate for the encoder. This is very different from bulk transfer, where an overestimation does not increase the total amount of data to be sent. The increased bitrate then causes congestion. When the inflight data is bounded by the CWND, the remaining data piles up in the pacing queue, leading to an obvious lag between server and client.

But these experiment results conflict with the fact that YouTube is using QUIC-BBR right now. So I have several questions:
1. Are the same problems observed in QUIC-BBR?
2. Are there obvious differences between WebRTC-BBR and QUIC-BBR?
3. Is there any open-source platform with a successful implementation of BBR for video streaming? It would be even better if that platform had tools to directly test its performance.

Thanks a lot.
Best regards,
Nan
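The lag mechanism described above can be illustrated with a toy simulation. This is not WebRTC code, and all the numbers are assumptions chosen only for illustration: if BBR's overestimate lets the encoder target a bitrate above the true bottleneck bandwidth, the excess bytes accumulate in the pacing queue, and the one-way lag (queue size divided by bottleneck bandwidth) grows without bound until the estimate is corrected.

```python
def simulate_lag(true_bw_bps, encoder_bps, duration_s, dt=0.1):
    """Toy model: return (time, queue_bytes, lag_seconds) samples.

    Assumes a constant encoder output rate and a constant bottleneck
    bandwidth; both values are hypothetical, not measured.
    """
    queue = 0.0  # bytes waiting in the pacing queue
    samples = []
    for i in range(int(duration_s / dt)):
        t = i * dt
        # Bytes produced by the encoder minus bytes drained by the bottleneck.
        queue += (encoder_bps - true_bw_bps) / 8 * dt
        queue = max(queue, 0.0)
        # Every queued byte must drain through the bottleneck before newer
        # frames are delivered, so the extra one-way delay is queue / bw.
        lag = queue / (true_bw_bps / 8)
        samples.append((t, queue, lag))
    return samples

# Hypothetical scenario: ground truth is 2 Mbps, but an overestimate
# lets the adaptive encoding module target 2.5 Mbps.
trace = simulate_lag(true_bw_bps=2_000_000, encoder_bps=2_500_000, duration_s=10)
t, queue, lag = trace[-1]
print(f"after {t:.1f}s: queue = {queue / 1000:.0f} kB, lag = {lag:.2f} s")
```

With these made-up numbers the 0.5 Mbps excess piles up about 62.5 kB/s into the pacing queue, so within seconds the lag reaches the multi-second range, matching the "obvious lag between server and client" observed in the experiment. In bulk transfer the same overestimate merely drains the flow's fixed backlog faster or slower; here the encoder keeps producing at the inflated rate, which is the key difference.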
--
You received this message because you are subscribed to the Google Groups "BBR Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bbr-dev+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bbr-dev/5a5cad39-4711-4fe2-b743-2fb1ce567d06n%40googlegroups.com.