Hi everyone, I've run some tests in a virtual environment using the Chromium QUIC client and server.
Between the client host and the server host, I set up a bandwidth limit of 100 Mb/s, 25 ms of artificial delay, and a packet-loss rate increasing on a logarithmic scale: 0.001%, 0.002%, 0.005%, 0.01%, 0.02%, 0.05%, ..., 1%, 2%, 5%.
I ran 100 tests for each packet-loss rate with a 10 MByte payload file. For very small loss rates, the average throughput is about 60 Mbit/s, whereas for loss rates above 1% it drops sharply to around 5 Mbit/s.
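For what it's worth, a sharp drop like this is roughly what a loss-based congestion controller (the NewReno/Cubic-style defaults in Chromium QUIC) would predict. Here is a back-of-the-envelope sketch using the classic Mathis approximation, throughput ≈ C · MSS / (RTT · √p). The RTT of 50 ms (25 ms each way) and MSS of 1350 bytes are my assumptions for this setup, not measured values:

```python
import math

def mathis_throughput_mbps(loss, rtt_s=0.05, mss_bytes=1350, c=math.sqrt(3 / 2)):
    """Steady-state throughput estimate for a Reno-style,
    loss-based congestion controller (Mathis approximation).
    loss: packet-loss probability (e.g. 0.01 for 1%).
    """
    bps = c * mss_bytes * 8 / (rtt_s * math.sqrt(loss))
    return bps / 1e6  # Mbit/s

for loss in [1e-5, 1e-4, 1e-3, 0.01, 0.05]:
    # Cap the estimate at the 100 Mb/s link rate of the testbed.
    est = min(mathis_throughput_mbps(loss), 100.0)
    print(f"loss {loss:.3%}: ~{est:.1f} Mbit/s")
```

With these assumed numbers the model gives roughly 2-3 Mbit/s at 1% loss, which is in the same ballpark as the ~5 Mbit/s I measured, so part of the drop may simply be the congestion controller reacting to loss rather than the recovery mechanism itself.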
I suspect it could be the recovery mechanism: it seems that every time a packet is lost, QUIC retransmits all packets up to the next packet that has been acknowledged (https://blog.apnic.net/2022/11/03/comparing-tcp-and-quic/#:~:text=QUIC%20recovery%20and%20flow%20controll).
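To get a feel for why that kind of go-back-N-style recovery would be so costly, here is a toy goodput model comparing it with per-packet selective retransmission. The window size of 100 packets is an arbitrary assumption purely for illustration:

```python
def goodput_fraction(p, window=100, go_back_n=True):
    """Toy model of useful-transmission fraction under loss rate p.
    go-back-N: each loss wastes roughly a whole window of retransmissions;
    selective repeat: each loss wastes only the one lost packet.
    """
    wasted_per_packet = p * window if go_back_n else p
    return 1.0 / (1.0 + wasted_per_packet)

for p in [0.001, 0.01, 0.05]:
    gbn = goodput_fraction(p, go_back_n=True)
    sel = goodput_fraction(p, go_back_n=False)
    print(f"loss {p:.1%}: go-back-N {gbn:.2f}, selective {sel:.2f}")
```

Under this toy model, at 1% loss a go-back-N scheme would waste about half the link, while selective retransmission barely notices. So if Chromium's recovery really does resend everything up to the next acknowledged packet, that alone could explain a large part of the collapse.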
Does anyone have an idea of what might be causing this significant drop in throughput?
Or can someone explain how exactly loss recovery works in Chromium's QUIC implementation?