BBRv2 throughput behavior in the real-world environment


Tergel Munkhbat

Aug 12, 2021, 3:05:21 AM
to BBR Development
Hello folks,

Good day to you. May I ask a question related to BBRv2 throughput behavior? 

We measured BBRv2 throughput behavior in a real-world network; the topology is shown in the picture below. The sender server, located in Daejeon (South Korea), has a 100Gbps interface card, and the receiver server, located in Amsterdam (Netherlands), has a 10Gbps card. Between these two servers the bottleneck link is 10Gbps and the RTT is about 265ms. Furthermore, the background traffic (current usage of the network) along the whole end-to-end path can be measured through SNMP and flow collectors. At the time of the experiment, the background traffic at the bottleneck was around 500 Mbps, meaning the available bandwidth was around 9.5Gbps.
[Attachment: Topology (Daejeon - Amsterdam).png]
When testing throughput between these servers for 300 seconds, BBRv2 appears to overestimate the optimal operating point of the network. The following picture displays the throughput behavior and packet losses during the test. Due to this overestimation, sudden bursts of large packet losses are observed. My question is: what could cause this issue? Also, has anyone else seen similar results?
[Attachment: amsterdam_sender_result_1.txt.jpg]
Looking forward to hearing from you.

Sincerely,
Tergel

Neal Cardwell

Aug 12, 2021, 2:03:55 PM
to Tergel Munkhbat, BBR Development
Hi,

Thanks for the report!

A few questions and comments:

Can you please elaborate on the reasoning behind your statement: "BBRv2 overestimates the optimal point of throughput over the network"? I would be curious to understand exactly which metrics and experiments you were using to reach that conclusion.

For example: the throughput graph shows many spikes above the receiver's NIC rate, so clearly those spikes are not due to BBRv2 overestimating the bandwidth, but rather are measurement artifacts from moments when TCP byte stream sequence holes were filled in by retransmissions and the kernel on the receiving host was able to pass data up to the receiving application at a rate much faster than the receiver's NIC rate.

Regarding the packet loss: usually some amount of packet loss is inevitable if there is no ECN, the buffer is moderately sized, you run a capacity-seeking test (as here), and the traffic mix is dynamic (cross-traffic flows enter and leave). Rather than simply the absolute number of packets that are lost, the more important questions are:

(a) Could this packet loss have been prevented with reasonable algorithms? If the packet loss was due to new loss-based cross-traffic flows exponentially "slow-starting" into the same bottleneck, then the flow under test may have been a bystander or victim, and there may have been little that the test flow could have done to avoid the loss.

(b) What is the packet loss rate? This is much more relevant than the absolute number of packets that were lost, especially given the high data rates here.
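(To illustrate (b): the loss rate is simply lost segments over total segments sent, scaled to a percentage. A minimal sketch, where LOST and SENT are made-up example counter values rather than numbers from this test, e.g. as read off `ss -ti` or netstat output:

```shell
# Hypothetical counter values -- substitute your flow's actual numbers.
LOST=1200        # retransmitted/lost segments
SENT=2400000     # total segments sent
awk -v l="$LOST" -v s="$SENT" 'BEGIN { printf "loss rate = %.3f%%\n", 100*l/s }'
# prints: loss rate = 0.050%
```

At these data rates, even a visually alarming count of lost packets can correspond to a loss rate well below one percent.)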

Some additional information that would be useful to be able to interpret the BBRv2 behavior here:

(1) What is the overall average packet loss rate for the flow, as a percentage?

(2) What is the overall goodput of the flow?

(3) What do the goodput and loss rate for CUBIC look like on this path? It would be best to randomly alternate between BBRv2 and CUBIC to get at least a few samples of each.

(4) What are the send and receive buffer settings used here? It seems like the flow maxes out quite a bit below 9.5Gbps so I wonder if the flow is mostly limited by buffering settings on the end hosts?

(5) What is the sampling interval for your graphs? (1 minute buckets? 30 sec?) That should help us estimate the loss rate (as a percentage) in the time sampling buckets.

(6) Would it be possible to share periodic ss output or packet traces for your test? For example, ss logs would be interesting to see:
(for i in `seq 1 3000`; do date '+%s.%N'; ss -tinm; usleep 100000; done) > ./ss.log.txt &
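(Regarding (4): one way to sanity-check the host buffer settings is against the path's bandwidth-delay product. A rough back-of-envelope sketch, using only the figures reported earlier in this thread (~9.5 Gbit/s available bandwidth, ~265 ms RTT):

```shell
# Back-of-envelope BDP for the Daejeon-Amsterdam path, using the
# numbers from the report above (assumptions, not measurements).
BW_BITS=9500000000   # available bandwidth, bits/sec (~9.5 Gbit/s)
RTT_MS=265           # round-trip time, milliseconds
BDP_BYTES=$(( BW_BITS / 8 * RTT_MS / 1000 ))
echo "BDP ~= ${BDP_BYTES} bytes (~$(( BDP_BYTES / 1024 / 1024 )) MiB)"
# prints: BDP ~= 314687500 bytes (~300 MiB)
```

So a single flow needs on the order of 300 MB of send/receive window to fill this pipe; if tcp_wmem/tcp_rmem max out well below that, the flow will plateau below the available bandwidth regardless of congestion control.)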

Best regards,
neal


Bob McMahon

Aug 12, 2021, 4:01:15 PM
to Neal Cardwell, Tergel Munkhbat, BBR Development
Is it possible to take a more direct measurement of TCP, e.g. using iperf 2 (master branch)? I find trying to analyze TCP from packet loss plots difficult, and it's good to get the actual TCP stats. Sync the clocks to use --trip-times. https://sourceforge.net/projects/iperf2/  One can also speed up the sampling rate (build with ./configure --enable-fastsampling).

[rjmcmahon@ryzen3950 iperf2-code]$ src/iperf -c 192.168.1.64 --trip-times -i 1  -e -X -t 10
------------------------------------------------------------
Client connecting to 192.168.1.64, TCP port 5001 with pid 325819 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] Clock sync check (ms): RTT/Half=(0.309/0.154) OWD-send/ack/asym=(0.243/0.066/0.177)
[  1] local 192.168.1.133%enp4s0 port 54578 connected with 192.168.1.64 port 5001 (MSS=1448) (trip-times) (sock=3) (peer 2.1.4-master)
[ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.0000-1.0000 sec  1.09 GBytes  9.38 Gbits/sec  8949/0         67     1431K/1075 us  1091007
[  1] 1.0000-2.0000 sec  1.10 GBytes  9.41 Gbits/sec  8978/0          7     1431K/1053 us  1117535
[  1] 2.0000-3.0000 sec  1.10 GBytes  9.42 Gbits/sec  8981/0          0     1433K/1084 us  1085939
[  1] 3.0000-4.0000 sec  1.10 GBytes  9.42 Gbits/sec  8982/0          0     1435K/1065 us  1105435
[  1] 4.0000-5.0000 sec  1.10 GBytes  9.41 Gbits/sec  8975/0          0     1443K/1080 us  1089233
[  1] 5.0000-6.0000 sec  1.10 GBytes  9.41 Gbits/sec  8978/0          0     1443K/1061 us  1109109
[  1] 6.0000-7.0000 sec  1.10 GBytes  9.42 Gbits/sec  8979/0         13     1443K/1061 us  1109232
[  1] 7.0000-8.0000 sec  1.10 GBytes  9.42 Gbits/sec  8985/0          0     1443K/1048 us  1123742
[  1] 8.0000-9.0000 sec  1.10 GBytes  9.41 Gbits/sec  8975/0          0     1443K/1055 us  1115044
[  1] 9.0000-10.0000 sec  1.10 GBytes  9.42 Gbits/sec  8984/0          0     1467K/1055 us  1116162
[  1] 0.0000-10.0100 sec  11.0 GBytes  9.40 Gbits/sec  89768/0         87     1467K/1055 us  1114128

[root@rjm-nas iperf2-code]# src/iperf -s -i 1 --histograms
------------------------------------------------------------
Server listening on TCP port 5001 with pid 7349
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
Enabled receive histograms bin-width=0.100 ms, bins=10000 (clients should use --trip-times)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.64%enp2s0 port 5001 connected with 192.168.1.133 port 54578 (MSS=1448) (trip-times) (sock=4) (peer 2.1.4-master) on 2021-08-12 12:59:17 (PDT)
[ ID] Interval            Transfer    Bandwidth    Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.0000-1.0000 sec  1.09 GBytes  9.36 Gbits/sec  2.965/0.770/6.308/0.382 ms (8923/131075) 3.32 MByte 394463  23866=3719:3760:3800:3763:7740:952:25:107
[  1] 0.0000-1.0000 sec F8-PDF: bin(w=100us):cnt(8923)=8:1,9:1,10:2,11:3,12:3,13:5,14:4,15:5,16:4,17:1,18:1,19:2,21:2,22:3,23:13,24:274,25:703,26:729,27:719,28:754,29:750,30:764,31:744,32:730,33:754,34:715,35:731,36:445,37:19,39:6,40:7,41:4,44:1,46:2,47:4,49:3,51:1,52:2,53:3,56:1,58:4,59:2,62:1,64:1 (5.00/95.00/99.7%=25/36/41,Outliers=0,obl/obu=0/0) (6.308 ms/1628798357.646599)
[  1] 1.0000-2.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.962/2.270/3.631/0.347 ms (8978/131080) 3.32 MByte 397372  23671=3679:3668:3655:3737:7651:883:384:14
[  1] 1.0000-2.0000 sec F8-PDF: bin(w=100us):cnt(8978)=23:14,24:273,25:755,26:701,27:741,28:803,29:715,30:762,31:780,32:745,33:749,34:747,35:732,36:438,37:23 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.631 ms/1628798359.260850)
[  1] 2.0000-3.0000 sec  1.10 GBytes  9.42 Gbits/sec  2.963/2.292/3.650/0.348 ms (8979/131070) 3.32 MByte 397207  23797=3685:3709:3765:3753:7804:687:373:21
[  1] 2.0000-3.0000 sec F8-PDF: bin(w=100us):cnt(8979)=23:6,24:292,25:730,26:742,27:726,28:776,29:743,30:730,31:776,32:764,33:749,34:706,35:758,36:470,37:11 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.650 ms/1628798360.419843)
[  1] 3.0000-4.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.963/2.289/3.681/0.347 ms (8978/131074) 3.32 MByte 397210  23904=3716:3754:3803:3766:7373:1395:75:22
[  1] 3.0000-4.0000 sec F8-PDF: bin(w=100us):cnt(8978)=23:9,24:275,25:726,26:767,27:700,28:772,29:752,30:753,31:764,32:780,33:732,34:762,35:724,36:445,37:17 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.681 ms/1628798361.510100)
[  1] 4.0000-5.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.965/2.279/3.662/0.347 ms (8979/131069) 3.33 MByte 396959  23342=3568:3609:3628:3626:7136:1268:456:51
[  1] 4.0000-5.0000 sec F8-PDF: bin(w=100us):cnt(8979)=23:7,24:291,25:711,26:737,27:736,28:771,29:734,30:748,31:782,32:765,33:728,34:744,35:747,36:458,37:20 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.662 ms/1628798361.713380)
[  1] 5.0000-6.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.962/2.293/3.648/0.347 ms (8978/131074) 3.32 MByte 397275  23800=3730:3688:3721:3724:7211:1716:2:8
[  1] 5.0000-6.0000 sec F8-PDF: bin(w=100us):cnt(8978)=23:6,24:306,25:711,26:759,27:717,28:763,29:744,30:741,31:784,32:765,33:745,34:735,35:735,36:451,37:16 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.648 ms/1628798363.295370)
[  1] 6.0000-7.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.964/2.288/3.643/0.346 ms (8979/131062) 3.33 MByte 397093  23990=3724:3785:3760:3798:7617:1290:7:9
[  1] 6.0000-7.0000 sec F8-PDF: bin(w=100us):cnt(8979)=23:4,24:262,25:745,26:749,27:723,28:763,29:769,30:742,31:764,32:763,33:749,34:737,35:741,36:450,37:18 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.643 ms/1628798364.200219)
[  1] 7.0000-8.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.962/2.295/3.643/0.345 ms (8979/131069) 3.32 MByte 397266  23564=3640:3691:3639:3669:6856:2015:46:8
[  1] 7.0000-8.0000 sec F8-PDF: bin(w=100us):cnt(8979)=23:5,24:270,25:720,26:742,27:750,28:748,29:759,30:770,31:772,32:735,33:773,34:758,35:731,36:438,37:8 (5.00/95.00/99.7%=25/35/36,Outliers=0,obl/obu=0/0) (3.643 ms/1628798365.428261)
[  1] 8.0000-9.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.962/2.289/3.630/0.348 ms (8978/131082) 3.32 MByte 397320  24280=3836:3857:3875:3822:8802:67:8:13
[  1] 8.0000-9.0000 sec F8-PDF: bin(w=100us):cnt(8978)=23:7,24:304,25:720,26:734,27:742,28:767,29:733,30:740,31:761,32:775,33:739,34:737,35:753,36:450,37:16 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.630 ms/1628798366.572123)
[  1] 9.0000-10.0000 sec  1.10 GBytes  9.41 Gbits/sec  2.961/2.281/3.636/0.348 ms (8979/131063) 3.32 MByte 397381  22491=3313:3430:3376:3502:5577:2194:979:120
[  1] 9.0000-10.0000 sec F8-PDF: bin(w=100us):cnt(8979)=23:8,24:322,25:713,26:741,27:726,28:751,29:760,30:736,31:745,32:792,33:750,34:724,35:737,36:461,37:13 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (3.636 ms/1628798367.483970)
[  1] 0.0000-10.0040 sec  11.0 GBytes  9.41 Gbits/sec  2.963/0.770/6.308/0.351 ms (89766/131072) 3.32 MByte 396953  236799=36624:36966:37037:37174:73802:12468:2355:373
[  1] 0.0000-10.0040 sec F8(f)-PDF: bin(w=100us):cnt(89766)=8:1,9:1,10:2,11:3,12:3,13:5,14:4,15:5,16:4,17:1,18:1,19:2,21:2,22:3,23:79,24:2869,25:7237,26:7404,27:7283,28:7671,29:7462,30:7489,31:7675,32:7617,33:7471,34:7368,35:7392,36:4509,37:161,39:6,40:7,41:4,44:1,46:2,47:4,49:3,51:1,52:2,53:3,56:1,58:4,59:2,62:1,64:1 (5.00/95.00/99.7%=25/36/36,Outliers=0,obl/obu=0/0) (6.308 ms/1628798357.646599)

[rjmcmahon@ryzen3950 iperf2-code]$ src/iperf -c 192.168.1.64 --trip-times -i .010  -e -X -t 1
------------------------------------------------------------
Client connecting to 192.168.1.64, TCP port 5001 with pid 325429 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] Clock sync check (ms): RTT/Half=(0.307/0.153) OWD-send/ack/asym=(0.244/0.063/0.181)
[  1] local 192.168.1.133%enp4s0 port 54572 connected with 192.168.1.64 port 5001 (MSS=1448) (trip-times) (sock=3) (peer 2.1.4-master)
[ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.0000-0.0100 sec  9.13 MBytes  7.65 Gbits/sec  74/0          0      425K/324 us  2953190
[  1] 0.0100-0.0200 sec  12.8 MBytes  10.7 Gbits/sec  102/0          0     1042K/845 us  1582171
[  1] 0.0200-0.0300 sec  10.5 MBytes  8.81 Gbits/sec  84/0          0     1494K/1063 us  1035752
[  1] 0.0300-0.0400 sec  12.0 MBytes  10.1 Gbits/sec  96/0          0     1494K/1096 us  1148076
[  1] 0.0400-0.0500 sec  11.6 MBytes  9.75 Gbits/sec  93/0          0     1494K/1071 us  1138160
[  1] 0.0500-0.0600 sec  10.2 MBytes  8.60 Gbits/sec  82/0          0     1494K/1084 us  991504
[  1] 0.0600-0.0700 sec  12.0 MBytes  10.1 Gbits/sec  96/0          0     1494K/1068 us  1178175
[  1] 0.0700-0.0800 sec  10.5 MBytes  8.81 Gbits/sec  84/0          0     1494K/1074 us  1025144
[  1] 0.0800-0.0900 sec  11.6 MBytes  9.75 Gbits/sec  93/0          0     1494K/1090 us  1118321
[  1] 0.0900-0.1000 sec  10.6 MBytes  8.91 Gbits/sec  85/0          0     1494K/1071 us  1040254
[  1] 0.1000-0.1100 sec  12.1 MBytes  10.2 Gbits/sec  97/0          0     1494K/1046 us  1215486
[  1] 0.1100-0.1200 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1071 us  1076969
[  1] 0.1200-0.1300 sec  11.2 MBytes  9.44 Gbits/sec  90/0          0     1494K/1071 us  1101445
[  1] 0.1300-0.1400 sec  10.9 MBytes  9.12 Gbits/sec  87/0          0     1494K/1050 us  1086025
[  1] 0.1400-0.1500 sec  10.9 MBytes  9.12 Gbits/sec  87/0          0     1494K/1044 us  1092267
[  1] 0.1500-0.1600 sec  11.4 MBytes  9.54 Gbits/sec  91/0          0     1494K/1077 us  1107479
[  1] 0.1600-0.1700 sec  11.5 MBytes  9.65 Gbits/sec  92/0          0     1494K/1038 us  1161717
[  1] 0.1700-0.1800 sec  11.2 MBytes  9.44 Gbits/sec  90/0          0     1494K/1070 us  1102475
[  1] 0.1800-0.1900 sec  11.2 MBytes  9.44 Gbits/sec  90/0          0     1494K/1057 us  1116034
[  1] 0.1900-0.2000 sec  11.4 MBytes  9.54 Gbits/sec  91/0         17     1494K/1066 us  1118907
[  1] 0.2000-0.2100 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1062 us  1098438
[  1] 0.2100-0.2200 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1044 us  1104821
[  1] 0.2200-0.2300 sec  11.8 MBytes  9.86 Gbits/sec  94/0          0     1494K/1073 us  1148254
[  1] 0.2300-0.2400 sec  11.4 MBytes  9.54 Gbits/sec  91/0          0     1494K/1063 us  1122065
[  1] 0.2400-0.2500 sec  10.5 MBytes  8.81 Gbits/sec  84/0         14     1494K/1064 us  1034779
[  1] 0.2500-0.2600 sec  11.6 MBytes  9.75 Gbits/sec  93/0          0     1494K/1046 us  1165363
[  1] 0.2600-0.2700 sec  11.8 MBytes  9.86 Gbits/sec  94/0          0     1494K/1067 us  1154711
[  1] 0.2700-0.2800 sec  10.2 MBytes  8.60 Gbits/sec  82/0          0     1494K/1082 us  993337
[  1] 0.2800-0.2900 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1057 us  1091233
[  1] 0.2900-0.3000 sec  11.4 MBytes  9.54 Gbits/sec  91/0          0     1494K/1069 us  1115767
[  1] 0.3000-0.3100 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1071 us  1089207
[  1] 0.3100-0.3200 sec  11.2 MBytes  9.44 Gbits/sec  90/0          0     1494K/1051 us  1122405
[  1] 0.3200-0.3300 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1067 us  1093290
[  1] 0.3300-0.3400 sec  11.6 MBytes  9.75 Gbits/sec  93/0          0     1494K/1064 us  1145648
[  1] 0.3400-0.3500 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1070 us  1090225
[  1] 0.3500-0.3600 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1069 us  1078984
[  1] 0.3600-0.3700 sec  11.5 MBytes  9.65 Gbits/sec  92/0          0     1494K/1059 us  1138680
[  1] 0.3700-0.3800 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1065 us  1083036
[  1] 0.3800-0.3900 sec  11.6 MBytes  9.75 Gbits/sec  93/0          0     1494K/1074 us  1134981
[  1] 0.3900-0.4000 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1074 us  1086165
[  1] 0.4000-0.4100 sec  11.5 MBytes  9.65 Gbits/sec  92/0         18     1494K/1067 us  1130143
[  1] 0.4100-0.4200 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1061 us  1087119
[  1] 0.4200-0.4300 sec  10.8 MBytes  9.02 Gbits/sec  86/0          0     1494K/1037 us  1087000
[  1] 0.4300-0.4400 sec  11.4 MBytes  9.54 Gbits/sec  91/0          0     1494K/1046 us  1140301
[  1] 0.4400-0.4500 sec  12.0 MBytes  10.1 Gbits/sec  96/0          0     1494K/1080 us  1165084
[  1] 0.4500-0.4600 sec  10.9 MBytes  9.12 Gbits/sec  87/0         14     1494K/1097 us  1039495
[  1] 0.4600-0.4700 sec  11.0 MBytes  9.23 Gbits/sec  88/0          0     1494K/1052 us  1096420
[  1] 0.4700-0.4800 sec  11.1 MBytes  9.33 Gbits/sec  89/0          0     1494K/1041 us  1120596
[  1] 0.4800-0.4900 sec  10.8 MBytes  9.02 Gbits/sec  86/0          0     1494K/1051 us  1072521

Bob



Bob McMahon

Aug 12, 2021, 4:03:44 PM
to Neal Cardwell, Tergel Munkhbat, BBR Development
Forgot to mention: use the -Z option to set the TCP congestion control algorithm.

[rjmcmahon@ryzen3950 iperf2-code]$ src/iperf -c 192.168.1.64 --trip-times -i 1  -e -X -t 10 -Z cubic --hide-ips
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 326070 (1 flows)
Write buffer size: 131072 Byte
TCP congestion control set to cubic
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] Clock sync check (ms): RTT/Half=(0.343/0.171) OWD-send/ack/asym=(0.275/0.068/0.207)
[  1] local *.*.*.133%enp4s0 port 54582 connected with *.*.*.64 port 5001 (MSS=1448) (trip-times) (sock=3) (peer 2.1.4-master)
[ ID] Interval            Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.0000-1.0000 sec  1.08 GBytes  9.27 Gbits/sec  8838/0         74     1279K/1069 us  1083520
[  1] 1.0000-2.0000 sec  1.10 GBytes  9.41 Gbits/sec  8973/0          9     1401K/1068 us  1101226
[  1] 2.0000-3.0000 sec  1.10 GBytes  9.42 Gbits/sec  8983/0         19     1401K/1064 us  1106598
[  1] 3.0000-4.0000 sec  1.10 GBytes  9.42 Gbits/sec  8979/0         15     1402K/1064 us  1106105
[  1] 4.0000-5.0000 sec  1.10 GBytes  9.41 Gbits/sec  8976/0          0     1405K/1075 us  1094421
[  1] 5.0000-6.0000 sec  1.10 GBytes  9.41 Gbits/sec  8977/0          0     1408K/1064 us  1105858
[  1] 6.0000-7.0000 sec  1.10 GBytes  9.42 Gbits/sec  8980/0          0     1418K/1071 us  1098998
[  1] 7.0000-8.0000 sec  1.10 GBytes  9.41 Gbits/sec  8976/0          2     1418K/1065 us  1104697
[  1] 8.0000-9.0000 sec  1.10 GBytes  9.42 Gbits/sec  8981/0          0     1418K/1059 us  1111575
[  1] 9.0000-10.0000 sec  1.10 GBytes  9.41 Gbits/sec  8977/0          0     1418K/1052 us  1118473
[  1] 0.0000-10.0106 sec  10.9 GBytes  9.39 Gbits/sec  89642/0        119     1418K/1052 us  1115675