Massive drop in data channel throughput when increasing RTT latency

Rasmus Eskola

Jul 1, 2014, 3:25:31 AM
to discuss...@googlegroups.com
Hi,

Does anyone know why SCTP data channel performance incurs a significant penalty with even small RTT latencies? I'm using the 'netem' facilities of the Linux kernel to simulate up/downlink network latency, and a simple JavaScript program that calls dc.send() in a tight loop to keep the data channel send buffer filled:
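The sender is essentially this (simplified sketch; channel setup is elided and the buffer limit is illustrative):

var dc = pc.createDataChannel("benchmark");
var chunk = new Uint8Array(32 * 1024);        // 32 KB of random data, filled once
window.crypto.getRandomValues(chunk);

dc.onopen = function sendLoop() {
    // Send if the buffer has room, so we don't overfill it.
    if (dc.bufferedAmount < 1024 * 1024) {
        dc.send(chunk);
    }
    setTimeout(sendLoop, 0);
};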


Here are the results with varying RTT, approximately:

RTT (ms)   Chromium (MB/s)   Firefox (MB/s)
0          7                 10
10         3                 8
20         1.5               4

Attached is a Wireshark I/O graph, generated from a packet capture of a test run with 50ms RTT. The red graph shows packets sent by the sender, the green graph packets sent by the receiver (presumably some sort of ACK packets). It seems that after a certain number of sent packets, the sender stops sending entirely and waits for ACK packets before sending more data. The sender also appears to ramp up the number of sent packets, only to dial it down again (this wave-like pattern continues for the entire duration of the data transfer). Could this be caused by a flaky congestion control algorithm? Interestingly, Firefox seems to do a little better at higher RTT than Chromium. Any hints on where to look for the cause, be it in the source code or elsewhere?

The tests were performed with the latest nightly versions of Chromium and Firefox as of writing.
25ms_up_25ms_down.png

Jiayang Liu

Jul 2, 2014, 11:45:39 AM
to discuss...@googlegroups.com
What's the graph for Firefox like?

Rasmus Eskola

Jul 3, 2014, 5:38:10 AM
to discuss...@googlegroups.com
Here's the graph for Firefox with 50ms RTT. It seems Firefox handles latency slightly better by sending more data per burst, but it's still far from ideal. Both tests were performed in a LAN with 100 Mbit/s link speeds.
25updown_ff.png

Rasmus Eskola

Jul 3, 2014, 5:43:41 AM
to discuss...@googlegroups.com
Oops, that graph uses bytes/tick on the y-axis; here's one with packets/tick for comparison with the Chromium I/O graph.
25updown_ff_packets.png

Jiayang Liu

Jul 7, 2014, 12:07:23 PM
to discuss...@googlegroups.com, tue...@fh-muenster.de
Adding Michael for advice.

It looks like Chrome keeps resetting the send window but Firefox does fine.

Michael,

what do you think could cause this? 

Shachar

Jul 8, 2014, 2:21:11 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
Rasmus, can you repeat the experiment with unreliable datachannels and post results?

Rasmus Eskola

Jul 8, 2014, 3:36:21 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
Certainly. I repeated the experiment with both reliable and unreliable datachannels on both Chromium and Firefox, again with 50ms RTT. See the attached images for results. The y-axis displays bytes/tick. There don't appear to be any differences between reliable and unreliable datachannels at the moment.

Just to make sure I did this right: is this currently the correct way to open an unreliable datachannel?
var dcOpts = {
    ordered: false,
    maxRetransmits: 0
};

var channel = pc.createDataChannel("benchmark", dcOpts);
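(For reference, my understanding is that maxRetransmits: 0 means the stack gives up after zero retransmissions, i.e. fully unreliable delivery, and ordered: false lets messages be delivered as they arrive rather than buffered for ordering.)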

25updown_reliable_chromium.png
25updown_reliable_ff.png
25updown_unreliable_chromium.png
25updown_unreliable_ff.png

Michael Tüxen

Jul 8, 2014, 4:37:30 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de


On Monday, July 7, 2014 6:07:23 PM UTC+2, Jiayang Liu wrote:
> Adding Michael for advice.
>
> It looks like Chrome keeps resetting the send window but Firefox does fine.
>
> Michael,
>
> what do you think could cause this?
To nail this down, you need to see the SCTP packets and get some information about when packets are handed to the SCTP stack. I'm not sure how this works in Chrome. The SCTP stack has an interface to enable logging and to dump all sent/received packets in an ASCII format which can be converted to a file readable by Wireshark. On Firefox you can enable the logging by setting the environment variable NSPR_LOG_MODULES to SCTP:5,DataChannel:5 and NSPR_LOG_FILE to the filename of the logfile (for example /Users/tuexen/logfile).
After running the test you can do
grep SCTP_PACKET logfile > sctp.log
to extract the packet information, and using
text2pcap -n -l 248 -D -t '%H:%M:%S.' sctp.log sctp.pcapng
you can convert it into a pcapng file which is readable by Wireshark (at least recent versions).
I'm not sure if something like this is possible with the Chrome browser. If it is, please provide the information. If not, try running Chrome against Firefox (both combinations) and provide the information collected by Firefox. That doesn't give us information about the upper layer interface, but it is better than nothing.
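(For the record: in that text2pcap invocation, -n writes a pcapng file, -l 248 selects the SCTP link-layer type, -D marks the dump as containing both inbound and outbound packets, and -t gives the timestamp format.)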

Best regards
Michael
 

Michael Tüxen

Jul 8, 2014, 4:40:00 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de


On Tuesday, July 8, 2014 9:36:21 AM UTC+2, Rasmus Eskola wrote:
> Certainly. I repeated the experiment with both reliable and unreliable datachannels on both Chromium and Firefox, again with 50ms RTT. See the attached images for results. The y-axis displays bytes/tick. There don't appear to be any differences between reliable and unreliable datachannels at the moment.
I wouldn't expect substantial differences between ordered/unordered or reliable/unreliable. Since Chrome and Firefox use
the same SCTP stack (possibly with different revisions), I suspect that the difference is related to different parameters or
a different use of the API...

Best regards
Michael

Michael Tüxen

Jul 8, 2014, 5:33:51 AM
to discuss...@googlegroups.com
The above indicates that if you double the RTT, you halve the throughput. This is something like throughput = W / rtt.
I don't think this is related to the CC algorithm. It could be the SCTP flow control, in which case it would be the receive buffer. However, 8 * 1024 * 1024 * 0.01 would be a window of 80 KB. I think the default is about 128 K and it is not changed in Firefox. Not sure about Chrome.
Am I right that you send small user messages of 1-256 bytes? We add some per-message overhead in the receiver buffer, so it could be that limit. To test this, use a message size of 1024 bytes. If the throughput increases and we get nearer the 128K window limit, then this is the limit and we are on the right track...
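As a rough check: with the 128 K default window, W / rtt gives 131072 / 0.05 ≈ 2.6 MB/s at 50 ms RTT, the same order of magnitude as the throughput in your graphs.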

Best regards
Michael

Iñaki Baz Castillo

Jul 8, 2014, 6:03:07 AM
to discuss...@googlegroups.com
Hi,

Please check this mail I sent a few days ago, in which I describe the incorrect DataChannel.send() behavior that blocks the JS event loop for some milliseconds:





--
Iñaki Baz Castillo
<i...@aliax.net>

Rasmus Eskola

Jul 8, 2014, 8:21:39 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
Thank you very much Michael for the information on how to log SCTP packets in Firefox.

Until now I have been testing with a Uint8Array that is filled with 32 KB of random data once and then sent over the datachannel in a setTimeout() loop of 0 ms, with a check of the datachannel send buffer size before sending so we don't overfill it.

The reason I've been using 32 KB message sizes is that smaller message sizes resulted in considerably lower throughput. Now I see why: JavaScript execution can't keep up with the available bandwidth, so the dc buffer is empty every time we call dc.send(). To fix that, I put a while loop inside the setTimeout() loop: the while loop calls dc.send() until the buffer is full.
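For reference, the fixed sender loop looks roughly like this (dc is the open datachannel; the buffer limit is illustrative):

var payload = new Uint8Array(2 * 1024);   // see message size notes below
window.crypto.getRandomValues(payload);
var BUFFER_LIMIT = 1024 * 1024;           // high-water mark for dc.bufferedAmount

function pump() {
    // Call dc.send() until the buffer is full, then yield to the event loop.
    while (dc.bufferedAmount < BUFFER_LIMIT) {
        dc.send(payload);
    }
    setTimeout(pump, 0);
}
pump();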

With this fix I measure slightly better throughput with 2 KB message sizes than with 32 KB. With 50ms RTT, Firefox gets up to 1.91 MB/s - measured on the receiving side of the benchmarking application, so that doesn't account for any overhead. The 'iotop' tool says the total bandwidth usage towards the receiving machine is about 2.14 MB/s. So when accounting for all the overhead we are now getting pretty close to the 128K window size: 2.14 * 1024 * 1024 * 0.05 = 112K. 2 KB seems to be at least very close to the optimal message size; I also tested some other nearby values, but 2K gave the best throughput, and it also held up in tests with no simulated latency.

Results for different RTT values with 2KB message size, total usage measured with iotop:
0ms   10.1 MB/s  (11.2 MB/s total)
10ms  8.84 MB/s  (9.88 MB/s total)
20ms  4.67 MB/s  (5.22 MB/s total)
40ms  2.37 MB/s  (2.65 MB/s total)
80ms  1.20 MB/s  (1.35 MB/s total)
160ms 0.60 MB/s  (0.67 MB/s total)
320ms 0.30 MB/s  (0.34 MB/s total)

You're right: it looks very much like throughput is halved when RTT is doubled, which makes sense if the window size is indeed hardcoded.

The results give us these window sizes (window ≈ total throughput × RTT, using the values for total bandwidth usage):
10ms  104K
20ms  109K
40ms  111K
80ms  113K
160ms 112K
320ms 114K

Fairly close to 128K.

I'm not allowed to attach pcapng files, so below is a link to a ~3 second SCTP capture of a Firefox to Firefox throughput benchmark with 2KB message size and 40ms RTT.

I can capture Firefox to Chromium (+ vice versa) or Chromium to Chromium tomorrow if needed.

Thanks for your help. I'm researching WebRTC datachannels for my bachelor's thesis.

Best regards,
Rasmus Eskola

Michael Tüxen

Jul 8, 2014, 2:23:19 PM
to discuss...@googlegroups.com, tue...@fh-muenster.de


On Tuesday, July 8, 2014 2:21:39 PM UTC+2, Rasmus Eskola wrote:
> [full message quoted; see above]
> I'm not allowed to attach pcapng files, so below is a link to a ~3 second SCTP capture of a Firefox to Firefox throughput benchmark with 2KB message size and 40ms RTT.
Great. If you plot the TSNs (Telephony > SCTP > Analyze this Association in Wireshark) you can see the SCTP transfer over time and the impact of the RTT.

> I can capture Firefox to Chromium (+ vice versa) or Chromium to Chromium tomorrow if needed.
If you look at the INIT/INIT-ACK from Chromium, you know what the receive window is.

> Thanks for your help. I'm researching WebRTC datachannels for my bachelor's thesis.
Great subject. Drop me an email (tue...@fh-muenster.de) in case you have further SCTP related questions.

Best regards
Michael

Jiayang Liu

Jul 8, 2014, 7:10:35 PM
to discuss...@googlegroups.com, tue...@fh-muenster.de
This is where Chrome initializes the SCTP stack:


I don't see any code to change the configuration of flow control or CC.

Michael Tüxen

Jul 9, 2014, 2:20:24 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de


On Wednesday, July 9, 2014 1:10:35 AM UTC+2, Jiayang Liu wrote:
> This is where Chrome initializes the SCTP stack:
>
> I don't see any code to change the configuration of flow control or CC.
That looks good. Rasmus can verify that the receive window is about 128 KB by looking at the 'advertised receiver window credit' in the INIT/INIT-ACK chunk from the Chrome browser in the Wireshark trace. I'm not sure what the difference is between Firefox and Chrome...

Best regards
Michael

Rasmus Eskola

Jul 9, 2014, 6:59:59 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
I've tested with both Firefox and Chromium on the receiving side, and I can confirm that the advertised receive window is 131072 bytes (128 KB) in both browsers.

Best regards,
Rasmus

Shachar

Jul 10, 2014, 3:58:54 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
The window referred to here is the receive window, since the link's throughput >> actual throughput and there's no additional traffic, right?
So the question this begs is: why is the receiver window 128K and not more? If there were link throughput problems, congestion control would take care of that. So why not better utilize a link that is free, just 'far away'?

Rasmus Eskola

Jul 10, 2014, 8:46:54 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
Yes, the link's throughput is 100 Mbit/s, and I can verify this with iperf. There is no additional traffic. Michael pointed out that Firefox does not (yet?) change the receiver window size, so the SCTP stack default of 128K is used. Chromium likely does not support changing the window size either, as is evident from an SCTP capture between Chromium and Firefox: the advertised receiver window size stays at 128K throughout the data transfer.

Best regards,
Rasmus

Michael Tüxen

Jul 10, 2014, 9:58:51 AM
to discuss...@googlegroups.com, tue...@fh-muenster.de
On Thursday, July 10, 2014 9:58:54 AM UTC+2, Shachar wrote:
> The window referred to here is the receive window, since the link's throughput >> actual throughput and there's no additional traffic, right?
> So the question this begs is: why is the receiver window 128K and not more? If there were link throughput problems, congestion control would take care of that. So why not better utilize a link that is free, just 'far away'?
It is just a parameter you can increase; it limits your throughput. However, the larger the value you configure, the more SCTP is allowed to send out (if not prohibited by the congestion control), the more you might fill up buffers in routers, and the more the RTT increases. This RTT increase will also affect the media streams. Currently the media and non-media streams are handled independently...

Best regards
Michael

Rasmus Eskola

Jul 10, 2014, 10:00:05 AM
to discuss...@googlegroups.com
It seems we have to adjust both the sender and receiver SCTP window sizes.
I ran the tests with the following addition to the Firefox datachannel init code:

diff -r 7f5a8526b55a netwerk/sctp/datachannel/DataChannel.cpp
--- a/netwerk/sctp/datachannel/DataChannel.cpp  Tue May 13 12:41:43 2014 +0200
+++ b/netwerk/sctp/datachannel/DataChannel.cpp  Thu Jul 10 15:51:31 2014 +0300
@@ -372,6 +372,19 @@
     return false;
   }

+  int rcvbuf_size = 131072; //default
+  int sndbuf_size = 131072; //default
+  if (usrsctp_setsockopt(mMasterSocket, SOL_SOCKET, SO_RCVBUF,
+                         (const void *)&rcvbuf_size, sizeof(rcvbuf_size)) < 0) {
+    LOG(("Couldn't change receive buffer size on SCTP socket"));
+    goto error_cleanup;
+  }
+  if (usrsctp_setsockopt(mMasterSocket, SOL_SOCKET, SO_SNDBUF,
+                         (const void *)&sndbuf_size, sizeof(sndbuf_size)) < 0) {
+    LOG(("Couldn't change send buffer size on SCTP socket"));
+    goto error_cleanup;
+  }
+
   // Make non-blocking for bind/connect.  SCTP over UDP defaults to non-blocking
   // in associations for normal IO
   if (usrsctp_set_non_blocking(mMasterSocket, 1) < 0) {
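(The values above are the 128K defaults; for the 1MB tests described below, I set both rcvbuf_size and sndbuf_size to 1048576.)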


By setting the receive window to 1MB on the receiver side alone, I hardly notice any difference in throughput. The same goes if I only set the send window to 1MB on the sender side. But if I do both at once, I get much better throughput.
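(Which makes sense: the amount of data in flight is capped by the smaller of the sender's send buffer and the receiver's advertised window, so raising only one side leaves the other as the bottleneck.)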

With 1MB send/receive window sizes at 40 ms RTT, I'm now measuring an average of up to 5.0 MB/s throughput, as opposed to 2.37 MB/s in previous tests. The Wireshark I/O graph looks a lot better too:

It's still not as good as it could be though (not using full link capacity), and if we zoom out a little bit we see there are some drops in throughput rate every now and then:

Here's one of the drops zoomed in:

I found something interesting while looking into the periodic throughput drops. Just before a drop occurs, the advertised receiver window size gradually falls towards a lower value, usually to about 620000, then quickly rises back to 1M once we stop receiving data at the previous rate. What could be the reason behind this behavior?

Also, is there an easy way of automatically adjusting the sender/receiver window sizes by setting some socket option on the SCTP socket that I'm overlooking? Or is this supposed to be handled by the application (browser in this case)?

Best regards,
Rasmus

Michael Tüxen

Jul 10, 2014, 10:12:48 AM
to discuss...@googlegroups.com


On Thursday, July 10, 2014 4:00:05 PM UTC+2, Rasmus Eskola wrote:
> It seems we have to adjust both the sender and receiver SCTP window sizes.
Sure. Sorry for not mentioning it...
When the SCTP stack delivers DATA to the user and therefore frees receive buffer space, it doesn't send a window update for each small increase in the receiver window; it waits until a larger portion is free again (I think it is about 1/4 of the buffer, but I can look it up if you want).
That is an optimization to avoid SACK bursts.

> Also, is there an easy way of automatically adjusting the sender/receiver window sizes by setting some socket option on the SCTP socket that I'm overlooking? Or is this supposed to be handled by the application (browser in this case)?
It is supposed to be handled by the application, i.e. the browser. One could implement some buffer scaling, but it hasn't been done up to now...

Best regards
Michael
 


Rasmus Eskola

Jul 11, 2014, 5:31:54 AM
to discuss...@googlegroups.com
Thank you for the answers Michael.

I'm still wondering whether there is a problem in the congestion control algorithm. Now that I've been testing with larger send/receive windows (1M), the browser ramps up the transmission rate gradually, but with high enough latencies (100ms and 500ms were tested) it always seems to reach a point where the transmission rate drops quickly and then starts climbing again, in a sawtooth-like wave:

The strange thing is that around these drops I'm not seeing any signs of packet loss. Everything looks normal except for a_rwnd in the SACK chunks, which starts falling at a steady rate just before the transmission rate drops; after that, a_rwnd quickly rises back to the normal value. During the rest of the transmission a_rwnd stays stable at this value (1048576). I can provide packet captures if needed.

At least I'm reaching pretty good average transmission rates with 100ms now!

Best regards,
Rasmus

Michael Tüxen

Jul 11, 2014, 5:37:24 AM
to discuss...@googlegroups.com


On Friday, July 11, 2014 11:31:54 AM UTC+2, Rasmus Eskola wrote:
> I'm still wondering whether there is a problem in the congestion control algorithm. [...] it always seems to reach a point where the transmission rate drops quickly and then starts climbing again, in a sawtooth-like wave:
That sawtooth is how SCTP (and TCP) congestion control works. I guess there are timeouts on the sending side, which result in the falling edge of the sawtooth curve: slow start is used up to half of the window you had before the timeout, then congestion avoidance (linear growth) until it happens again.
If you are not limited by flow control, you are limited by congestion control (or by sender or receiver CPU, or by the sending application).
I guess now you are limited by the network bandwidth.
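(Side note: at 100 ms RTT, a 100 Mbit/s link has a bandwidth-delay product of about 12.5 MB/s * 0.1 s = 1.25 MB, so even a 1 MB window is slightly too small for full utilization there.)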

Best regards
Michael
 

Shachar

Jul 14, 2014, 3:44:02 AM
to discuss...@googlegroups.com
Does congestion control apply the same way to unordered, unreliable SCTP? Isn't it supposed to be more UDP-like and keep whatever bit-rate the application demands?

Michael Tüxen

Jul 14, 2014, 3:59:40 AM
to discuss...@googlegroups.com


On Monday, July 14, 2014 9:44:02 AM UTC+2, Shachar wrote:
> Does congestion control apply the same way to unordered, unreliable SCTP? Isn't it supposed to be more UDP-like and keep whatever bit-rate the application demands?
Congestion control applies to all data channels, no matter whether they are reliable or not, ordered or not. An unordered, unreliable data channel with 0 retransmissions gives you a service similar to UDP, but CC is provided, whereas with UDP the application needs to provide the congestion control itself.

Best regards
Michael

Shachar

Jul 14, 2014, 9:03:28 AM
to discuss...@googlegroups.com
So basically the only difference is that with reliable channels there are retransmissions in case of packet loss, and with ordered channels SCTP takes care of popping the messages out of its buffer to the application in order?
Would you expect any throughput differences between reliable/unreliable or ordered/unordered?

Michael Tüxen

Jul 14, 2014, 10:15:00 AM
to discuss...@googlegroups.com


On Monday, July 14, 2014 3:03:28 PM UTC+2, Shachar wrote:
> So basically the only difference is that with reliable channels there are retransmissions in case of packet loss, and with ordered channels SCTP takes care of popping the messages out of its buffer to the application in order?
If you neglect the bandwidth for the retransmissions, I wouldn't expect a difference in throughput.
> Would you expect any throughput differences between reliable/unreliable or ordered/unordered?
No. It is more about latency...

Best regards
Michael

Shachar

Jul 21, 2014, 7:57:48 AM
to discuss...@googlegroups.com
On Monday, July 14, 2014 5:15:00 PM UTC+3, Michael Tüxen wrote:
> > Would you expect any throughput differences between reliable/unreliable or ordered/unordered?
> No. It is more about latency...
But didn't we just see that latency directly affects throughput? 

Michael Tüxen

Jul 21, 2014, 9:04:04 AM
to discuss...@googlegroups.com


On Monday, July 21, 2014 7:57:48 AM UTC-4, Shachar wrote:
> [earlier exchange quoted above]
> But didn't we just see that latency directly affects throughput?
Yes, but when using unreliable or unordered data channels you only get a performance gain in case of message loss: you avoid the buffering time at the receiver needed to put things back in the correct order.
If you don't have loss or reordering in the network, unreliable/unordered doesn't gain much.

Best regards
Michael 

Lally Singh

May 19, 2015, 10:10:47 AM
to discuss...@googlegroups.com
Also, let's note that Chrome uses a socketpair over loopback for its internal IPC, and netem will affect that too. I suspect that's what's going on here. In our own testing, we couldn't reproduce this performance problem on real hardware (e.g., using netem on the ethernet interface), but could do so easily when using loopback.

Guillaume Egles

May 19, 2015, 7:30:11 PM
to discuss...@googlegroups.com
Has there been any progress on this throughput issue?

I am doing simple tests on the same machine, one peer is a browser and one peer is a native C++ App using the WebRTC Native libs.

I need a bit more time to confirm and present my results, but so far the throughput, especially on Chrome, is really terrible...

I know there is an open issue on this (Issue 4468), but I was wondering if anybody has figured out the "magic" fix (even if only experimental, i.e. in the WebRTC native libs)?

Let me know. Cheers. G. 

Lally Singh

May 20, 2015, 9:49:17 AM
to discuss-webrtc
There's a connection to the send/receive buffers -- making them bigger helps throughput substantially.  We're tracking down why the performance is poor at the default buffer sizes.


Cheers,
$ ls

Lally Singh | Google Ideas | Engineering | la...@google.com
 



赵岩峰

Mar 14, 2017, 4:43:32 AM
to discuss-webrtc
Hello, are there any updates?
I have run into the same problem three years later...
With 200ms latency, the throughput can only reach about 500KB/s at most.
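(That is consistent with the default 128 KB window discussed above: 131072 bytes / 0.2 s ≈ 640 KB/s ceiling before overhead.)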

On Wednesday, May 20, 2015 at 9:49:17 PM UTC+8, Lally Singh wrote: