NDT results suboptimal for 10Gbps paths


Tate Baumrucker

Jan 26, 2022, 11:12:03 AM
to discuss
Good morning,
We're seeing lower-than-expected throughput results from a self-hosted NDT server for tests across a locally switched 10Gbps path.  iperf3 tests over the same path, between the same client and server, yield more than 5Gbps in both directions, but NDT results are ~1Gbps down and ~300Mbps up.  Tcpdump reveals significant TCP zero windows during the NDT session.
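
(For reference, a quick way to pull those zero-window segments out of a capture, for IPv4 traffic at least, is a filter along these lines, where "ndt.pcap" stands in for whatever the capture file is actually named:

tcpdump -nn -r ndt.pcap 'tcp[14:2] = 0 and tcp[13] & 0x04 = 0'

The first clause matches segments advertising a zero receive window; the second excludes RSTs, which also carry a window of zero.)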

The server host is running BBR with appropriately tuned stack variables (proven by iperf3 tests).  
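
(The sort of tuning I mean is along these lines; the exact values below are illustrative rather than the precise ones in use:

sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
sysctl -w net.ipv4.tcp_rmem="4096 131072 268435456"
sysctl -w net.ipv4.tcp_wmem="4096 131072 268435456"

i.e. BBR as the congestion control plus socket buffer ceilings sized for a 10Gbps path.)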

Are there any known limitations for 10Gbps link speeds?  Any hints for tuning or places to investigate further?
Thanks in advance,
Tate

Chris Ritzo

Jan 27, 2022, 5:07:58 PM
to discuss, ta...@baumrucker.org
Thanks for writing to ask about self-provisioned use of ndt-server, tuning, and benchmarking on high-capacity links. On our own servers, M-Lab provisions at most 10 Gbps links, and each of these is of course shared across many clients. In fact, a rate limiter kicks in at 2.5Gb/s: if a test exceeds that rate, the server declines subsequent tests until it finishes, in order to preserve measurement quality for all users of a given server at a given moment. If your goal is to measure the full capacity of the link, you might be better off using iperf, since NDT is a single-TCP-stream test anyway. However, we would be interested in your findings, as they could reveal previously undetected bugs. It would be very useful if you could provide the following:
  • pcaps for affected measurements
  • hardware specs for your server
  • the ndt-server version/git commit you are running
  • the client code and version/git commit you are using to test
It would also be interesting to know whether you're using our ndt-server image from Docker Hub, running code directly from ndt-server's master branch on GitHub, or a tagged release from GitHub. Thanks again for reporting this and for testing/benchmarking ndt-server. We hope to hear back from you with the info above.

Robert Enger

Jan 27, 2022, 9:31:16 PM
to Chris Ritzo, discuss, ta...@baumrucker.org
AT&T has recently announced availability of 5Gbps FTTH.  And Qualcomm touts up to 10Gbps capability in the X65 modem for 5G.

Artificially capping results at 2.5Gbps would seem to introduce some bias into aggregate datasets, e.g. by lowering average speed results, which some may be referencing for policy-making decisions. (It may also be a lost opportunity to provide objective evidence of whether the 5 and 10Gbps promises are fictitious in real-world deployments.)

Perhaps it would be appropriate to redesign the server software to queue requests and serve only one client at a time? (And to bolster M-Lab servers with carefully implemented higher-capacity interfaces and ISP connectivity?)

If consumers are finally being provided with fast connections (or empty promises of such), objective tests should be there to show which is the case.





Roberto D'Auria

Jan 28, 2022, 7:15:32 AM
to Robert Enger, Chris Ritzo, discuss, ta...@baumrucker.org
Robert: I agree with your concern and, in fact, we considered this in the platform design. To further clarify Chris's point: M-Lab does not cap results at 2.5Gbps. When a measurement exceeding 2.5Gbps is detected, the server won't accept new measurements until that one completes, to preserve measurement quality; this essentially implements the "one client at a time" policy for a limited time. Clients should simply move on to the next server provided by the Locate load balancer when one becomes temporarily unavailable for this reason.

The above is only true on the M-Lab infrastructure and does not apply to self-hosted ndt-server instances, which are only limited by the link speed and the hardware capabilities of the machines used to run the test.

Almost all M-Lab sites have a 10Gbps uplink, with some exceptions (the 10 sites that can be found by searching for "1g" in https://siteinfo.mlab-oti.measurementlab.net/v2/sites/sites.json).

Hope this helps!

-Roberto


Chris Ritzo

Jan 28, 2022, 8:03:44 AM
to discuss, rob...@measurementlab.net, discuss, ta...@baumrucker.org, robertm...@gmail.com
Thanks for adding that detail, Roberto, and for clarifying that the initial question posed in this thread concerned testing of a self-provisioned ndt-server. Regarding M-Lab's production servers, though, one other thing should be mentioned: if providers are advertising 5 Gbps and higher link speeds, I am certain those speeds would only hold within their own networks. Since M-Lab servers are always hosted in peering locations and networks, and NDT is a single-stream test, we shouldn't expect measurements via those connections to reach the total possible link capacity.

Best, Chris


Tate Baumrucker

Jan 28, 2022, 8:03:54 AM
to Roberto D'Auria, Robert Enger, Chris Ritzo, discuss
Can this functionality be added to the self-hosted versions?  
Thanks,
Tate

Roberto D'Auria

Jan 28, 2022, 10:18:55 AM
to Chris Ritzo, discuss, ta...@baumrucker.org, robertm...@gmail.com
> Can this functionality be added to the self-hosted versions?

Assuming you're on a Linux environment, yes. This is implemented in the github.com/m-lab/access Go package and enabled via ndt-server's command-line flags:

-txcontroller.max-rate=<rate in bits/s>
-txcontroller.device=<interface name>

This will monitor the data usage on the specified interface and prevent new measurements from starting while the measured rate exceeds max-rate.
You can verify that it works by starting a measurement faster than max-rate and then a separate one in parallel; the second should fail to connect until the first measurement has completed.
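
For example, a self-hosted instance whose test traffic goes out eth0 could be started along these lines (the interface name and rate here are placeholders, and whatever TLS/data-directory flags you already pass stay the same):

ndt-server -txcontroller.device=eth0 -txcontroller.max-rate=2500000000

With that in place, a second client connecting while the first is still transferring above 2.5Gb/s should be refused until the first measurement finishes.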

Please note, however, that this approach has a known issue: if the client runs a download measurement and then an upload measurement immediately afterward, and the download speed was above max-rate, the upload will fail to connect. The download and upload measurements are independent events as far as the ndt7 protocol is concerned, even though all the clients I know of run both by default. You can find more details at https://github.com/m-lab/ndt-server/issues/334

-Roberto



Matt Mathis

Jan 28, 2022, 10:20:53 AM
to Robert Enger, Chris Ritzo, discuss, ta...@baumrucker.org
Robert, the changes you suggest would either raise the cost of our fleet by 10x or reduce our service capacity by 1000x.  (You can't tell, when a test starts, whether it will need a dedicated server.)

Today we see a few tests per day at rates in excess of 1Gb/s, and a minority of those tests cause congestion within our infrastructure and affect other measurements.  In aggregate, across the entire fleet (~500 servers), we see a few congested seconds per day.

But there is a deeper problem.  I am not aware of any consumer-grade applications* that actually need more than about 100 Mb/s.  The race for more speed is a stunt ("mine is bigger!" and "we have to keep up") by the marketing departments at all ISPs, and it is universally hated^H^H^H^H dreaded by the engineering teams at those same ISPs.

What is actually happening is that there are other problems in the network (typically queuing issues) that cause application stalls unrelated to raw throughput.  The current situation is somewhat analogous to fixing a flat tire by upgrading the engine to 5,000 HP.  The mechanic installing the new engine has to beef up the rest of the drivetrain and, along the way, might accidentally fix the root problem.  But in any case the (new) tires don't have enough traction to do anything useful with 5k HP, so the car doesn't actually go any faster than the original design (which might indeed be faster than driving on flats).

We (the network testing community) are part of the problem, because we routinely publish performance numbers that are so high that they are not relevant.   This is why we are pivoting to do better diagnosis, so we can help people fix the right problem.  We need to stop wasting vast resources fixing the wrong problem.

I have often toyed with the idea of somehow redacting all results above some threshold (200 Mb/s?) in an attempt to reduce the insanity, but we too are part of an irrelevant race to an irrelevant goal.

*The only exception would be people trying to run commercial grade services on consumer grade connectivity.  Our tools aren't designed for them.   At some point we may have to put in stop lists to reject tests from non-consumer grade users.



--
Thanks,
--MM--

Robert Enger

Jan 31, 2022, 9:49:02 AM
to Matt Mathis, Chris Ritzo, discuss, ta...@baumrucker.org
Matt:

A consumer-grade user may be increasingly hard to define, as more and more of us work from home.
I am retired and do volunteer work.  I have sent large "pro video" source files up to file sharing services to be transferred to editors.  

Google Drive allows sustained upload at over 500Mbps, which really cuts down on the wait time.  10GiByte file uploads complete in a few minutes.
(It would be nice to be faster, but the bottleneck appears to be at Google Drive or at Google's peering interconnect with the ISP.  M-Lab and other speed test measurements inform that finger-pointing.)

In the LA area, there are many folks who produce and edit from home (for a living), and need to upload and download large files.
While folks in LA (and other pro content creation hubs) are shooting with professional camera gear, consumer generation of large video files may become more common as higher image fidelity creation is supported on mobile devices.
(Apple sponsors its "Shot on iPhone" program, and other device manufacturers, including Google, are including improved camera sensors in their phones.  I've seen "Shot on iPhone" short films run at film festivals; submission to the festival was by online upload.)

Velma does clinical monitoring of advanced cancer treatments.  While she travels to research centers, she does have a home office and accesses some resources remotely from time to time.

I understand from her and from the media that remote reading of medical imaging occurs frequently.  (The radiologist who evaluates a given test may be in another state or country.)  As imaging resolutions improve, the transferred files will get larger.  (Detail counts when looking for lesions.)  We repeatedly hear that there is a groundswell of employee support for "work from home".  High-performance FTTH implementations make that increasingly feasible.

M-Lab testing can ensure that the promises of FTTH providers are actually delivered.
Disparaging high performance seems like the old "no one will need more than 640K" mantra.
I prefer "if you build it, they will come".

I certainly enjoy being able to download OS patches and new software "quasi-instantaneously".
Indeed, downloads are so fast now that when a CDN errantly serves you from a sub-optimal source (say, one halfway around the globe in Europe), the degradation is readily apparent.

When Velma's company's IT staff first deployed a remote update to her (then new) machine, they called her and said they suspected it was not working correctly because it completed so quickly.  (At the time she had one of the newest machines in her company.  It is M.2-based, and she is GigE-attached to the home LAN, with gigabit FTTH ISP service.)

Windows 11 will force a lot of folks to replace their legacy PCs.  A whole lot of folks will be getting their hands on newer machines, many built on M.2 NVMe.
This will remove yet another layer of performance impediment for the consumer-grade user. Ditto for migrations to Wi-Fi 6E mesh systems and mm-wave 5G for mobile devices (at least in good signal areas).

I think M-Lab and the other test services can continue to add value, even as consumer last-mile bottlenecks are removed (albeit at a seemingly glacial pace with some ISPs).
I hope M-Lab will continue to support testing of high-speed connections, including the multi-gig FTTH being deployed by Google, AT&T, and some municipal ISPs.

Bob Enger

Matt Mathis

Feb 8, 2022, 10:50:03 AM
to Robert Enger, Chris Ritzo, discuss, ta...@baumrucker.org
One of my regrets about M-Lab is that we have been overly focused on whether rich people are getting what they want, rather than whether poor people are getting what they need.

You are correct: we could raise our performance ceiling.  I first started working on Internet performance at the Pittsburgh Supercomputing Center in 1990.  That year, I was trying to get PSC -> SDSC to run faster than 5 Mb/s.  For many years my day job was "TCP tuning", and every few years we would fix one global problem, only to take on a new goal with a new set of problems to debug and deploy fixes for.  Yes, we could raise M-Lab's target performance to 10 Gb/s.  Improving our tools is probably doable, but upgrading our fleet would be problematic.

However, today I worry far more about the other end of the spectrum.  I fear there are several billion people (including millions of Americans) who don't have sufficient Internet access to do basic things that the rest of us now take for granted.

We do not know whether M-Lab tools get reliable measurements below 1 Mb/s, and that means we can't effectively measure Internet coverage for a huge number of people.  We do know that some of the optimizations likely to help the high end have the potential to hurt measurements at the low end (specifically, larger buffers).  We do know the data collection and processing pipeline is capable of reporting all the way down to about 1kbit/sec, but each diagnostic client has its own limits.

You might find this plot useful: it was on my home page at PSC for many years:

[Attachment: RateChart.png]


--
Thanks,
--MM--