M-Lab SamKnows Impacts?


Livingood, Jason

Nov 6, 2014, 8:57:58 AM11/6/14
to dis...@measurementlab.net
As a follow-up, the Cogent prioritization changes obviously impacted M-Lab NDT results. Since M-Lab servers are being used by SamKnows in the FCC’s ongoing Measuring Broadband America testing, is M-Lab planning to investigate how this change may have impacted the FCC’s testing (positively or negatively)? Also, do we know whether the FCC was aware that this prioritization change was introduced, that it affected the servers they tested against, and that it may have had some influence on the tests?

Related to this, I would suggest researchers may want to compare the testing results over time between SamKnows servers on Cogent’s network and those on Level 3’s network, as well as servers on ISP networks. That in and of itself may be quite interesting.
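
As a rough illustration of the kind of comparison I mean (just a sketch; the column names and values below are hypothetical placeholders, not the actual SamKnows/MBA schema):

import pandas as pd

# Hypothetical per-test records; in practice these would come from the published
# MBA raw data, tagged with the transit network behind each off-net test server.
df = pd.DataFrame([
    {"date": "2014-05-01", "server_transit": "Cogent",  "download_mbps": 18.2},
    {"date": "2014-05-01", "server_transit": "Level 3", "download_mbps": 24.9},
    {"date": "2014-06-01", "server_transit": "Cogent",  "download_mbps": 12.4},
    {"date": "2014-06-01", "server_transit": "Level 3", "download_mbps": 24.6},
])
df["month"] = pd.to_datetime(df["date"]).dt.to_period("M")

# Median download throughput per month, split by the transit provider of the server.
trend = df.groupby(["month", "server_transit"])["download_mbps"].median().unstack()
print(trend)

A divergence between those two columns over time would be exactly the sort of signal worth digging into.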

Regards
Jason

PS – This list is awfully quiet. I thought researchers were a more inquisitive bunch. ;-)

Matt Mathis

Nov 6, 2014, 11:54:43 AM11/6/14
to Livingood, Jason, dis...@measurementlab.net
Good point, Jason, although you got the argument exactly backwards.

MBA is narrowly defined to be last mile performance only.  During this year's measurement sample they accidentally encountered widespread transit/interconnect congestion and the access ISPs petitioned the FCC to redact a week of data, which they did.

As far as I am concerned, this was the only interesting data, because it showed a more realistic view of the network under load, and the actual consequences for users.

No amount of prioritization can make the network faster than an empty network, nor can it repeal the speed of light. As MBA is currently defined, raising priorities outside of the access ISP cannot affect calibration.

It might fix a lot of things if MBA were redefined to require testing from all measurement clients to servers in all of the top N ISPs, such that it implicitly included NxN testing of the interconnect mesh. The set of covered ISPs should include all of the large transit ISPs, even if they don't provide consumer-grade access. And yes, they should all have to comply with similar rules about prioritization and transparency.

If you want to pursue questions about interconnect congestion with the FCC, I suggest that you petition them to unredact that week of MBA data.    What did they hide?

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

Privacy matters!  We know from recent events that people are using our services to speak in defiance of unjust governments.   We treat privacy and security as matters of life and death, because for some users, they are.


James Miller

Nov 6, 2014, 12:38:15 PM11/6/14
to Matt Mathis, Livingood, Jason, dis...@measurementlab.net, James Miller
Matt,

On the question of redaction, I wanted to remind everyone that the raw data for all the tests is available in the fixed program. The policy you're mentioning applies to the data used in our reports and the "validated" data set; it is not a redaction per se.

There's a long discussion of the policy and the issues that have come up, described in ex parte and other filings in General Docket No. 12-264. We've documented how to access those documents in the developer FAQ.


As always, we're appreciative of M-Lab's and other stakeholders' work.

On the NxN proposal, is the idea that all nodes would test against a single server and round-robin through measurement server lists?

thanks..




--
James Miller, Esq.

"Japanese is so Eighties..."
Anonymous FCC Colleague

Jim Partridge

Nov 6, 2014, 2:48:30 PM11/6/14
to dis...@measurementlab.net, Jason_L...@cable.comcast.com
Matt, as James Miller pointed out, all data is available, although there is a time delay in its release. I also wanted to clarify some of your misstatements. The following language from the initial Measuring Broadband America report (at pg. 10) describes the scope of the measurements, which cover more than the last mile. Specifically, the test servers are off-net, meaning they aren't on the networks of the ISPs being tested. Cogent supplies transit to four of the off-net M-Lab test servers, so actions taken by Cogent on its transit links did and do impact the measurements taken in the Measuring Broadband America program. I don't completely follow your comment about accidentally encountering widespread transit/interconnect congestion, or which year you are referring to, but in each instance where there has been an adjustment to the testing period or procedures, it has been the FCC that has initiated and proposed any changes.

This study focused on those elements of the Internet pathway under the direct or indirect control of a consumer’s ISP on that ISP’s own network: from the consumer gateway—the modem used by the consumer to access the Internet—to a nearby major Internet gateway point (from the modem to the Internet gateway in Figure 1, above). This focus aligns with the broadband service advertised to consumers and allows a direct comparison across broadband providers of actual performance delivered to the household.


Jim 

John Simpkins

Nov 6, 2014, 3:35:10 PM11/6/14
to dis...@measurementlab.net, Jason_L...@cable.comcast.com, jpartr...@gmail.com
There was a NANOG post about this back in October, just as an FYI: http://mailman.nanog.org/pipermail/nanog/2014-October/070428.html.

James Miller

Nov 7, 2014, 12:23:13 PM11/7/14
to Jim Partridge, Matt Mathis, dis...@measurementlab.net, Jason_L...@cable.comcast.com
Matt's concern about understanding clearly what is measured and why is an important one, and thanks to Jim for his clarifications as well.

For the FCC's MBA program, we started with a need to provide information to consumers and other stakeholders interested in broadband ISPs' performance. That endeavor starts with understanding what portion of the end-to-end path is under the control and management of the provider.

So it's close to "last mile performance," but our system is instrumented to capture performance within the scope of the provider's network as it touches the "Internet" (e.g., the nearest tier-1 peering point), down to the point where the customer takes control of the link and plugs into equipment provided or managed by the provider. As our program has evolved, we've explored "special studies" on WiFi performance in the home and other topics that are beyond our v1.0 of MBA.
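
To make that scope concrete, here's a minimal sketch of the idea of splitting a path at the provider's edge; the hop names and ASNs below are purely illustrative (documentation-range ASNs), and this is not how the MBA instrumentation is actually implemented:

# Hypothetical (hop, ASN) pairs, e.g. from a traceroute plus ASN lookups.
path = [
    ("gateway.home",          None),   # customer-managed equipment
    ("cmts.isp.example",      64496),  # inside the access ISP
    ("agg1.isp.example",      64496),
    ("edge.isp.example",      64496),  # provider edge / peering point
    ("core1.transit.example", 64497),  # beyond the provider's network
    ("server.mlab.example",   64497),
]

ACCESS_ISP_ASN = 64496  # hypothetical ASN for the provider under test

# The MBA-style scope ends at the last hop still inside the provider's AS.
edge = max(i for i, (_, asn) in enumerate(path) if asn == ACCESS_ISP_ASN)
print("within provider scope:", [hop for hop, _ in path[: edge + 1]])
print("beyond provider scope:", [hop for hop, _ in path[edge + 1 :]])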

Matt, I didn't understand your change to the MBA methodology to "redefine[ to] require testing from all measurement clients to servers". The clients do a latency check to determine the latency-closest servers to test against, but those servers are across all ISPs. If you're proposing that some clients test on a round-robin of all servers to get a feel for the inter-network latencies, that is a different experiment, and our current instrumentation may not be tuned to capture issues along the path that might influence performance as the server gets deeper into interconnected links between the provider's first tier-1 peering point and the other measurement servers. We have folks looking more closely at that issue, and you could certainly bring it up at an upcoming collaborative meeting where we discuss proposals and questions.
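
For the sake of discussion, here is a minimal sketch of the two selection modes as I understand them; the hostnames are hypothetical placeholders, and this is not the actual SamKnows client logic:

import socket
import time

CANDIDATE_SERVERS = [
    "mlab1.example.net",  # hypothetical off-net servers reached via different transit networks
    "mlab2.example.net",
    "mlab3.example.net",
]

def tcp_rtt(host, port=80, timeout=3.0):
    # Rough RTT estimate from a single TCP connect; not a calibrated latency test.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

# Roughly the current behavior: run the test suite against the latency-closest server.
closest = min(CANDIDATE_SERVERS, key=tcp_rtt)
print("latency-closest target:", closest)

# The round-robin variant under discussion would instead cycle through every
# candidate, so the interconnect path toward each transit network gets exercised.
for server in CANDIDATE_SERVERS:
    print("round-robin target:", server, "approx rtt:", tcp_rtt(server))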

Jason's thread started with a question about how prioritization may have influenced performance, and I mostly wanted to clarify first that the data is available for anyone interested in looking at it! Super cool stuff.


--
James Miller, Esq.

"Japanese is so Eighties..."
Anonymous FCC Colleague

Livingood, Jason

Nov 9, 2014, 12:52:19 AM11/9/14
to James...@nihonlinks.com, Jim Partridge, Matt Mathis, dis...@measurementlab.net
On 11/7/14, 12:23 PM, "James Miller" <yosi...@gmail.com> wrote:
> the provider's first tier-1 peering point and the other measurement servers. We have folks looking more closely at that issue, and you could certainly bring it up at an upcoming collaborative meeting where we discuss proposals and questions.

Any idea when the next collaborative meeting will be? It’d be cool to talk about special studies on IPv6 (such as comparative IPv4 vs IPv6 performance across a range of protocols & sources & destinations), maybe something on DNSSEC (not sure what precisely), and other topics.

JL

Matt Mathis

Nov 10, 2014, 4:53:43 PM11/10/14
to James...@nihonlinks.com, dis...@measurementlab.net, James Miller
Thanks for the pointers. Yes, I was sloppy about language. Sorry about that.

> On the NxN proposal, is the idea that all nodes would test against a single server and round robin through measurement server lists?

No not at all.

I was imagining multiple Measurement Points in all of the top N transit ISPs, such that each measurement client could test against the geographically closest measurement point in each of the N transit ISPs. MBA would then directly cover any access or interconnection performance problems between any content in the top N transit ISPs and all users.

Side issues:

* N has to be large enough that we believe ISPs outside of the top N have multiple choices of peers to reach all eyeballs.

* All N transit ISPs are somewhat privileged in that their performance is explicitly monitored, so they should be subject to access-like rules even if they are not classified as access ISPs (e.g., regarding disclosing prioritization). Note that I don't consider prioritization to be a bad thing, as long as it is transparent and stakeholders can understand and verify its consequences.

* There need to be enough measurement points that all transit ISPs have good geographical coverage of all users. Note that M-Lab is making progress here. Alternatively, use something like Model Based Metrics [1], which can calibrate out the effects of long RTTs.

* There has to be enough total test volume that all bins have statistically significant test populations. This approximately raises the total number of required tests by a factor of N.

Note that the above is really a sketch: many of the details can be
addressed in multiple ways.
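
As a toy illustration of the client-to-measurement-point pairing I have in mind (the client locations, MP locations, and transit names below are invented; real placement would be worked out with M-Lab and the participating ISPs):

import math

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Hypothetical measurement points in each of the top N transit ISPs.
measurement_points = [
    {"transit": "TransitA", "loc": (40.7, -74.0)},
    {"transit": "TransitA", "loc": (41.9, -87.6)},
    {"transit": "TransitB", "loc": (40.7, -74.0)},
    {"transit": "TransitB", "loc": (37.8, -122.4)},
]

clients = [
    {"id": "client-1", "loc": (39.0, -77.0)},
    {"id": "client-2", "loc": (38.6, -121.5)},
]

# Each client tests against the geographically closest MP in *each* transit ISP,
# so the total number of tests scales as (number of clients) x N.
transits = sorted({mp["transit"] for mp in measurement_points})
for c in clients:
    for t in transits:
        mp = min((m for m in measurement_points if m["transit"] == t),
                 key=lambda m: haversine_km(c["loc"], m["loc"]))
        print(c["id"], "->", t, "MP at", mp["loc"])

The point is only that the mesh covers every (access ISP, transit ISP) pair, which is why the required test volume grows by roughly a factor of N.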

[1] M. Mathis, A. Morton, "Model Based Bulk Performance Metrics",
IETF work in progress, July 2014.
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay

Privacy matters! We know from recent events that people are using our
services to speak in defiance of unjust governments. We treat
privacy and security as matters of life and death, because for some
users, they are.

