Intent to Ship: Network Information navigator.connection.downlinkMax and onchange event


Josh Karlin

Sep 25, 2015, 7:21:14 PM
to blink-dev, igri...@chromium.org, owe...@chromium.org

Contact emails

jka...@google.com, igri...@chromium.org, owe...@chromium.org


Spec

https://w3c.github.io/netinfo/

Summary

Developers have expressed interest in knowing the network subtype (2G vs. 3G) as well as the general category (WiFi vs. cellular). navigator.connection.type exposes the category, but not the subtype. Rather than exposing an ever-changing enum of subtypes, navigator.connection.downlinkMax exposes the theoretical maximum bandwidth supported by the current network subtype.
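A minimal sketch of what reading these attributes might look like, assuming the API surface described above; the fallback mock object and its values are illustrative, not part of the spec:

```javascript
// In a supporting browser, navigator.connection carries the live values;
// the mock here lets the sketch run anywhere for illustration.
const connection = (typeof navigator !== "undefined" && navigator.connection) || {
  type: "cellular",   // general category (wifi, cellular, ethernet, ...)
  downlinkMax: 0.384, // Mbps ceiling for the current subtype (EDGE-class here)
};

// The category alone can't distinguish 2G from 4G; downlinkMax gives a
// rough upper bound that lets an app infer the subtype's capability.
function describeConnection(conn) {
  return `${conn.type}, up to ${conn.downlinkMax} Mbps`;
}

console.log(describeConnection(connection));
```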


Link to “Intent to Implement” blink-dev discussion

Is this feature supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?
NetInfo is currently supported only on Android and Chrome OS.

WebView support for NetInfo is under active development. 

Desktop support is nearly ready for Mac and Linux but needs some work on Windows.


Demo link

To use the demo, run the Android dev channel build and be sure to enable the experimental web platform features flag. Note that WiFi on Android reports Infinity for downlinkMax, as Chrome recently dropped the permission required to read WiFi linkSpeed. Cellular works well, however.


Compatibility Risk

Mozilla already has navigator.connection on Firefox OS and has expressed interest in adding downlinkMax sometime next year. The spec was written in collaboration with Mozilla's Marcos Cáceres.


Yoav Weiss

Sep 26, 2015, 6:07:33 PM
to Josh Karlin, blink-dev, Ilya Grigorik, Owen, Marcos Caceres
+marcos

As much as I'd love to see us expose more information about the network, I have my doubts about the usefulness of current downlinkMax.
While a low value indicates that network bandwidth is certainly poor, a high value is no guarantee that the bandwidth is in any way adequate.

So, I'm worried that authors would take a high downlinkMax value as an indication that there's no reason to be skimpy about resource sizes, when in fact the network conditions can be extremely poor.

Wouldn't exposing an estimate (such as the one that the Network Quality Estimator would provide) be more beneficial and less prone to abuse?

To unsubscribe from this group and stop receiving emails from it, send an email to blink-dev+...@chromium.org.

Ilya Grigorik

Sep 28, 2015, 12:21:24 PM
to Yoav Weiss, Josh Karlin, blink-dev, Owen, Marcos Caceres
On Sat, Sep 26, 2015 at 3:07 PM, Yoav Weiss <yo...@yoav.ws> wrote:
+marcos
 
Wouldn't exposing an estimate (such as the one that the Network Quality Estimator  would provide) be more beneficial and less prone to abuse?

The plan is to do exactly that in the future, once we get more experience and confidence in NQE. Note how downlinkMax is defined [1]:

"The relationship between an underlying connection technology and its maximum downlink speed is determined by the properties of the current network interface (e.g signal strength, modulation algorithm, and other relevant "network weather" variables), or where such interface information is not available, is determined by the standardized, or generally accepted, maximum download data rate captured in the table of maximum downlink speeds."

In other words, in absence of any "network weather" signals the value should return the maximum data rate for the current connection type. And, if and when "network weather" data is available (based on signals from the interface itself, past performance data, <whatever>), then it should report the UA's best estimate for the max download data rate. 
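That fallback behavior can be sketched roughly as follows; the table entries are illustrative stand-ins for the spec's table of maximum downlink speeds, and `downlinkMax`/`interfaceEstimateMbps` are hypothetical names for this sketch only:

```javascript
// Illustrative stand-in for the spec's "table of maximum downlink speeds";
// these numbers are examples, not authoritative spec values.
const MAX_DOWNLINK_MBPS = {
  gprs: 0.237,
  edge: 0.384,
  hspa: 3.6,
  lte: 100,
  wifi: Infinity, // unknown without interface signals
};

// If the UA has "network weather" signals (e.g. a link-speed estimate from
// the interface), report its best estimate; otherwise fall back to the
// standardized maximum for the current connection subtype.
function downlinkMax(subtype, interfaceEstimateMbps) {
  if (typeof interfaceEstimateMbps === "number") {
    return interfaceEstimateMbps;
  }
  return MAX_DOWNLINK_MBPS[subtype] ?? Infinity; // unknown types assume "fast"
}
```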

Also, note that even with NQE you have to account for cases where you may not have any historical data or useful data from the network interface, in which case NQE will have to fall back to the max speed of the current interface (as we have it defined here).

tl;dr: we have a clean path to improve downlinkMax accuracy in the future; the current implementation is the necessary step #1.

ig

David Dorwin

Sep 28, 2015, 6:55:41 PM
to Ilya Grigorik, Yoav Weiss, Josh Karlin, blink-dev, Owen, Marcos Caceres, net...@chromium.org
+net-dev

Would it be better (for the spec) to be honest that downlinkMax is unknown rather than pretending to be able to saturate the connection technology? That seems less likely to cause trouble. If user agents ship with inaccurate values, authors may never use this feature or have to blacklist values in the Table of maximum downlink speeds as invalid.

We know that the values in the Table of maximum downlink speeds are going to be very inaccurate in some cases. For example, WiFi speeds are theoretical, and GigE connected to a DSL modem could be off by orders of magnitude.

David

sle...@google.com

Sep 28, 2015, 7:15:11 PM
to blink-dev, igri...@google.com, yo...@yoav.ws, jka...@google.com, owe...@chromium.org, mar...@marcosc.com, net...@chromium.org


On Monday, September 28, 2015 at 3:55:41 PM UTC-7, David Dorwin wrote:
+net-dev

Would it be better (for the spec) to be honest that downlinkMax is unknown rather than pretending to be able to saturate the connection technology? That seems less likely to cause trouble. If user agents ship with inaccurate values, authors may never use this feature or have to blacklist values in the Table of maximum downlink speeds as invalid.

+1. If we ship this knowing we're "lying" (acting on incomplete information but presenting it indistinguishably from more complete information, or just botching it entirely), then we effectively kill its utility for web developers while shipping something we won't quite be able to kill off. It will take a long time to restore trust in the numbers the API provides - or it will encourage sniffing for user agents that 'know' things better.
 

We know that the values in the Table of maximum downlink speeds are going to be very inaccurate in some cases. For example, WiFi speeds are theoretical, and GigE connected to a DSL modem could be off by orders of magnitude.

I'm also quite nervous about the privacy implications of such an API. Knowing that a user's downlinkMax is changing can reveal information about what the user is doing (e.g., assuming we get it accurate, via NQE) or where the user is going. Watching the user transition from cellular to WiFi, for example, may reveal when a user steps outside an office building, and watching the user transition from cellular 2G to cellular 3G to cellular 2G, combined with a device's other ambient sensors (such as accelerometers, via http://w3c.github.io/deviceorientation/spec-source-orientation.html ), could reveal a user as they transition between cellular towers or move around a city.

It seems the NetInfo spec doesn't list any privacy/security considerations, nor does it seem there's been a TAG review yet.

While I realize I may be articulating it poorly, this API carries the same sense of dread that some of the Timing APIs and Device Orientation do. At the least, it seems like this should be a Powerful Feature, perhaps requiring user consent. As noted in the Intent to Ship, we have to lie about the WiFi speed because Chrome itself needs a permission from the Android OS to figure this out - a strong signal that perhaps this is too powerful to just expose out there.

Ilya Grigorik

Sep 29, 2015, 12:45:34 AM
to Ryan Sleevi, blink-dev, Yoav Weiss, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
The choice of words matters a great deal here. We're not "lying", nor are we pretending that we can saturate the link. As with any network "weather prediction" algorithm, you take as many signals as you have access to, and you make the best of them: sometimes all you have is the type of interface you're on; sometimes you also have quality signals from the interface; sometimes you also have historical data; <insert other inputs here>. The resulting value is not a guarantee of performance, nor is it advertised as such; the resulting value is a best-effort estimate for the ceiling on your throughput.

What we're shipping here is the base case (interface type only as the input to the above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimates, and the current implementation is a strict improvement on what we offer developers today... which is exactly nothing.

On Mon, Sep 28, 2015 at 4:15 PM, <sle...@google.com> wrote:
On Monday, September 28, 2015 at 3:55:41 PM UTC-7, David Dorwin wrote:
I'm also quite nervous about the privacy implications of such an API. Knowing that a users' downlinkMax is changing can reveal information about what the user is doing (e.g. assuming we get it accurate, via NQE) or where the user is going - watching the user transition from cellular to wifi, for example, may reveal once a user is outside an office building, and watching the user transition from cellular 2g to cellular 3g to cellular 2g, along with other ambient sensors of a device (such as accelerometers, via http://w3c.github.io/deviceorientation/spec-source-orientation.html ) could reveal a user as they transition from cellular towers or move around a city.

Great feedback. Opened https://github.com/w3c/netinfo/issues/26 to track this.

While I realize I may be articulating poorly, there's a sense of dread with this API that...

We need to provide this information to developers to enable them to build optimized experiences for the wide range of deployed networks in the real world. Not knowing that you're on a 2G or slow 2G+ network is exactly why we see so many 60s+ page load times in many markets. Also, in terms of using this API in the real world, the primary use case is as a guard against slow connections, not detecting "fast" connections, e.g...

if (navigator.connection.downlinkMax < 0.5) { ... } // uh oh, 2G or slow 2G+ connection.. do something smart.

With the implementation we're shipping here, the above condition would only trigger on ~2G connection types; in the future it may also trigger on other connection types if we believe that the throughput is low due to <insert NQE reasons here>.
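The guard pattern above can be sketched as a small helper; the 0.5 Mbps threshold comes from the example above, and the plain object stands in for `navigator.connection` so the sketch is self-contained:

```javascript
// Guard for slow connections (the primary use case), rather than
// testing for "fast" ones.
function isSlow(conn) {
  return conn.downlinkMax < 0.5; // ~2G or slow 2G+ territory
}

function applyNetworkPolicy(conn) {
  return isSlow(conn) ? "serve-lightweight" : "serve-full";
}

// In a real page you'd re-run this from the connection's change event;
// here we simulate a cellular handoff with a plain object.
const conn = { downlinkMax: 0.384 };   // EDGE-class link
let policy = applyNetworkPolicy(conn); // lightweight experience
conn.downlinkMax = 100;                // upgraded to LTE-class link
policy = applyNetworkPolicy(conn);     // full experience
```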

As noted in the Intent to Ship, we have to lie about the Wifi speed because Chrome itself needs a permission from the Android OS to figure this out - a strong signal that perhaps this is too powerful to just expose out there.

The (recent) requirement for the WiFi permission bit is due to completely orthogonal issues with Chrome's upgrade process. As far as reporting "infinity" goes... it's consistent with the use case I described above: we want an API that's friendly to new connection types (or types we don't know anything about), so we assume they're "fast" unless and until we teach the system to say otherwise.

ig

Yoav Weiss

Sep 29, 2015, 7:10:51 AM
to Ilya Grigorik, Ryan Sleevi, blink-dev, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
On Tue, Sep 29, 2015 at 6:44 AM, Ilya Grigorik <igri...@google.com> wrote:
The choice of words matters a great deal here. We're not "lying", nor are we pretending that we can saturate the link. As with any network "weather prediction" algorithm, you take as many signals as you have access to, and you make the best of them: sometimes all you have is the type of interface you're on; sometimes you also have quality signals from the interface; sometimes you also have historical data; <insert other inputs here>. The resulting value is not a guarantee of performance, nor is it advertised as such; the resulting value is a best-effort estimate for the ceiling on your throughput.

I'd claim that we should not fall back to the throughput ceiling (which is almost certainly not realistic). If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max. Otherwise, as David and Ryan already said, developers cannot rely on that value to have any meaning other than "is the user on an EDGE network?"

Furthermore, AFAIUI NQE is about end-to-end estimation, whereas the downlinkMax definition (implicitly) refers to the first hop. So I'm not sure adding NQE later on would be compliant with the current spec.


What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.

I don't think this base case can be extended if developers are not aware of the nature of the value they are getting. Whether the value is a theoretical maximum or an estimate changes things a great deal.

Related: I wrote up (at length) my thoughts on ways to improve this specific API and others that can give developers information about the user's current conditions. Specifically, I propose that we expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth. I believe that would address the use cases without exposing internal implementation details, while reducing the privacy implications of the API.
 

Ryan Sleevi

Sep 29, 2015, 7:31:04 AM
to Ilya Grigorik, net-dev, owe...@chromium.org, Yoav Weiss, blink-dev, Marcos Caceres, Josh Karlin


On Sep 28, 2015 9:45 PM, "Ilya Grigorik" <igri...@google.com> wrote:
>
> the resulting value is a best-effort estimate for the ceiling on your throughput.

Of course, when best effort varies across vendors, as it necessarily will with any network weather prediction, you end up with a host of non-determinism such that authors can't safely or reliably use the API without also examining the source of the prediction (the user agent), and sniffing is undesirable.

> What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.

It isn't that there is a goal of complete information; the issue is the quality of information - and unfortunately, that quality is not constant over time or across UAs. We could speak in terms of false positives and false negatives - both of which are types of lies from the POV of an author trusting the UA - as the way we best measure quality. And the inconsistency over time and across UAs is that as UAs implement new methods, their accuracy goes up - and the perceived accuracy of every UA not implementing them is seen to go down.

> We need to provide this information to developers to enable them to build optimized experiences for the wide range of deployed networks in the real world. Not knowing that you're on 2G or slow 2G+ network is exactly why we see so many 60s+ page load times in many markets.

This last statement feels really quite disingenuous, as knowing link speed is certainly not among the high-order bits of mobile performance failures. I don't disagree that some authors feel they could craft better experiences with more knowledge, but I disagree about its urgency or its relative importance in explaining the awfulness of the mobile web experience - the worst offenders are explicitly not taking advantage of the other tools we have provided them, and there's no reason to believe they would take advantage of this if offered.

The good actors are already within user tolerances, so yes, it can help improve the user experience at the high end of responsible mobile development, but we can't argue it will help those sitting in the middle or long tail of the mobile web.

I also take issue with the proposed solution; I'm all for extending the web forward with the lowest-level primitives we can afford, but I think there are some low levels we shouldn't expose, both because not all platforms are capable of going that low and because of the inherent escalation of security/privacy risks the lower you go. This API is a prime example of how tricky these concerns are.

So, knowing that and knowing the issues, this seems like a case for exposing primitives that let the user agent intervene based on the confidence intervals for network quality that the UA knows users will accept, rather than forcing authors to independently rediscover that such decisions matter to users and end up UA-sniffing to determine whether the quality of the signal is within users' tolerance thresholds.

>
> With the implementation we're shipping here the above condition would only trigger on ~2G connection types, and in the future may also be triggered on other connection types if we believe that the throughput is low due to <insert NQE reasons here>. 

I wish I could believe the code would be structured that way, but having seen plenty of code on other platforms that deal with this, it usually ends up the inverse - testing for fast, not slow, and thus failing to adapt as new fast types come out. Just consider how the definition of 'mobile connection' has changed from 'carrier pigeon speeds' to 'faster than many users home internet connections'.

> The (recent) requirement for the WiFi permissions bit is due to completely orthogonal issues with Chrome's upgrade process.

But it isn't, really... If you want high-confidence network signal quality on Android, you need a distinct permission. Regardless of how Chrome itself or WebView acquires this permission (its own special layer of platform inconsistency), the fact that 'trusted' code needs to acquire this permission is exactly why we shouldn't be exposing it to untrusted, hostile code - which is what we must assume all web code is, in terms of risk.

The goal of an API like this - or arguably of most platform information APIs - shouldn't be to just expose the information; it should be to accomplish the user's or author's use case without revealing the information, or, when it's indirectly revealed (e.g. timing behaviours, network fetches, GPU pixel values), to reveal as little of it as possible while still accomplishing the goal.

If we want to encourage developers to optimize for slow connections, for example, can we give them a way to express that desire without giving them the APIs to let them botch it (like testing for fast)? Declarative loading APIs rather than imperative?

Though I disagree on importance, I'm not disagreeing with some of the use cases - I'm just questioning whether alternate solutions may be more appropriate, given the UA interop concerns, the privacy concerns, and the implementation complexity concerns.

Alex Russell

Sep 29, 2015, 1:18:35 PM
to Ryan Sleevi, Ilya Grigorik, net-dev, Owen, Yoav Weiss, blink-dev, Marcos Caceres, Josh Karlin
This discussion is deeply disappointing.

Ryan: there's nothing new in your response from the previous versions of these debates. That you don't agree with developers who aren't you about the need for this information is also not new.

It's unclear how to move forward. Objecting to providing information that's already observable by side-effect through other means isn't a tenable position for platform API design. We don't allow it in other areas and shouldn't allow it here.

I'd like to see this API ship ASAP. LGTM.


Ryan Sleevi

Sep 29, 2015, 1:30:54 PM
to Alex Russell, Ilya Grigorik, owe...@chromium.org, blink-dev, Yoav Weiss, net-dev, Josh Karlin, Marcos Caceres


On Sep 29, 2015 10:18 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> This discussion is deeply disappointing.
>

Because you disagree with the conclusion? The technical arguments? Or the manner in which they're presented? It's hard to move the argument forward without knowing what your concerns are.

> Ryan: there's nothing new in your response from the previous versions of these debates. That you don't agree with developers who aren't you about the need for this information is also not new.
>

Alex: Please re-read my response, especially the closing remarks. I'm disappointed that you came away with a conclusion that is quite the opposite of what I suggested; it feels as if you either didn't read my remarks or we failed to communicate. The former can only be solved by you, while the latter requires more feedback than "this is deeply disappointing".

> It's unclear how to move forward. Objecting to providing information that's already observable by side-effect through other means isn't a tenable position for platform API design. We don't allow it in other areas and shouldn't allow it here.

By this rationale, the fact that we exposed DeviceOrientation to all suggests we shouldn't treat geolocation as a powerful feature, and instead just allow everyone to have it.

I know this isn't your argument, and I know you'd react quite negatively to such a broad generalization, so you can understand and appreciate how negatively your broad generalization is received.

I offered concrete suggestions on a way to move forward, and Yoav offered significantly more developed arguments. If you disagree, it would help to explain why.

>
> I'd like to see this API ship ASAP. LGTM.
>

Alex, this feels somewhat hollow and frustrating. If you are not receptive to feedback, if you're not going to address feedback, and if you're going to ship something without considering alternatives, the consequences to privacy, or the platform implications, then it suggests that the intent-to-ship process is an empty exercise.

I know you care deeply about platform health, and I'm somewhat surprised to see you supporting something with so many deficiencies, especially when viable alternatives exist that meet the use cases, address the privacy issues, and reflect reality.

Chris Harrelson

Sep 30, 2015, 12:53:40 PM
to Josh Karlin, blink-dev, igri...@chromium.org, Owen
Hi Josh,

Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?

Chris


Josh Karlin

Sep 30, 2015, 12:58:42 PM
to Chris Harrelson, blink-dev, igri...@chromium.org, Owen

Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?

navigator.connection shipped (for Android and ChromeOS) last year (which includes navigator.connection.type). This intent is for adding navigator.connection.downlinkMax.

Chris Harrelson

Sep 30, 2015, 1:10:40 PM
to Josh Karlin, blink-dev, igri...@chromium.org, Owen
On Wed, Sep 30, 2015 at 9:58 AM, Josh Karlin <jka...@google.com> wrote:

Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?

navigator.connection shipped (for Android and ChromeOS) last year (which includes navigator.connection.type).

Ah ok, that was the source of my confusion when testing. Sorry for the noise.
 
This intent is for adding navigator.connection.downlinkMax.

And onchange right? 

Josh Karlin

Sep 30, 2015, 1:31:40 PM
to Chris Harrelson, blink-dev, igri...@chromium.org, Owen

 
This intent is for adding navigator.connection.downlinkMax.

And onchange right? 

Yep! 
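For illustration, a sketch of consuming that change event; `MockConnection` and `simulateChange` are hypothetical stand-ins so the snippet runs outside a browser, where you would listen on `navigator.connection` directly:

```javascript
// Minimal stand-in for navigator.connection (which is an EventTarget
// per the spec); in a supporting browser, use navigator.connection.
class MockConnection extends EventTarget {
  constructor() {
    super();
    this.type = "wifi";
    this.downlinkMax = Infinity;
  }
  simulateChange(type, downlinkMax) {
    this.type = type;
    this.downlinkMax = downlinkMax;
    this.dispatchEvent(new Event("change"));
  }
}

const connection = new MockConnection();
const observed = [];

// React to connection changes, e.g. a wifi -> cellular handoff.
connection.addEventListener("change", () => {
  observed.push(`${connection.type}@${connection.downlinkMax}`);
});

connection.simulateChange("cellular", 0.384);
```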

Ilya Grigorik

Sep 30, 2015, 2:18:20 PM
to Ryan Sleevi, net-dev, Owen Campbell-Moore, Yoav Weiss, blink-dev, Marcos Caceres, Josh Karlin
On Tue, Sep 29, 2015 at 4:10 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Related: I wrote (in length) my thoughts on ways to improve this specific API and others that can provide developers info about the user's current conditions. Specifically, I propose that we would expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.

Phew, long read, thanks for putting it together! I'll try to respond to the overall themes...

I think you need to dig deeper into what you mean by "effective bandwidth" and why that's an elusive goal. I agree that it would be great to have an oracle that tells you precise bandwidth and RTT estimates for the _exact request_ you're about to make, but that's simply not practical. The user can be on a GigE network, have an HD stream going at full throttle, and simultaneously be downloading another resource at a trickle because the network path between them and that particular origin is slow. In fact, even if the user requests a resource from the same "fast" server, there are absolutely no guarantees on how quickly that response will come back -- e.g. one response is streamed from cache while another takes seconds to generate and trickles out bit by bit (note that this is effectively the same as your example of WiFi tethering).

Which is to say, claiming that we ought to aim to accurately predict "end-to-end / effective" bandwidth prior to making the request is not going to get you far. You're always working with incomplete information before you make the request, and the best you can do is account for what you know about your local network weather and extrapolate its implications for the end-to-end story -- e.g. you can never go faster than your slowest hop, so if your first hop is slow you know you will never exceed that data rate; you're sharing the link with multiple requests; and throughput is not just a function of the network path but also of server load and a dozen other variables.

On that note, it's also worth highlighting that NQE leverages the exact same data we're surfacing here -- see "OS-provided information" section -- to bootstrap its predictions, and then layers observed performance to further refine its estimates... Which is consistent with my earlier statements about leveraging NQE in the future to refine what downlinkMax reports.

Re discrete values: this wouldn't address any of the concerns you raised earlier about end-to-end vs. first hop. Also, the developers I've talked to want access to the raw information, as exposed by the current API, so they can build their own, smarter frameworks and libraries on top of this data. One app's "slow" is another's "fast" (e.g. fetching text vs. video vs. HD video vs. 4K video), and we should defer these decisions to app developers who better understand their own requirements and context.

Re "we should improve our overall capabilities to adapt content based on user conditions": agreed! I'm hoping to see the Save-Data i2i+s out soon, and crbug.com/467945 will provide further building blocks that allow developers to measure actual achieved throughput (among other use cases it enables). The combination of these and related features will finally give developers the necessary tools to start experimenting with and building better adaptive experiences.

If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.

Sometimes we don't know what the max for a particular type is -- e.g. we don't have sufficient permissions, or it's a completely new interface we've never encountered before. In order to be future-friendly we have to assume such types are "fast" until we teach the system to report otherwise; developers are not dumb either (let's stop assuming they are) and know that there is no such thing as "infinite bandwidth".
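The developer-side implication of this "assume fast until told otherwise" convention can be sketched as follows; `chooseVariant` and its thresholds are hypothetical, not from the spec:

```javascript
// Unknown or brand-new interface types report Infinity, which simply
// falls through to the default (full) experience; only a finite, low
// ceiling triggers the lightweight path.
function chooseVariant(downlinkMax) {
  if (!Number.isFinite(downlinkMax)) return "default"; // unknown: assume fast
  if (downlinkMax < 0.5) return "lite";                // ~2G ceiling
  return "default";
}
```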


On Tue, Sep 29, 2015 at 4:30 AM, Ryan Sleevi <rsl...@chromium.org> wrote:
  On Sep 28, 2015 9:45 PM, "Ilya Grigorik" <igri...@google.com> wrote: 

> What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.

It isn't that there is a goal of complete information, but the issue is the quality of information - and unfortunately, the quality is not constant over time or UAs. We could speak in terms of false positives and false negatives - both of which are types of lies from the POV of an author trusting the UA - as being how we best measure quality. And the inconsistency over time / UAs is that as UAs implement new methods, their accuracy goes up - and the perceived accuracy of every UA not implementing is seen to go down.

That's fair, but withholding this feature from developers on the premise that all UAs can't agree on an exact prediction algorithm is not a tenable position. We can't continue to pretend that all networks are equal and fast, as we do today.

> We need to provide this information to developers to enable them to build optimized experiences for the wide range of deployed networks in the real world. Not knowing that you're on 2G or slow 2G+ network is exactly why we see so many 60s+ page load times in many markets.

This last statement feels really quite disingenuous, as knowing link speed is certainly not within the high order bits of mobile performance failures at all. I don't disagree that some authors feel they could craft better experiences if they had more knowledge, but I disagree with the urgency of it or relative importance to explaining the awfulness of the mobile web experience - the worst offenders are explicitly not taking advantage of the other tools we have provided them, and there's no reason to believe they would take advantage of this if offered.

FWIW, my experience from interacting with many dev teams -- in particular whenever we discuss 'emerging markets' -- shows otherwise.

The good actors are already within user tolerances, so yes, it can help improve the user experience at the high end of responsible mobile development, but we can't argue it will help those sitting in the middle or long tail of the mobile web.

I'm not sure I follow where we're heading here. We shouldn't ship features unless we can prove universal and instant adoption? I believe our goal is to maximize cumulative user value, and even if -- to start -- only the head of the distribution adopts this feature, that's consistent with our goal: a better experience for millions of everyday users.

> With the implementation we're shipping here the above condition would only trigger on ~2G connection types, and in the future may also be triggered on other connection types if we believe that the throughput is low due to <insert NQE reasons here>. 

I wish I could believe the code would be structured that way, but having seen plenty of code on other platforms that deal with this, it usually ends up the inverse - testing for fast, not slow, and thus failing to adapt as new fast types come out. Just consider how the definition of 'mobile connection' has changed from 'carrier pigeon speeds' to 'faster than many users home internet connections'.

We can't stop bad developers from doing bad things. That shouldn't block smart developers from doing smart things.

An API like this, or arguably most platform information APIs, shouldn't be to just expose the information - it should be to accomplish the user or author's use case without revealing the information, or, when it's indirectly revealed (e.g. timing behaviours, network fetches, GPU pixel values), to reveal as little of it as possible while still accomplishing the goal.

If we want to encourage developers to optimize for slow connections, for example, can we give them a way to express that desire without giving them the APIs to let them botch it (like testing for fast)? Declarative loading APIs rather than imperative?

That'd be nice, and I'm all for pursuing this strategy in parallel. Realistically though, this requires a close scrub through all existing and new platform features with detailed discussions on how they should react under various constraints (device constraints, user constraints, network constraints, etc)... and that's a very long and tedious process. Also I'm not convinced that the UA can get it right either... different apps want different tradeoffs. We need to give developers the tools to build and experiment with their own implementations, and if and when patterns emerge we can codify them in the platform.

Though I'm disagreeing on importance, I'm not disagreeing on some of the use cases - but I'm just questioning whether alternate solutions may be more appropriate, given the UA interop concerns, the privacy concerns, and the implementation complexity concerns.

All great feedback, appreciate it, and I do agree that we do need to think through the privacy implications here a bit more.

ig

Alex Russell

Sep 30, 2015, 2:54:22 PM
to Ryan Sleevi, Ilya Grigorik, Owen, blink-dev, Yoav Weiss, net-dev, Josh Karlin, Marcos Caceres
Apologies both for the slow reply and my previously hasty response.

On Tue, Sep 29, 2015 at 10:30 AM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Sep 29, 2015 10:18 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> This discussion is deeply disappointing.
>

Because you disagree with the conclusion? The technical arguments? Or the manner in which they're presented? It's hard to move the argument forward without knowing what your concerns are.

Some of each. Apologies, again, for not being clearer.

> Ryan: there's nothing new in your response from the previous versions of these debates. That you don't agree with developers who aren't you about the need for this information is also not new.
>

Alex: Please re-read my response, especially the closing remarks. I'm disappointed that you came away reading a conclusion that is quite the opposite of what I suggested, and it feels as if you either didn't read my remarks or we failed to communicate. The former can only be solved by you, while the latter needs more feedback than "this is deeply disappointing".

My concern has several aspects:
  • The design of this feature is a compromise borne of a multi-year debate. Other obvious points in the design space have been explored and rejected (e.g., reflecting the actual radio connection type, the way Android's framework does)
  • I'd assume the implementation complexity is a conversation that would have happened in code-review as this feature was developed. Odd to see it here.
  • Given that this data is available by side-channel today, it's unclear what privacy impact there could possibly be.

> It's unclear how to move forward. Objecting to not providing information that's observable other ways by side-effect isn't a tenable position for platform API design. We don't allow it in other areas and shouldn't allow it here.

By this rationale, the fact that we exposed DeviceOrientation to all suggests we shouldn't treat geolocation as a powerful feature, and instead just allow everyone to have it.

DeviceOrientation and geolocation have very different granularity...perhaps I misunderstand something?

I know this isn't your argument, and I know you'd react quite negatively to such a broad generalization, so you can understand and appreciate how negatively your broad generalization is received.

I offered concrete suggestions on a way to move forward, and Yoav offered significantly more developed arguments. If you disagree, it would help to explain why.

>
> I'd like to see this API ship ASAP. LGTM.
>

Alex, this feels somewhat hollow and frustrating. If you are not receptive to feedback, if you're not going to address feedback, and if you're going to ship something without considering alternatives, the consequences to privacy, or the platform implications, then it suggests that we shouldn't have an exercise of intent to ship.

This feature is a compromise from many years of iteration. My reaction came from what appears to be late-stage re-litigation of a discussion which has been had many, many times in many other places. I'm questioning the need to re-discuss here when so many other discussions have already taken place on this topic.

I know you care deeply about platform health, and I'm somewhat surprised to see you supporting something that has so many deficiencies, especially when viable alternatives to meet the use cases, address the privacy issues, and reflect reality exist.

NQE is a separate feature that I don't think should be added under the cover of the downlinkMax API as it will give us a significantly different view of the world. Having both is useful.

Regards

Ryan Sleevi

Sep 30, 2015, 3:37:27 PM
to Alex Russell, Ilya Grigorik, owe...@chromium.org, net-dev, Yoav Weiss, blink-dev, Marcos Caceres, Josh Karlin, j...@chromium.org, Eric Roman


On Sep 30, 2015 11:54 AM, "Alex Russell" <sligh...@google.com> wrote:
>

> My concern has several aspects:
> The design of this feature is a compromise borne of a multi-year debate. Other obvious points in the design space have been explored and rejected (e.g., reflecting the actual radio connection type, the way Android's framework does)

That hasn't been a suggestion by either Yoav or myself, and for what it's worth, I would agree that is obviously incorrect.

> I'd assume the implementation complexity is a conversation that would have happened in code-review as this feature was developed. Odd to see it here.

That presumes every reviewer is globally aware of the implications, platform considerations, and ongoing efforts, which, in the increasingly silo'd Chrome codebase, is simply not the case. I don't mean this to suggest authors or reviewers failed - simply that when we look at the holistic integration and exposure, considering the plenitude of platforms Chrome runs on, it is not unreasonable or unexpected that "implementation complexity" will emerge as a concern from something that might be locally consistent.

The point remains, and is stated on the I2I, that the very behaviour of this cross-platform is inconsistent, and necessarily so, because we build atop a variety of inconsistent platforms that expose different layers of fidelity. Much like we didn't simply expose "DirectX" to the web when 'everyone' was running Windows, much like APIs like WebGL have themselves stripped the edges from OpenGL and have necessitated complex solutions like ANGLE, and yet are still unable to avoid developers needing to engage in GPU sniffing, we need to carefully evaluate what is the lowest layer of reasonable abstraction, given the variety of security models and APIs in play on the platforms Chrome builds on.

My objections here are that this is too low level a detail for a user agent to reliably implement across a diversity of platforms (which we know this already) _and_ that it necessarily precludes _advancements_ in the platform.

For example, the onchange event is unquestionably at odds with MP-TCP and QUIC, which should give serious pause. The notions of surfacing onchange - or there being a singular egress interface, that it will flap, or that the UA will even know information about either egress or ingress interfaces - are not solid notions, and are being actively challenged on multiple axes in mobile platforms, as they have been on others (satellite and bonded DSL being two examples).

Further, the onchange event from the perspective of Chrome is extremely unreliable. Both Eric Roman and myself _discourage_ people from writing native code in the browser that interacts with this notification, because of this necessary unreliability, the necessary low fidelity, and for the fact that even in a collection of incredibly bright, motivated, earnest to "do good code" developers, it is still botched consistently, to the detriment of our users. I know you experienced this first hand at the Shenzhen TPAC due to the networking conditions there, and our ability and reliability have gotten worse, not better, as we expand across platforms and the platforms we run across change, deprecate, and innovate.

I'm suggesting that the complexity here necessarily paints the browser into a corner, much like sync XHRs or unload-blocking alerts. From the "spec litigations", it is clear these concerns haven't been discussed in depth, and suggesting that we ignore feedback solely because it's "already been litigated" is to ignore the experience of those most familiar with the past, present, and future developments in this space.

> Given that this data is available by side-channel today, it's unclear what privacy impact there could possibly be.

It is simply not true that this is available by side channel to the fidelity proposed, much in the way that you have noted DeviceOrientation and Geolocation have different buckets of fidelity. It is this fidelity that should naturally give pause to the privacy aware, and give concern as to whether we can even consistently offer the fidelity suggested.

> This feature is a compromise from many year's of iteration. My reaction came from what appears to be late-stage re-litigation about a discussion which has been had many, many times in many other places. I'm questioning the need to re-discuss here when so many other discussions have already taken place on this topic.

Just because it was borne of compromise does not make it technically sound. XHTML2 was borne of a wide variety of compromises and spec litigation, as were XSL and XML-DSig, but that doesn't argue to their fitness.

I don't mean to dismiss the work of many people active in this space, passionate for solutions, and the many discussions, but you of all people are no doubt most aware that it is simply impossible to participate in all the standards discussions all the time, so it feels dismissive to suggest that "If you wanted to have a say, you should have known about this in the years before the I2S and spoken then." Arguably, our I2S process is much like IETF or W3C last call - trying to get broad feedback from a variety of stakeholders after those most motivated for something have eked out compromises and solutions. However, that doesn't exempt it from critical, but hopefully constructive, review; rather, it should encourage it.

In that vein, you're hearing concerns from both Yoav and myself as to the fitness. I have concerns about the ability of Chrome to implement this reliably across platforms, about the privacy implications, and that such a proposal is necessarily at odds with what multiple vendors are independently pursuing in an effort to enhance networking. Yoav has articulated far better than I the concerns for developers, API trust, and the implications that such an API shape has.

This isn't saying "You should never ship," it is saying that it appears to be that a number of concerns were not considered or weighed when developing the compromise of this API, that they materially affect both the implementation and experience of the API, and we should hold off shipping until we have taken time to earnestly and thoughtfully weigh these considerations, and either declare we don't care, or adjust things if we do.

> NQE is a separate feature that I don't think should be added under the cover of the downlinkMax API as it will give us a significantly different view of the world. Having both is useful.

This position is considerably different than the one advanced during this I2S, so that too should give pause for consideration. I agree that shipping this is at odds with NQE - that is, that we can't retroactively integrate it - but that is precisely what is being proposed here. For that reason, we should be sure that we have agreement as to what the roadmap looks like, rather than haphazardly shipping and iterating, so that we can ensure we are releasing a web platform that is consistent, reasoned, and capable, especially if we will not be able to revisit this discussion for the many years that any deprecation would necessarily entail.

Yoav Weiss

Sep 30, 2015, 3:43:48 PM
to Ilya Grigorik, Ryan Sleevi, net-dev, Owen Campbell-Moore, blink-dev, Marcos Caceres, Josh Karlin
On Wed, Sep 30, 2015 at 8:17 PM, Ilya Grigorik <igri...@google.com> wrote:
On Tue, Sep 29, 2015 at 4:10 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Related: I wrote (in length) my thoughts on ways to improve this specific API and others that can provide developers info about the user's current conditions. Specifically, I propose that we would expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.

Phew, long read, thanks for putting it together! I'll try to respond to the overall themes...

I think you need to dig deeper on what you mean by "effective bandwidth" and why that's an elusive goal. I agree that it would be great to have an oracle that tells you precise bandwidth and RTT estimates for the _exact request_ you're about to make, but that's simply not practical. The user can be on a GigE network, have an HD stream going at full throttle, and simultaneously be downloading another resource at a trickle because the network path between them and that particular origin is slow. In fact, even if the user requests a resource from the same "fast" server, there are absolutely no guarantees on how quickly that response will come back -- e.g. one response is streamed from cache while another is taking seconds to generate and is trickling out bit by bit (note that this is effectively the same as your example of WiFi tethering).

At no point was I claiming that we can predict the future :) Any bandwidth estimation we could have, be it first hop or end-to-end, would give us a picture of the past, not the future. Same is true for downlinkMax, which can tell us which network we're using right now (when a request goes out, or when a JS API is queried), but cannot tell us what would be the underlying network by the time the response comes in.
I think we all agree on that point.
 

Which is to say, claiming that we ought to aim to accurately predict "end-to-end / effective" bandwidth prior to making the request is not going to get you far. You're always working with incomplete information before you make the request, and the best you can do here is to account for what you know about your local network weather and extrapolate those implications for the end-to-end story

I never claimed absolute accuracy is required, quite the contrary. I think that exposing vaguer values would allow us to iterate and improve our predictions without exposing implementation details that Web apps will grow to rely on. (e.g. "384 in Chrome means the user is on edge so he actually has no connectivity while 384 in Firefox means they actually have 384Kbps, which is good enough")

-- e.g. you can never go faster than your slowest hop, and if your first hop is slow then you know that you will never exceed that data rate; you're sharing the link with multiple requests; the throughput is not just a function of the network path but also the server load and a dozen other variables.

Sure
 

On that note, it's also worth highlighting that NQE leverages the exact same data we're surfacing here -- see "OS-provided information" section -- to bootstrap its predictions, and then layers observed performance to further refine its estimates... Which is consistent with my earlier statements about leveraging NQE in the future to refine what downlinkMax reports.

I claim that you won't be able to use NQE to refine downlinkMax because they both have different semantics. You could use NQE to refine downlinkExpected though.
 

Re, discrete values: this wouldn't address any of the concerns you raised earlier about end-to-end vs first hop. Also, developers that I've talked to want access to the raw information, as exposed by the current API, to allow them to build their own, smarter frameworks and libraries on top of this data.

We definitely should give developers the raw bandwidth and RTT data and let them build bandwidth estimation on top of that (assuming it has no privacy concerns). *But* that raw data cannot be exposed as a single point in time while expecting it to be useful. We should expose such data as the continuous stream that it is, most probably on top of PerformanceTimeline.
 
One app's "slow" is another's "fast" (e.g. fetching text vs video vs HD video vs 4K video) and we should defer these decisions to app developers that better understand their own requirements and context.

Fair point. I'd be fine with clamped bandwidth values as well.
 

Re, "we should improve our overall capabilities to adapt content based on user conditions": agreed! I'm hoping to see Save-Data i2i+s out soon and crbug.com/467945 will provide further building blocks to allow developers to measure actual achieved throughput (amongst other use cases enabled by it). Combination of these and related features will finally give developers the necessary tools to start experimenting with and building better adaptive experiences.

If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.

Sometimes we don't know what the max for a particular type is -- e.g. we don't have sufficient permissions, or it's a completely new interface we've never encountered before. In order to be future-friendly we have to assume they are "fast" until we teach the system to report otherwise;

I'm fine with an infinite expected bandwidth in the rare cases that we know nothing. To emphasize: my problem is that "max" and "estimated" have different semantics, so you can't grow one to be the other while still expecting devs to be able to do something with an "either this or that" value. 
 
developers are not dumb either (let's stop assuming they are) and know that there is no such thing as "infinite bandwidth".

Never claimed or assumed they are. They are busy folks though, which is why we should expose useful high level APIs (which I hope this API would be) for the ones that don't care to re-invent the wheel, and (hopefully) expose low level APIs (e.g. PerformanceTimeline based raw network data) for the ones that want to come up with a better wheel.

Simon Pieters

Oct 1, 2015, 3:22:18 AM
to Ilya Grigorik, Yoav Weiss, Ryan Sleevi, blink-dev, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
On Tue, 29 Sep 2015 13:10:45 +0200, Yoav Weiss <yo...@yoav.ws> wrote:

> Related: I wrote <http://blog.yoav.ws/adapting_without_assumptions/> ...

From the post:

> Having discrete and imprecise values also has the advantage of enabling
> browsers to evolve what these values mean over time, since today’s
> “decent” may very well be tomorrow’s “bad”.

I think it's a bad idea to have the values change meaning over time, as it
would regress UX when a UA decides to change what those values mean.
Consider a video site that gives HD video for "good" and above and SD for
"decent" and below. Then the UA changes "good" to "decent". The user
suddenly gets SD but the network conditions are the same.

To avoid that, you would have to mint new values that are better than
"excellent", which seems like it would probably look silly after a while.
:-)

--
Simon Pieters
Opera Software

Yoav Weiss

Oct 1, 2015, 3:36:10 AM
to Simon Pieters, Ilya Grigorik, Ryan Sleevi, blink-dev, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
That's a very good point, and compounding that with Ilya's arguments from upthread convinced me that discrete values are not the way to go.
OTOH, providing exact bandwidth numbers would increase variance, potentially compromise privacy, and expose implementation details, making it harder to evolve bandwidth estimates over time.

So, all in all, I think that the best approach would be:
* Change 'downlinkMax' to 'expectedDownlink' - regardless of naming, the semantics of the exposed bandwidth should be that it represents the actual bandwidth that the user might have, not an unattainable theoretical max. Then we can iterate that value over time, and make it more precise.
* Clamp that bandwidth value such that it provides enough info for developers while not exposing too many internal details. Maybe clamping to 10Kbps below 100Kbps, clamping to 100Kbps below 1000Kbps and to 1000Kbps above that?
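One possible reading of the clamping idea, sketched in JavaScript. The bucket boundaries come from the proposal above; rounding down to the bucket granularity (and the function name) are my assumptions for illustration:

```javascript
// Illustrative sketch of the proposed clamping: round a raw downlink
// value in Kbps down to a coarser granularity so that fewer distinct
// values are observable by script.
// Assumed granularity: 10 Kbps steps below 100 Kbps, 100 Kbps steps
// below 1000 Kbps, 1000 Kbps steps above that.
function clampKbps(kbps) {
  if (!Number.isFinite(kbps)) return kbps; // keep +Infinity as-is
  if (kbps < 100) return Math.floor(kbps / 10) * 10;
  if (kbps < 1000) return Math.floor(kbps / 100) * 100;
  return Math.floor(kbps / 1000) * 1000;
}
```

For example, a raw 384 Kbps reading would be reported as 300 Kbps, and 2500 Kbps as 2000 Kbps, hiding the exact underlying measurement.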

Ryan Sleevi

Oct 1, 2015, 3:41:12 AM
to Yoav Weiss, Simon Pieters, Ilya Grigorik, blink-dev, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org


On Thu, Oct 1, 2015 at 12:36 AM, Yoav Weiss <yo...@yoav.ws> wrote:

So, all in all, I think that the best approach would be:
* Change 'downlinkMax' to 'expectedDownlink' - regardless of naming, the semantics of the exposed bandwidth should be that it represents the actual bandwidth that the user might have, not an unattainable theoretical max. Then we can iterate that value over time, and make it more precise.
* Clamp that bandwidth value such that it provides enough info for developers while not exposing too many internal details. Maybe clamping to 10Kbps below 100Kbps, clamping to 100Kbps below 1000Kbps and to 1000Kbps above that?


It still has the problems of the onchange event and how that affects things when you have a partial but not complete network change (e.g. new requests going over cellular, old requests continuing over wifi; requests transparently transitioning network interfaces in flight; multiple network interfaces)

I'm not sure how best to reconcile this. This is a fundamental problem with the API, in that it assumes a singular connection (both type and downlink), and that's incompatible with where the web is going (and where experiments already are). Exposing multiple interfaces creates a privacy concern, and sometimes there isn't even a notion of a single connection that fits well in the API (e.g. there might be a 'single' connection type, but the change events are transparently handled by the OS and opaque to the browser - thus to the WebApp)

Yoav Weiss

Oct 1, 2015, 4:17:45 AM
to Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Josh Karlin, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE-based estimations, as the downlinkMax value should change only when the underlying network connection changes, and when it does, onchange should fire.

Assuming we want the bandwidth value to be something we can iterate on and improve with estimates, that definition must change as well, which means we could redefine onchange to semantically mean "the bandwidth estimation value has significantly changed" rather than "the underlying single interface has changed modes" (which I agree doesn't sit well with a multi-interface world).
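As a sketch of what "significantly changed" could mean, entirely hypothetical (the helper name and the factor of 2 are invented, not from any spec): a UA could fire the event only when the estimate moves by more than some relative factor since the last reported value.

```javascript
// Hypothetical "significant change" test for a redefined onchange:
// fire only when the new estimate differs from the last-reported one
// by at least `factor` in either direction. Transitions to or from
// Infinity always count as significant.
function isSignificantChange(lastMbps, nextMbps, factor = 2) {
  if (lastMbps === nextMbps) return false;
  if (!Number.isFinite(lastMbps) || !Number.isFinite(nextMbps)) return true;
  const ratio = nextMbps > lastMbps ? nextMbps / lastMbps : lastMbps / nextMbps;
  return ratio >= factor;
}
```

Rate-limiting the event this way would also reduce flapping on noisy links, at the cost of delaying small but real shifts.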

Josh Karlin

Oct 1, 2015, 1:07:12 PM
to Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
The intention of downlinkMax was expressly not to provide an estimate of end-to-end bandwidth but instead to provide a next-hop upper bound. If this is not made clear by the spec then it should be. I think we can come to agree that if we expose NQE in NetInfo down the road, it should be in a new attribute. 

I agree, +Infinity as a default upper bound when the UA doesn't know the underlying type is lame, but it's an upper bound. The UA is reporting truth. 

We would all love to have really great bandwidth prediction. Maybe that will happen someday. But in the meanwhile the first-hop capacity is useful in the case of slow technologies and easy to provide in the most relevant (mobile) circumstances.

Good point about future-proofing concerns with multipath. I think it's reasonable for now for downlinkMax and type to describe the "default connection" of the UA. Perhaps that should be added to the spec.
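Consuming downlinkMax under those semantics might look like the following sketch. The formatting helper is invented for illustration, and the guard covers environments without navigator.connection (per the support matrix discussed in this thread):

```javascript
// Hypothetical helper: format the first-hop upper bound for display.
// +Infinity means the UA couldn't determine a bound for this type.
function describeConnection(conn) {
  if (!conn) return 'unknown';
  if (conn.downlinkMax === Infinity) return `${conn.type}: no known upper bound`;
  return `${conn.type}: at most ${conn.downlinkMax} Mbps on the first hop`;
}

// In a page, re-describe whenever the default connection changes:
const conn = typeof navigator !== 'undefined' ? navigator.connection : undefined;
if (conn) {
  console.log(describeConnection(conn));
  conn.onchange = () => console.log(describeConnection(conn));
}
```

Note that the value is an upper bound on the first hop only, so pages should treat it as "can't be faster than", never as an estimate of achievable end-to-end throughput.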

Dimitri Glazkov

Oct 1, 2015, 1:14:24 PM
to Josh Karlin, Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
Should it then just be named "downlinkUpperBound" or something?

:DG<

Elliott Sprehn

Oct 1, 2015, 1:39:22 PM
to Josh Karlin, Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
On Thu, Oct 1, 2015 at 1:06 PM, 'Josh Karlin' via blink-dev <blin...@chromium.org> wrote:
The intention of downlinkMax was expressly not to provide an estimate of end-to-end bandwidth but instead to provide a next-hop upper bound. If this is not made clear by the spec then it should be. I think we can come to agree that if we expose NQE in NetInfo down the road, it should be in a new attribute. 

I agree, +Infinity as a default upper bound when the UA doesn't know the underlying type is lame, but it's an upper bound. The UA is reporting truth. 


What does the android and iOS APIs return for this? I actually can't find this API for iOS at all.

- E

Josh Karlin

Oct 1, 2015, 1:42:50 PM
to Elliott Sprehn, Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
No support for iOS. Android returns values from the table for cellular connections, and +Infinity for wifi until/if Chrome gets the WiFi permission again.

Elliott Sprehn

Oct 1, 2015, 1:50:27 PM
to Josh Karlin, Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
On Thu, Oct 1, 2015 at 1:42 PM, Josh Karlin <jka...@google.com> wrote:
No support for iOS. Android returns values from the table for cellular connections, and +Infinity for wifi until/if Chrome gets the WiFi permission again.


What does desktop Mac/Windows return?

- E

Josh Karlin

Oct 1, 2015, 2:05:15 PM
to Elliott Sprehn, Yoav Weiss, Ryan Sleevi, Simon Pieters, Ilya Grigorik, blink-dev, Owen Campbell-Moore, Marcos Caceres, net...@chromium.org
Android: connection.type support, downlinkMax support for cellular types (otherwise +Infinity)
ChromeOS: connection.type support, downlinkMax support for cellular types (otherwise +Infinity)
Webview: Should have Android-level support in near future, thanks to timvolodine@'s efforts
Win/Mac/Linux: Working on bringing connection.type support (wifi, ethernet, and none), downlinkMax will return +Infinity.


Ryan Sleevi

Oct 1, 2015, 2:14:09 PM
to Josh Karlin, net-dev, owe...@chromium.org, Yoav Weiss, Elliott Sprehn, Marcos Caceres, Simon Pieters, Ilya Grigorik, blink-dev

Does this include .onchange support on all platforms?

Is .onchange implemented in terms of the NetworkChangeNotifier of //net?

Josh Karlin

Oct 1, 2015, 2:19:32 PM
to Ryan Sleevi, net-dev, Owen, Yoav Weiss, Elliott Sprehn, Marcos Caceres, Simon Pieters, Ilya Grigorik, blink-dev
Note that onchange is essentially a rename of the already launched ontypechange (but is also fired when downlinkMax changes). Yes, it's triggered by net::NetworkChangeNotifier::MaxBandwidthObserver.

Josh Karlin

Oct 1, 2015, 2:22:15 PM
to Ryan Sleevi, net-dev, Owen, Yoav Weiss, Elliott Sprehn, Marcos Caceres, Simon Pieters, Ilya Grigorik, blink-dev
Ah, I missed part of your question. onchange will be supported on the enabled platforms.

Ryan Sleevi

Oct 1, 2015, 2:42:21 PM
to Josh Karlin, Ryan Sleevi, net-dev, Owen, Yoav Weiss, Elliott Sprehn, Marcos Caceres, Simon Pieters, Ilya Grigorik, blink-dev
So to make sure I'm summarizing correctly:

iOS: No support for .type, no support for downlinkMax, no support for .onchange
Android (Chrome): Support for .type, support for downlinkMax when type != WiFi, support for .onchange when .type != WiFi (or when transiting WiFi & non-Wifi)
Android (WebView): Unclear at present, but expected to be similar to Chrome. There appears to be a caveat that WebView functionality is tied to the hosting app's permissions within Android (e.g. if it has the WiFi permission, then it supports .downlinkMax for all types, and supports .onchange)
ChromeOS: Support for .type, support for downlinkMax when type != WiFi, support for onchange when type != WiFi (or transiting WiFi & non-WiFi)

Linux: Partial support for .type (Wifi, ethernet, none), no support for downlinkMax, support for onchange for .type changes only
Mac: Partial support for .type (Wifi, ethernet, none), no support for downlinkMax, support for onchange for .type changes only
Win: Partial support for .type (Wifi, ethernet, none), no support for downlinkMax, support for onchange for .type changes only

And with the further caveat that support for .type on some platforms (Mac & Windows, I believe - not sure the situation on Windows) may be inaccurate and is based on Chrome-reverse heuristics (e.g. I'm thinking of the NetworkChangeNotifier's behaviour on Mac, which is based on polling system interfaces on a timer, and I thought Windows gave us *too* much signal and we had to best-guess)

Elliott Sprehn

Oct 1, 2015, 2:48:14 PM
to Ryan Sleevi, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, Ilya Grigorik, blink-dev
On Thu, Oct 1, 2015 at 2:42 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
So to make sure I'm summarizing correctly:

iOS: No support for .type, no support for downlinkMax, no support for .onChange


I think iOS could theoretically support .type; AFNetworkReachabilityStatus can report if you're on wifi or cellular, but there's no way to report downlinkMax without system framework level changes.

- E

Chris Harrelson

Oct 1, 2015, 4:29:28 PM
to Josh Karlin, blink-dev, igri...@chromium.org, Owen
Josh: Regarding the potential privacy issue: how about we modify the Intent to restrict usage to secure contexts? I chatted offline with Ilya to confirm this doesn't break known use cases, and he thinks it's reasonable.

Chris

Ilya Grigorik

Oct 1, 2015, 4:59:29 PM
to Elliott Sprehn, Ryan Sleevi, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev

On Thu, Oct 1, 2015 at 11:47 AM, Elliott Sprehn <esp...@chromium.org> wrote:
I think iOS could theoretically support .type; AFNetworkReachabilityStatus can report if you're on wifi or cellular, but there's no way to report downlinkMax without system framework level changes.

iOS does allow you to get more detailed interface data via CTTelephonyNetworkInfo. Here's a list of exposed values on Android + iOS:

Ilya Grigorik

Oct 1, 2015, 5:19:08 PM
to Elliott Sprehn, Ryan Sleevi, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.

"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where I confusion is. My claim that we're NQE-compatible is based on "or maximum downlink speed changes" reading of above sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes to "underlying connection type". 

"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

ig

Ryan Sleevi

unread,
Oct 1, 2015, 5:30:51 PM10/1/15
to Ilya Grigorik, Elliott Sprehn, Ryan Sleevi, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev
On Thu, Oct 1, 2015 at 2:18 PM, Ilya Grigorik <igri...@google.com> wrote:
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.

"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where I confusion is. My claim that we're NQE-compatible is based on "or maximum downlink speed changes" reading of above sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes to "underlying connection type".  

"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

I think it's still unclear whether or not we want the singular value to be NQE-compatible. You've put forward the argument it can be (with slight spec changes), but from Alex's remarks, and from Yoav's, and from mine, I think there's some concern that making it NQE compatible makes it less appealing to developers.

Yoav's suggestion for exposing NQE related data seems more flexible, in that it detaches the API from implicit assumptions that:
- A given request transits a single connection (this is accomplished by virtue of time-interval reporting for a request)
- That there is a singular connection or network 'overall' (the surface of the API being a singular value)
- Possibility to expose, if necessary, additional network-related information in a way that makes sense (e.g. latency)
 
If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

 I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "If you kick off this request, right now, this is what you'd get", which isn't necessarily the case, even in multi-path.

That said, of the two proposals you've put forth (e.g. disregarding Yoav), getting rid of 'type' (in general and from onchange) would be a positive step towards providing necessary platform flexibility and intelligence.

Ilya Grigorik

unread,
Oct 1, 2015, 6:24:52 PM10/1/15
to Ryan Sleevi, Elliott Sprehn, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev
On Thu, Oct 1, 2015 at 2:30 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
On Thu, Oct 1, 2015 at 2:18 PM, Ilya Grigorik <igri...@google.com> wrote:
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.

"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where I confusion is. My claim that we're NQE-compatible is based on "or maximum downlink speed changes" reading of above sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes to "underlying connection type".  

"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

I think it's still unclear whether or not we want the singular value to be NQE-compatible. You've put forward the argument it can be (with slight spec changes), but from Alex's remarks, and from Yoav's, and from mine, I think there's some concern that making it NQE compatible makes it less appealing to developers.

Perhaps the missing bit here is the "level of confidence" of the provided estimate? As in, in the absence of any historical information we have to fall back to the interface, which will give us the upper bound, but we're also unlikely to reach that bound (low confidence). Then, once we have some historical data we can refine that estimate and offer a higher-confidence estimate?

Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?
 
Yoav's suggestion for exposing NQE related data seems more flexible, in that it detaches the API from implicit assumptions that:
- A given request transits a single connection (this is accomplished by virtue of time-interval reporting for a request)
- That there is a singular connection or network 'overall' (the surface of the API being a singular value)
- Possibility to expose, if necessary, additional network-related information in a way that makes sense (e.g. latency) 

Hmm, I don't really see the functional difference between, ~connection.on<something> vs Performance Timeline delivery: both emit events, and these events can carry arbitrary attributes (Mbps estimate, RTT, confidence, etc)...? That said, we are positioning perf timeline + observer as the common primitive to surface perf-related data.. emitting 'network' events via the same mechanism does seem reasonable.
 
 If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

 I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "If you kick off this request, right now, this is what you'd get", which isn't necessarily the case, even in multi-path.

We have the same issue regardless of how we deliver the event (via on<something> or Perf Timeline). Developers will cache the last seen value and use it as a directional signal to modify what and how they fetch. And on that note, regardless of how we deliver the change events we do need to provide "connection.downlinkMax" that can be queried on demand.. Without that you can't get an estimate during the initial load sequence of the page, which is an important use case for this API.
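For reference, the query-on-demand plus change-subscription pattern under discussion can be sketched roughly as follows (a sketch against the draft API; `navigator.connection` is only present on supporting platforms, and falling back to Infinity for "unknown" is my assumption, mirroring the spec's "unbounded" value):

```javascript
// Sketch: read downlinkMax once during the initial load sequence,
// then subscribe to change events for later adjustments.
function currentDownlinkMax() {
  const conn = typeof navigator !== 'undefined' ? navigator.connection : undefined;
  // Fall back to Infinity ("unknown/unbounded") when the API is unavailable.
  return conn && typeof conn.downlinkMax === 'number' ? conn.downlinkMax : Infinity;
}

function watchConnection(onUpdate) {
  const conn = typeof navigator !== 'undefined' ? navigator.connection : undefined;
  onUpdate(currentDownlinkMax()); // initial load: query on demand
  if (conn && 'addEventListener' in conn) {
    // Later changes arrive via the change event.
    conn.addEventListener('change', () => onUpdate(currentDownlinkMax()));
  }
}
```

On platforms without the API, `watchConnection` still invokes the callback once with the fallback value, so calling code has a single path.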

ig

Matt Menke

unread,
Oct 1, 2015, 6:45:53 PM10/1/15
to Ilya Grigorik, Ryan Sleevi, Elliott Sprehn, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev
On Thu, Oct 1, 2015 at 6:24 PM, 'Ilya Grigorik' via net-dev <net...@chromium.org> wrote:


On Thu, Oct 1, 2015 at 2:30 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
On Thu, Oct 1, 2015 at 2:18 PM, Ilya Grigorik <igri...@google.com> wrote:
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.

"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where I confusion is. My claim that we're NQE-compatible is based on "or maximum downlink speed changes" reading of above sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes to "underlying connection type".  

"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

I think it's still unclear whether or not we want the singular value to be NQE-compatible. You've put forward the argument it can be (with slight spec changes), but from Alex's remarks, and from Yoav's, and from mine, I think there's some concern that making it NQE compatible makes it less appealing to developers.

Perhaps the missing bit here is the "level of confidence" of the provided estimate? As in, in the absence of any historical information we have to fall back to the interface, which will give us the upper bound, but we're also unlikely to reach that bound (low confidence). Then, once we have some historical data we can refine that estimate and offer a higher-confidence estimate?

Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?

I'm having a hard time imagining how someone would actually use that information productively.
 
Yoav's suggestion for exposing NQE related data seems more flexible, in that it detaches the API from implicit assumptions that:
- A given request transits a single connection (this is accomplished by virtue of time-interval reporting for a request)
- That there is a singular connection or network 'overall' (the surface of the API being a singular value)
- Possibility to expose, if necessary, additional network-related information in a way that makes sense (e.g. latency) 

Hmm, I don't really see the functional difference between, ~connection.on<something> vs Performance Timeline delivery: both emit events, and these events can carry arbitrary attributes (Mbps estimate, RTT, confidence, etc)...? That said, we are positioning perf timeline + observer as the common primitive to surface perf-related data.. emitting 'network' events via the same mechanism does seem reasonable.
 
 If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

 I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "If you kick off this request, right now, this is what you'd get", which isn't necessarily the case, even in multi-path.

We have the same issue regardless of how we deliver the event (via on<something> or Perf Timeline). Developers will cache the last seen value and use it as a directional signal to modify what and how they fetch. And on that note, regardless of how we deliver the change events we do need to provide "connection.downlinkMax" that can be queried on demand.. Without that you can't get an estimate during the initial load sequence of the page, which is an important use case for this API.

ig

--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CADXXVKpjt_RU%3De2jxqUaW9MHPc1GjqefJig22P2%3DWbrne6VovA%40mail.gmail.com.

Josh Karlin

unread,
Oct 2, 2015, 8:31:37 AM10/2/15
to Chris Harrelson, blink-dev, igri...@chromium.org, Owen
So secure contexts for which features? downlinkMax? type? onchange? All of navigator.connection?

Chris Harrelson

unread,
Oct 2, 2015, 12:50:51 PM10/2/15
to Josh Karlin, blink-dev, igri...@chromium.org, Owen
Just the new features maybe? (downlinkMax and onchange). Otherwise it wouldn't be web compatible... I do agree this would be pretty weird though.

Ben Greenstein

unread,
Oct 2, 2015, 1:23:12 PM10/2/15
to Matt Menke, Ilya Grigorik, Ryan Sleevi, Elliott Sprehn, Josh Karlin, net-dev, Owen, Yoav Weiss, Marcos Caceres, Simon Pieters, blink-dev
On Thu, Oct 1, 2015 at 3:45 PM Matt Menke <mme...@chromium.org> wrote:
On Thu, Oct 1, 2015 at 6:24 PM, 'Ilya Grigorik' via net-dev <net...@chromium.org> wrote:


On Thu, Oct 1, 2015 at 2:30 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
On Thu, Oct 1, 2015 at 2:18 PM, Ilya Grigorik <igri...@google.com> wrote:
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.

"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where I confusion is. My claim that we're NQE-compatible is based on "or maximum downlink speed changes" reading of above sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes to "underlying connection type".  

"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

I think it's still unclear whether or not we want the singular value to be NQE-compatible. You've put forward the argument it can be (with slight spec changes), but from Alex's remarks, and from Yoav's, and from mine, I think there's some concern that making it NQE compatible makes it less appealing to developers.

Perhaps the missing bit here is the "level of confidence" of the provided estimate? As in, in the absence of any historical information we have to fall back to the interface, which will give us the upper bound, but we're also unlikely to reach that bound (low confidence). Then, once we have some historical data we can refine that estimate and offer a higher-confidence estimate?

Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?

I'm having a hard time imagining how someone would actually use that information productively.

I agree with Matt here. We went back and forth when designing NQE on exposing confidence and dropped it because we couldn't imagine how it would be used.

On the topic of how this stuff will be used, I remember years ago considering an interface that provided a small number of discrete values for network conditions, e.g., "barely functional", "slow and steady", "fast enough", and "amazingly fast", with latency/bw that more or less mapped to 2G, 3G, 4G/broadband, fiber. Whatever happened to those discussions? I worry that the only developers that are sophisticated enough to use actual BW or latency measurements are those that can implement their own network quality estimators.
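The coarse buckets Ben describes could be approximated on top of downlinkMax; a hypothetical mapping (bucket names from his message, Mbps thresholds are illustrative guesses loosely following the spec's table of per-technology maximums):

```javascript
// Illustrative mapping from a downlinkMax value (Mbps, per the spec) to the
// coarse buckets described above. Thresholds are hypothetical, not normative.
function networkBucket(downlinkMaxMbps) {
  if (downlinkMaxMbps <= 0.384) return 'barely-functional'; // ~2G/EDGE and below
  if (downlinkMaxMbps <= 2)     return 'slow-and-steady';   // ~3G
  if (downlinkMaxMbps <= 100)   return 'fast-enough';       // ~4G/broadband
  return 'amazingly-fast';                                  // fiber / unknown (Infinity)
}
```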
 
 
Yoav's suggestion for exposing NQE related data seems more flexible, in that it detaches the API from implicit assumptions that:
- A given request transits a single connection (this is accomplished by virtue of time-interval reporting for a request)
- That there is a singular connection or network 'overall' (the surface of the API being a singular value)
- Possibility to expose, if necessary, additional network-related information in a way that makes sense (e.g. latency) 

Hmm, I don't really see the functional difference between, ~connection.on<something> vs Performance Timeline delivery: both emit events, and these events can carry arbitrary attributes (Mbps estimate, RTT, confidence, etc)...? That said, we are positioning perf timeline + observer as the common primitive to surface perf-related data.. emitting 'network' events via the same mechanism does seem reasonable.
 
 If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

 I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "If you kick off this request, right now, this is what you'd get", which isn't necessarily the case, even in multi-path.

We have the same issue regardless of how we deliver the event (via on<something> or Perf Timeline). Developers will cache the last seen value and use it as a directional signal to modify what and how they fetch. And on that note, regardless of how we deliver the change events we do need to provide "connection.downlinkMax" that can be queried on demand.. Without that you can't get an estimate during the initial load sequence of the page, which is an important use case for this API.

ig


David Benjamin

unread,
Oct 2, 2015, 1:35:08 PM10/2/15
to Josh Karlin, Chris Harrelson, blink-dev, igri...@chromium.org, Owen
I don't think secure contexts would do anything to address whatever privacy concerns there may be (without commenting on the concerns themselves; I haven't swapped in all of this thread and leave that to the rest of you all :-) ).

Secure contexts isn't some peace offering to throw at us security-minded folks to appease us. :-) There's a principled reason to require secure contexts for certain kinds of features, and that is the user is making a decision about an origin. If we as a browser ask the user "is it okay for example.com to access the webcam/your location/whatever", then it's our prerogative to make sure we can act on that question. That is, we need to do it only for origins that we have any reason to believe are actually "example.com".

But in this case, we're not asking the user "is it okay for example.com to learn this or that information about your network". We're proposing to just give origins blanket access to this information. That an origin is secure doesn't mean it is *trustworthy* enough to get this information (again, I'm not commenting on whether this particular information should be treated as sensitive). It's necessary, but not sufficient.

(Trustworthy we get from the user. If the user typed a password into a page, we assume the user trusts the origin with the password. If the user said yes, it can have my location, we assume the user trusts the origin with the location. Etc.)

David

Rick Byers

unread,
Oct 2, 2015, 5:55:36 PM10/2/15
to David Benjamin, Josh Karlin, Chris Harrelson, blink-dev, igri...@chromium.org, Owen
Could we make this debate more concrete by involving a potential customer with deep experience in this space?  Eg. I imagine there are people at YouTube who are very familiar with a lot of these trade-offs and how valuable knowing the downlink speed is for them in practice.  I'd much rather we make a decision on whether or not we ship this API based on the deep experience of customers who have already been relying on similar signals on other platforms (eg. Android).

From my perspective, if YouTube (or any other big customer with a lot of experience in this space) can explain why max downlink speed is the signal they need, then we shouldn't let philosophical debates hold us back from unblocking such customers now (although privacy is of course still a potentially blocking issue).  Debate about the future of this API can continue in the context of the working group or other small specialist group, not as part of an intent-to-ship when it's really quite late to be redesigning the API (and designing it by large committee).

Also, all things being relatively equal, we should err on the side of the lowest level primitive that we can reasonably implement across platforms (eg. for extensible web reasons).  Presumably sophisticated customers will want to iterate on their own NQE algorithms in application-specific ways anyway, so we should aim to give them the same low-level inputs available to them on other platforms.
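As one illustration of "iterate on their own NQE algorithms", an application could seed an exponentially weighted moving average with downlinkMax and refine it from observed transfer throughput (a sketch; the class, the smoothing factor, and the sample source are all assumptions, not part of the proposed API):

```javascript
// Sketch of an application-level estimator: start from the platform's
// downlinkMax upper bound, then blend in measured throughput samples
// (e.g. derived from Resource Timing entries) with an EWMA.
class BandwidthEstimator {
  constructor(downlinkMaxMbps, alpha = 0.3) {
    this.estimate = downlinkMaxMbps; // optimistic seed from the API
    this.alpha = alpha;              // weight given to each new sample
  }
  addSample(observedMbps) {
    if (!Number.isFinite(this.estimate)) {
      this.estimate = observedMbps;  // Infinity seed: first sample replaces it
    } else {
      this.estimate = this.alpha * observedMbps + (1 - this.alpha) * this.estimate;
    }
    return this.estimate;
  }
}
```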
 
Rick

Ilya Grigorik

unread,
Oct 2, 2015, 7:31:30 PM10/2/15
to Rick Byers, David Benjamin, Josh Karlin, Chris Harrelson, blink-dev, Owen
On Fri, Oct 2, 2015 at 10:22 AM, Ben Greenstein <be...@chromium.org> wrote:
Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?
I'm having a hard time imagining how someone would actually use that information productively.
I agree with Matt here. We went back and forth when designing NQE on exposing confidence and dropped it because we couldn't imagine how it would be used.

Fair enough, wanted to float the idea. 
 
On the topic of how this stuff will be used, I remember years ago considering an interface that provided a small number of discrete values for network conditions, e.g., "barely functional", "slow and steady", "fast enough", and "amazingly fast", with latency/bw that more or less mapped to 2G, 3G, 4G/broadband, fiber. What ever happened to those discussions? I worry that the only developers that are sophisticated enough to use actual BW or latency measurements are those that can implement their own network quality estimators.

This is exactly what Yoav proposed earlier; see the earlier replies for why that doesn't work in the long-term (and the short-term, really).

On Fri, Oct 2, 2015 at 2:55 PM, Rick Byers <rby...@chromium.org> wrote:
Could we make this debate more concrete by involving a potential customer with deep experience in this space?  Eg. I imagine there are people at YouTube who are very familiar with a lot of these trade offs and how valuable knowing the downlink speed is for them in practice.  I'd much rather we make a decision on whether or not we ship this API based on the deep experience of customers who have already been relying on similar signals on other platforms (eg. Android).

From my perspective, if YouTube (or any other big customer with a lot of experience in this space) can explain why max downlink speed is the signal they need, then we shouldn't let philosophical debates hold us back from unblocking such customers now (although privacy is of course still a potentially blocking issue). 

I'll ping some folks and see if they can chime in. 

FWIW, many large-ish teams have already built homegrown ~NQE estimators that measure handshake RTTs on the server, identify and classify visitors by IP subnet or carrier, etc. However, these approaches are expensive to implement (e.g. building and maintaining a subnet map is a non-trivial exercise; instrumenting edge servers is not an option for most), and have many edge cases where they fail (e.g. RTTs are often misleading due to TCP proxies in carrier networks; proxy browsing, and so on). 

The intent behind downlinkMax is to (a) enable existing teams to improve their implementations by removing a lot of the guesswork that they have to do today, and (b) to lower the barrier for the rest of the developers who are not able to instrument servers, maintain subnet maps, etc. And, to be clear, downlinkMax is not the end-all for this space... it's a building block that enables developers to build their own, smarter ~NQE implementations.
 
Also, all things being relatively equal, we should err on the side of the lowest level primitive that we can reasonably implement across platforms (eg. for extensible web reasons).  Presumably sophisticated customers will want to iterate on their own NQE algorithms in application-specific ways anyway, so we should aim to give them the same low-level inputs available to them on other platforms.

Yes, exactly. This is a _hard_ space with no obvious answer, which is why I err on the side of providing low-level data to enable developers to experiment and innovate on their own terms. 

---- 

Taking a step back here.. There is good feedback in this thread and we should address it in appropriate forums:

1) Clarify onchange + NQE interaction; investigate if Performance Timeline is a better mechanism to deliver these notifications. Ryan, Yoav (and anyone else who's interested :)), let's take this discussion here: https://github.com/w3c/netinfo/issues/27

2) Think through privacy + security implications: https://github.com/w3c/netinfo/issues/26.. David, thanks for the feedback, I might pull you in for this one :)

I'll ping this thread once we have these resolved.

ig

ericschur...@gmail.com

unread,
Oct 3, 2015, 8:06:29 AM10/3/15
to blink-dev, davi...@chromium.org, jka...@google.com, chri...@chromium.org, igri...@chromium.org, owe...@chromium.org
>Could we make this debate more concrete by involving a potential customer with deep experience in this space?

Hi there - customer checking in here. I work on a lot of sites for Amazon.com, and there are many different use cases for which an understanding of the network from the browser would be very helpful. Some of the most important are in developing countries like India. 

Ideally our servers would be told in the initial request headers about the quality of the network (like in clientHints). This would allow us to construct on the server a page that's reasonable. In the case of a poor 2G connection, fetching a typical mobile page and its assets can take literally many minutes.

Within javascript on the client we're interested both in the current quality and significant changes. If a user's browser loses the network entirely, we want to know. If they've dropped from HSPA to EDGE (or CDMA), we want to behave differently. If they're on LTE but the network is only operating at 50kbps, we'd like to know that if possible (though if we only got downlinkMax, it would still be much better than today).

The techniques for estimating throughput and round trip times from the browser through javascript are limited, don't work on initial requests, take significant time to return a reasonably accurate response, and take customer resources (network, battery, CPU, etc). 

Wifi 3G/4G dongles may be an interesting use case to consider. This is a scenario where the network conditions (which reflect 3G/4G) are very different from the actual type of connection (wifi).

-Eric Schurman
Amazon.com

Yoav Weiss

unread,
Oct 4, 2015, 3:29:06 AM10/4/15
to ericschur...@gmail.com, blink-dev, David Benjamin, Josh Karlin, Chris Harrelson, Ilya Grigorik, Owen
On Sat, Oct 3, 2015 at 2:06 PM, <ericschur...@gmail.com> wrote:
>Could we make this debate more concrete by involving a potential customer with deep experience in this space?

Hi there - customer checking in here. I work on a lot of sites for Amazon.com, and there are many different use cases for which an understanding of the network from the browser would be very helpful. Some of the most important are in developing countries like India. 

 
Hi Eric, thanks for chiming in! :) 
 
Ideally our servers would be told in the initial request headers about the quality of the network (like in clientHints). This would allow us to construct on the server a page that's reasonable. In the case of a poor 2G connection, fetching a typical mobile page and its assets can take literally many minutes.

Within javascript on the client we're interested both in the current quality and significant changes. If a user's browser loses the network entirely, we want to know. If they've dropped from HSPA to EDGE (or CDMA), we want to behave differently. If they're on LTE but the network is only operating at 50kbps, we'd like to know that if possible (though if we only got downlinkMax, it would still be much better than today).

Is there any particular reason that you're interested in network changes other than the effective bandwidth you would expect the user to have?
Can you describe how you would use downlinkMax to help you serve better experiences to your users? 

ericschur...@gmail.com

unread,
Oct 4, 2015, 7:04:55 AM10/4/15
to blink-dev, ericschur...@gmail.com, davi...@chromium.org, jka...@google.com, chri...@chromium.org, igri...@chromium.org, owe...@chromium.org
>Is there any particular reason that you're interested in network changes other than the effective bandwidth you would expect the user to have?
A few things: 
* It's helpful to track cases where there's no connection - in the case of a page transition, if there's no network you probably want to show messaging to the customer about that, rather than navigating. In the case of things like Ajax requests, we could avoid making them in the first place and avoid any sort of queuing of retries. 
* It's not just bandwidth. If I can tell someone is on a 2G connection I have a strong signal that their RTTs may be very high. Given this, I may try to design the page to have fewer separate connections - to inline or combine content into fewer requests to avoid round trips. 

>Can you describe how you would use downlinkMax to help you serve better experiences to your users? 
Here's a list of some:
* deliver a lighter view of the page - a more text-centric one to 2G customers (or to those with downlinkMax below a certain level)
* choose a different image to send - either different dimensions, resolution, quality, or compression technique. This might be a bigger or smaller image depending on the info in downlinkMax.
* choose a video oriented feature over a static image based one
* we can use it to refine other models for predicting network performance
* delay fetching some content until a user interaction if we know content fetches are fast
* send data at the bottom of the HTML of the page rather than depending on an AJAX request, if we know the connection is slow

Those are just a few off the top of my head. Is that helpful?
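[Several of the use cases above reduce to picking a response variant from a coarse bandwidth signal; a hypothetical sketch, with variant names and Mbps thresholds invented purely for illustration:]

```javascript
// Hypothetical variant selection driven by downlinkMax (Mbps), in the spirit
// of the list above: lighter, lazier assets on slow links. File names and
// thresholds are illustrative only.
function chooseImageVariant(downlinkMaxMbps) {
  if (downlinkMaxMbps <= 0.384) return { src: 'photo-320.webp', lazy: true }; // ~2G
  if (downlinkMaxMbps <= 5)     return { src: 'photo-768.webp', lazy: true };
  return { src: 'photo-1600.webp', lazy: false };
}
```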

-Eric

Yoav Weiss

unread,
Oct 4, 2015, 3:35:29 PM10/4/15
to ericschur...@gmail.com, blink-dev, David Benjamin, Josh Karlin, Chris Harrelson, Ilya Grigorik, Owen
On Sun, Oct 4, 2015 at 1:04 PM, <ericschur...@gmail.com> wrote:
>Is there any particular reason that you're interested in network changes other than the effective bandwidth you would expect the user to have?
A few things: 
* It's helpful to track cases where there's no connection - in the case of a page transition, if there's no network you probably want to show messaging to the customer about that, rather than navigating. In the case of things like Ajax requests, we could avoid making them in the first place and avoid any sort of queuing of retries. 

Isn't window.onoffline enough to serve that use-case? Or you're interested in tackling the "Lie-Fi" case? For the latter, I'm afraid "downlinkMax" as defined will do you no good.

* It's not just bandwidth. If I can tell someone is on a 2G connection I have a strong signal that their RTTs may be very high. Given this, I may try to design the page to have fewer separate connections - to inline or combine content into fewer requests to avoid round trips. 

I'd argue that HTTP/2 will make this less of an issue, and for HTTP/1.x you want to minimize your RTTs regardless of network.
But I get your point. Still, downlinkMax doesn't give you that unless you go the "infer the network type from the max bandwidth" route.
  

>Can you describe how you would use downlinkMax to help you serve better experiences to your users? 
Here's a list of some:
* deliver a lighter view of the page - a more text-centric one to 2G customers (or to those with downlinkMax below a certain level)

I agree downlinkMax would give you that, if you use it with "if (downlinkMax <= 384)"-style logic (which is equivalent to an "is the user on EDGE" boolean).
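
Hooked up to the connection's change event, that boolean could drive a live toggle. A sketch using a mock connection object so it runs outside a browser (in a page you would pass navigator.connection; the 0.384 threshold assumes the spec draft's megabit-per-second units for downlinkMax):

```javascript
// Sketch: re-evaluate an "is the user on an EDGE-class link" boolean whenever
// the connection changes. `connection` mirrors the assumed shape of
// navigator.connection (a downlinkMax attribute plus a 'change' event).
function watchEdgeClass(connection, onChange) {
  const evaluate = () => onChange(connection.downlinkMax <= 0.384);
  connection.addEventListener('change', evaluate);
  evaluate(); // classify the initial state too
}

// Demo with a minimal mock outside a browser:
const listeners = [];
const mock = {
  downlinkMax: 10,
  addEventListener(type, fn) { listeners.push(fn); },
};
let slow;
watchEdgeClass(mock, (isSlow) => { slow = isSlow; });
console.log(slow); // false
mock.downlinkMax = 0.1;          // network downgraded
listeners.forEach((fn) => fn()); // simulate the 'change' event firing
console.log(slow); // true
```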
 
* choose a different image to send - either different dimensions, resolution, quality, or compression technique. This might be a bigger or smaller image depending on the info in downlinkMax.

IIUC, what's interesting here is mostly an estimate of the expected effective bandwidth, which downlinkMax doesn't give you.
 
* choose a video-oriented feature over a static-image-based one
 
Same as above
 
* we can use it to refine other models for predicting network performance

OK
 
* delay fetching some content until a user interaction if we know content fetches are fast

Again, downlinkMax doesn't give you that
 
* send data at the bottom of the HTML of the page rather than depending on an AJAX request, if we know the connection is slow

Yeah, downlinkMax can give you that with "is the user on EDGE" logic, similar to the above.
 

Those are just a few off the top of my head. Is that helpful?

Yes, very helpful. I hope we'd be able to expose an API that enables tackling these use-cases.

Jochen Eisinger

Oct 5, 2015, 10:05:37 AM10/5/15
to David Benjamin, Josh Karlin, Chris Harrelson, blink-dev, igri...@chromium.org, Owen
On Fri, Oct 2, 2015 at 7:35 PM David Benjamin <davi...@chromium.org> wrote:
I don't think secure contexts would do anything to address whatever privacy concerns there may be (without commenting on the concerns themselves; I haven't swapped in all of this thread and leave that to the rest of you all :-) ).


The secure context requirement is not meant to appease anyone. We (API owners) acknowledge that this information falls into the category of potentially sensitive device-sensor-related information, and as such it should only be exposed to secure contexts.

This doesn't automatically solve privacy concerns or security concerns, but that was also not implied.

best
-jochen