Contact emails
jka...@google.com, igri...@chromium.org, owe...@chromium.org
Spec
https://w3c.github.io/netinfo/
Summary
Developers have expressed interest in knowing the network subtype (2g vs 3g) as well as the general category (wifi vs cellular). navigator.connection.type shows the category, but not the subtype. Rather than exposing an ever-changing enum of subtypes, the downlinkMax attribute exposes the theoretical maximum bandwidth supported by the current network subtype, via navigator.connection.downlinkMax.
To use the demo, run Android dev channel and enable the experimental web platform features flag. Note that WiFi on Android reports Infinity for downlinkMax, because Chrome recently dropped the permission required to get WiFi linkSpeed. Cellular works well, however.
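For illustration, a minimal sketch of reading both attributes from a page, assuming a browser that exposes navigator.connection (behind the flag mentioned above); availability and reported values vary by platform:

```javascript
// Minimal sketch: read the general category and the theoretical ceiling.
// downlinkMax is specified in megabits per second and may be +Infinity
// when the UA cannot (or will not) report a ceiling.
const connection = navigator.connection;
if (connection) {
  console.log('type:', connection.type);               // e.g. "cellular", "wifi"
  console.log('downlinkMax:', connection.downlinkMax); // e.g. 2, 100, Infinity
}
```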
Compatibility Risk
Mozilla already has navigator.connection on FFOS and has expressed interest in adding downlinkMax sometime next year. The spec was written in collaboration with Mozilla's Marcos Cáceres.
Wouldn't exposing an estimate (such as the one that the Network Quality Estimator would provide) be more beneficial and less prone to abuse?
+net-dev

Would it be better (for the spec) to be honest that downlinkMax is unknown rather than pretending to be able to saturate the connection technology? That seems less likely to cause trouble. If user agents ship with inaccurate values, authors may never use this feature, or may have to blacklist values in the Table of maximum downlink speeds as invalid.
We know that the values in the Table of maximum downlink speeds are going to be very inaccurate in some cases. For example, WiFi speeds are theoretical, and GigE connected to a DSL modem could be off by orders of magnitude.
On Monday, September 28, 2015 at 3:55:41 PM UTC-7, David Dorwin wrote:

I'm also quite nervous about the privacy implications of such an API. Knowing that a user's downlinkMax is changing can reveal information about what the user is doing (e.g., assuming we get it accurate, via NQE) or where the user is going. Watching the user transition from cellular to WiFi, for example, may reveal when a user leaves an office building, and watching the user transition from cellular 2G to cellular 3G to cellular 2G, along with other ambient sensors on the device (such as accelerometers, via http://w3c.github.io/deviceorientation/spec-source-orientation.html ), could reveal a user's movements as they transition between cell towers or move around a city.
While I realize I may be articulating poorly, there's a sense of dread with this API that...
As noted in the Intent to Ship, we have to lie about the Wifi speed because Chrome itself needs a permission from the Android OS to figure this out - a strong signal that perhaps this is too powerful to just expose out there.
The choice of words matters a great deal here. We're not "lying", nor are we pretending that we can saturate the link. As with any network "weather prediction" algorithm, you take as many signals as you have access to, and you make the best of them: sometimes all you have is the type of interface you're on; sometimes you also have quality signals from the interface; sometimes you also have historical data; <insert other inputs here>. The resulting value is not a guarantee of performance, nor is it advertised as such; the resulting value is a best-effort estimate for the ceiling on your throughput.
What we're shipping here is the base case (interface type only as the input to the above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimates, and the current implementation is a strict improvement on what we offer developers today... which is exactly nothing.
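As a rough illustration of that base case (not the spec's actual table; the numbers below are placeholders), the UA-side logic amounts to a static lookup keyed on the interface subtype, with unknown interfaces reported as Infinity:

```javascript
// Hypothetical sketch of the "interface type only" base case: a static
// lookup from subtype to a ceiling in Mbps. The values are illustrative
// placeholders, not the spec's Table of maximum downlink speeds.
const ILLUSTRATIVE_CEILING_MBPS = {
  edge: 0.384, // 2G-class
  umts: 2,     // 3G-class
  lte: 100,    // 4G-class
};

function downlinkMaxFor(subtype) {
  // Unknown or unlisted interfaces are treated as "fast" (Infinity) rather
  // than guessed at, matching the behaviour described in this thread.
  return subtype in ILLUSTRATIVE_CEILING_MBPS
      ? ILLUSTRATIVE_CEILING_MBPS[subtype]
      : Infinity;
}
```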
On Sep 28, 2015 9:45 PM, "Ilya Grigorik" <igri...@google.com> wrote:
>
> the resulting value is a best-effort estimate for the ceiling on your throughput.
Of course, when best effort varies across vendors, as it necessarily will with any network weather prediction, you end up with a host of non-determinism such that authors can't safely or reliably use the API without also examining the source of the prediction (the user agent), and sniffing is undesirable.
> What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.
It isn't that there is a goal of complete information; the issue is the quality of information - and unfortunately, the quality is not constant over time or across UAs. We could speak in terms of false positives and false negatives - both of which are lies from the POV of an author trusting the UA - as the way we measure quality. And the inconsistency over time and across UAs is that as UAs implement new methods, their accuracy goes up - and the perceived accuracy of every UA not implementing them goes down.
> We need to provide this information to developers to enable them to build optimized experiences for the wide range of deployed networks in the real world. Not knowing that you're on 2G or slow 2G+ network is exactly why we see so many 60s+ page load times in many markets.
This last statement feels quite disingenuous, as knowing link speed is certainly not among the high-order bits of mobile performance failures. I don't disagree that some authors feel they could craft better experiences if they had more knowledge, but I disagree with the urgency of it, and with its relative importance in explaining the awfulness of the mobile web experience - the worst offenders are explicitly not taking advantage of the other tools we have provided them, and there's no reason to believe they would take advantage of this if offered.
The good actors are already within user tolerances, so yes, it can help improve the user experience at the high end of responsible mobile development, but we can't argue it will help those sitting in the middle or long tail of the mobile web.
I also take issue with the proposed solution; I'm all for extending the web forward with the lowest-level primitives we can afford, but I think there are some low levels we shouldn't expose, because not all platforms are capable of going that low, and because of the inherent escalation of security/privacy risks the lower you go. This API is a prime example of how tricky these concerns are.
So, knowing that, and knowing the issues, this seems like a case for exposing the right primitives to authors - primitives that let the user agent intervene based on the confidence intervals for network quality that the UA knows users will accept - rather than forcing authors to independently rediscover that such decisions matter to users and end up having to UA-sniff to determine whether the quality of the signal is within users' tolerance thresholds.
>
> With the implementation we're shipping here the above condition would only trigger on ~2G connection types, and in the future may also be triggered on other connection types if we believe that the throughput is low due to <insert NQE reasons here>.
I wish I could believe the code would be structured that way, but having seen plenty of code on other platforms that deals with this, it usually ends up the inverse - testing for fast, not slow, and thus failing to adapt as new fast types come out. Just consider how the definition of 'mobile connection' has changed from 'carrier pigeon speeds' to 'faster than many users' home internet connections'.
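To make that failure mode concrete, a hedged sketch (loadLitePage and loadFullExperience are hypothetical helpers): enumerating what counts as "fast" today silently degrades every connection type that didn't exist when the code was written, while testing for "slow" fails open for unknown or future types.

```javascript
const conn = navigator.connection;

// The anti-pattern: enumerate what counts as "fast" today. New, faster
// connection types fall through to the degraded branch forever.
if (conn.type === 'wifi' || conn.type === 'ethernet') {
  loadFullExperience();
} else {
  loadLitePage(); // LTE, 5G, whatever comes next all land here
}

// Testing for "slow" instead fails open: unknown or future types get the
// full experience by default.
if (conn.downlinkMax <= 0.384) { // roughly 2G-class ceilings
  loadLitePage();
} else {
  loadFullExperience();
}
```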
> The (recent) requirement for the WiFi permissions bit is due to completely orthogonal issues with Chrome's upgrade process.
But it isn't really... If you want to have high confidence network signal quality on Android, you need a distinct permission. Regardless of how Chrome itself or WebView acquire this permission (its own special layer of platform inconsistency), the fact that 'trusted' code needs to acquire this permission is exactly why we shouldn't be exposing it to untrusted, hostile code, which is what we must assume all web code is in terms of risk.
The goal of an API like this - or arguably of most platform information APIs - shouldn't be to just expose the information; it should be to accomplish the user's or author's use case without revealing the information, or, when it's indirectly revealed (e.g. via timing behaviours, network fetches, GPU pixel values), to reveal as little of it as possible while still accomplishing the goal.
If we want to encourage developers to optimize for slow connections, for example, can we give them a way to express that desire without giving them the APIs to let them botch it (like testing for fast)? Declarative loading APIs rather than imperative?
Though I'm disagreeing on importance, I'm not disagreeing with some of the use cases - I'm just questioning whether alternate solutions may be more appropriate, given the UA interop concerns, the privacy concerns, and the implementation complexity concerns.
On Sep 29, 2015 10:18 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> This discussion is deeply disappointing.
>
Because you disagree with the conclusion? The technical arguments? Or the manner in which they're presented? It's hard to move the argument forward without knowing what your concerns are.
> Ryan: there's nothing new in your response from the previous versions of these debates. That you don't agree with developers who aren't you about the need for this information is also not new.
>
Alex: Please re-read my response, especially the closing remarks. I'm disappointed that you came away with a conclusion that is quite the opposite of what I suggested, and it feels as if you either didn't read my remarks or we failed to communicate. The former can only be solved by you, while the latter needs more feedback than "this is deeply disappointing".
> It's unclear how to move forward. Objecting to not providing information that's observable in other ways by side effect isn't a tenable position for platform API design. We don't allow it in other areas and shouldn't allow it here.
By this rationale, the fact that we exposed DeviceOrientation to all suggests we shouldn't treat geolocation as a powerful feature, and instead just allow everyone to have it.
I know this isn't your argument, and I know you'd react quite negatively to such a broad generalization, so you can understand and appreciate how negatively your broad generalization is received.
I offered concrete suggestions on a way to move forward, and Yoav offered significantly more developed arguments. If you disagree, it would help to explain why.
>
> I'd like to see this API ship ASAP. LGTM.
>
Alex, this feels somewhat hollow and frustrating. If you are not receptive to feedback, if you're not going to address feedback, and if you're going to ship something without considering alternatives, the consequences for privacy, or the platform implications, then it suggests that the Intent to Ship exercise is pointless.
I know you care deeply about platform health, and I'm somewhat surprised to see you supporting something that has so many deficiencies, especially when viable alternatives exist that meet the use cases, address the privacy issues, and reflect reality.
Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?
> Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?

navigator.connection shipped (for Android and ChromeOS) last year, and includes navigator.connection.type.
This intent is for adding navigator.connection.downlinkMax.
> This intent is for adding navigator.connection.downlinkMax.

And onchange, right?
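For reference, a hedged sketch of what the change event enables, assuming it ships alongside downlinkMax as discussed in this thread:

```javascript
// Re-evaluate loading strategy whenever the underlying connection changes
// (e.g. the device moves from cellular to WiFi).
navigator.connection.addEventListener('change', () => {
  const { type, downlinkMax } = navigator.connection;
  console.log(`connection changed: type=${type}, downlinkMax=${downlinkMax} Mbps`);
});
```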
Related: I wrote up (at length) my thoughts on ways to improve this specific API and others that can provide developers with info about the user's current conditions. Specifically, I propose that we expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.
If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.
On Sep 30, 2015 11:54 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> My concern has several aspects:
> The design of this feature is a compromise borne of a multi-year debate. Other obvious points in the design space have been explored and rejected (e.g., reflecting the actual radio connection type, the way Android's framework does)
That hasn't been a suggestion by either Yoav or myself, and for what it's worth, I would agree that is obviously incorrect.
> I'd assume the implementation complexity is a conversation that would have happened in code-review as this feature was developed. Odd to see it here.
That presumes every reviewer is globally aware of the implications, platform considerations, and ongoing efforts, which in the increasingly siloed Chrome codebase is simply not the case. I don't mean to suggest that authors or reviewers failed - simply that when we look at the holistic integration and exposure, considering the plenitude of platforms Chrome runs on, it is not unreasonable or unexpected that "implementation complexity" emerges as a concern about something that might be locally consistent.
The point remains, and is stated on the I2I, that the very behaviour of this feature across platforms is inconsistent, and necessarily so, because we build atop a variety of inconsistent platforms that expose different levels of fidelity. Much like we didn't simply expose "DirectX" to the web when 'everyone' was running Windows, and much like APIs such as WebGL have stripped the edges from OpenGL and necessitated complex solutions like ANGLE - yet are still unable to avoid developers needing to engage in GPU sniffing - we need to carefully evaluate what the lowest layer of reasonable abstraction is, given the variety of security models and APIs in play on the platforms Chrome builds on.
My objections here are that this is too low-level a detail for a user agent to reliably implement across a diversity of platforms (which we already know), _and_ that it necessarily precludes _advancements_ in the platform.
For example, the onchange event is unquestionably at odds with MPTCP and QUIC, which should give serious pause. The notions surfaced by onchange - that there is a singular egress interface, that it will flap, or that the UA will even know information about either the egress or ingress interfaces - are not solid notions, and they are being actively challenged along multiple axes on mobile platforms, as they have been on others (satellite and bonded DSL being two examples).
Further, the onchange event is, from Chrome's perspective, extremely unreliable. Eric Roman and I both _discourage_ people from writing native code in the browser that interacts with this notification, because of this necessary unreliability and the necessarily low fidelity, and because even among a collection of incredibly bright, motivated developers earnest to "do good code", it is still botched consistently, to the detriment of our users. I know you experienced this first hand at the Shenzhen TPAC due to the networking conditions there, and our ability and reliability have gotten worse, not better, as we expand across platforms and as the platforms we run on change, deprecate, and innovate.
I'm suggesting that the complexity here necessarily paints the browser into a corner, much like sync XHRs or unload-blocking alerts. From the "spec litigations", it is clear these concerns haven't been discussed in depth, and suggesting that we ignore feedback solely because it has "already been litigated" is to ignore the experience of those most familiar with the past, present, and future developments in this space.
> Given that this data is available by side-channel today, it's unclear what privacy impact there could possibly be.
It is simply not true that this is available by side channel at the fidelity proposed, much in the way that you have noted DeviceOrientation and Geolocation have different buckets of fidelity. It is this fidelity that should naturally give pause to the privacy-aware, and give rise to concern over whether we can even consistently offer the fidelity suggested.
> This feature is a compromise from many years of iteration. My reaction came from what appears to be late-stage re-litigation of a discussion which has been had many, many times in many other places. I'm questioning the need to re-discuss here when so many other discussions have already taken place on this topic.
Just because it was borne of compromise does not make it technically sound. XHTML2 was borne of a wide variety of compromises and spec litigation, as were XSL and XML-DSig, but that doesn't argue for their fitness.
I don't mean to dismiss the work of the many people active in this space, passionate for solutions, and the many discussions that have taken place, but you of all people are no doubt most aware that it is simply impossible to participate in all the standards discussions all the time, so it feels dismissive to suggest that "if you wanted to have a say, you should have known about this in the years before the I2S and spoken up then." Arguably, our I2S process is much like IETF or W3C last call - an attempt to get broad feedback from a variety of stakeholders after those most motivated have eked out compromises and solutions. However, that doesn't exempt it from critical, but hopefully constructive, review; rather, it should encourage it.
In that vein, you're hearing concerns from both Yoav and myself as to its fitness. I have concerns about Chrome's ability to implement this reliably across platforms, I have concerns about the privacy implications, and I have concerns that such a proposal is necessarily at odds with what multiple vendors are independently pursuing in an effort to enhance networking. Yoav has articulated far better than I the concerns around developers, API trust, and the implications that such an API shape has.
This isn't saying "you should never ship"; it is saying that it appears a number of concerns were not considered or weighed when developing the compromise of this API, that they materially affect both the implementation and the experience of the API, and that we should hold off shipping until we have taken the time to earnestly and thoughtfully weigh these considerations, and either declare we don't care, or adjust things if we do.
> NQE is a separate feature that I don't think should be added under the cover of the downlinkMax API as it will give us a significantly different view of the world. Having both is useful.
This position is considerably different than the one advanced during this I2S, so that too should give pause for consideration. I agree that shipping this is at odds with NQE - that is, that we can't retroactively integrate it - but that is precisely what is being proposed here. For that reason, we should be sure that we have agreement as to what the roadmap looks like, rather than haphazardly shipping and iterating, so that we can ensure we are releasing a web platform that is consistent, reasoned, and capable, especially if we will not be able to revisit this discussion for the many years that any deprecation would necessarily entail.
On Tue, Sep 29, 2015 at 4:10 AM, Yoav Weiss <yo...@yoav.ws> wrote:

> Related: I wrote up (at length) my thoughts on ways to improve this specific API and others that can provide developers with info about the user's current conditions. Specifically, I propose that we expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.

Phew, long read, thanks for putting it together! I'll try to respond to the overall themes...

I think you need to dig deeper into what you mean by "effective bandwidth" and why that's an elusive goal. I agree that it would be great to have an oracle that tells you precise bandwidth and RTT estimates for the _exact request_ you're about to make, but that's simply not practical. The user can be on a GigE network, have an HD stream going at full throttle, and simultaneously be downloading another resource at a trickle because the network path between them and that particular origin is slow. In fact, even if the user requests a resource from the same "fast" server, there are absolutely no guarantees on how quickly that response will come back -- e.g. one response is streamed from cache while another takes seconds to generate and trickles out bit by bit (note that this is effectively the same as your example of WiFi tethering).
Which is to say, claiming that we ought to aim to accurately predict "end-to-end / effective" bandwidth prior to making the request is not going to get you far. You're always working with incomplete information before you make the request, and the best you can do is account for what you know about your local network weather and extrapolate its implications for the end-to-end story -- e.g. you can never go faster than your slowest hop, so if your first hop is slow you know you will never exceed that data rate; you're sharing the link with multiple requests; and throughput is not just a function of the network path but also of server load and a dozen other variables.
On that note, it's also worth highlighting that NQE leverages the exact same data we're surfacing here -- see "OS-provided information" section -- to bootstrap its predictions, and then layers observed performance to further refine its estimates... Which is consistent with my earlier statements about leveraging NQE in the future to refine what downlinkMax reports.
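One way to read the "slowest hop" point above, as a hedged sketch: downlinkMax only ever bounds an estimate from above, and any NQE-style observed signal (observedMbps below is a placeholder, not something this API exposes) can only refine it downward:

```javascript
// downlinkMax is a first-hop ceiling; an observed-throughput signal (however
// it is obtained) can only pull the estimate down from that ceiling.
// observedMbps is a hypothetical input, not part of the NetInfo API.
function estimateThroughputMbps(observedMbps) {
  const ceiling = navigator.connection.downlinkMax;
  return Math.min(ceiling, observedMbps); // never faster than the slowest known hop
}
```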
Re: discrete values: this wouldn't address any of the concerns you raised earlier about end-to-end vs. first hop. Also, developers I've talked to want access to the raw information, as exposed by the current API, so they can build their own, smarter frameworks and libraries on top of this data.
One app's "slow" is another's "fast" (e.g. fetching text vs video vs HD video vs 4K video) and we should defer these decisions to app developers that better understand their own requirements and context.
Re, "we should improve our overall capabilities to adapt content based on user conditions": agreed! I'm hoping to see Save-Data i2i+s out soon and crbug.com/467945 will provide further building blocks to allow developers to measure actual achieved throughput (amongst other use cases enabled by it). Combination of these and related features will finally give developers the necessary tools to start experimenting with and building better adaptive experiences.
> If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.

Sometimes we don't know what the max for a particular type is -- e.g. we don't have sufficient permissions, or it's a completely new interface we've never encountered before. In order to be future-friendly we have to assume such interfaces are "fast" until we teach the system to report otherwise. Developers are not dumb either (let's stop assuming they are), and they know that there is no such thing as "infinite bandwidth".
So, all in all, I think that the best approach would be:

* Change 'downlinkMax' to 'expectedDownlink' - regardless of naming, the semantics of the exposed bandwidth should be that it represents the actual bandwidth the user might have, not an unattainable theoretical max. Then we can iterate on that value over time and make it more precise.
* Clamp the bandwidth value so that it provides enough info for developers while not exposing too many internal details. Maybe clamping to 10Kbps below 100Kbps, to 100Kbps below 1000Kbps, and to 1000Kbps above that?
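A quick sketch of how that clamping might look, interpreting "clamping" as rounding down to the nearest bucket step, with the bucket sizes taken from the proposal above:

```javascript
// Round a bandwidth value in Kbps down to the proposed bucket granularity:
// 10 Kbps steps below 100 Kbps, 100 Kbps steps below 1000 Kbps, and
// 1000 Kbps steps above that.
function clampKbps(kbps) {
  const step = kbps < 100 ? 10 : kbps < 1000 ? 100 : 1000;
  return Math.floor(kbps / step) * step;
}

// e.g. clampKbps(87) === 80, clampKbps(640) === 600, clampKbps(4200) === 4000
```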
The intention of downlinkMax was expressly not to provide an estimate of end-to-end bandwidth, but instead to provide a next-hop upper bound. If that is not made clear by the spec, then it should be. I think we can agree that if we expose NQE in NetInfo down the road, it should be in a new attribute.

I agree that +Infinity as a default upper bound when the UA doesn't know the underlying type is lame, but it is an upper bound. The UA is reporting the truth.
No support for iOS. Android returns values from the table for cellular connections, and +Infinity for wifi until/if Chrome gets the WiFi permission again.
Does this include .onchange support on all platforms?
Is .onchange implemented in terms of the NetworkChangeNotifier of //net?