Contact emails
jka...@google.com, igri...@chromium.org, owe...@chromium.org
Spec
https://w3c.github.io/netinfo/
Summary
Developers have expressed interest in knowing the network subtype (2g vs 3g) as well as the general category (wifi vs cellular). navigator.connection.type shows the category, but not the subtype. Rather than exposing an ever-changing enum, the downlinkMax attribute exposes a theoretical maximum bandwidth supported for the current network subtype through navigator.connection.downlinkMax.
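As a rough illustration of how a page might consume this attribute, here is a sketch of tiering assets by the reported ceiling. The function name and the Mbps thresholds are illustrative assumptions, not values from the spec; only `navigator.connection.type` and `navigator.connection.downlinkMax` come from the API itself.

```javascript
// Sketch: choose an asset tier from the connection's reported ceiling.
// downlinkMax is expressed in megabits per second; thresholds are hypothetical.
function pickAssetTier(connection) {
  if (!connection || typeof connection.downlinkMax !== 'number') {
    return 'default'; // API unavailable: serve the default experience
  }
  if (connection.downlinkMax <= 0.384) return 'low';  // 2G-class ceiling
  if (connection.downlinkMax <= 10) return 'medium';
  return 'high'; // includes Infinity, reported when the ceiling is unknown
}

// In a browser this would be: pickAssetTier(navigator.connection)
console.log(pickAssetTier({ type: 'cellular', downlinkMax: 0.384 })); // 'low'
```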
To try the demo, run Android dev channel and be sure to enable the Experimental Web Platform Features flag. Note that WiFi on Android reports Infinity for downlinkMax, as Chrome recently dropped the permission required to read WiFi link speed. Cellular works well, however.
Compatibility Risk
Mozilla already has navigator.connection on Firefox OS and has expressed interest in adding downlinkMax sometime next year. The spec was written in collaboration with Mozilla's Marcos Cáceres.
Wouldn't exposing an estimate (such as the one that the Network Quality Estimator would provide) be more beneficial and less prone to abuse?
+net-dev
Would it be better (for the spec) to be honest that downlinkMax is unknown rather than pretending to be able to saturate the connection technology? That seems less likely to cause trouble. If user agents ship with inaccurate values, authors may never use this feature, or may have to blacklist values in the Table of maximum downlink speeds as invalid.
We know that the values in the Table of maximum downlink speeds are going to be very inaccurate in some cases. For example, WiFi speeds are theoretical, and GigE connected to a DSL modem could be off by orders of magnitude.
On Monday, September 28, 2015 at 3:55:41 PM UTC-7, David Dorwin wrote:
I'm also quite nervous about the privacy implications of such an API. Knowing that a user's downlinkMax is changing can reveal information about what the user is doing (e.g., assuming we get it accurate via NQE) or where the user is going. Watching the user transition from cellular to WiFi, for example, may reveal the moment a user steps outside an office building, and watching the user transition from cellular 2g to 3g and back, combined with other ambient sensors on the device (such as accelerometers, via http://w3c.github.io/deviceorientation/spec-source-orientation.html), could reveal a user moving between cellular towers or around a city.
While I realize I may be articulating poorly, there's a sense of dread with this API that...
As noted in the Intent to Ship, we have to lie about the Wifi speed because Chrome itself needs a permission from the Android OS to figure this out - a strong signal that perhaps this is too powerful to just expose out there.
The choice of words matters a great deal here. We're not "lying", nor are we pretending that we can saturate the link. As with any network "weather prediction" algorithm, you take as many signals as you have access to, and you make the best of them: sometimes all you have is the type of interface you're on; sometimes you also have quality signals from the interface; sometimes you also have historical data; <insert other inputs here>. The resulting value is not a guarantee of performance, nor is it advertised as such; the resulting value is a best-effort estimate for the ceiling on your throughput.
What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.
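The "take whatever signals you have" framing above can be sketched concretely. This is a hypothetical illustration of the idea, not Chrome's implementation: each available signal narrows the estimated ceiling, and missing signals simply don't constrain it. All names and inputs here are assumptions for illustration.

```javascript
// Hypothetical sketch: combine whatever signals are available into a
// best-effort throughput ceiling (Mbps). Signals that are absent are skipped;
// with no signals at all, the ceiling is unknown (Infinity).
function estimateCeilingMbps({ interfaceMax, linkQualityMax, historicalMax }) {
  const signals = [interfaceMax, linkQualityMax, historicalMax]
    .filter(v => typeof v === 'number' && !Number.isNaN(v));
  return signals.length ? Math.min(...signals) : Infinity;
}

console.log(estimateCeilingMbps({ interfaceMax: 10 }));                     // 10
console.log(estimateCeilingMbps({ interfaceMax: 10, historicalMax: 2.5 })); // 2.5
console.log(estimateCeilingMbps({}));                                       // Infinity
```

The base case shipped here corresponds to the first call: interface type alone determines the value, and additional signals can tighten it later without changing the API shape.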
On Sep 28, 2015 9:45 PM, "Ilya Grigorik" <igri...@google.com> wrote:
>
> the resulting value is a best-effort estimate for the ceiling on your throughput.
Of course, when best effort varies across vendors, as it necessarily will with any network weather prediction, you end up with a host of non-determinism such that authors can't safely or reliably use the API without also examining the source of the prediction (the user agent), and sniffing is undesirable.
> What we're shipping here is the base case (interface type only as the input to above algorithm) that we can build on and improve in the future with additional signals. There is no "complete information" case when it comes to estimate, and current implementation is a strict improvement on what we offer to developers today... which is exactly nothing.
It isn't that there is a goal of complete information, but the issue is the quality of information - and unfortunately, the quality is not constant over time or UAs. We could speak in terms of false positives and false negatives - both of which are types of lies from the POV of an author trusting the UA - as being how we best measure quality. And the inconsistency over time / UAs is that as UAs implement new methods, their accuracy goes up - and the perceived accuracy of every UA not implementing is seen to go down.
> We need to provide this information to developers to enable them to build optimized experiences for the wide range of deployed networks in the real world. Not knowing that you're on 2G or slow 2G+ network is exactly why we see so many 60s+ page load times in many markets.
This last statement feels really quite disingenuous, as knowing link speed is certainly not within the high order bits of mobile performance failures at all. I don't disagree that some authors feel they could craft better experiences if they had more knowledge, but I disagree with the urgency of it or relative importance to explaining the awfulness of the mobile web experience - the worst offenders are explicitly not taking advantage of the other tools we have provided them, and there's no reason to believe they would take advantage of this if offered.
The good actors are already within user tolerances, so yes, it can help improve the user experience at the high end of responsible mobile development, but we can't argue it will help those sitting in the middle or long tail of the mobile web.
I also have issue with the proposed solution; I'm all for extending the web forward with the lowest level primitives we can afford, but I think there are some low levels we shouldn't expose, because not all platforms are capable of going that low, and because of the inherent escalation of security/privacy risks the lower you go. This API is a prime example of how tricky these concerns are.
So knowing that, and knowing the issues, this seems like it's a case of exposing the right primitives to authors that lets the user agent intervene based on the confidence intervals for network quality that the UA knows users will accept, rather than force authors to independently rediscover that such decisions matter to users and end up having to UA sniff to determine if the quality of the signal is within the users' tolerance thresholds.
>
> With the implementation we're shipping here the above condition would only trigger on ~2G connection types, and in the future may also be triggered on other connection types if we believe that the throughput is low due to <insert NQE reasons here>.
I wish I could believe the code would be structured that way, but having seen plenty of code on other platforms that deal with this, it usually ends up the inverse - testing for fast, not slow, and thus failing to adapt as new fast types come out. Just consider how the definition of 'mobile connection' has changed from 'carrier pigeon speeds' to 'faster than many users home internet connections'.
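The inverted-test failure mode described above can be made concrete. This is a hypothetical sketch (the '5g' type is an invented future value, not part of the spec): code that enumerates known-fast types silently degrades every type it has never seen, while code that enumerates known-slow types defaults new types to the full experience.

```javascript
// Anti-pattern: enumerate "fast" types; anything unrecognized is treated
// as slow, so a future, faster type silently gets the degraded experience.
function isFastBrittle(type) {
  return ['wifi', '4g'].includes(type); // a hypothetical '5g' fails this test
}

// More future-friendly: test for the known-slow cases and default to fast.
function isSlowRobust(type) {
  return ['2g', 'slow-2g'].includes(type);
}

console.log(isFastBrittle('5g')); // false: new fast type misclassified as slow
console.log(isSlowRobust('5g'));  // false: correctly not treated as slow
```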
> The (recent) requirement for the WiFi permissions bit is due to completely orthogonal issues with Chrome's upgrade process.
But it isn't really... If you want to have high confidence network signal quality on Android, you need a distinct permission. Regardless of how Chrome itself or WebView acquire this permission (its own special layer of platform inconsistency), the fact that 'trusted' code needs to acquire this permission is exactly why we shouldn't be exposing it to untrusted, hostile code, which is what we must assume all web code is in terms of risk.
An API like this, or arguably most platform information APIs, shouldn't be to just expose the information - it should be to accomplish the user or author's use case without revealing the information, or, when it's indirectly revealed (e.g. timing behaviours, network fetches, GPU pixel values), to reveal as little of it as possible while still accomplishing the goal.
If we want to encourage developers to optimize for slow connections, for example, can we give them a way to express that desire without giving them the APIs to let them botch it (like testing for fast)? Declarative loading APIs rather than imperative?
Though I'm disagreeing on importance, I'm not disagreeing on some of the use cases - but I'm just questioning whether alternate solutions may be more appropriate, given the UA interop concerns, the privacy concerns, and the implementation complexity concerns.
On Sep 29, 2015 10:18 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> This discussion is deeply disappointing.
>
Because you disagree with the conclusion? The technical arguments? Or the manner in which they're presented? Without knowing what your concerns are, it's hard to move the discussion forward.
> Ryan: there's nothing new in your response from the previous versions of these debates. That you don't agree with developers who aren't you about the need for this information is also not new.
>
Alex: Please re-read my response, especially the closing remarks. I'm disappointed that you came away reading a conclusion that is quite the opposite of what I suggested, and it feels as if you either didn't read my remarks or that we failed to communicate. The former can only be solved by you, while the latter requires more feedback than "this is deeply disappointing".
> It's unclear how to move forward. Objecting to not providing information that's observable other ways by side-effect isn't a tenable position for platform API design. We don't allow it in other areas and shouldn't allow it here.
By this rationale, the fact that we exposed DeviceOrientation to all suggests we shouldn't treat geolocation as a powerful feature, and instead just allow everyone to have it.
I know this isn't your argument, and I know you'd react quite negatively to such a broad generalization, so you can understand and appreciate how negatively your broad generalization is received.
I offered concrete suggestions on a way to move forward, and Yoav offered significantly more developed arguments. If you disagree, it would help to explain why.
>
> I'd like to see this API ship ASAP. LGTM.
>
Alex, this feels somewhat hollow and frustrating. If you are not receptive to feedback, if you're not going to address feedback, and if you're going to ship something without considering alternatives, the consequences to privacy, or the platform implications, then it suggests we shouldn't bother with the Intent to Ship exercise at all.
I know you care deeply about platform health, and I'm somewhat surprised to see you supporting something that has so many deficiencies, especially when viable alternatives to meet the use cases, address the privacy issues, and reflect reality exist.
Am I correct that this intent also means shipping the navigator.connection object? Is the request to ship some subset of the spec?
navigator.connection shipped (for Android and ChromeOS) last year, and includes navigator.connection.type.
This intent is for adding navigator.connection.downlinkMax.
And onchange, right?
Related: I wrote (in length) my thoughts on ways to improve this specific API and others that can provide developers info about the user's current conditions. Specifically, I propose that we would expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.
If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.
On Sep 30, 2015 11:54 AM, "Alex Russell" <sligh...@google.com> wrote:
>
> My concern has several aspects:
> The design of this feature is a compromise borne of a multi-year debate. Other obvious points in the design space have been explored and rejected (e.g., reflecting the actual radio connection type, the way Android's framework does)
That hasn't been a suggestion by either Yoav or myself, and for what it's worth, I would agree that is obviously incorrect.
> I'd assume the implementation complexity is a conversation that would have happened in code-review as this feature was developed. Odd to see it here.
That presumes every reviewer is globally aware of the implications, platform considerations, and ongoing efforts, which, in the increasingly siloed Chrome codebase, is simply not the case. I don't mean to suggest authors or reviewers failed - simply that when we look at the holistic integration and exposure, considering the plenitude of platforms Chrome runs on, it is not unreasonable or unexpected that "implementation complexity" emerges as a concern about something that might be locally consistent.
The point remains, and is stated on the I2I, that the very behaviour of this feature across platforms is inconsistent, and necessarily so, because we build atop a variety of inconsistent platforms that expose different levels of fidelity. Much as we didn't simply expose "DirectX" to the web when 'everyone' was running Windows, and much as APIs like WebGL have stripped the edges from OpenGL and necessitated complex solutions like ANGLE, yet are still unable to keep developers from engaging in GPU sniffing, we need to carefully evaluate what the lowest reasonable layer of abstraction is, given the variety of security models and APIs in play on the platforms Chrome builds on.
My objections here are that this is too low-level a detail for a user agent to reliably implement across a diversity of platforms (which we already know) _and_ that it necessarily precludes _advancements_ in the platform.
For example, the onchange event is unquestionably at odds with MP-TCP and QUIC, which should give serious pause. The notions underlying onchange - that there is a singular egress interface, that it will flap, or that the UA will even know information about either the egress or ingress interfaces - are not solid notions, and are being actively challenged on multiple axes in mobile platforms, as they have been on others (satellite and bonded DSL being two examples).
Further, the onchange event from the perspective of Chrome is extremely unreliable. Both Eric Roman and I _discourage_ people from writing native code in the browser that interacts with this notification, because of this necessary unreliability, the necessarily low fidelity, and the fact that even a collection of incredibly bright, motivated developers earnest to "do good code" still botches it consistently, to the detriment of our users. I know you experienced this first hand at the Shenzhen TPAC due to the networking conditions there, and our ability and reliability have gotten worse, not better, as we expand across platforms and the platforms we run on change, deprecate, and innovate.
I'm suggesting that the complexity here necessarily paints the browser into a corner, much like sync XHRs or unload-blocking alerts. From the "spec litigations", it is clear these concerns haven't been discussed in depth, and suggesting that we ignore feedback solely because it's "already been litigated" is to ignore the experience of those most familiar with the past, present, and future developments in this space.
> Given that this data is available by side-channel today, it's unclear what privacy impact there could possibly be.
It is simply not true that this is available by side channel at the fidelity proposed, much in the way that you have noted DeviceOrientation and Geolocation have different buckets of fidelity. It is this fidelity that should naturally give pause to the privacy-aware, and raise the question of whether we can even consistently offer the fidelity suggested.
> This feature is a compromise from many years of iteration. My reaction came from what appears to be late-stage re-litigation of a discussion which has been had many, many times in many other places. I'm questioning the need to re-discuss here when so many other discussions have already taken place on this topic.
Just because it was borne of compromise does not make it technically sound. XHTML2 was borne of a wide variety of compromises and spec litigation, as were XSL and XML-DSig, but that doesn't argue to their fitness.
I don't mean to dismiss the work of many people active in this space, passionate for solutions, and the many discussions, but you of all people are no doubt most aware that it is simply impossible to participate in all the standards discussions all the time, so it feels dismissive to suggest that "If you wanted to have a say, you should have known about this the years before the I2S and spoken then." Arguably, our I2S process is much like IETF or W3C last call - trying to get broad feedback from a variety of stakeholders after those most motivated for something have eked out compromise and solutions. However, that doesn't exempt it from critical, but hopefully constructive, review; rather, it should encourage it.
In that vein, you're hearing concerns from both Yoav and myself as to its fitness. I have concerns about Chrome's ability to implement this reliably across platforms, I have concerns about the privacy implications, and I have concerns that such a proposal is necessarily at odds with what multiple vendors are independently pursuing in an effort to enhance networking. Yoav has articulated far better than I the concerns about developers, API trust, and the implications such an API shape has.
This isn't saying "You should never ship," it is saying that it appears to be that a number of concerns were not considered or weighed when developing the compromise of this API, that they materially affect both the implementation and experience of the API, and we should hold off shipping until we have taken time to earnestly and thoughtfully weigh these considerations, and either declare we don't care, or adjust things if we do.
> NQE is a separate feature that I don't think should be added under the cover of the downlinkMax API as it will give us a significantly different view of the world. Having both is useful.
This position is considerably different than the one advanced during this I2S, so that too should give pause for consideration. I agree that shipping this is at odds with NQE - that is, that we can't retroactively integrate it - but that is precisely what is being proposed here. For that reason, we should be sure that we have agreement as to what the roadmap looks like, rather than haphazardly shipping and iterating, so that we can ensure we are releasing a web platform that is consistent, reasoned, and capable, especially if we will not be able to revisit this discussion for the many years that any deprecation would necessarily entail.
On Tue, Sep 29, 2015 at 4:10 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Related: I wrote (in length) my thoughts on ways to improve this specific API and others that can provide developers info about the user's current conditions. Specifically, I propose that we would expose a set of discrete values describing the end-to-end network state instead of a specific number indicating bandwidth.
Phew, long read, thanks for putting it together! I'll try to respond to the overall themes...
I think you need to dig deeper into what you mean by "effective bandwidth" and why that's an elusive goal. I agree that it would be great to have an oracle that tells you precise bandwidth and RTT estimates for the _exact request_ you're about to make, but that's simply not practical. The user can be on a GigE network, have an HD stream going at full throttle, and simultaneously be downloading another resource at a trickle because the network path between them and that particular origin is slow. In fact, even if the user requests a resource from the same "fast" server, there are absolutely no guarantees on how quickly that response will come back -- e.g. one response is streamed from cache while another takes seconds to generate and trickles out bit by bit (note that this is effectively the same as your example of WiFi tethering).
Which is to say, claiming that we ought to aim to accurately predict "end-to-end / effective" bandwidth prior to making the request is not going to get you far. You're always working with incomplete information before you make the request, and the best you can do is account for what you know about your local network weather and extrapolate its implications for the end-to-end story -- e.g. you can never go faster than your slowest hop, so if your first hop is slow you know you will never exceed that data rate; you're sharing the link with multiple requests; and throughput is a function not just of the network path but also of server load and a dozen other variables.
On that note, it's also worth highlighting that NQE leverages the exact same data we're surfacing here -- see "OS-provided information" section -- to bootstrap its predictions, and then layers observed performance to further refine its estimates... Which is consistent with my earlier statements about leveraging NQE in the future to refine what downlinkMax reports.
Re, discrete values: this wouldn't address any of the concerns you raised earlier about end-to-end vs first hop. Also, developers that I've talked to want access to the raw information, as exposed by the current API, to allow them to build their own, smarter frameworks and libraries on top of this data.
One app's "slow" is another's "fast" (e.g. fetching text vs video vs HD video vs 4K video) and we should defer these decisions to app developers that better understand their own requirements and context.
Re, "we should improve our overall capabilities to adapt content based on user conditions": agreed! I'm hoping to see Save-Data i2i+s out soon, and crbug.com/467945 will provide further building blocks to allow developers to measure actual achieved throughput (among other use cases enabled by it). The combination of these and related features will finally give developers the necessary tools to start experimenting with and building better adaptive experiences.
> If the weather prediction algorithm needs a seed to start with, it should be the expected throughput of the various network types rather than the max.
Sometimes we don't know what the max for a particular type is -- e.g. we don't have sufficient permissions, or it's a completely new interface we've never encountered before. In order to be future-friendly we have to assume it is "fast" until we teach the system to report otherwise;
developers are not dumb either (let's stop assuming they are) and know that there is no such thing as "infinite bandwidth".
So, all in all, I think that the best approach would be:
* Change 'downlinkMax' to 'expectedDownlink' - regardless of naming, the semantics of the exposed bandwidth should be that it represents the actual bandwidth that the user might have, not an unattainable theoretical max. Then we can iterate on that value over time and make it more precise.
* Clamp the bandwidth value such that it provides enough info for developers while not exposing too many internal details. Maybe clamping to 10Kbps below 100Kbps, to 100Kbps below 1000Kbps, and to 1000Kbps above that?
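The clamping idea proposed above could be sketched as follows. This is only an illustration of the suggestion in this message (bucket sizes of 10Kbps below 100Kbps, 100Kbps below 1000Kbps, 1000Kbps above); it is not part of any spec, and the function name is invented.

```javascript
// Sketch of the proposed clamping, working in Kbps. Bucket sizes follow the
// suggestion above: coarser buckets as bandwidth grows, so less precise
// information is exposed at higher speeds.
function clampDownlinkKbps(kbps) {
  if (kbps < 100) return Math.floor(kbps / 10) * 10;    // 10Kbps buckets
  if (kbps < 1000) return Math.floor(kbps / 100) * 100; // 100Kbps buckets
  return Math.floor(kbps / 1000) * 1000;                // 1000Kbps buckets
}

console.log(clampDownlinkKbps(57));   // 50
console.log(clampDownlinkKbps(250));  // 200
console.log(clampDownlinkKbps(3500)); // 3000
```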
The intention of downlinkMax was expressly not to provide an estimate of end-to-end bandwidth but instead to provide a next-hop upper bound. If this is not made clear by the spec then it should be. I think we can agree that if we expose NQE in NetInfo down the road, it should be a new attribute.
I agree that +Infinity as a default upper bound when the UA doesn't know the underlying type is lame, but it's an upper bound. The UA is reporting truth.
No support for iOS. Android returns values from the table for cellular connections, and +Infinity for wifi until/if Chrome gets the WiFi permission again.
Does this include .onchange support on all platforms?
Is .onchange implemented in terms of the NetworkChangeNotifier of //net?
So to make sure I'm summarizing correctly:
iOS: no support for .type, no support for downlinkMax, no support for .onchange
I think iOS could theoretically support .type; AFNetworkReachabilityStatus can report whether you're on wifi or cellular, but there's no way to report downlinkMax without system-framework-level changes.
Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE based estimations as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.
On Thu, Oct 1, 2015 at 1:17 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Looking at the current processing model for onchange, it becomes even more obvious that the current spec cannot sit well with NQE-based estimations, as the downlinkMax value should change only when the underlying network connection changed, and when it does, onchange should fire.
"If the underlying connection technology changes (in either connection type or maximum downlink speed)" [1] ... Hmm, I think I see where the confusion is. My claim that we're NQE-compatible is based on the "or maximum downlink speed" reading of that sentence, where "maximum downlink speed" is allowed to be adjusted based on network weather. Admittedly, not the greatest wording (I'm to blame :-)). I do agree that we should not limit notifications to changes in the "underlying connection type".
"If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?
If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?
On Thu, Oct 1, 2015 at 2:18 PM, Ilya Grigorik <igri...@google.com> wrote:
> "If the properties of the connection change (e.g. changes in connection type, downlink speed, or other criteria), the user agent must run the steps to update the connection values..." ~~ would something like that clarify our intent?

I think it's still unclear whether we want the singular value to be NQE-compatible. You've put forward the argument that it can be (with slight spec changes), but from Alex's remarks, Yoav's, and mine, I think there's some concern that making it NQE-compatible makes it less appealing to developers.
Yoav's suggestion for exposing NQE-related data seems more flexible, in that it detaches the API from the implicit assumptions that:
- A given request transits a single connection (this is accomplished by virtue of time-interval reporting for a request)
- There is a singular connection or network "overall" (the surface of the API being a singular value)
It also leaves open the possibility of exposing, if necessary, additional network-related information in a way that makes sense (e.g. latency).
> If we want to go a step further, we could also either (a) drop type from onchange (this is a breaking change, but "type" adoption is low), or (b) define a new on<something> that only triggers when our estimated Mbps value changes and does not mention 'type' at all... I believe that would address the multi-{path, interface} use cases?

I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "if you kick off this request, right now, this is what you'd get," which isn't necessarily the case, even in multi-path.
On Thu, Oct 1, 2015 at 2:30 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
> I think it's still unclear whether we want the singular value to be NQE-compatible. You've put forward the argument that it can be (with slight spec changes), but from Alex's remarks, Yoav's, and mine, I think there's some concern that making it NQE-compatible makes it less appealing to developers.

Perhaps the missing bit here is the "level of confidence" of the provided estimate? In the absence of any historical information we have to fall back to the interface, which gives us the upper bound, but we're also unlikely to reach that bound (low confidence). Then, once we have some historical data, we can refine that estimate and offer a higher-confidence one. Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?
> Yoav's suggestion for exposing NQE-related data seems more flexible, in that it detaches the API from implicit assumptions...

Hmm, I don't really see the functional difference between ~connection.on<something> and Performance Timeline delivery: both emit events, and those events can carry arbitrary attributes (Mbps estimate, RTT, confidence, etc.). That said, we are positioning perf timeline + observer as the common primitive for surfacing perf-related data, so emitting "network" events via the same mechanism does seem reasonable.

> I don't think so. I think an event-based trigger on such a change comes with an implicit assumption that "if you kick off this request, right now, this is what you'd get," which isn't necessarily the case, even in multi-path.

We have the same issue regardless of how we deliver the event (via on<something> or Perf Timeline). Developers will cache the last seen value and use it as a directional signal to modify what and how they fetch. And on that note, regardless of how we deliver the change events, we do need to provide "connection.downlinkMax" so that it can be queried on demand. Without that you can't get an estimate during the initial load sequence of the page, which is an important use case for this API.

ig
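To illustrate the caching pattern described above (developers caching the last seen value as a directional signal), here is a minimal hypothetical sketch; the function and variable names are invented:

```javascript
// Hypothetical sketch: cache the last observed downlinkMax from change
// events so it can be consulted on demand, e.g. during initial page load.
let lastDownlinkMax = null;

function onConnectionChange(connection) {
  const next = connection.downlinkMax;
  const changed = next !== lastDownlinkMax;
  lastDownlinkMax = next; // directional signal for later fetch decisions
  return changed;
}

// In a browser this would be wired up roughly as:
// navigator.connection.addEventListener('change', () =>
//   onConnectionChange(navigator.connection));
```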
--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CADXXVKpjt_RU%3De2jxqUaW9MHPc1GjqefJig22P2%3DWbrne6VovA%40mail.gmail.com.
On Thu, Oct 1, 2015 at 6:24 PM, 'Ilya Grigorik' via net-dev <net...@chromium.org> wrote:
> Perhaps the missing bit here is the "level of confidence" of the provided estimate? In the absence of any historical information we have to fall back to the interface, which gives us the upper bound, but we're also unlikely to reach that bound (low confidence). Then, once we have some historical data, we can refine that estimate and offer a higher-confidence one. Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?

I'm having a hard time imagining how someone would actually use that information productively.
> Perhaps we need a 'confidence -> {low, medium, high}' attribute, or some such?

> I'm having a hard time imagining how someone would actually use that information productively.

I agree with Matt here. When designing NQE we went back and forth on exposing confidence, and dropped it because we couldn't imagine how it would be used.
On the topic of how this stuff will be used: I remember, years ago, considering an interface that provided a small number of discrete values for network conditions, e.g. "barely functional", "slow and steady", "fast enough", and "amazingly fast", with latency/bandwidth bands that more or less mapped to 2G, 3G, 4G/broadband, and fiber. Whatever happened to those discussions? I worry that the only developers sophisticated enough to use actual bandwidth or latency measurements are those who could implement their own network quality estimators.
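The discrete-bucket idea above could be sketched roughly as follows; the Mbps thresholds here are invented for illustration, not drawn from any spec:

```javascript
// Hypothetical mapping from a bandwidth estimate (Mbps) to the discrete
// condition labels suggested above. Thresholds are illustrative only.
function networkTier(mbps) {
  if (mbps < 0.1) return 'barely functional'; // ~2G
  if (mbps < 2)   return 'slow and steady';   // ~3G
  if (mbps < 50)  return 'fast enough';       // ~4G/broadband
  return 'amazingly fast';                    // ~fiber
}
```

The appeal of buckets is that developers don't need their own estimator to act on them; the cost is that the thresholds become a compatibility surface.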
Could we make this debate more concrete by involving a potential customer with deep experience in this space? E.g. I imagine there are people at YouTube who are very familiar with many of these trade-offs and with how valuable knowing the downlink speed is for them in practice. I'd much rather we make a decision on whether or not to ship this API based on the deep experience of customers who have already been relying on similar signals on other platforms (e.g. Android).

From my perspective, if YouTube (or any other big customer with a lot of experience in this space) can explain why max downlink speed is the signal they need, then we shouldn't let philosophical debates hold us back from unblocking such customers now (although privacy is, of course, still a potentially blocking issue).
Also, all things being relatively equal, we should err on the side of the lowest-level primitive that we can reasonably implement across platforms (e.g. for extensible-web reasons). Presumably sophisticated customers will want to iterate on their own NQE algorithms in application-specific ways anyway, so we should aim to give them the same low-level inputs available to them on other platforms.
> Could we make this debate more concrete by involving a potential customer with deep experience in this space?
Hi there - customer checking in here. I work on a lot of sites for Amazon.com, and there are many different use cases for which an understanding of the network from the browser would be very helpful. Some of the most important are in developing countries like India.
Ideally our servers would be told in the initial request headers about the quality of the network (as with Client Hints). This would allow us to construct on the server a page that's reasonable. In the case of a poor 2G connection, fetching a typical mobile page and its assets can literally take many minutes.
Within JavaScript on the client, we're interested in both the current quality and significant changes. If a user's browser loses the network entirely, we want to know. If they've dropped from HSPA to EDGE (or CDMA), we want to behave differently. If they're on LTE but the network is only operating at 50 kbps, we'd like to know that if possible (though if we only got downlinkMax, it would still be much better than today).
> Is there any particular reason that you're interested in network changes other than the effective bandwidth you would expect the user to have?

A few things:
* It's helpful to track cases where there's no connection. On a page transition, if there's no network, you probably want to show the customer messaging about that rather than navigating. For things like Ajax requests, we could avoid making them in the first place and avoid any queuing of retries.
* It's not just bandwidth. If I can tell someone is on a 2G connection, I have a strong signal that their RTTs may be very high. Given that, I may try to design the page to have fewer separate connections, inlining or combining content into fewer requests to avoid round trips.
> Can you describe how you would use downlinkMax to help you serve better experiences to your users?

Here are some examples:
* deliver a lighter, more text-centric view of the page to 2G customers (or to those with downlinkMax below a certain level)
* choose a different image to send - either different dimensions, resolution, quality, or compression technique. This might be a bigger or smaller image depending on the info in downlinkMax.
* choose a video oriented feature over a static image based one
* we can use it to refine other models for predicting network performance
* delay fetching some content until a user interaction if we know content fetches are fast
* send data at the bottom of the HTML of the page rather than depending on an AJAX request, if we know the connection is slow
Those are just a few off the top of my head. Is that helpful?
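As a concrete (and entirely hypothetical) sketch of the image-selection use case in the list above, where variant names and thresholds are invented for illustration:

```javascript
// Hypothetical image-variant picker keyed off downlinkMax (Mbps).
// A missing or +Infinity signal falls back to the default asset.
function pickImageVariant(downlinkMaxMbps) {
  if (downlinkMaxMbps == null || !Number.isFinite(downlinkMaxMbps)) {
    return 'default';
  }
  if (downlinkMaxMbps < 0.5) return 'low-res';    // e.g. 2G: lighter page
  if (downlinkMaxMbps < 5)   return 'medium-res';
  return 'high-res';
}
```

The same shape of branching applies to the other items in the list (video vs. static image, inline data vs. a follow-up Ajax request, and so on).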
I don't think secure contexts would do anything to address whatever privacy concerns there may be (without commenting on the concerns themselves; I haven't swapped in all of this thread and leave that to the rest of you all :-) ).
> Those are just a few off the top of my head. Is that helpful?

Yes, very helpful. I hope we'll be able to expose an API that enables tackling these use cases.