tba...@chromium.org, be...@chromium.org, igri...@chromium.org
Spec
We shipped NetInfo in Chrome Android M38. This intent to ship (i2s) covers new extensions that provide network quality signals to developers: downlink, rtt, and effectiveType.
Summary
The goal is to expose network performance information to developers, as perceived by the UA, in a format that's easy to consume and act upon. The UA monitors the latency and throughput of recent requests and provides estimates for RTT, throughput, and an effective connection type that developers should optimize for. For example, if the recently observed latency and/or throughput is poor, the effective connection type will map to a "low" value such as 2G or 3G, regardless of the underlying network technology in use.
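As a rough sketch of how a developer might consume these signals, consider the following (the `chooseImageQuality` helper and its tier names are hypothetical, not part of the proposal; the attribute names follow the i2s):

```javascript
// Minimal sketch: read the proposed NetInfo network-quality signal
// (effectiveType) and pick an asset quality tier from it.
// chooseImageQuality() and its tier names are illustrative only.
function chooseImageQuality(effectiveType) {
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'low';    // smallest images, skip preloading
    case '3g':
      return 'medium';
    default:
      return 'high';   // '4g' or unknown: full-quality assets
  }
}

// navigator.connection is only present where NetInfo is supported,
// so feature-detect and fall back to a conservative default.
const conn =
  (typeof navigator !== 'undefined' && navigator.connection) || null;
const effectiveType = conn ? conn.effectiveType : '4g';
console.log(chooseImageQuality(effectiveType));
```

The point of the enum-style effectiveType value is exactly this kind of coarse branching, rather than reasoning about raw throughput numbers.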
Notes from BlinkOn, and followup discussions: https://github.com/WICG/netinfo/issues/46#issuecomment-276804272
Note: we have been using these signals internally in Chrome to trigger various interventions where effective connection type is mapped to 2g or slow-2g. This API exposes these signals to developers, allowing them to adapt and optimize their content to avoid the need for such interventions.
Link to “Intent to Implement” blink-dev discussion
https://groups.google.com/a/chromium.org/d/msg/blink-dev/TS9zT_u2M4k/ydZK5WpTBwAJ
Is this feature supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?
Currently this is supported only on Chrome OS and Android.
Debuggability
The network quality estimates are also exposed via chrome://net-internals. Developers can use net-internals in a local environment to better understand the behavior of the estimation process: open the Events tab and look for "network_quality_estimator" records. Developers can also override the network quality estimate using the "force-effective-connection-type" switch, which can be set from chrome://flags or as a command line switch.
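For example, the override can be applied at launch time; note that the accepted value strings may vary between Chrome versions, so treat this as a sketch and check chrome://flags for the canonical list:

```shell
# Sketch: launch Chrome with a forced effective connection type so
# pages see a fixed value regardless of actual network conditions.
# The value string ("Slow-2G" here) is an assumption; verify against
# the options listed in chrome://flags for your Chrome version.
google-chrome --force-effective-connection-type=Slow-2G
```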
Interoperability and Compatibility Risk
Low. This is a new extension to the NetInfo API.
Is this feature fully tested by web-platform-tests?
It will be; the test implementation is in progress: https://crbug.com/719108.
OWP launch tracking bug
https://bugs.chromium.org/p/chromium/issues/detail?id=723068
Entry on the feature dashboard
https://www.chromestatus.com/feature/5108786398232576
--
You received this message because you are subscribed to the Google Groups "blink-dev" group.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/e208082e-8b52-4dce-9559-6f3945b1fd5f%40chromium.org.
There have not been any signals from other vendors yet.
On Thu, May 18, 2017 at 10:48 AM, Ben Kelly <bke...@mozilla.com> wrote:

> On Thu, May 18, 2017 at 1:24 PM, tbansal via blink-dev <blin...@chromium.org> wrote:
>
>> There have not been any signals from other vendors yet.
>
> FWIW mozilla had a discussion about it on our mailing list last year:
>
> I would say there is a fair amount of concern. I believe we are collecting telemetry to possibly remove NetworkInformation due to privacy concerns.

I'd appreciate any feedback on https://wicg.github.io/netinfo/#privacy-considerations -- that content was the result of a very similar discussion on the earlier i2s for downlinkMax [1]. With regards to other points: multipath is something we considered [2], identifying metered connections would be great but really hard in practice [3], and NQE (I believe) should go a long way towards making the data more actionable.
I asked the folks with concerns to take a look and reply here. Since I don't personally have an objection I don't want to try to represent their views here.
On Saturday, May 20, 2017 at 2:06:36 AM UTC+10, Ben Kelly wrote:

> I asked the folks with concerns to take a look and reply here. Since I don't personally have an objection I don't want to try to represent their views here.

There were two classes of concerns: privacy, and fit-for-purpose.

It seems like the authors of the spec have addressed the privacy concerns with the usual "sure, the house is on fire, but it wasn't *my* firebomb that started it" argument. In this case, it's a pretty compelling argument: this API doesn't expose anything that a site can't learn in various other ways [1].

The arguments about whether the API is actually any good were a bigger concern. As with anything we do, the cost-benefit analysis needs to be performed. What we have here is of questionable value.

This API presents a downlink estimate, which is increasingly meaningless other than as a fingerprinting input. It's a value that approximates what an attacker might independently measure, but it's a different value. In exchange for this modest increase in fingerprinting surface is information that is very difficult to use. The use case that I think is intended is deciding on quality metrics: pushing a bigger video or less degraded images. This metric isn't suitable for those cases, because what you really need is an *end-to-end* estimate of throughput. The theoretical speed of the local link is rarely a good proxy for that. The best - and arguably only - way to arrive at an end-to-end estimate is to do what video sites already do: use the link, then measure and adapt. The browser can't realistically do any different.

The API presents a connection type, which is equally meaningless. In the discussion we had, it was observed that "is this link free" was the question that this was aimed at answering. The actual information that can be extracted from a type is marginal. Again, it does expose information that sites might not have otherwise had.

The RTT measure is underspecified. "Recently observed round-trip times on the client" doesn't mean anything. Is this to the origin only, or does it include other origins? Is this measured at the TCP layer, TLS layer, or HTTP layer? Is this a minimum, or is it going to be subject to noise induced by packet loss?

I really don't understand the logic behind exposing an "effective network type" and who benefits from it. If it is a simple table lookup, a site could easily perform the same calculation. But see the above discussion about the value of the primitives that are input to that calculation.

Finally, the API doesn't acknowledge the possibility that there might be multiple active interfaces. The API adds "mixed" as a type, but that is likely to be the only value you will ever see on some classes of device.

--
Martin

[1] Regarding the privacy considerations, the WebRTC example isn't relevant. I'd remove it. It also overstates capabilities; see https://tools.ietf.org/html/draft-ietf-rtcweb-ip-handling for the latest.
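[Editor's note: the "measure and adapt" approach referred to in this thread can be sketched as a pure throughput calculation. The helper names and quality thresholds below are invented for illustration; they are not from any spec or site.]

```javascript
// Sketch of "use the link, then measure and adapt": time a transfer,
// derive an end-to-end throughput estimate, and pick the next quality
// tier from it. estimateKbps() and nextQuality() are hypothetical.
function estimateKbps(bytesTransferred, elapsedMs) {
  if (elapsedMs <= 0) return 0;
  // bits per millisecond is numerically equal to kilobits per second
  return (bytesTransferred * 8) / elapsedMs;
}

function nextQuality(kbps) {
  if (kbps < 100) return '240p';
  if (kbps < 700) return '480p';
  return '1080p';
}

// e.g. a 500 KB video segment fetched in 4 seconds ~= 1000 kbps
console.log(nextQuality(estimateKbps(500 * 1000, 4000)));
```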
On Mon, May 22, 2017 at 4:01 AM <mtho...@mozilla.com> wrote:

> On Saturday, May 20, 2017 at 2:06:36 AM UTC+10, Ben Kelly wrote:
>
> The RTT measure is underspecified. "Recently observed round-trip times on the client" doesn't mean anything. Is this to the origin only or does it include other origins? Is this measured at the TCP layer, TLS layer, or HTTP layer? Is this a minimum, or is it going to be subject to noise induced by packet loss?

I agree this would benefit from having a tighter definition (as would the "observed throughput" definition).
Looks like Mozilla folks are in the loop. Have you tried to reach out to Microsoft and Apple folks in some public place? In addition to poking people on GitHub, an option is to file bugs at https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/new/ and https://webkit.org/new-bug.

I found https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/110956/ but that was probably closed due to no information.
If you haven't already done so, can you file a request for review at https://github.com/w3ctag/design-reviews/issues?
2) "This API does not address the use cases it's aiming to address" concerns.
The "downlink" and "effectiveType" and "rtt" attributes are good steps forward in terms of addressing use cases that were presented during the discussion. The "type" and "downlinkMax" attributes are at best misleading except _maybe_ for the case of a cell phone talking to a cell tower as the first hop; they are not helpful for the use cases that were presented.
Sadly, the spec document itself does not describe what use cases it's trying to address (it does the "here's the solution we picked" thing instead), but as I recall the metered vs non-metered use case was something people were concerned about and is unaddressed.
There's no hard and fast rule for what blocks shipping and for how long, so whenever you think that all that can be said has been said, if there's still no conclusion, please poke this thread.
On Sun, May 21, 2017 at 10:01 PM, <mtho...@mozilla.com> wrote:

> The arguments about whether the API is actually any good were a bigger concern. As with anything we do, the cost-benefit analysis needs to be performed. What we have here is of questionable value.
>
> This API presents a downlink estimate, which is increasingly meaningless other than as a fingerprinting input. It's a value that approximates what an attacker might independently measure, but it's a different value. In exchange for this modest increase in fingerprinting surface is information that is very difficult to use. The use case that I think is intended is deciding on quality metrics: pushing a bigger video or less degraded images. This metric isn't suitable for those cases, because what you really need is an *end-to-end* estimate of throughput.

This is a rehash of the earlier discussion in [1]. The tl;dr is: knowing that you're on a downlinkMax ~2G connection is a strong signal regardless of end-to-end measurements -- you're constrained by the last hop, and much of the world coming online still falls into this bucket. Similarly, in the absence of useful end-to-end measurement (e.g., after the interface has been quiet for a while), you still need to fall back to downlinkMax.

> The theoretical speed of the local link is rarely a good proxy for that. The best - and arguably only - way to arrive at an end-to-end estimate is to do what video sites already do: use the link then measure and adapt. The browser can't realistically do any different.

That's exactly what this i2s is providing.

On Mon, May 22, 2017 at 7:01 AM, Yoav Weiss <yo...@yoav.ws> wrote:

> On Mon, May 22, 2017 at 4:01 AM <mtho...@mozilla.com> wrote:
>
>> The RTT measure is underspecified. "Recently observed round-trip times on the client" doesn't mean anything. Is this to the origin only or does it include other origins? Is this measured at the TCP layer, TLS layer, or HTTP layer? Is this a minimum, or is it going to be subject to noise induced by packet loss?
>
> I agree this would benefit from having a tighter definition (as would the "observed throughput" definition).

Updated in https://github.com/WICG/netinfo/issues/56 -- if you have other suggestions, please chime in there!

On Mon, May 22, 2017 at 11:38 AM, Philip Jägenstedt <foolip@chromium.org> wrote:

> Looks like Mozilla folks are in the loop. Have you tried to reach out to Microsoft and Apple folks in some public place? In addition to poking people on GitHub, an option is to file bugs at https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/new/ and https://webkit.org/new-bug. I found https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/110956/ but that was probably closed due to no information.

The Edge uservoice request has been open for a while. Opened a new bug on the WebKit tracker.

> If you haven't already done so, can you file a request for review at https://github.com/w3ctag/design-reviews/issues?

"In terms of API design, this seems reasonable. We understand that there are a number of trade offs in this design space, and that there is some risk here. However, we feel that publishing something in this space is generally good, because it meets a significant need in the platform -- even if it is not initially perfect. We'd encourage you to continue to gather information from developers and discuss it with other implementers."

On Mon, May 22, 2017 at 9:14 PM, Boris Zbarsky <bzba...@mit.edu> wrote:

> 2) "This API does not address the use cases it's aiming to address" concerns.
>
> The "downlink" and "effectiveType" and "rtt" attributes are good steps forward in terms of addressing use cases that were presented during the discussion. The "type" and "downlinkMax" attributes are at best misleading except _maybe_ for the case of a cell phone talking to a cell tower as the first hop; they are not helpful for the use cases that were presented.

Right. The key observation (and motivation for this API) to keep in mind: that condition holds for a significant fraction of the billions coming online.

> Sadly, the spec document itself does not describe what use cases it's trying to address (it does the "here's the solution we picked" thing instead), but as I recall the metered vs non-metered use case was something people were concerned about and is unaddressed.

There is an ongoing discussion about this in https://github.com/WICG/netinfo/issues/41. I don't think we need to block this i2s on that, and I'd appreciate your input on that thread.

---------

Stepping back, a few points:

Given that most of our users coming online are on slow (~2G~3G) connections, we need to enable developers to make the right tradeoffs and optimization decisions. The list of interventions that we (Chrome) trigger on such connections continues to grow -- and is based on the same signals we're exposing here -- and we need to give developers the signals and tools to do the right thing, such that we can stop adding and forcing these interventions on their behalf. Also, as a side note, I don't think this is a pain shared equally by all browsers, which is reflected in varying levels (read, lack of.. for the most part) of engagement in this space.

1. Chrome for Android shipped support for downlinkMax in M48.
2. This i2s uses (1) as a building block to enable network quality signals.
3. We're still trying to figure out if and how we can report "metered" signals -- this doesn't block (2).

If there are still gaps in the spec definitions for the signals we're proposing here, I'm happy to iterate on that -- please file bugs on GH! So far, I believe we've addressed all the issues raised above.

On Mon, May 22, 2017 at 11:38 AM, Philip Jägenstedt <foolip@chromium.org> wrote:

> There's no hard and fast rule for what blocks shipping and for how long, so whenever you think that all that can be said has been said, if there's still no conclusion, please poke this thread.

Poking it now :-). Personally, I'd like to push for a higher sense of urgency here for reasons stated above.
On Tue, May 23, 2017 at 1:42 PM, <tba...@google.com> wrote:

> On Monday, May 22, 2017 at 4:45:04 PM UTC-7, Martin Thomson wrote:
>
>> On Mon, May 22, 2017 at 9:01 PM, Yoav Weiss <yo...@yoav.ws> wrote:
>>
>>> The theoretical speed of the local link is rarely a good proxy for that. The best - and arguably only - way to arrive at an end-to-end estimate is to do what video sites already do: use the link then measure and adapt.
>>
>> That's feasible for video quality adaptation, but not for images, alternative content, etc.
>
> How do you think that the browser will produce this estimate, if not using the same technique? Some sites are beefy enough that they get out of slow start, but many source content from other origins in ways that ensure that the estimate is guaranteed to be bad.

The current implementation in Chromium computes throughput across requests (irrespective of their origin). The algorithm is a bit heuristic -- a throughput observation is taken when the network is predicted to be saturated. This prediction can be made on the basis of the number of requests in flight, or by looking at changes in the packet loss rate, etc.
The chromestatus entry says no signal from other web browsers, but it looks like the signals are at least "concerned". It also lists positive signals from web developers - can you point at some statements I could read to better understand the use cases?

It looks a bit like a service could already use existing signals (mobile UA, IP from a range that belongs to a mobile carrier) to deduce a fair bit of information, so I share Mozilla's privacy concerns here.

On the other hand, I wonder whether e.g. an ad network that wants to push a large video would refrain from doing so if they had a signal that there's limited bandwidth, or whether we should rather go in the direction of feature policies and tell a site that it doesn't get to download more than xxx KB of media, and block it if it tries to?
The downlinkMax attribute _does_ report more fine-grained information, of course. But I still see no need for distinguishing between the various kinds of "wifi", or the various forms of "ethernet". There might be something to say about the different "bluetooth"s. But is there a practical reason to distinguish between CDMA and 1xRTT, say?
I, personally, would be much happier if we had a few buckets for downlinkMax, instead of a huge table of lots of possible values. For the use cases I've seen described, that would be sufficient.
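[Editor's note: a bucketed scheme like the one suggested could look something like this; the bucket boundaries and names are purely hypothetical, not from any spec or proposal.]

```javascript
// Hypothetical coarse bucketing of downlinkMax (in Mbps) into a few
// tiers, instead of the spec's large table of technology-specific
// values. All boundaries and tier names here are illustrative.
function downlinkBucket(downlinkMaxMbps) {
  if (downlinkMaxMbps < 0.5) return 'very-slow';
  if (downlinkMaxMbps < 2) return 'slow';
  if (downlinkMaxMbps < 10) return 'moderate';
  return 'fast';
}

console.log(downlinkBucket(0.384)); // a 2G-class link
console.log(downlinkBucket(100));   // a wifi/ethernet-class link
```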
On Tue, May 30, 2017 at 3:14 PM Jochen Eisinger <joc...@chromium.org> wrote:

> The chromestatus entry says no signal from other web browsers, but it looks like the signals are at least "concerned". It also lists positive signals from web developers - can you point at some statements I could read to better understand the use cases?

Facebook was also involved in the API discussions, so it seems like they are supportive.

downlinkMax was added in a later intent, and then there appears to have been interest for FFOS, and +Marcos Caceres was mentioned and included in the thread.

It looks to me like the 3 new attributes address some of the shortcomings of the 2 existing attributes, and are orthogonal in their definitions rather than building upon the old ones.
LGTM2, with the caveat that it's not an expression of confidence that there's not a privacy problem here, I will leave that to privacy review and https://github.com/WICG/netinfo/issues/58.
On June 5, 2017 at 4:48:54 PM, Jochen Eisinger (joc...@chromium.org) wrote:
> Have we considered exposing whether the user configured the OS to consider
> the connection as metered, and whether the browser will apply interventions
> for slow networks or not?
(Editor's hat on...)
I'm not a networking expert, so I don't have any opinion about the new
attributes that have been added to the spec (apart that I trust Ilya,
Yoav, et al., who are experts, to do the due diligence on the use
cases).
Regarding "metered": those of us that worked on the spec have considered it in the past. However, people wanted some kind of solution whereby you could magically determine if the connection was tethered. We should just stop and give up on trying to find that magic solution.

So for metered, we should just agree to use the explicit signal from the OS (like the one you can set in Windows 10 on a connection-by-connection basis).
Rick pinged me on the intents spreadsheet; however, I still feel I don't have all the information I'd need to approve this.

We've shipped a version of the API that other browser vendors are actively opposed to (issue 60), and the extension to the surface isn't ultimately answering the question of whether your connection is metered or not, and exposes cross-origin information (issue 58).
On Wed, May 31, 2017 at 7:10 AM, Philip Jägenstedt <foo...@chromium.org> wrote:

> downlinkMax was added in a later intent, and then there appears to have been interest for FFOS, and +Marcos Caceres was mentioned and included in the thread.
>
> It looks to me like the 3 new attributes address some of the shortcomings of the 2 existing attributes, and are orthogonal in their definitions rather than building upon the old ones.

Not entirely. In the absence of recent (end-to-end) network activity data, effectiveType does fall back to the downlinkMax value of the first network hop. I think the underlying question in some of the above discussions is: the ECT attributes we're discussing in this i2s improve the signal we expose to developers by taking into account both the end-to-end and first-hop properties of the client; as such, do we still need to expose the first hop as a standalone signal?

My personal take: in the short~medium term, yes -- NetInfo.type usage is ~0.9% [1] and NetInfo.downlinkMax ~0.5% [2]. In the longer term, maybe not: if we can drive adoption of the ECT attributes, perhaps we can sunset type and downlinkMax down the road.