Seeking net shepherd for Resource Timing size fields


Adam Rice

Jun 7, 2016, 12:05:43 AM
to net-dev
Hello net folks,

I am working to add three new size fields to the PerformanceResourceTiming API (see issue 467945 and the spec at https://www.w3.org/TR/resource-timing/).

There are two size fields that are not currently exported by net: the "transferSize" field, which is the total byte size of the HTTP response including same-origin redirects, and the "encodedBodySize" field, which is the size of the body prior to removing content compression. Some (relatively small) changes will be needed in net to export these values.
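For concreteness, here is a rough sketch of how the three fields relate for a single, non-redirected response (a model only, not the implementation; the header byte count is an assumed example value, and gzip stands in for whatever content coding the response used):

```typescript
import { gzipSync } from "node:zlib";

// decodedBodySize: the body after content decoding.
// encodedBodySize: the body as it crossed the wire (still compressed).
// transferSize: response headers plus the encoded body.
const body = "console.log('hi');".repeat(200);

const decodedBodySize = Buffer.byteLength(body);
const encodedBodySize = gzipSync(Buffer.from(body)).length;
const responseHeaderBytes = 350; // assumed; varies per response
const transferSize = responseHeaderBytes + encodedBodySize;
```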

I am currently at the stage of finishing up the design and starting the implementation.

I am hoping someone from net will take a look at the design doc: https://docs.google.com/document/d/1ckL-rKLFRsdI4nn1golvQ6I1zRIvxgFkDXMrZb8KduY/edit

If the same person later did the code reviews, that would save time.

Thanks,
Adam

Ryan Sleevi

Jun 7, 2016, 12:49:30 PM
to Adam Rice, net-dev
I would suggest that the problem may be more complex than you've suggested in the design doc. I don't know if I'm a good person to be the reviewer, however, because I'm not sure what the 'right answer' is. Perhaps rdsmith@, gavinp@, or mmenke@ would know?

Consider the case of 'encodedBodySize'. My understanding (perhaps incorrect) of our current cache implementation is that the entry we write to the cache is after filters have been done. This is particularly important for the SDCH case, since the SDCH dictionary is an independent resource and things would get all sorts of 'hinky' if we were decompressing on the fly (again, this could be an incorrect understanding, and perhaps we're already in the Hinky Hinterlands and trying to fix it). As a result, we don't store the encodedBodySize in the disk cache - because we're always storing the entry decoded after content translations.

I'm a little nervous about your proposal for handling transferSize. Innately, it feels like it's the wrong layer - but I'm not sure what the concrete suggestion should be. //net is generally unaware of the CORS aspects of Fetch, intentionally, while it seems like transferSize needs to handle that. Changing the timing of Drain() so that you can count the body bytes against the request also seems... less than ideal... even though it's clear why you desire that to be the case (since, at least at present, we still drain the socket in the event we might be able to use it for a keep-alive). There's something in my gut that tells me it's going to have any number of edge cases with non-success response codes, but I can't put my finger on the concrete concern. It may introduce flake where there was none - in the past, we could continue with the next request, and the Drain() cost would be counted against returning the socket to the pool (IIRC), whereas now, if the Drain() fails, it'd manifest as a request job failure (because it'd be handled as a failure before being handled as a redirect). It's also not clear how 'synthetic' redirects (e.g. HSTS) will behave - we inject some data in via a synthetic job, but it's not actually transferred over the wire.



--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CAHixhFrqBGXeQxseM3U1pFi-bZWY3d1XWcAOsykuKCMtxg7WVQ%40mail.gmail.com.

Randy Smith

Jun 7, 2016, 12:57:01 PM
to Ryan Sleevi, Adam Rice, net-dev
On Tue, Jun 7, 2016 at 12:48 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
I would suggest that the problem may be more complex than you've suggested in the design doc. I don't know if I'm a good person to be the reviewer, however, because I'm not sure what the 'right answer' is. Perhaps rdsmith@, gavinp@, or mmenke@ would know?  

Consider the case of 'encodedBodySize'. My understanding (perhaps incorrect) of our current cache implementation is that the entry we write to the cache is after filters have been done. This is particularly important for the SDCH case, since the SDCH dictionary is an independent resource and things would get all sorts of 'hinky' if we were decompressing on the fly (again, this could be an incorrect understanding, and perhaps we're already in the Hinky Hinterlands and trying to fix it). As a result, we don't store the encodedBodySize in the disk cache - because we're always storing the entry decoded after content translations.

Nope, sadly, we currently reside in Hinky Hinterlands (i.e. what's stored in the cache is the encoded value, which we decode on the fly, and while that respects the network stack's architectural layering, it causes all the problems Ryan alludes to).

(Not responding to the rest of this email thread because I don't, at least currently, have the background context.)

-- Randy


 

Chris Bentzel

Jun 7, 2016, 1:55:37 PM
to Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
Adding bengr+tbansal in case they have some ideas, since they have done some work on data accounting.

David Benjamin

Jun 7, 2016, 2:11:56 PM
to Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
On Tue, Jun 7, 2016 at 1:55 PM Chris Bentzel <cben...@chromium.org> wrote:
Adding bengr+tbansal in case they have some ideas, since they have done some work on data accounting.


On Tue, Jun 7, 2016 at 12:57 PM Randy Smith <rds...@chromium.org> wrote:
On Tue, Jun 7, 2016 at 12:48 PM, Ryan Sleevi <rsl...@chromium.org> wrote:

Consider the case of 'encodedBodySize'. My understanding (perhaps incorrect) of our current cache implementation is that the entry we write to the cache is after filters have been done.

No, as far as HTTP is concerned, the resource *is* the gzipped body, not the uncompressed stuff. If the server is asked to slice via Range, it slices the gzipped body. If the server takes the same response and gzips it differently, it is considered to have changed the resource. Because of this, HTTP caches basically must store the pre-filter (encoded) data, otherwise they could not implement Range.

(Yes, this means HTTP APIs which undo Content-Encoding, like ours and what the web prescribes, basically do not work if you attempt to do a Range request on a Content-Encoding: gzip resource. It's kind of silly.)
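David's point about Range can be demonstrated directly: a byte range of a gzip stream is not itself a valid gzip stream, which is why undoing Content-Encoding and serving ranges don't compose. A small sketch using Node's zlib:

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

// As far as HTTP is concerned, the resource is the gzipped bytes;
// Range slices those bytes.
const encoded = gzipSync(Buffer.from("x".repeat(1000)));
const rangeSlice = encoded.subarray(0, Math.floor(encoded.length / 2));

// The full encoded body decodes fine...
const roundTrip = gunzipSync(encoded).toString();

// ...but a byte range of it is not a standalone gzip stream.
let sliceDecodes = true;
try {
  gunzipSync(rangeSlice);
} catch {
  sliceDecodes = false; // truncated gzip data fails to decode
}
```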

Transfer-Encoding, in contrast, *is* undone before the cache and would factor in here. Though I don't believe Transfer-Encoding gzip is a thing that actually exists on the web.
 

Adam Rice

Jun 7, 2016, 9:44:36 PM
to Ryan Sleevi, net-dev
Thanks for your feedback. The issue with encodedBodySize and SDCH has been adequately covered by others so I won't rehash it here.

On 8 June 2016 at 01:48, Ryan Sleevi <rsl...@chromium.org> wrote:
I'm a little nervous about your proposal for handling transferSize. Innately, it feels like it's the wrong layer - but I'm not sure what the concrete suggestion should be. //net is generally unaware of the CORS aspects of Fetch, intentionally, while it seems like transferSize needs to handle that. Changing the timing of Drain() so that you can count the body bytes against the request also seems... less than ideal... even though it's clear why you desire that to be the case (since, at least at present, we still drain the socket in the event we might be able to use it for a keep-alive). There's something in my gut that tells me it's going to have any number of edge cases with non-success response codes, but I can't put my finger on the concrete concern. It may introduce flake where there was none - in the past, we could continue with the next request, and the Drain() cost would be counted against returning the socket to the pool (IIRC), whereas now, if the Drain() fails, it'd manifest as a request job failure (because it'd be handled as a failure before being handled as a redirect). It's also not clear how 'synthetic' redirects (e.g. HSTS) will behave - we inject some data in via a synthetic job, but it's not actually transferred over the wire.

The issue with redirects and CORS has been bothering me for a couple of days now. It may be that the accumulation actually needs to be done inside the renderer. But I don't like sending information inside the sandbox that should not be exposed to the web page. I will speak to our resident CORS expert.

I don't actually know whether it makes any difference whether we include the body of redirects in transferSize or not. I have added a note to that effect to the design doc.

My understanding is that synthetic redirects never set TotalReceivedBytes to a non-zero value, and so shouldn't change the result.
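A toy model of that accumulation (hypothetical names; the real logic lives in //net's job machinery): if transferSize is the sum of each job's received-byte count across a redirect chain, a synthetic job that reports zero bytes drops out of the total naturally.

```typescript
// Hypothetical model: per-job byte counts summed across a redirect chain.
interface JobBytes {
  totalReceivedBytes: number; // 0 for synthetic jobs (e.g. an HSTS redirect)
}

function accumulateTransferSize(chain: JobBytes[]): number {
  return chain.reduce((sum, job) => sum + job.totalReceivedBytes, 0);
}

// An HSTS upgrade injects a synthetic redirect with no wire bytes:
const chain = [
  { totalReceivedBytes: 0 },    // synthetic http -> https redirect
  { totalReceivedBytes: 5120 }, // the real response
];
const transferSize = accumulateTransferSize(chain);
```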

Thanks,
Adam

Matt Menke

Jun 7, 2016, 10:52:45 PM
to David Benjamin, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
Transfer-Encoding is treated as Content-Encoding, and I believe we even allow things like "Transfer-Encoding: gzip", "Content-Encoding: gzip" on a file, and expect it to only be gzipped once.  Isn't the web great?

What does body size mean, in the context of QUIC/SPDY?  Does it include the framing around the body or not?  What about chunked encoding?  If we're using a SPDY proxy, does it include the SPDY frames, then?  Does it even make sense to send this information in the case of a proxy?  We've long since given up on exposing whether a proxy is in use, of course, but we're giving the webpage information that doesn't seem particularly relevant to them.  And if there are auth headers, we're potentially exposing the length of auth passwords, aren't we?

Given that behavior varies here for redirects across origins, I'm also not sure we really want URLRequest to track this.  The behavior seems so closely tied to a weird HTML API that it seems to make more sense to accumulate the data further down the stack, maybe even in the renderer.

Matt Menke

Jun 7, 2016, 10:59:06 PM
to David Benjamin, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
Ahh, I guess we only give response header size, not request header size, so scratch the auth password concern, though with HTTP requests over a proxy, they still get auth challenge lengths, which still seems really weird.

David Benjamin

Jun 7, 2016, 11:41:31 PM
to Matt Menke, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
On Tue, Jun 7, 2016 at 10:59 PM Matt Menke <mme...@chromium.org> wrote:
Ahh, I guess we only give response header size, not request header size, so scratch the auth password concern, though with HTTP requests over a proxy, they still get auth challenge lengths, which still seems really weird.

On Tue, Jun 7, 2016 at 10:52 PM, Matt Menke <mme...@chromium.org> wrote:
Transfer-Encoding is treated as Content-Encoding, and I believe we even allow things like "Transfer-Encoding: gzip", "Content-Encoding: gzip" on a file, and expect it to only be gzipped once.  Isn't the web great?

Fortunately, we don't advertise a TE header at all, and TE itself seems to be dead as of HTTP/2.

"""
HTTP/2 does not use the Connection header field to indicate connection-specific header fields; in this protocol, connection-specific metadata is conveyed by other means. An endpoint MUST NOT generate an HTTP/2 message containing connection-specific header fields; any message containing connection-specific header fields MUST be treated as malformed (Section 8.1.2.6).

The only exception to this is the TE header field, which MAY be present in an HTTP/2 request; when it is, it MUST NOT contain any value other than "trailers".
"""

Matt Menke

Jun 7, 2016, 11:50:53 PM
to David Benjamin, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Adam Rice, net-dev
Oh, I was wrong - we just ignore Transfer-Encoding.  We've never supported it.  Hrm....

Adam Rice

Jun 8, 2016, 1:32:41 AM
to Matt Menke, David Benjamin, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, net-dev, Ilya Grigorik
+Ilya as he is active on the spec.

On 8 June 2016 at 11:52, Matt Menke <mme...@chromium.org> wrote:
What does body size mean, in the context of QUIC/SPDY?  Does it include the framing around the body or not?  What about chunked-encoding?  If we're using a SPDY proxy, does it include the SPDY frames, then?  Does it even make sense to send this information in the case of a proxy?  We've long since given up on exposing a proxy is in use, of course, but we're given the webpage information that doesn't seem particularly relevant to them.  And if there are auth headers, we're potentially exposing the length of auth passwords, aren't we?

Body size means the same regardless of protocol; I assume you mean transfer size.

The relevant language from the spec is:

This attribute should include HTTP overhead (such as HTTP/1.1 chunked encoding and whitespace around header fields, including newlines, and HTTP/2 frame overhead, along with other server-to-client frames on the same stream), but should not include lower-layer protocol overhead (such as TLS [RFC5246] or TCP).

I am assuming we will apply the HTTP/2 language to SPDY, i.e. it includes framing. GetTotalReceivedBytes() looks like it does the right thing here, but I haven't verified it beyond reasonable doubt.

The spec could be read to imply that encryption overhead and TCP-equivalent functionality should not be included in the transferSize for QUIC. However, I feel this is an overly-fussy reading and the spec is intended to be pragmatic on this point.

My interpretation of the spec is that yes, chunked-encoding overhead is included. This appears to correspond with what the HttpStreamParser class actually does.
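To make the chunked-encoding point concrete, the extra bytes are the hex chunk-size lines, their CRLFs, and the zero-length terminator chunk. A hypothetical counter (ignoring trailers and chunk extensions):

```typescript
// Bytes that HTTP/1.1 chunked framing adds on top of the body itself:
// each chunk is "<hex-size>\r\n<data>\r\n", and the stream ends "0\r\n\r\n".
function chunkedFramingOverhead(chunkSizes: number[]): number {
  let overhead = 0;
  for (const size of chunkSizes) {
    overhead += size.toString(16).length; // hex chunk-size line
    overhead += 2 + 2;                    // CRLF after size, CRLF after data
  }
  return overhead + 5;                    // terminating "0\r\n\r\n"
}
```

Per the spec language above, these bytes count toward transferSize but not encodedBodySize.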

I dreamt that the spec said that proxy overhead is included when the request is sent to the proxy in plain text, but not when it is tunnelled. But now I can find no trace of that. I think this is how GetTotalBodySize() actually works.

It's certainly weird that transferSize can expose information about whether a proxy was used, but I don't consider it pathological.

Given that cross-origin access to these fields is opt-in via the Timing-Allow-Origin response header, I think the security concerns of leaking the size of auth challenges and Set-Cookie headers are not significant, but I would be interested to be proven wrong.
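A sketch of the opt-in gate being relied on here (a hypothetical helper; the spec's actual timing allow check has more cases, e.g. null origins and redirects):

```typescript
// Size fields are exposed cross-origin only when the response carries a
// matching Timing-Allow-Origin header (or "*"); otherwise they read as 0.
function sizeFieldsExposed(documentOrigin: string, resourceOrigin: string,
                           timingAllowOrigin?: string): boolean {
  if (documentOrigin === resourceOrigin) return true; // same-origin: always
  if (!timingAllowOrigin) return false;               // no opt-in header
  return timingAllowOrigin
    .split(",")
    .map(v => v.trim())
    .some(v => v === "*" || v === documentOrigin);
}
```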

Thanks,
Adam

tba...@google.com

Jun 8, 2016, 3:47:25 AM
to net-dev, mme...@chromium.org, davi...@chromium.org, cben...@chromium.org, rds...@chromium.org, rsl...@chromium.org, be...@chromium.org, tba...@chromium.org, igri...@google.com, ri...@google.com
One problem with GetTotalReceivedBytes() may be that currently it includes the framing overhead for SPDY but not for QUIC.

For the data use experiment, the goal was to attribute the bytes received at the network layer (as measured by the network carrier) to the individual URL requests. The goal here is different, so the approximate algorithms that we used are probably not needed here.

Adam Rice

Jun 9, 2016, 2:50:31 AM
to tba...@google.com, net-dev, Matt Menke, David Benjamin, Chris Bentzel, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Ilya Grigorik
I have added a mention of adding the framing overhead for QUIC to the "Future Work" section. Would it be acceptable if the existing GetTotalReceivedBytes() started including QUIC framing bytes? Or would we need a new API?

Chris Bentzel

Jun 9, 2016, 1:25:13 PM
to Adam Rice, tba...@google.com, net-dev, Matt Menke, David Benjamin, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org, Ilya Grigorik
That sounds like a spec issue? Happy to clarify here, but also possible on the Resource Timing/WebPerf WG.

Ilya Grigorik

Jun 10, 2016, 3:08:11 PM
to Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, David Benjamin, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org
On Thu, Jun 9, 2016 at 1:25 PM, Chris Bentzel <cben...@chromium.org> wrote:
That sounds like a spec issue? Happy to clarify here, but also possible on the Resource Timing/WebPerf WG.

Chris, which part are you referring to? Specifically calling out in the spec that QUIC overhead should be accounted for?

I think Adam's question was more specifically oriented towards our internal API which omits the framing overhead for QUIC.. (but should, I think :)) 

Ryan Sleevi

Jun 10, 2016, 3:15:38 PM
to Ilya Grigorik, Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, David Benjamin, Randy Smith, Ryan Sleevi, Ben Greenstein, tba...@chromium.org
On Fri, Jun 10, 2016 at 12:07 PM, 'Ilya Grigorik' via net-dev <net...@chromium.org> wrote:
Chris, which part are you referring to? Specifically calling out in the spec that QUIC overhead should be accounted for?

I think Adam's question was more specifically oriented towards our internal API which omits the framing overhead for QUIC.. (but should, I think :)) 

I believe the spec should be clearer, such that it's unambiguous what is intended (e.g. are framing overheads included), and if so, that can guide the implementation to match the spec. At least, until a time where we can no longer account for such framing overheads reasonably due to multiplexing, multipathing, etc... but at least it's a living spec :) 

David Benjamin

Jun 10, 2016, 3:23:31 PM
to rsl...@chromium.org, Ilya Grigorik, Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, Randy Smith, Ben Greenstein, tba...@chromium.org
We already can't really account for such things. We might receive, say, a TLS record with half of one HTTP/2 frame and half of the next. Which request is that counted for?

What problem is the spec actually trying to solve? Per-request bandwidth-counting is basically meaningless, so depending on what it's counted for, it'll need to make different decisions. (Is it okay for bandwidth to go uncounted? Is it okay to double-count bandwidth? Are you trying to compare the cost of different requests? Are you trying to compare bandwidth usage across UAs? Is it just some vague guesstimate to feed into this or that heuristic, so we're okay with inconsistent and systematic inaccuracies across the board? etc.)
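For a sense of where a figure like "nearly 40x" comes from, here is some back-of-envelope arithmetic (assumed, ballpark numbers; exact per-record expansion depends on the TLS version and cipher suite, and CBC suites with padding push it higher):

```typescript
// One TLS 1.2 record: a 5-byte record header, plus per-record AEAD
// expansion (8-byte explicit nonce + 16-byte tag for AES-GCM).
const recordHeader = 5;
const aeadExpansion = 8 + 16;
const perRecordOverhead = recordHeader + aeadExpansion; // 29 bytes per record

// Pathological framing: one plaintext byte per record means roughly
// 30 wire bytes for every byte the spec's transferSize would report.
const wireBytesPerPlaintextByte = 1 + perRecordOverhead;
```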

David

Ilya Grigorik

Jun 14, 2016, 4:12:05 AM
to David Benjamin, Ryan Sleevi, Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, Randy Smith, Ben Greenstein, tba...@chromium.org
Wait, I'm confused.. The definition we provide in the spec excludes "lower layer" overhead - e.g. TLS, TCP, etc. 


"This attribute should include HTTP overhead (such as HTTP/1.1 chunked encoding and whitespace around header fields, including newlines, and HTTP/2 frame overhead, along with other server-to-client frames on the same stream), but should not include lower-layer protocol overhead (such as TLS [RFC5246]or TCP)."


FWIW, I think it's reasonable to read QUIC overhead as ~HTTP overhead.

David Benjamin

Jun 14, 2016, 12:17:22 PM
to Ilya Grigorik, Ryan Sleevi, Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, Randy Smith, Ben Greenstein, tba...@chromium.org
On Tue, Jun 14, 2016 at 4:12 AM Ilya Grigorik <igri...@google.com> wrote:
Wait, I'm confused.. The definition we provide in the spec excludes "lower layer" overhead - e.g. TLS, TCP, etc. 


"This attribute should include HTTP overhead (such as HTTP/1.1 chunked encoding and whitespace around header fields, including newlines, and HTTP/2 frame overhead, along with other server-to-client frames on the same stream), but should not include lower-layer protocol overhead (such as TLS [RFC5246]or TCP)."


FWIW, I think it's reasonable to read QUIC overhead as ~HTTP overhead.

QUIC does things at many layers. QUIC folks can elaborate further, but it has an embedding of HTTP/2 (which, sure, is probably roughly HTTP overhead), but it also has a security layer (TLS) and handles packet loss, reorder, congestion control, etc. (TCP).

So you would have to cut QUIC's overhead in the middle. Which, sure, that's a thing one can do, but I have to ask again, what concrete problem is the spec trying to solve here? I'm puzzled what use this bandwidth measurement is. (For instance, I could pathologically chunk my stream into records at the TLS layer to cause the real bandwidth used to be nearly 40x the reported number or so. Though no one would do that since it's crazy.)

Ryan Sleevi

Jun 14, 2016, 12:18:57 PM
to Ilya Grigorik, David Benjamin, Ryan Sleevi, Chris Bentzel, Adam Rice, Tarun Bansal, net-dev, Matt Menke, Randy Smith, Ben Greenstein, tba...@chromium.org
On Tue, Jun 14, 2016 at 1:11 AM, Ilya Grigorik <igri...@google.com> wrote:
Wait, I'm confused.. The definition we provide in the spec excludes "lower layer" overhead - e.g. TLS, TCP, etc. 


"This attribute should include HTTP overhead (such as HTTP/1.1 chunked encoding and whitespace around header fields, including newlines, and HTTP/2 frame overhead, along with other server-to-client frames on the same stream), but should not include lower-layer protocol overhead (such as TLS [RFC5246]or TCP)."


FWIW, I think it's reasonable to read QUIC overhead as ~HTTP overhead.

I don't, and I think the issue is the spec supports both our readings.

In your view, QUIC is like HTTP/2 - just another framing mechanism - and thus its overhead should be included, as framing.
In my view, QUIC is like TLS or TCP - providing reliable encrypted stream semantics - and thus its overhead should not be included, as it's lower-layer protocol overhead.

In both cases, we're right. In both cases, we're wrong. The 'bug' is the spec language assuming there is a clear split, when in cases like QUIC, there isn't. Whether that's a fault of QUIC (for blurring the lines) or the spec (for assuming defined lines), I'm not sure, but I don't feel the spec unambiguously supports your conclusion, so there's something to be improved here. 

Ben Maurer

Jun 14, 2016, 12:44:36 PM
to net-dev
Hey

Wanted to jump in here with some thoughts about the kinds of use cases FB would have here, and that I think other developers might have.

IMHO we will mostly rely on this info for estimates of resource size. We understand that framing introduces overhead at multiple levels. We'd prefer an API that gives the most accurate possible estimate of size. If that estimate changes over time or across browsers, that's OK, so long as we only get more accurate. If a browser could reasonably account for overhead from QUIC or SSL, we'd love to have the spec change to allow for that to be accounted for.

Ryan Sleevi

Jun 14, 2016, 12:48:16 PM
to Ben Maurer, net-dev
I guess the question is less "what" (do you want) and more "why" (although it was phrased as "what do you want it for"). Understanding how this information could be, or is intended to be, used is very helpful in shaping how much work we do, because I suppose the point is that we cannot, now or ever, reasonably account for that overhead; given that, what problem do we fail to solve if we decide to be more decisive, less waffly, and explicit about that?

Ilya Grigorik

Jun 15, 2016, 4:38:41 AM
to Ryan Sleevi, Ben Maurer, net-dev
On Tue, Jun 14, 2016 at 6:17 PM, David Benjamin <davi...@chromium.org> wrote:
I'm puzzled what use this bandwidth measurement is. (For instance, I could pathologically chunk my stream into records at the TLS layer to cause the real bandwidth used to be nearly 40x the reported number or so. Though no one would do that since it's crazy.)

Identifying such pathological cases is precisely one of the use cases -- e.g. due to a misbehaving upstream proxy/CDN node/whatever.

On Tue, Jun 14, 2016 at 6:18 PM, Ryan Sleevi <rsl...@chromium.org> wrote:
In both cases, we're right. In both cases, we're wrong. The 'bug' is the spec language assuming there is a clear split, when in cases like QUIC, there isn't. Whether that's a fault of QUIC (for blurring the lines) or the spec (for assuming defined lines), I'm not sure, but I don't feel the spec unambiguously supports your conclusion, so there's something to be improved here. 

Fair enough. I guess we'd have to explicitly name QUIC and address it in the spec. 

If we want "apples to apples" with h1/h2, I guess we'd want to exclude the QUIC common header [1] and focus on frame packets [2] associated with the stream?




Ben Maurer

Jun 15, 2016, 6:38:26 AM
to Ryan Sleevi, net-dev
Hey --

So there are three main use cases for us, roughly in order of priority:

(1) Telling if the request hit the network or not. One of the biggest gaps in our data right now is that we can't definitively tell whether a request was served from cache. transferSize will be very useful here because it will let us figure out whether a request had to go to a server (transferSize > 0) or was served from the cache (transferSize = 0). It will also give us a fairly good signal that a revalidation occurred (transferSize << encodedBodySize).

(2) Help us account for the bandwidth usage of large resources. Many of our resources are large enough that protocol overhead is not substantial. While it's theoretically possible for us to calculate encodedBodySize by fetching the URL in our analysis tools, there are some issues here -- first, doing this can be expensive, since we have to request the image from our own CDN, slowing down the tools. Also, we'd have to figure out how to account for things like "was the user using brotli or gzip". decodedBodySize is also useful for understanding the amount of JS/CSS/HTML that needs to be parsed.

(3) Taking protocol overhead into account. The more accurately we are able to measure the true bandwidth usage of a request the better we can do at prioritizing the reduction of that overhead. I think HTTP headers are probably the biggest thing to capture here. I know there was a lot of discussion around redirects -- at least for us that's not really that important. Most of the cost of redirects is in the round trips, not in the size of headers/body.

Put another way, I think in terms of how it would impact the types of analysis we do I'd stack rank the features in the following priority:

(1) Being able to differentiate cache hit vs network request (this could be implemented via nextHopProtocol as well as via transferSize)
(2) Getting encodedBodySize/decodedBodySize
(3) Getting HTTP header information in transferSize
(4) Accounting for other types of overhead in transferSize (HTTP framing, QUIC, SSL [if we changed the spec to allow that], redirects, etc.).

We'd much prefer to see rough metrics ship sooner than waiting for perfection here. Any churn caused by minor changes in metrics due to increasing the accuracy of transferSize is far outweighed by the benefits of getting critical information earlier.
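Ben's point (1) translates into a simple client-side classifier (a sketch; the thresholds and the cache/revalidation interpretation are assumptions layered on the spec, not guaranteed semantics):

```typescript
// Classify how a resource was served using the proposed size fields.
type Source = "cache" | "revalidated" | "network";

function classify(transferSize: number, encodedBodySize: number): Source {
  if (transferSize === 0) return "cache";                   // no bytes hit the network
  if (transferSize < encodedBodySize) return "revalidated"; // ~headers-only 304
  return "network";                                         // full fetch
}
```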

I'll be at BlinkOn this week and happy to talk to folks if they have questions about our use cases.

-b

David Benjamin

Jun 15, 2016, 12:03:10 PM
to Ilya Grigorik, Ryan Sleevi, Ben Maurer, net-dev
On Wed, Jun 15, 2016 at 4:38 AM 'Ilya Grigorik' via net-dev <net...@chromium.org> wrote:
On Tue, Jun 14, 2016 at 6:17 PM, David Benjamin <davi...@chromium.org> wrote:
I'm puzzled what use this bandwidth measurement is. (For instance, I could pathologically chunk my stream into records at the TLS layer to cause the real bandwidth used to be nearly 40x the reported number or so. Though no one would do that since it's crazy.)

Identifying such pathological cases is precisely one of the use cases -- e.g. due to a misbehaving upstream proxy/CDN node/whatever.

The spec does not serve that use case. As specified, you won't be able to identify it because you can't measure this without the TLS overhead. And accounting for the TLS overhead on a per-request basis is not a well-defined notion because of multiplexing.

David

Ryan Sleevi

Jun 16, 2016, 5:37:01 PM
to Ben Maurer, Ryan Sleevi, net-dev
On Wed, Jun 15, 2016 at 3:38 AM, Ben Maurer <ben.m...@gmail.com> wrote:
(4) Accounting for other types of overhead in transferSize (HTTP framing, QUIC, SSL [if we changed the spec to allow that], redirects, etc.).

Thanks for replying, Ben. I totally agree with you that the perfect is the enemy of the good. If the spec made no mention of overheads or trying to include them, I think that'd be great!

I think for all of the use cases you mentioned on this last point, this is all something that can be recorded at the server - save for redirect overhead (which I agree is less about bandwidth and more about latency). For example, the TLS overhead in sending resources is dictated by the server. The HTTP/2 overhead and header compression is dictated by the server (although I suppose also by the number of requests the client has in flight, but any given request must be compressed by the server). The same applies to QUIC. So it's unclear what the intrinsic value-add of measuring it on the browser side would be - the uncharitable, but maybe entirely accurate, view is that it makes browser developers do work that server developers could/should do.

Ilya, given the fundamental issues with accounting for "overhead" or quantifying things other than the request, would you be supportive of removing any such mentions from the spec, and of having the Chromium implementation ignore those general overheads in favor of specific, quantifiable measurements?

Ilya Grigorik

unread,
Jun 22, 2016, 2:02:56 AM6/22/16
to Ryan Sleevi, Ben Maurer, net-dev
On Thu, Jun 16, 2016 at 2:36 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Wed, Jun 15, 2016 at 3:38 AM, Ben Maurer <ben.m...@gmail.com> wrote:
(4) Accounting for other types of overhead in transferSize (HTTP framing, QUIC, SSL [if we changed the spec to allow that], redirects, etc).

Thanks for replying, Ben. I totally agree with you that the perfect is the enemy of the good. If the spec made no mention of overheads or trying to include them, I think that'd be great!

I think for all of the use cases you mentioned on this last point, this is all something that can be recorded at the server - save for redirect overhead (which I agree is less about bandwidth and more about latency). For example, the TLS overhead in sending resources is dictated by the server. The HTTP/2 overhead and header compression is dictated by the server (although I suppose also by the number of requests the client has in flight, but any given request must be compressed by the server). The same applies to QUIC. So it's unclear what the intrinsic value-add of measuring it on the browser side would be - the uncharitable, but maybe entirely accurate, view is that it makes browser developers do work that server developers could/should do.

The value is that getting this information from the server may be impossible in many cases -- e.g. I use an upstream CDN/proxy which I do not control directly and/or unable to instrument. Most sites don't run their own edge servers...
 
Ilya, given the fundamental issues with accounting for "overhead" or quantifying things other than the request, would you be supportive of removing any such mentions from the spec, and of having the Chromium implementation ignore those general overheads in favor of specific, quantifiable measurements?

I agree that QUIC has some complications, but I'm not convinced that we should remove protocol overhead from the spec yet.. :) Also, FF has already shipped an implementation which is compatible with the current spec definition (although they don't have to worry about QUIC).

What about my earlier suggestion of counting QUIC common header bytes only? The extra complication here, with respect to the spec, is that QUIC is not an "official thing" as of yet. It's a Chrome-only thing and I'm not convinced that we need to block on spec bits as long as we agree on a reasonable solution amongst ourselves.. at least for the time being.

ig

Ben Maurer

unread,
Jun 22, 2016, 2:56:25 AM6/22/16
to Ilya Grigorik, Ryan Sleevi, net-dev
My suggestion would be to make the spec use language like this:

Implementations should strive to capture as much overhead as possible. Some types of overhead (e.g., complex protocols like QUIC, or the framing overheads of TCP and SSL) may be difficult or impossible to measure. Implementations must only count bytes that can be exclusively attributed to a request.

This type of language makes it clear that implementations should capture as much as possible. If an implementation can measure ssl or tcp overhead, they should do it! 

Ilya Grigorik

unread,
Jun 24, 2016, 7:34:49 PM6/24/16
to Ben Maurer, Ryan Sleevi, net-dev
On Tue, Jun 21, 2016 at 11:56 PM, Ben Maurer <ben.m...@gmail.com> wrote:
My suggestion would be to make the spec use language like this:

Implementations should strive to capture as much overhead as possible. Some types of overhead (e.g., complex protocols like QUIC, or the framing overheads of TCP and SSL) may be difficult or impossible to measure. Implementations must only count bytes that can be exclusively attributed to a request.

*shrug*.. I think that's too loose. QUIC is hard(er) because it integrates all the layers, but it's not "impossible".. 

> What about my earlier suggestion of counting QUIC common header bytes only?

^ are there any reasons why this wouldn't work?

Ryan Hamilton

unread,
Jun 24, 2016, 7:43:37 PM6/24/16
to Ilya Grigorik, Ben Maurer, Ryan Sleevi, net-dev
We should make sure that we do, roughly, the same thing for QUIC that we do for HTTP/22. So for example, in QUIC we can add in the STREAM_FRAME overhead just as we could add the HTTP/2 DATA frame overhead. But we should not count the QUIC public header overhead, or the encryption overhead, unless we are also counting the TLS and TCP/IP overhead. So when we say "QUIC common header bytes", does that refer to the STREAM_FRAME or the public header?
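For reference, the h2 side of that parity is simple to state: every HTTP/2 frame carries a fixed 9-byte header (RFC 7540, section 4.1), so attributing DATA-frame overhead to a response is just arithmetic. A sketch, assuming the sender fills frames up to the default SETTINGS_MAX_FRAME_SIZE:

```javascript
// Fixed HTTP/2 frame header size, per RFC 7540 section 4.1.
const H2_FRAME_HEADER = 9;

// Bytes attributable to a body delivered as DATA frames, assuming
// full frames up to maxFrameSize (default 16384). A zero-length
// body still costs one empty DATA frame with END_STREAM.
function h2DataBytes(bodyBytes, maxFrameSize = 16384) {
  const frames = Math.max(1, Math.ceil(bodyBytes / maxFrameSize));
  return bodyBytes + frames * H2_FRAME_HEADER;
}
```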

Ilya Grigorik

unread,
Jun 27, 2016, 12:16:03 PM6/27/16
to Ryan Hamilton, Ben Maurer, Ryan Sleevi, net-dev
O_o ... HTTP/22 -- do tell me more! :-) On a more serious note..

I think I may have been looking at an older draft before. Is this a reasonably up-to-date version: https://tools.ietf.org/html/draft-tsvwg-quic-protocol-02

For parity with the current h2 definition, I think we want to:
- Count all server-to-client frames for a particular stream (i.e., not limited to STREAM)
- Exclude public header [1] overhead for all such frames

Does that sound reasonable?
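In code, that accounting would look roughly like this. This is only a sketch of the proposal; the packet/frame object shape here is hypothetical, not a //net API.

```javascript
// Sum server-to-client frame bytes belonging to one stream,
// excluding the per-packet public header, per the proposal above.
function streamTransferBytes(packets, streamId) {
  let total = 0;
  for (const pkt of packets) {
    // pkt.publicHeaderBytes is deliberately not counted.
    for (const frame of pkt.frames) {
      if (frame.streamId === streamId) {
        total += frame.bytes; // frame header + payload
      }
    }
  }
  return total;
}

// Hypothetical capture: two packets carrying frames for streams 5 and 7.
const packets = [
  { publicHeaderBytes: 14,
    frames: [{ streamId: 5, bytes: 120 }, { streamId: 7, bytes: 40 }] },
  { publicHeaderBytes: 14,
    frames: [{ streamId: 5, bytes: 80 }] },
];
```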




 

Ryan Sleevi

unread,
Jun 27, 2016, 12:57:19 PM6/27/16
to Ilya Grigorik, Ryan Hamilton, Ben Maurer, Ryan Sleevi, net-dev


On Mon, Jun 27, 2016 at 9:15 AM, Ilya Grigorik <igri...@google.com> wrote:
Does that sound reasonable?


Has an actual use case been identified for that level of granularity? As far as I can tell, there hasn't been one.

It seems like there was agreement that leaving this off in the spec is fine, and revisiting it when there's a clearer use case and more defined need would be an acceptable solution.

(To your specific question, no, it doesn't sound reasonable, but I'm trying to find a productive path forward here) 

Ilya Grigorik

unread,
Jun 28, 2016, 7:30:17 PM6/28/16
to Ryan Sleevi, Ryan Hamilton, Ben Maurer, net-dev
On Mon, Jun 27, 2016 at 9:56 AM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Mon, Jun 27, 2016 at 9:15 AM, Ilya Grigorik <igri...@google.com> wrote:
Does that sound reasonable?

Has an actual use case been identified for that level of granularity? As far as I can tell, there hasn't been one.

Sorry, I guess it wasn't clear to me that we were trying to do so in this thread. Doing some thread archeology, past discussion on this:


FWIW, if you think current spec definition is not right, then we should open a discussion on the RT github repo. That said, I do think there are valid use cases that current definition captures and we should account for.

It seems like there was agreement that leaving this off in the spec is fine, and revisiting it when there's a clearer use case and more defined need would be an acceptable solution.

That wasn't my impression. For one, doing so would make our implementation incompatible with FF, which already shipped support for these attributes. Also, I think it would be weird for us to account for overhead in H2 but then omit it entirely in QUIC.. hence my question about omitting the public header. 

Matt Menke

unread,
Jun 29, 2016, 10:53:29 AM6/29/16
to Ilya Grigorik, Ryan Sleevi, Ryan Hamilton, Ben Maurer, net-dev
I skimmed over all of those but the w3 archive (why they insist on using such an incredibly bad interface for navigating archives has always been a mystery to me, unless they don't want people actually going through it).

The only two motivations I saw were "We don't know what CDNs are really doing", and a lovely chunked response from a defunct www.sun.com server:  https://gist.github.com/mnot/1138792

--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.