Primary eng (and PM) emails
las...@chromium.org, dsch...@chromium.org, b...@chromium.org, ians...@chromium.org
Summary
Remove the ability to receive, keep in memory, and use HTTP/2 and gQUIC push streams sent by the server. Send SETTINGS_ENABLE_PUSH = 0 at the beginning of every HTTP/2 and gQUIC connection to request that servers not send them.
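For illustration only, here is a minimal sketch of what this setting looks like when serialized, using Go's golang.org/x/net/http2 package rather than Chromium's actual C++ network stack:

```go
// Illustrative only: serializes a SETTINGS frame with ENABLE_PUSH = 0, the
// setting Chrome would send at the start of every HTTP/2 connection to ask
// the server not to open push streams.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, &buf)

	if err := framer.WriteSettings(
		http2.Setting{ID: http2.SettingEnablePush, Val: 0},
	); err != nil {
		panic(err)
	}

	// In a real client this frame immediately follows the connection preface.
	fmt.Printf("SETTINGS frame: % x\n", buf.Bytes())
}
```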
Background
HTTP/2 allows servers to “push” a resource that the client will likely need before the client actually requests it; see https://httpwg.org/specs/rfc7540.html#PushResources. The specification allows the client to reject the pushed resource at its discretion.
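For readers unfamiliar with the mechanism, here is a rough sketch of how a server initiates a push, using Go's net/http Pusher interface as a stand-in (the paths and certificate files are placeholders):

```go
// Sketch of a server initiating an HTTP/2 push with Go's net/http.
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// If the connection supports push (HTTP/2 over TLS), push the stylesheet
	// the page is about to reference. The client may cancel the pushed stream.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/style.css", nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	// Browsers only use HTTP/2 (and hence push) over TLS; cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```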
gQUIC is a non-standard protocol supported by Google servers, Akamai, and LiteSpeed. It also allows servers to push resources in a fashion very similar to HTTP/2. gQUIC is being superseded by HTTP/3, and support for it will eventually be removed entirely.
HTTP/3, which is close to being published as an RFC, also defines server push; see https://quicwg.org/base-drafts/draft-ietf-quic-http.html#name-server-push.
Chrome currently supports handling push streams over HTTP/2 and gQUIC, and this intent is about removing support over both protocols. Chrome does not support push over HTTP/3 and adding support is not on the roadmap.
Motivation
Almost five and a half years after the publication of the HTTP/2 RFC, server push is still extremely rarely used. Over the past 28 days, 99.95% of HTTP/2 connections created by Chrome never received a pushed stream (Net.SpdyStreamsPushedPerSession), and 99.97% of connections never received a pushed stream that got matched with a request (Net.SpdyStreamsPushedAndClaimedPerSession). These numbers are exactly the same as in June 2019. In June 2018, 99.96% of HTTP/2 connections never received a pushed stream. These numbers indicate a lack of active effort by server operators to increase push usage. On top of this, less than 40% of received pushes are used, down from 63.51% two years ago. The rest are invalid, never get matched to a request, or are already in cache.
Server push is very difficult to use well. Akamai publicly shared two studies showing that push over HTTP/2 either does not change performance or improves performance marginally when used with certain restrictions; see https://lists.w3.org/Archives/Public/ietf-http-wg/2019JulSep/0078.html. However, these studies were done more than a year ago, and push usage data still do not indicate wide deployment even within Akamai’s network. In light of this, it is doubtful that smaller server operators have the resources to successfully deploy server push.
A contrasting result was presented to the same working group in 2018: a large experiment by Chrome measuring the effect of HTTP/2 server push on page load latency showed that push increases latency at the long tail; see https://github.com/httpwg/wg-materials/blob/gh-pages/ietf102/chrome_push.pdf.
As for gQUIC, over the past 28 days fewer than 1 in 1,200,000 connections have seen any pushed streams. We are not aware of any studies on the effect of gQUIC push on latency.
There is significant code complexity associated with supporting push in Chromium: pushed requests have to be stored in an in-memory cache, looked up by URL across connections (but only if the connection is authoritative for the request), matched on other parameters (method and other headers), and evicted after a timeout. Running the associated tests burdens developers and infrastructure when developing unrelated features, and users bear the cost of increased binary size. We believe that these costs outweigh the theoretical performance benefits.
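As a loose illustration of the kind of bookkeeping this implies (a sketch in Go; it is not Chromium's actual data structure, and all names are made up):

```go
// Loose sketch, not Chromium's implementation: unclaimed pushed streams keyed
// by URL, scoped to the session that received them, and evicted after a
// timeout. Matching on method and other request headers is omitted.
package main

import (
	"fmt"
	"sync"
	"time"
)

type pushEntry struct {
	sessionKey string // the receiving connection must also be authoritative for the URL
	received   time.Time
}

type unclaimedPushCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]pushEntry // keyed by request URL
}

// Claim hands an unclaimed push to a matching request, if one exists, came
// from the expected session, and has not timed out.
func (c *unclaimedPushCache) Claim(url, sessionKey string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[url]
	if !ok || e.sessionKey != sessionKey || time.Since(e.received) > c.ttl {
		return false
	}
	delete(c.entries, url)
	return true
}

func main() {
	cache := &unclaimedPushCache{
		ttl: 30 * time.Second,
		entries: map[string]pushEntry{
			"https://example.com/style.css": {sessionKey: "conn-1", received: time.Now()},
		},
	}
	fmt.Println(cache.Claim("https://example.com/style.css", "conn-1")) // true
}
```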
Interoperability and Compatibility Risk
There is no compatibility risk. Server push is meant as a performance optimization, and a client is allowed to reject any pushed stream. To minimize wasted downlink capacity, Chromium would cancel pushed streams that a buggy server might open despite Chromium disabling server push by sending SETTINGS_ENABLE_PUSH = 0 in the initial SETTINGS frame. Note, however, that in the experiment conducted two years ago, no server was observed to send pushes when this setting was sent.
Firefox and Safari both support HTTP/2 push and we are not aware of any plans for removal.
Alternative implementation suggestion for web developers
It is recommended that servers use the <link rel="preload"> element to notify the client about subresources that it is expected to need. Admittedly these resources will then need to be requested by the client, making them arrive 1 RTT later than a pushed resource would. On the other hand, <link rel="preload"> allows the client to avoid fetching resources which are already cached. Repeated efforts to analyze the latency gains of push found little or no benefit.
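For illustration, a minimal Go sketch of the suggested alternative (the Link response header below is the header form of the same preload hint; the handler and paths are made up):

```go
// Sketch: announcing a subresource with a preload hint instead of pushing it.
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Equivalent to <link rel="preload" href="/style.css" as="style"> in the
	// document head; the client fetches it only if it is not already cached.
	w.Header().Add("Link", "</style.css>; rel=preload; as=style")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head></html>`))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```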
Because of limitations of HTTP/2 stream multiplexing, a server can normally send <link rel="preload"> elements only when the rest of the final headers are available; otherwise it would block other responses on the connection. The 103 Early Hints status code (https://tools.ietf.org/html/rfc8297) is expected to provide better performance under certain circumstances by allowing information about resources the client should request to be sent as soon as it is available, while leaving the connection free to carry data on other streams until the final response is sent. There is an active effort to measure the potential latency gain of 103 Early Hints (compared to not using it, without using server push in either case), with the understanding that resources will be allocated to implementing it in Chromium if the data look promising. See https://chromium.googlesource.com/chromium/src/+/master/docs/early-hints.md for more details.
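Schematically, an exchange using 103 Early Hints looks like the following (illustrative only, along the lines of the example in RFC 8297):

```
GET / HTTP/1.1
Host: example.com

HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style
Link: </script.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Link: </style.css>; rel=preload; as=style
Link: </script.js>; rel=preload; as=script

<html>…</html>
```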
It is interesting to note that server push has been used in ways other than originally intended. One prominent example is to stream data from the server to the client, which will be better served by the upcoming WebTransport protocol.
Usage information from UseCounter
N/A: this network stack feature is not instrumented to use UseCounter.
Entry on the feature dashboard
Chrome currently supports handling push streams over HTTP/2 and gQUIC, and this intent is about removing support over both protocols. Chrome does not support push over HTTP/3 and adding support is not on the roadmap.
We disagree with this decision based on the real-world data we have seen and the products we build around HTTP/2 Push. Just making our objection known.

-- Jay Phelps, Outsmartly
Hello,

Frameworks and solutions using server push are just at the beginning of the testing phase. Server push can enable real-time applications, for example with Mercure; you can find a description here:
https://symfony.com/blog/symfony-gets-real-time-push-capabilities

Server push is not only about loading CSS, other styles, and image resources faster; for data resources it can be a genuinely new solution.
Right, sorry for the wrong link: it's Vulcain, another part of API Platform, which uses server push: https://github.com/dunglas/vulcain
As for connection-less push, I'm not sure; I suppose that an identification token (JWT) is exchanged and re-sent if the connection is lost.
Also, there's another good reason why devs did not implement it in 2017, and why they would in 2021. Google started pushing website owners to think about performance in 2018:
https://developers.google.com/web/updates/2018/07/search-ads-speed
...
So, I claim that it wasn't possible to use Push before 2018, and it's not realistic to expect massive use in 2020.
Hi!
Vulcain and Mercure maintainer here.
Indeed, Mercure doesn't rely on Server Push at all (it relies on Server-Sent Events), and it will not be impacted by this change. "Connection-less" push is a capability provided by SSE: the spec allows the carrier to use a lower-level mobile protocol to route the event to the device: https://html.spec.whatwg.org/multipage/server-sent-events.html#eventsource-push
AFAIK, this feature is not widely implemented (and is probably not compatible with HTTPS).
However, Vulcain relies a lot on HTTP/2 Server Push. This specification is dedicated to web APIs (REST / hypermedia). It allows Server Push to be used as an alternative to GraphQL-like compound documents. The main idea is to reduce the latency introduced by a traditional, atomic, REST-like API. When following this style, each document has a dedicated URL, and the client needs to download and parse the response associated with the first URL before requesting its relations. This introduces latency, and if the client needs to retrieve a graph of resources, it creates a waterfall effect.
Vulcain fixes this latency problem by pushing all the relations needed by the client in response to the explicit HTTP request.
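As a rough sketch of the idea (generic Go code, not Vulcain's actual implementation; the resource paths are invented): when the client requests /books/1, the server also pushes the related /authors/1 document, so the client does not discover and fetch it one round trip later.

```go
// Rough sketch of pushing the relations of a requested API resource over the
// same HTTP/2 connection (illustrative paths, not Vulcain itself).
package main

import (
	"log"
	"net/http"
)

func bookHandler(w http.ResponseWriter, r *http.Request) {
	// Push the related author document so the client gets it without first
	// parsing /books/1 and issuing a follow-up request (no waterfall).
	if pusher, ok := w.(http.Pusher); ok {
		_ = pusher.Push("/authors/1", nil)
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"@id": "/books/1", "title": "1984", "author": "/authors/1"}`))
}

func main() {
	http.HandleFunc("/books/1", bookHandler)
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```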
The main benefits of this approach over creating compound documents (GraphQL, JSON:API's include...) are:
- the ability to send several documents in parallel in different HTTP streams (multiplexing)
- a better hit rate if you use an HTTP cache layer (because you have several small atomic documents with their own URLs, instead of one big document containing everything, which will have to be invalidated frequently)
- the ability to leverage the semantics of HTTP for each resource: authorization, content negotiation and, of course, caching
- it's usually simpler for the API developer to implement and secure (small documents, no embedding, no nesting...)
According to our benchmarks, using this strategy is as fast as using compound documents (and sometimes faster, because of multiplexing).
Vulcain already supports using Preload links instead of Server Push. The reverse proxy we provide (which implements the spec, built on top of Caddy Server) does it automatically when Server Push is disabled.
Using Preload links increases the latency a bit compared to using Server Push, but on the other hand it also improves caching: the browser never requests a resource it already has. With Server Push - because Cache Digest has never been finished and implemented - the server starts pushing all relations; it doesn't know whether the browser already has a given resource in cache or not. The browser cancels the push early if it already has the resource in cache, but some server power and some bandwidth are wasted. Using preload hints also decreases the complexity and removes some limitations of common web servers regarding which headers are copied into Server Push requests: https://github.com/w3c/preload/pull/149
Using Preload links with Vulcain would benefit from Early Hints (the 103 status code). Early Hints would reduce the latency compared to waiting for all headers to be ready. This would still be 1 additional RTT compared to Server Push, but the removed complexity may be worth it.
Unfortunately, unlike Server Push, Early Hints are not supported at all by the ecosystem (Chrome, Firefox, the Go language... none of these tools support them). Server Push works right now, even if the lack of a cache digest (Bloom filter) is annoying. Server Push is already implemented by all modern browsers, Go, NGINX, Apache, Caddy...
Regarding WebTransport, it's not implemented by the ecosystem either, and it's too low-level to be really useful in our case. One strength of Vulcain (and of Server Push) is that using it is totally transparent for the frontend developer. You don't have to install a JS lib or anything like that. You use the primitives provided by the browser, such as fetch, and it works.
> As is mentioned in the intent, there's some ongoing measurement so that the potential impact of Early Hints can be evaluated in Chrome. If you think this feature can be a potential replacement, participating in this and starting to send Early Hints could help us and the ecosystem measure more data. (Doing so should virtually have no side-effects for now but would allow browsers and ecosystem to evaluate more)
I do think Early Hints are the far superior solution because they are infinitely simpler to implement and they properly leverage the browser cache. That's why I did the work to add support for them in various Ruby/Rails ecosystem tools.

However, I think that this data-oriented experiment is somewhat doomed from the start. I got a lot of pushback from various players (inside my org, and in the open source community) against experimenting with it because:
- They do not provide any gains whatsoever today.
- If deployed incorrectly, they might break older browsers.

So it is seen as a risky investment that is unlikely to yield anything any time soon, and nobody wants to make the first move. I know this thread is more about Server Push, but if you want Early Hints to be an alternative, Chrome has to actually implement them.
In reference to the Akamai study that was published last year, I disagree with the characterization presented here that its conclusion was that push over "HTTP/2 either does not change performance or improves performance marginally when used with certain restrictions". The study did in fact show that Server Push provided a non-marginal benefit (improvements of hundreds of milliseconds to DocTime) to a set of Akamai customer web sites when the slowest 1% of responses overall was excluded from the set.
I agree with the comments regarding the implementation complexity of Server Push; I know that Akamai made significant efforts to optimize the Server Push implementation on the server. It feels like continued effort and investment in Server Push would have brought further optimizations and improvements in both client-side and server-side implementations as the broader community gained more experience with it.
Early Hints + Preload could be an interesting replacement for Server Push, and while simpler in terms of implementation complexity, it's not quite the same: a round trip is lost before data can be sent back, and it may not cover exactly the same set of use cases that Server Push covers. In the end, it's the performance that matters, and it is possible that Early Hints + Preload could provide equivalent performance. I think some of us have actually viewed