Primary eng (and PM) emails
las...@chromium.org, dsch...@chromium.org, b...@chromium.org, ians...@chromium.org
Summary
Remove the ability to receive, keep in memory, and use HTTP/2 and gQUIC push streams sent by the server. Send SETTINGS_ENABLE_PUSH = 0 at the beginning of every HTTP/2 and gQUIC connection to request that servers not send them.
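For illustration only (Chromium's implementation is C++; the sketch below uses Go's golang.org/x/net/http2 package purely to show the wire-level behavior), disabling push amounts to advertising SETTINGS_ENABLE_PUSH = 0 in the client's very first SETTINGS frame:

    // Sketch only: open an HTTP/2 connection and immediately advertise
    // SETTINGS_ENABLE_PUSH = 0 so the server must not send PUSH_PROMISE
    // frames. Chromium's real code is C++; this uses Go's
    // golang.org/x/net/http2 Framer for illustration.
    package main

    import (
        "crypto/tls"
        "log"

        "golang.org/x/net/http2"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{NextProtos: []string{"h2"}})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // The client connection preface must precede the first SETTINGS frame.
        if _, err := conn.Write([]byte(http2.ClientPreface)); err != nil {
            log.Fatal(err)
        }

        framer := http2.NewFramer(conn, conn)
        if err := framer.WriteSettings(http2.Setting{ID: http2.SettingEnablePush, Val: 0}); err != nil {
            log.Fatal(err)
        }
    }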
Background
HTTP/2 allows servers to “push” a resource that the client will likely need before the client actually requests it, see https://httpwg.org/specs/rfc7540.html#PushResources. The specification allows the client to reject the pushed resource at its discretion.
gQUIC is a non-standard protocol supported by Google servers, Akamai, and Litespeed. It also allows servers to push resources in a fashion very similar to HTTP/2. gQUIC is being obsoleted by HTTP/3 and support will eventually be removed entirely.
HTTP/3 is an almost-RFC protocol that also defines server push, see https://quicwg.org/base-drafts/draft-ietf-quic-http.html#name-server-push.
Chrome currently supports handling push streams over HTTP/2 and gQUIC, and this intent is about removing support over both protocols. Chrome does not support push over HTTP/3 and adding support is not on the roadmap.
Motivation
Almost five and a half years after the publication of the HTTP/2 RFC, server push is still used extremely rarely. Over the past 28 days, 99.95% of HTTP/2 connections created by Chrome never received a pushed stream (Net.SpdyStreamsPushedPerSession), and 99.97% of connections never received a pushed stream that got matched with a request (Net.SpdyStreamsPushedAndClaimedPerSession). These numbers are exactly the same as in June 2019. In June 2018, 99.96% of HTTP/2 connections never received a pushed stream. These numbers indicate a lack of active effort by server operators to increase push usage. On top of this, fewer than 40% of received pushes are used, down from 63.51% two years ago. The rest are invalid, never get matched to a request, or are already in the cache.
Server push is very difficult to use well. Akamai publicly shared two studies showing that push over HTTP/2 either does not change performance or improves it only marginally when used with certain restrictions, see https://lists.w3.org/Archives/Public/ietf-http-wg/2019JulSep/0078.html. However, these studies were done more than a year ago, and push usage data still does not indicate wide deployment even within Akamai’s network. In light of this, it is doubtful that smaller server operators have the resources to successfully deploy server push.
A contrasting result was presented at the IETF in 2018: a large experiment done by Chrome to measure the effect of server push on page load latency when using HTTP/2 showed that push increases latency at the long tail, see https://github.com/httpwg/wg-materials/blob/gh-pages/ietf102/chrome_push.pdf.
As far as gQUIC goes, over the past 28 days, fewer than 1 in 1,200,000 connections received any pushed streams. We are not aware of any studies on its effect on latency.
There is significant code complexity associated with Chromium supporting push: pushed requests have to be stored in an in-memory cache, looked up by URL across connections (but only if the connection is authoritative for the request), matched by other parameters (method and other headers), and evicted after a timeout. Running the associated tests burdens developers and infrastructure when developing unrelated features, and users bear the cost of increased binary size. We believe that these costs outweigh the theoretical performance benefits.
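To give a sense of the bookkeeping involved, here is a deliberately simplified, hypothetical sketch of such a push cache (in Go, for illustration; it is not Chromium's actual data structure): entries are keyed by the pushed URL, only claimable if the connection they arrived on is authoritative for the request, and evicted after a timeout.

    // Hypothetical sketch of an in-memory push cache; Chromium's real
    // implementation (C++) is considerably more involved.
    package pushcache

    import (
        "sync"
        "time"
    )

    // Entry holds a pushed response waiting to be claimed by a request.
    type Entry struct {
        Session string    // the connection the push arrived on
        AddedAt time.Time // used for timeout-based eviction
        // ... method, headers, body stream, etc.
    }

    type PushCache struct {
        mu      sync.Mutex
        ttl     time.Duration
        entries map[string]Entry // keyed by the pushed request URL
    }

    func New(ttl time.Duration) *PushCache {
        return &PushCache{ttl: ttl, entries: make(map[string]Entry)}
    }

    func (c *PushCache) Store(url string, e Entry) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.entries[url] = e
    }

    // Claim matches a request to a pushed stream. The entry may only be used
    // if the connection it arrived on is authoritative for the request's
    // origin (and, in reality, if method and other headers also match).
    func (c *PushCache) Claim(url, origin string, authoritative func(session, origin string) bool) (Entry, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        e, ok := c.entries[url]
        if !ok || time.Since(e.AddedAt) > c.ttl {
            delete(c.entries, url)
            return Entry{}, false
        }
        if !authoritative(e.Session, origin) {
            return Entry{}, false
        }
        delete(c.entries, url) // a push can be claimed at most once
        return e, true
    }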
Interoperability and Compatibility Risk
There is no compatibility risk. Server push is meant as a performance optimization, and a client is allowed to reject any pushed stream. To minimize wasted downlink capacity, Chromium would cancel pushed streams that a buggy server might open even though Chromium disables server push by sending SETTINGS_ENABLE_PUSH = 0 in the initial SETTINGS frame. Note, however, that in the experiment conducted two years ago, no server was observed to send pushes when this setting was sent.
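For concreteness, refusing such a stray push at the framing layer is a single RST_STREAM frame; a minimal sketch (again Go's golang.org/x/net/http2, for illustration only):

    package pushrefuse

    import "golang.org/x/net/http2"

    // refusePush cancels a pushed stream that a non-conforming server opened
    // despite SETTINGS_ENABLE_PUSH = 0. Pushed streams are server-initiated
    // and therefore even-numbered; the stream ID comes from the PUSH_PROMISE.
    // Sketch only; Chromium's real code is C++.
    func refusePush(framer *http2.Framer, pushedStreamID uint32) error {
        return framer.WriteRSTStream(pushedStreamID, http2.ErrCodeRefusedStream)
    }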
Firefox and Safari both support HTTP/2 push and we are not aware of any plans for removal.
Alternative implementation suggestion for web developers
It is recommended that servers use the <link rel="preload"> element to notify the client about subresources that it is expected to need. Admittedly these resources will then need to be requested by the client, making them arrive 1 RTT later than a pushed resource would. On the other hand, <link rel="preload"> allows the client to avoid fetching resources which are already cached. Repeated efforts to analyze the latency gains of push found little or no benefit.
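As a rough illustration (a Go handler; the stylesheet path is made up), the server-side change is typically just emitting a preload hint, here via the equivalent Link response header, instead of initiating a push:

    // Sketch only: advertise a subresource with a preload Link header instead
    // of pushing it. The client decides whether to fetch it, so resources
    // already in its cache are not re-downloaded.
    package main

    import (
        "io"
        "log"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        w.Header().Add("Link", "</app.css>; rel=preload; as=style")
        io.WriteString(w, `<!doctype html><link rel="stylesheet" href="/app.css"><p>Hello</p>`)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }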
Because of limitations of HTTP/2 stream multiplexing, a server can normally only send <link rel="preload"> elements once the rest of the final response headers are available; sending them earlier would block other responses on the connection. The 103 Early Hints status code (https://tools.ietf.org/html/rfc8297) is expected to provide better performance under certain circumstances: it allows information about resources that the client should request to be sent as soon as it is available, while keeping the connection free to carry data on other streams until the final response is sent. There is an active effort to measure the potential latency gain of 103 Early Hints (compared to not using it, without using server push in either case), with the understanding that resources will be allocated to implementing it in Chromium if the data look promising. See https://chromium.googlesource.com/chromium/src/+/master/docs/early-hints.md for more details.
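As a hedged sketch of what serving an early hint could look like (a hypothetical Go handler; this assumes an HTTP stack that lets handlers write informational 1xx responses, which only sufficiently recent Go versions do; the stylesheet path is made up):

    // Sketch only: send a 103 Early Hints response carrying a preload hint
    // before the final response is ready, then send the final 200 response.
    // Assumes a Go version whose net/http allows handlers to write 1xx
    // informational responses.
    package main

    import (
        "io"
        "log"
        "net/http"
        "time"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        w.Header().Add("Link", "</app.css>; rel=preload; as=style")
        w.WriteHeader(http.StatusEarlyHints) // 103: current headers are sent as hints

        time.Sleep(100 * time.Millisecond) // stand-in for slow page generation

        w.WriteHeader(http.StatusOK)
        io.WriteString(w, `<!doctype html><link rel="stylesheet" href="/app.css">`)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }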
It is interesting to note that server push has been used in ways other than originally intended. One prominent example is to stream data from the server to the client, which will be better served by the upcoming WebTransport protocol.
Usage information from UseCounter
N/A: this network stack feature is not instrumented to use UseCounter.
Entry on the feature dashboard
We disagree with this decision based on the real-world data we have seen and the products we build around HTTP/2 Push. Just making our objection known.

-- Jay Phelps, Outsmartly
Hello,

Frameworks and solutions using server push are only just entering the testing phase. Server push can enable real-time applications, for example with Mercure; you can find a description here:
https://symfony.com/blog/symfony-gets-real-time-push-capabilities
Server push is not only about loading CSS, simple styles, and image resources faster. It is for data resources that it can be a genuinely new solution.
Right, sorry for the wrong link: it's Vulcain, another part of the API Platform project, which uses server push: https://github.com/dunglas/vulcain
Regarding connection-less push, I'm not sure; I suppose that an identification token (JWT) is exchanged and resent if the connection is lost.
Also, there's another good reason why devs did not implement it in 2017, and why they would in 2021. Google started pushing website owners to think about performance in 2018:
https://developers.google.com/web/updates/2018/07/search-ads-speed
...
So, I claim that it wasn't possible to use Push before 2018, and it's not realistic to expect massive use in 2020.
Hi!
Vulcain and Mercure maintainer here.
Indeed, Mercure doesn't rely on Server Push at all (it relies on Server-Sent Events), and it will not be impacted by this change. "Connection-less" push is a capability provided by SSE. The spec allows the carrier to use a lower-level mobile protocol to route the event to the device: https://html.spec.whatwg.org/multipage/server-sent-events.html#eventsource-push
AFAIK, this feature is not widely implemented (and is probably not compatible with HTTPS).
However, Vulcain relies heavily on HTTP/2 Server Push. This specification is dedicated to web APIs (REST / hypermedia). It allows using Server Push as an alternative to GraphQL-like compound documents. The main idea is to reduce the latency introduced by a traditional, atomic, REST-like API. When following this style, each document has a dedicated URL, and the client needs to download and parse the response associated with the first URL before requesting its relations. This introduces latency, and if the client needs to retrieve a graph of resources, it creates a waterfall effect.
Vulcain fixes this latency problem by pushing all the relations needed by the client in response to the explicit HTTP request.
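To make this concrete, here's a rough sketch of what a client request looks like (the API URL, resource paths and relation selector are just examples):

    // Sketch only: ask a Vulcain-enabled API to also deliver the resources
    // reachable through the "author" relation of the requested book. With
    // Server Push disabled, the gateway falls back to preload Link headers.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "https://api.example.com/books/1", nil)
        if err != nil {
            log.Fatal(err)
        }
        // The Preload header selects the relations to push (or to preload).
        req.Header.Set("Preload", `"/author"`)

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }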
The main benefits of this approach over creating compound documents (GraphQL, JSON:API's include...) are:
- the ability to send several documents in parallel in different HTTP streams (multiplexing)
- a better hit rate if you use an HTTP cache layer (because you have several small atomic documents with their own URLs, instead of one big document that contains everything and has to be invalidated frequently)
- the ability to leverage the semantics of HTTP for each resource: authorization, content negotiation and, of course, caching
- it's usually simpler for the API developer to implement and secure (small documents, no embedding, no nesting...)
According to our benchmarks, using this strategy is as fast as using compound documents (and sometimes faster, because of multiplexing).
Vulcain already supports using Preload links instead of Server Push. The reverse proxy we provide (which implements the spec, built on top of Caddy Server) does it automatically when Server Push is disabled.
Using Preload links increases latency a bit compared to using Server Push, but on the other hand it also improves caching: the browser never requests a resource it already has. With Server Push (because Cache Digest has never been finished and implemented) the server starts pushing all relations; it doesn't know whether the browser already has a given resource in its cache. The browser cancels the push early if it already has the resource in cache, but some server power and some bandwidth are wasted. Using preload hints also decreases complexity and removes some limitations of common web servers regarding which headers are copied into Server Push requests: https://github.com/w3c/preload/pull/149
Using Preload links with Vulcain would benefit from Early Hints (the 103 status code). Early Hints will reduce latency compared to waiting for all headers to be ready. This is still one additional RTT compared to Server Push, but the removed complexity may be worth it.
Unfortunately, unlike Server Push, Early Hints are not supported at all by the ecosystem (Chrome, Firefox, the Go language... none of these tools support them). Server Push works right now, even if the lack of a bloom filter (Cache Digest) is annoying. Server Push is already implemented by all modern browsers, Go, NGINX, Apache, Caddy...
Regarding WebTransport, it's not implemented by the ecosystem either, and it's too low-level to be really useful in our case. One strength of Vulcain (and of Server Push) is that using it is totally transparent for the frontend developer. You don't have to install a JS lib or anything like that. You use the primitives provided by the browser, such as fetch, and it just works.