Intent to Unship: SDCH

Ryan Sleevi

Nov 22, 2016, 2:09:29 PM
to blink-dev, Randy Smith
Primary Eng:

Summary
Move SDCH behind an experimental flag until the implementation is staffed, the specification matures, and broader consensus emerges. If those don't happen, remove support from the codebase entirely.


Motivation
Since its first release, Chromium has supported SDCH, an experimental compression protocol proposed in 2008. [1] Unfortunately, since this original proposal, few non-Chromium browsers have adopted support for it, and it has seen limited standards activity or cross-browser interest.

As the original proposal had IPR concerns [2] that prevented it from being standardized, a new version has been submitted to the IETF [3]. While the current I-D is an Individual submission, the intent is to work with other interested vendors, such as Yandex and LinkedIn, on standardizing this effort. This was presented at IETF97 and met with some skepticism and concern over the security/privacy aspects, and mitigating those concerns may reduce or eliminate the performance benefits. Further, there are alternative proposals for encoding schemes that might better integrate within the platform.

In addition to other vendors’ concerns, there are concerns from the //net team regarding the robustness of the implementation, the current implementation’s compliance to the specification, staffing resources to maintain and support the specification work and implementation, and SDCH’s overall integration and explanation within the Web Platform - such as interaction with Service Workers, the Fetch API, or the Same-Origin Policy.

For these reasons, the proposal is to "unship" SDCH by no longer advertising SDCH support in the Accept-Encoding request header, nor processing it in the Content-Encoding response header.

Because other Chromium embedders, both inside and outside Google, wish to continue experimentation with SDCH, the code will not be immediately removed. However, Chromium-based projects will not advertise SDCH support until such a time as appropriate resources can be devoted to investing in the specification, the implementation, and driving cross-browser consensus and cooperation. If those goals can't be met in a timely fashion, then the code support for SDCH may either be moved to implementations that are less concerned about web platform interoperability (for example, for use in //components/cronet, which may be used in cases where both the client and the server are operated by a single entity, and thus platform concerns apply less), or removed entirely from Chromium.


Compatibility Risk
For properly implemented servers, removing SDCH should cause them to fall back to other, supported forms of Content-Encoding with limited observable effect.

There is a risk of performance regressions. However, on the basis of 7-day aggregations examining both mobile and desktop performance, using a trial in which connections that would otherwise use SDCH are intentionally 'held back', the holdback mode shows no significant difference in the time from the first byte read on the connection to the end of the last read. More details are available on request; the relevant metrics are Sdch3.Experiment3_Decode and Sdch3.Experiment3_Holdback.

The current implementation does not match the current 'specification' or draft, so there is a risk that other implementors - both client and server - will build in or rely on behaviours that will present compatibility issues on the path to standardization. Disabling the current implementation reflects the current level of investment and our attempt to ensure we are good stewards of the Web Platform, by driving broader consensus and better documentation prior to shipping.

Alternative implementation suggestions for web developers
Despite Fetch’s prohibitions regarding Accept-Encoding manipulation, Service Workers may offer a viable strategy for polyfilling new and novel compression techniques in advance of standardization, and with a more defined interaction and security model than SDCH presently provides, through the use of custom headers and processing.
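For illustration, a minimal sketch of what such a polyfill might look like, assuming a custom request header to advertise a cached dictionary and a custom response header to mark delta-encoded payloads. All names here are illustrative and not from any spec, the codec itself is elided, and navigation requests would need extra handling in practice:

```typescript
// Hypothetical polyfill sketch. "X-Accept-Delta", "X-Delta-Encoding",
// and the dictionary location are illustrative, not from any spec.
declare const self: ServiceWorkerGlobalScope;

const DICTIONARY_URL = '/static/delta-dictionary.bin'; // assumed location

// Placeholder for a VCDIFF-style decoder; a real deployment would ship
// one, e.g. compiled to WebAssembly.
async function applyDelta(diff: ArrayBuffer, dict: ArrayBuffer): Promise<ArrayBuffer> {
  throw new Error('delta codec elided in this sketch');
}

self.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith((async () => {
    const dictResponse = await caches.match(DICTIONARY_URL);
    if (!dictResponse) return fetch(event.request); // nothing to advertise yet

    // Advertise support with a custom header; Accept-Encoding itself
    // cannot be manipulated from a Service Worker.
    const headers = new Headers(event.request.headers);
    headers.set('X-Accept-Delta', 'v1');
    const response = await fetch(new Request(event.request, { headers }));

    if (response.headers.get('X-Delta-Encoding') !== 'v1') {
      return response; // the server chose gzip/brotli instead
    }
    const decoded = await applyDelta(
      await response.arrayBuffer(),
      await dictResponse.clone().arrayBuffer()
    );
    return new Response(decoded, { status: response.status, headers: response.headers });
  })());
});
```

A real deployment would also need to decide on dictionary freshness and storage, which is exactly the sort of thing a standardized design would have to pin down.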


eri...@amazon.com

Nov 22, 2016, 3:06:17 PM
to blink-dev, rds...@chromium.org, rsl...@chromium.org
Hi there – This is Eric Schurman from Amazon.com.

We have been experimenting with SDCH, have rolled it out across many of our properties, and are rapidly expanding its use across more page types and marketplaces. We have seen many substantial improvements, both to page speed and business metrics. We are very interested in seeing continued support.

When we launched SDCH it had a big impact on many of our metrics. These were especially noticeable on mobile and in marketplaces with slow networks and poor-quality devices, although it had a positive page speed and business impact in the US as well. Our metrics don't seem to line up with the numbers described for your holdback trial. Here are some improvements we saw.

We looked at total time spent transferring HTML bytes over the network (timing.responseEnd – timing.responseStart). We saw improvements across the board. Here are examples for our product detail pages from some very different use cases:
* In India for mobile browser pages, we saw a reduction of over 6 seconds at the 99th percentile.
* In India for desktop browser pages, we saw a reduction of over 20% at the 90th percentile.
* In the US for desktop browser pages, we saw a reduction of 25% at the 99th percentile and almost 10% at the 90th percentile.

We also track when key components of the page are rendered, and we saw improvements for our customers of between 1% and 24% in these metrics, depending on page type, network, and scenario. We saw significantly lower page abandonment rates by customers.

In addition to these performance improvements, we also saw significant improvements across our key business metrics around customer interaction and sales.

We would far prefer that this class of work be done by the browser natively in its network stack, rather than through something like Service Worker. If needed, we can go into this separately.

-Eric

ben.m...@gmail.com

Nov 22, 2016, 3:41:23 PM
to blink-dev, rds...@chromium.org, rsl...@chromium.org
Hey,

I very much agree with the concerns laid out here. The current spec is poorly integrated with the Web Platform. It also only meets a subset of the delta encoding use case (namely, it works well for dynamic content such as HTML but poorly for upgrading version 1 of a JS file to version 2).

However, I do think that having a working implementation of an SDCH-like approach in the field is important for justifying and developing a fuller protocol. Having the feature behind a flag does not really aid in this -- it's just as easy to calculate the benefit of delta compression programmatically as it is to test locally in a browser. What's hard is demonstrating the impact of that compression on real world data.

In theory, this is possible with service worker. However, I think it's a far step from that to demonstrating that SW is a practical platform for deploying such a feature. For example, it's not clear what kind of performance issues might be encountered by implementing this feature in service worker. Facebook's findings with service workers so far are that, given how new the technology is, careful planning is needed to ensure that ideas that are conceptually good are actually implemented well in the platform. It may also be difficult for a site to implement this idea in combination with preflight requests, which are likely to become important for performance in service worker.

I think that putting SDCH behind a flag -- while a very understandable desire -- would have the effect of discouraging experimentation in this area. I don't believe that having the feature behind a flag in any way helps that situation. Would it make sense to render a more immediate decision here -- either keep SDCH support (maybe using something like origin trials to more formally declare that it is a feature being tested) or just remove it from the code base (and use version control history to revive any necessary code if the feature is standardized)? At least in my mind either of these options seems better than a flag -- the first option allows more rapid testing, the second relieves the burden of SDCH maintenance more immediately.

-b


Ryan Sleevi

Nov 22, 2016, 3:53:43 PM
to Ben Maurer, blink-dev, Randy Smith, Ryan Sleevi
On Tue, Nov 22, 2016 at 12:41 PM, <ben.m...@gmail.com> wrote:
I think that putting SDCH behind a flag -- while a very understandable desire -- would have the effect of discouraging experimentation in this area. I don't believe that having the feature behind a flag in any way helps that situation. Would it make sense to render a more immediate decision here -- either keep SDCH support (maybe using something like origin trials to more formally declare that it is a feature being tested) or just remove it from the code base (and use version control history to revive any necessary code if the feature is standardized)? At least in my mind either of these options seems better than a flag -- the first option allows more rapid testing, the second relieves the burden of SDCH maintenance more immediately.


From the //net team, there's generally strong support for removing support entirely. However, we've heard from other Chromium embedders, both first and third-party, a desire to see the support removed in a way that will allow them to add it back themselves. This creates a bit of an issue - if we don't unship it from the Web Platform, we don't have a good way to move the code while still supporting it in the Web Platform.

So to get there, we first need to signal that Chromium will not continue to support SDCH in its present form. Further, we'll allow a limited amount of time for proponents of SDCH to attempt to garner standardization and cross-browser support, while allowing down-level Chromium embedders the flexibility to gather the data they need towards that goal. As mentioned, this was recently discussed at IETF97, with a more negative than positive reaction to SDCH specifically, but relative interest in exploring delta encoding if it can be done securely. However, it also sets the path, as indicated, to be able to move that code outside of being part of Chromium, leaving it as something that first-party (Google) and third-party (non-Google) Chromium embedders can use.

With respect to Origin Trials, as I tried to capture in the Intent, this is not something that Chromium developers are generally maintaining and has a number of issues, so an Origin Trial just prolongs these issues even more than a flag does.

Rick Byers

Nov 22, 2016, 4:17:12 PM
to Ryan Sleevi, Ben Maurer, blink-dev, Randy Smith
Eric, thank you for supplying the data from Amazon here, that changes the discussion entirely IMHO!  Is this the first time you've shared these metrics, or is it something other browser vendors might already be aware of?  I'm just wondering how much of the problem of lack of progress across the industry is due to a lack of awareness and appreciation for the results.

Ryan, can you comment on the apparent disagreement between your metrics and those seen by Amazon?  Do you believe their experience isn't representative for some reason, or are you measuring different things from them?

Ben, does Facebook use SDCH?  Do you have any metrics you could share?

Given how hard we're pushing the "every second saved in page load is more money for your site" story (a theme across almost every CDS talk this year and probably the #1 priority for the web platform team overall), it seems to me like it would be a real shame to remove the huge benefits Eric says they see on Amazon - regardless of the reason.  Perhaps partnering with Amazon we could make faster progress with the other vendors on an interoperable solution?  I'd even be happy partnering with one other major browser engine incubating a specification and interoperable implementation outside the IETF for now if the standards process is slowing things down. 

If the data is accurate and businesses can see a "significant improvement in sales" by using SDCH then that should really drive motivation across the browser industry, right?  If the business metrics are that good, then perhaps Amazon might even replace its annoying "Install our native app" mobile interstitial with an "Install our native app or a browser that supports SDCH" one only when running browsers without SDCH.  That would create a pretty strong incentive to collaborate between engines I think ;-)

Rick

Ryan Sleevi

Nov 22, 2016, 4:32:06 PM
to Rick Byers, Ryan Sleevi, Ben Maurer, blink-dev, Randy Smith
On Tue, Nov 22, 2016 at 1:16 PM, Rick Byers <rby...@chromium.org> wrote:
Ryan, can you comment on the apparent disagreement between your metrics and those seen by Amazon?  Do you believe their experience isn't representative for some reason, or are you measuring different things from them?

I'm hoping Randy can comment more here on the metrics, as he's been driving the measurements. My understanding is that, unsurprisingly, the nature of the content can affect the savings, and dynamic content receives much less benefit than mostly static content. For a shopping site like Amazon, which has significant areas of static content, this may offer considerably more savings.

Additionally, we know from some implementors that have experimented with it that their savings are, in part, derived from the fact that SDCH violates the Same Origin Policy, in that its security model reflects (roughly) the liberal policy of cookies and their reliance on the public suffix list. This makes it considerably more difficult to explain in terms of the web platform, and we've been told that changing that to a more secure, SOP-enforced version may substantially eliminate the savings for those who have deployed SDCH.
 
Given how hard we're pushing the "every second saved in page load is more money for your site" story (a theme across almost every CDS talk this year and probably the #1 priority for the web platform team overall), it seems to me like it would be a real shame to remove the huge benefits Eric says they see on Amazon - regardless of the reason.  Perhaps partnering with Amazon we could make faster progress with the other vendors on an interoperable solution?  I'd even be happy partnering with one other major browser engine incubating a specification and interoperable implementation outside the IETF for now if the standards process is slowing things down. 

I think the standards story may have been misinterpreted. Google has, in the four years since introducing SDCH, not actively pursued standardization or partnering with other organizations. At present, there is no engineering staffing available to maintain the specification, or interested in doing so, nor to work to address both Google's and other vendors' concerns about the security (both of the implementation, which relies on unmaintained third-party code, and of the execution, namely, concerns around compression-over-TLS attacks such as CRIME, TIME, and BREACH, which introduce significant security risks to the platform).

Additionally, we know that there are alternative solutions, such as https://tools.ietf.org/html/draft-vkrasnov-h2-compression-dictionaries-01 , which may be able to offer simpler implementations, better security, and better compression. That's why httpbis is interested in exploring the space loosely, but the topic of compression and security is one of which a number of participants are particularly skeptical (and it was a significant challenge when specifying HTTP/2 to avoid many of the security risks).

We've heard similar performance interest from parties that control both endpoints (client and server), and while we know there may be a compelling case, as it stands there has been no staffing available to allocate to exploring these issues for the past year. So I don't think these numbers significantly change the issue that we're seeing - unmaintained code, an effectively unmaintained specification, disinterested browsers (despite knowing the performance gains), and known and anticipated security issues.
 
If the data is accurate and businesses can see a "significant improvement in sales" by using SDCH then that should really drive motivation across the browser industry, right?  If the business metrics are that good, then perhaps Amazon might even replace its annoying "Install our native app" mobile interstitial with an "Install our native app or a browser that supports SDCH" one only when running browsers without SDCH.  That would create a pretty strong incentive to collaborate between engines I think ;-)

Well, I want to be careful here. We wouldn't re-enable TLS compression, even if and despite it showing positive metrics, precisely because it's an unsecurable security disaster (and is why every implementation disabled it). 

ben.m...@gmail.com

Nov 22, 2016, 4:47:20 PM
to blink-dev, rsl...@chromium.org, ben.m...@gmail.com, rds...@chromium.org


On Tuesday, November 22, 2016 at 1:17:12 PM UTC-8, Rick Byers wrote:
Ben, does Facebook use SDCH?  Do you have any metrics you could share?

We do not use SDCH currently. That said, this area of compression is very important to us, in particular if we can extend it to work well for versioning of JS/CSS assets, since Facebook has a culture of continuous deployment. At the same time, we're also very sensitive to the maintenance burden that this causes for the networking/loading team, especially given the recent findings we've had around service worker startup time and local cache performance. It's possible that SDCH may be good but not the best prioritization right now.
 
Given how hard we're pushing the "every second saved in page load is more money for your site" story (a theme across almost every CDS talk this year and probably the #1 priority for the web platform team overall), it seems to me like it would be a real shame to remove the huge benefits Eric says they see on Amazon - regardless of the reason.  Perhaps partnering with Amazon we could make faster progress with the other vendors on an interoperable solution?  I'd even be happy partnering with one other major browser engine incubating a specification and interoperable implementation outside the IETF for now if the standards process is slowing things down.  

I think there are two primary questions here:
1) Is SDCH something that should be defined in the browser, or implemented by something like service worker?
2) Does the currently shipped implementation accelerate the usage of SDCH-like features?

My take on (1) is that we simply don't have enough data. I think it will be very hard to justify standardizing SDCH in any form until somebody tries and fails to implement it with service worker. Even if SDCH should ultimately be standardized, service worker seems like an ideal platform for the development of the spec. In fact, if SDCH were standardized I would hope that it could be done in the form "here's a service worker that should be installed natively in the browser". After all, if the SW platform is not performant enough to write this kind of filter, we're in deep trouble :-).

For (2) I think that a production deployment of SDCH gives web sites like FB and Amazon a platform for testing SDCH-like approaches. As I mentioned in my previous email, I believe that having the feature behind a flag does not offer any more aid in adoption than not having the feature at all.

Ultimately SDCH is a tricky feature. The upside is very large. We know we ship users the same bytes over and over again. It's also something that is difficult even for large and sophisticated sites like Facebook to adopt. Unlike brotli, adopting SDCH requires active research in terms of creating a dictionary, deciding on an update frequency, creating an internal distribution system for dictionaries, and updating the dictionaries. It is not clear how close SDCH is to standardization. It integrates deeply with the web platform and will be tricky to define well. We need to make sure it meets more use cases (e.g. JS/CSS).

-b

ben.m...@gmail.com

Nov 22, 2016, 4:55:37 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, ben.m...@gmail.com, rds...@chromium.org


On Tuesday, November 22, 2016 at 1:32:06 PM UTC-8, Ryan Sleevi wrote:

, and of the execution, namely, concerns around compression-over-TLS attacks such as CRIME, TIME, and BREACH, which introduce significant security risks to the platform

Do you believe there are risks SDCH has over brotli/gzip?

From the //net team, there's generally strong support for removing support entirely. However, we've heard from other Chromium embedders, both first and third-party, a desire to see the support removed in a way that will allow them to add it back themselves. This creates a bit of an issue - if we don't unship it from the Web Platform, we don't have a good way to move the code while still supporting it in the Web Platform.

Do you have examples of these use cases?
 
So to get there, we first need to signal that Chromium will not continue to support SDCH in its present form. Further, we'll allow a limited amount of time for proponents of SDCH to attempt to garner standardization and cross-browser support, while allowing down-level Chromium embedders the flexibility to gather the data they need towards that goal. As mentioned, this was recently discussed at IETF97, with a more negative than positive reaction to SDCH specifically, but relative interest in exploring delta encoding if it can be done securely. However, it also sets the path, as indicated, to be able to move that code outside of being part of Chromium, leaving it as something that first-party (Google) and third-party (non-Google) Chromium embedders can use.

What about allowing SDCH to be used in its current form during that period? I.e., saying "we're going to delete the code from Chromium on <date> unless there's substantial forward progress on standardization". That would seem to allow sites like Amazon to continue to benefit (and provide motivating data) while still ensuring that the code gets deleted unless there is a path forward.

-b

Ryan Sleevi

Nov 22, 2016, 5:06:44 PM
to Ben Maurer, blink-dev, Rick Byers, Ryan Sleevi, Randy Smith
On Tue, Nov 22, 2016 at 1:55 PM, <ben.m...@gmail.com> wrote:
Do you believe there are risks SDCH has over brotli/gzip?

Yes. In particular, SDCH's non-SOP binding means greater chance of cross-origin dictionary issues; the very thing that drives benefits for some implementors (hoping they'll chime in directly on this thread, as they're aware of it)

Do you have examples of these use cases?

There are applications, first and third-party, that embed Chromium's network stack. This is 'supported' inasmuch as //components/cronet is a path for this. These applications may not be directly rendering web content - e.g. some are using JSON API endpoints, for which a targeted dictionary, based on the API specification, can offer considerable compression benefits, where service workers aren't viable (because they're not consuming the //content API or the web platform in general), and for which interactions like Fetch are irrelevant.
 
What about allowing SDCH to be used in its current form during that period? I.e., saying "we're going to delete the code from Chromium on <date> unless there's substantial forward progress on standardization". That would seem to allow sites like Amazon to continue to benefit (and provide motivating data) while still ensuring that the code gets deleted unless there is a path forward.

We've been having limited internal and external discussions with some of the parties actively involved in discussing SDCH in the past, and have already been signalling that message for a while. Indeed, this itself was delayed for several months, while we worked towards trying to put together a draft for IETF 97 and discuss more broadly. While the reaction to SDCH itself was more negative than positive, and folks at Mozilla expressed reasonable concern that this is not something that should be on by default, there was still interest in exploring schemes for delta encoding. However, we're at an inflection point where, with the current implementation, we're seeing more sites (like Amazon) adopt Chromium's franken-version of SDCH, and we're concerned that this will further ossify implementations and limit the ability of the spec to respond to the considerations of the community, such as the aforementioned security considerations. For example, we've heard strong negative reaction towards restricting SDCH to same-origin dictionaries, because of concerns that will substantially eliminate the performance gains seen, even if it represents a very reasonable and minimal step towards better security.

There's also the quality of implementation vs surface risk cost. Given that this is exposed and implemented in the browser process, we've seen a variety of novel security issues attacking SDCH support, and similarly, have envisioned several theoretical attacks. Mitigating these requires a lot of time, care, and broader review, and we haven't really seen support for those needs, organizationally or collectively. 

ben.m...@gmail.com

Nov 22, 2016, 5:27:02 PM
to blink-dev, ben.m...@gmail.com, rby...@chromium.org, rsl...@chromium.org, rds...@chromium.org


On Tuesday, November 22, 2016 at 2:06:44 PM UTC-8, Ryan Sleevi wrote:


On Tue, Nov 22, 2016 at 1:55 PM, <ben.m...@gmail.com> wrote:
Do you believe there are risks SDCH has over brotli/gzip?

Yes. In particular, SDCH's non-SOP binding means greater chance of cross-origin dictionary issues; the very thing that drives benefits for some implementors (hoping they'll chime in directly on this thread, as they're aware of it)

Yeah, I can understand the perf concerns of using the same origin. While FB has the infra that makes www.facebook.com essentially a CDN, many sites do not.

How hard would it be to make SDCH enforce that CORS headers exist if the resource is cross-origin? This seems entirely reasonable to ask users to do and seems like it has the same security benefits as same origin. Would this change your stance on the feature?

 
What about allowing SDCH to be used in its current form during that period? I.e., saying "we're going to delete the code from Chromium on <date> unless there's substantial forward progress on standardization". That would seem to allow sites like Amazon to continue to benefit (and provide motivating data) while still ensuring that the code gets deleted unless there is a path forward.

We've been having limited internal and external discussions with some of the parties actively involved in discussing SDCH in the past, and have already been signalling that message for a while. Indeed, this itself was delayed for several months, while we worked towards trying to put together a draft for IETF 97 and discuss more broadly. While the reaction to SDCH itself was more negative than positive, and folks at Mozilla expressed reasonable concern that this is not something that should be on by default, there was still interest in exploring schemes for delta encoding. However, we're at an inflection point where, with the current implementation, we're seeing more sites (like Amazon) adopt Chromium's franken-version of SDCH, and we're concerned that this will further ossify implementations and limit the ability of the spec to respond to the considerations of the community, such as the aforementioned security considerations. For example, we've heard strong negative reaction towards restricting SDCH to same-origin dictionaries, because of concerns that will substantially eliminate the performance gains seen, even if it represents a very reasonable and minimal step towards better security.

There's also the quality of implementation vs surface risk cost. Given that this is exposed and implemented in the browser process, we've seen a variety of novel security issues attacking SDCH support, and similarly, have envisioned several theoretical attacks. Mitigating these requires a lot of time, care, and broader review, and we haven't really seen support for those needs, organizationally or collectively. 

 My personal sense is that SDCH is pretty far off from the ideal production version. I'd love to see SDCH supported -- and my sense is that SDCH is difficult enough to implement that the risk of ossification is somewhat low (as only sophisticated parties can implement it). Personally I believe the most compelling arguments are (1) imminent security risks that can not be easily addressed (2) the burden on the networking team to support the feature.

Ryan Sleevi

Nov 22, 2016, 5:37:20 PM
to Ben Maurer, blink-dev, Rick Byers, Ryan Sleevi, Randy Smith
On Tue, Nov 22, 2016 at 2:27 PM, <ben.m...@gmail.com> wrote:
How hard would it be to make SDCH enforce that CORS headers exist if the resource is cross-origin?

Actually, rather considerably difficult, which is part of the concern :) Similarly difficult is the ability to explain it in terms of Fetch (SDCH can't be explained that way, and as implemented in Chromium, it does all sorts of 'surprising' things, because it sits below any of the infrastructure defined in the Fetch spec).
 
This seems entirely reasonable to ask users to do and seems like it has the same security benefits as same origin. Would this change your stance on the feature?

I'm sorry that I didn't make the attack clearer. Even with a CORS-enabled opt-in, the concern is that the ability to use the compression context across resource types, and across origins, makes it more likely that a BREACH/CRIME style attack can be mounted on the compression context. Although SDCH has the advantage that it represents a static dictionary set (as opposed to a dynamic window, which makes such attacks easier), the ability to do length determinations based on the dictionary used presents this risk.
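For readers unfamiliar with the attack class, here is a toy illustration of the length oracle underlying it, using plain gzip over a shared compression context (not SDCH-specific):

```typescript
import { gzipSync } from 'node:zlib';

// Toy length-oracle, the core of CRIME/BREACH-style attacks: when an
// attacker-controlled guess is compressed in the same context as a
// secret, a guess sharing a longer prefix with the secret tends to
// compress smaller, so the observable (on-the-wire) length leaks the
// secret byte by byte. Real attacks average over many samples, since
// single-byte differences can be masked by framing and Huffman coding.
const secret = 'token=s3cr3t';
for (const guess of ['token=a', 'token=s']) {
  const size = gzipSync(Buffer.from(secret + ' ' + guess)).length;
  console.log(`${guess} -> ${size} bytes compressed`);
}
```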
 
 My personal sense is that SDCH is pretty far off from the ideal production version. I'd love to see SDCH supported -- and my sense is that SDCH is difficult enough to implement that the risk of ossification is somewhat low (as only sophisticated parties can implement it). Personally I believe the most compelling arguments are (1) imminent security risks that can not be easily addressed (2) the burden on the networking team to support the feature.

I would say the ossification concern exists largely because of efforts from the //net team to rationalize some of the behaviours. The negative reactions from implementors towards origin-restriction (and the lack of a good standards discussion or support from other vendors as to the security necessity of such a restriction), and the performance complaints encountered while attempting to rationalize dictionary storage (which has a substantial negative interaction with the HTTP disk cache and with browser memory), suggest that some ossification has occurred, or that the complexity costs of responding to changes are substantially high for those that have already gotten over the initial hurdles.

Randy Smith

Nov 22, 2016, 5:41:21 PM
to Ryan Sleevi, Rick Byers, Ben Maurer, blink-dev
On Tue, Nov 22, 2016 at 4:31 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Tue, Nov 22, 2016 at 1:16 PM, Rick Byers <rby...@chromium.org> wrote:
Ryan, can you comment on the apparent disagreement between your metrics and those seen by Amazon?  Do you believe their experience isn't representative for some reason, or are you measuring different things from them?

I'm hoping Randy can comment more here on the metrics, as he's been driving the measurements. My understanding is that, unsurprisingly, the nature of the content can affect the savings, and dynamic content receives much less benefit than mostly static content. For a shopping site like Amazon, which has significant areas of static content, this may offer considerably more savings.

The simple answer here is that I don't trust the in-Chromium metrics around SDCH; they don't show a major advantage, but I don't have faith that they're measuring the right things.  One of the things I've wanted for a while is to put some careful thought into what the right measurements in Chromium would be, then implement those metrics and gather the results.  But that hasn't happened because of the lack of staffing on our end.

So if Amazon has measured the results and says it's amazing, I'm tentatively inclined to believe them, but not comfortable with letting that drive keeping or tossing SDCH because it's not over the entire pool of Chromium instances and doesn't necessarily balance/include the costs (memory, network fetching of dictionaries). 

Having said that, I'm a bit surprised we didn't see something like what Eric reports in the quick scan I did--my evaluation was based on metrics which should be at least somewhat parallel to responseEnd - responseStart.  If anyone wants to take a look at the code, my sketch of the pathways and the stats that they feed into follows.  Maybe there's an important difference between responseStart and end of first read?

Methodology: End of first read (request_time_snapshot_) and end of last read (final_packet_time_) are recorded in URLRequestHttpJob::UpdatePacketReadTimes(), which is called after each raw read from the network.  Which histogram the difference between those values is placed in is chosen in URLRequestHttpJob::AddExtraHeaders() based on the dictionaries being advertised, state in the SdchManager about whether we've previously successfully done SDCH to this host, and a random assignment to decode or holdback.  The histogram names are Sdch3.Experiment3_{Decode,Holdback}.  

-- Randy

Ben Maurer

Nov 22, 2016, 6:03:50 PM
to Rick Byers, Ryan Sleevi, blink-dev, Randy Smith
One other pitfall I wanted to point out in removing this -- if we want to push people to implement SDCH using service workers, having the ability to do a fair performance comparison of a service worker version of SDCH vs a native one would be a persuasive argument against implementing the feature in-browser. It'd be a shame to lose such an opportunity, especially since it might more permanently relieve the networking team of having to support SDCH like features :-)

As a thought here -- Eric you seem pretty passionate about the benefits of this feature for Amazon. I know you mentioned your preference not to use SW to implement this. Could you expand on this? Would you guys have the resources to build an SDCH implementation in service worker? If you guys could A/B test native vs SW SDCH that could either (1) provide an argument that SDCH need not be in the platform and that even if SDCH should ultimately be native that there's a good platform for testing it or (2) expose deficiencies that demonstrate a need for either SW improvements or for SDCH to be made native. 


eri...@amazon.com

Nov 22, 2016, 6:11:43 PM
to blink-dev, rsl...@chromium.org, rby...@chromium.org, ben.m...@gmail.com, rds...@chromium.org

A few notes:
* We use a dictionary from the same origin as the web content being compressed.
* At this point, Amazon uses this only for web pages - not for separately served JS or CSS files. We're considering usage for JS/CSS, but are first focusing on HTML.
* Amazon pages are extremely dynamic, with changes being deployed pretty constantly to both page structure and supporting resources like JS/CSS.
* Amazon does a lot of optimization around when JavaScript and CSS are inlined versus referenced. SDCH especially helps those cases where we're inlining content.
* We only use SDCH to compress a small fraction of the types of pages we serve. So even though the browser may advertise a dictionary on a request, for most page types we'll still return a gzipped page with the x-sdch-encode:0 header. We started with product detail pages and have been rolling it out more broadly.
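A hypothetical sketch of that server-side negotiation, with the page-type list assumed and the encoder elided (the Avail-Dictionary request header, the "sdch, gzip" encoding chain, and X-Sdch-Encode: 0 are part of the SDCH proposal; everything else here is illustrative):

```typescript
import { createServer } from 'node:http';
import { gzipSync } from 'node:zlib';

// Sketch: SDCH-encode only opted-in page types, and fall back to gzip
// with "X-Sdch-Encode: 0" everywhere else so the client knows no
// dictionary decode is needed. sdchEncode() stands in for a real
// VCDIFF encoder.
const SDCH_PAGE_TYPES = new Set(['/dp']); // e.g. product detail pages (assumed)

function sdchEncode(body: Buffer, dictId: string): Buffer {
  throw new Error('VCDIFF encoder elided in this sketch');
}

createServer((req, res) => {
  const body = Buffer.from('<html>rendered page</html>');
  const dictId = req.headers['avail-dictionary']; // SDCH advertisement header
  const pageType = new URL(req.url ?? '/', 'https://example.test').pathname;

  if (typeof dictId === 'string' && SDCH_PAGE_TYPES.has(pageType)) {
    // SDCH output is itself gzipped, hence the two-stage encoding.
    res.setHeader('Content-Encoding', 'sdch, gzip');
    res.end(gzipSync(sdchEncode(body, dictId)));
  } else {
    res.setHeader('X-Sdch-Encode', '0');
    res.setHeader('Content-Encoding', 'gzip');
    res.end(gzipSync(body));
  }
}).listen(8080);
```

The explicit X-Sdch-Encode: 0 matters because the client has advertised a dictionary; without it, the client could not distinguish a plain gzip response from an SDCH-encoded one.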

Ryan Sleevi

Nov 22, 2016, 6:18:29 PM
to Ben Maurer, Rick Byers, Ryan Sleevi, blink-dev, Randy Smith
On Tue, Nov 22, 2016 at 3:03 PM, Ben Maurer <ben.m...@gmail.com> wrote:
One other pitfall I wanted to point out in removing this -- if we want to push people to implement SDCH using service workers, having the ability to do a fair performance comparison of a service worker version of SDCH vs a native one would be a persuasive argument against implementing the feature in-browser. It'd be a shame to lose such an opportunity, especially since it might more permanently relieve the networking team of having to support SDCH like features :-)

Would showing the performance benefits of native vs SW be that useful? Wouldn't the comparison be against no-SW vs with-SW, in order to show there's benefits to be had? Showing native vs SW would seem to be about optimizing the current implementation, but if there isn't cross-browser support for the current implementation, would optimizing help?

I suppose it's partially a pragmatic concern - showing great performance benefits, without security rails in place, makes it much harder to introduce security features, because it's constantly fighting against inertia. Developing and improving performance in parallel with security often leads to a better mix of results, which is what going a SW-only path can help encourage.

eri...@amazon.com

Nov 22, 2016, 6:32:53 PM
to blink-dev, ben.m...@gmail.com, rby...@chromium.org, rsl...@chromium.org, rds...@chromium.org

W.r.t. cross-browser support: I spoke with the IE/Edge team about early SDCH results we had a while ago, and they said that if we had good numbers to release on the topic, they'd be open to implementing it. My first public release of the results we saw was on this thread, though.

For service worker we’ve got a number of concerns.

One of the biggest is just that the nature of service worker makes it a concern to roll out a site-wide service worker on an established web site that has hundreds or thousands of different applications owned by different teams and with different needs, but hosted on the same domain. If we were starting from scratch today we might start with a rigid policy around subdomain usage or explicit paths for each application, but we don’t have that today and we have 20 years of SEO’d URL’s that aren’t structured in the way that service worker prefers and would allow us to do more isolation. Though even if we could have white lists and black lists they would be extremely difficult to keep up to date just because of the mix of products on a single domain.

Additionally, Service Worker is still relatively immature (streaming, for example, was only recently added), which limits what's possible compared to a native implementation in Chrome. My understanding from talking with Chrome developers over the years is that SDCH required a fair amount of optimization in how and when dictionaries were loaded into memory, which would be difficult or impossible with Service Worker, so it seems really unlikely we'd be able to get similar performance. I know that currently service workers can have a noticeably negative performance impact on pages if they essentially don't act on them, which would be a very common situation for us.

Soon I'm going to be leaving town for the holiday - back on Monday.

-Eric

Ben Maurer

Nov 22, 2016, 7:02:38 PM
to eri...@amazon.com, blink-dev, Rick Byers, Ryan Sleevi, Randy Smith
On Tue, Nov 22, 2016 at 3:32 PM, <eri...@amazon.com> wrote:
One of the biggest is just that the nature of service worker makes it a concern to roll out a site-wide service worker on an established web site that has hundreds or thousands of different applications owned by different teams and with different needs, but hosted on the same domain. If we were starting from scratch today we might start with a rigid policy around subdomain usage or explicit paths for each application, but we don’t have that today and we have 20 years of SEO’d URL’s that aren’t structured in the way that service worker prefers and would allow us to do more isolation. Though even if we could have white lists and black lists they would be extremely difficult to keep up to date just because of the mix of products on a single domain.

At least for SDCH this seems like a non-issue -- the SW would only have to add a header advertising dictionary support. In fact SW would give you more options to scope this advertisement than native SDCH. 
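For instance, a sketch of that kind of scoping, which a worker can do per-request (the header name and path list are illustrative):

```typescript
// Scope the dictionary advertisement to an opt-in path list, something
// a Service Worker can decide per request.
declare const self: ServiceWorkerGlobalScope;

const DELTA_PATHS = [/^\/gp\/product\//]; // hypothetical opt-in list

self.addEventListener('fetch', (event: FetchEvent) => {
  const path = new URL(event.request.url).pathname;
  if (!DELTA_PATHS.some(re => re.test(path))) {
    return; // not handled: the request proceeds untouched
  }
  const headers = new Headers(event.request.headers);
  headers.set('X-Avail-Dictionary', 'abc123'); // assumed dictionary ID
  event.respondWith(fetch(new Request(event.request, { headers })));
});
```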

Additionally, Service Worker is still relatively immature (streaming, for example, was only recently added), which limits what's possible compared to a native implementation in Chrome. My understanding from talking with Chrome developers over the years is that SDCH required a fair amount of optimization in how and when dictionaries were loaded into memory, which would be difficult or impossible with Service Worker, so it seems really unlikely we'd be able to get similar performance. I know that currently service workers can have a noticeably negative performance impact on pages if they essentially don't act on them, which would be a very common situation for us.

This is my concern too, and why I think having a native vs SW bakeoff is really important.

Yoav Weiss

Nov 23, 2016, 4:04:09 AM
to Ben Maurer, eri...@amazon.com, blink-dev, Rick Byers, Ryan Sleevi, Randy Smith
It should be noted that LinkedIn also experimented with SDCH, and got good results.

I've played around with SDCH in the past, attempting to speed up various sites. 
My impression was:
* SDCH benefits can be huge, depending on dictionary. There are obvious tradeoffs between dictionary size and benefits.
* Dictionary creation is cumbersome and costly.
* Sending down the dictionary out-of-band has non-negligible costs (in terms of user bandwidth), which may offset some of the gains made in later navigations. OOB sending works very well for very large sites with often-returning users, but less so for smaller sites. For them, the user may pay the dictionary "price" on first navigation, and never return to the site to benefit from SDCH on followup navigations.

So, all in all I think delta compression can be very promising, but I'm not sure SDCH is the best scheme to make it happen. (IMO, if it were, it would have made more progress and gotten more adoption in the ~8 years it's been around.)

As mentioned above, Service Workers are indeed a very promising avenue for delta compression experimentation (SDCH or otherwise).

j...@chromium.org

Nov 23, 2016, 3:53:31 PM
to blink-dev, rsl...@chromium.org, rby...@chromium.org, ben.m...@gmail.com, rds...@chromium.org, jar_...@roskind.com

It is unsurprising that Amazon was able to see large gains via SDCH, reported elsewhere on this thread. Historically at Google, when I first implemented SDCH in Chrome while others implemented the server support and we rolled it out, it provided very significant gains to what was then Google's Desktop Search. At the time, I recall that we could compress roughly 50% of the search results to Chrome (limited mostly by middlebox tampering of HTTP), and the average latency savings for those successfully SDCH-compressed search-result payloads was over 40ms (which was pretty gigantic at the time, relative to total search response time, and was verified by meticulous A/B tests). The average compressed payload at the time was roughly 40% smaller than mere gzip compression (due to excellent work by folks generating dictionaries!!), which produced critically large gains for what I'd term "serialization latency" (re: if you send less data, the knot-hole bandwidth limitation can be traversed faster). [Side note: the savings were so significant, it made me push to increase the TCP initial congestion window to fully achieve the potential of this compression, and that too is resolved today.] I believe all of this has been discussed in numerous public forums, but I'm recounting it here to help support the understanding of the value proposition of SDCH on this thread.

...but things have changed in various places....

Today Google search probably doesn't routinely rely on a simple HTTP GET and response result for search, so I don't have any idea what the savings are. I can see in my current Chrome, via about:net-internals and searching for "content-encoding: SDCH", that Google continues to use SDCH, but I have no idea how often they update their dictionaries to re-push, and I haven't looked deeper to try to evaluate the compression ratios that I'd anecdotally get. Folks at Google that generate current dictionaries may be able to comment on the expected current compression rates, but with Google Search being so radically different today (I think), results may be vastly different from what I saw in the past. These changes *might* impact the value appraisal seen in Chrome Histograms, as noted by Randy, but that should not suggest that SDCH is less valuable. At a minimum, such histograms should be teased apart to look at performance in places like India, after verifying that Google's dictionaries are currently helpful :-).

Another change in the world is certainly that bandwidth has continued to rise, even though (as I've stressed with QUIC) the speed of light has remained constant, and hence RTTs have not fallen. When we look at geographic areas such as India or Russia, bandwidth might not be growing so fast, RTT times routinely approach 400ms, and worse yet, packet loss can routinely approach 15-20% <gulp!!>. With such long RTTs, and the significant loss rates, the compression rate has an even larger impact on latency than mere "serialization" costs. The more packets you send, the more likely one is to get lost, and the more likely an additional RTT (or 2+!!) will be required for payload delivery. As a result, it is very unsurprising that Amazon sees big savings in the tail of the distribution, when using SDCH in such areas. I'd expect that most any sites that were willing to make the effort of supporting SDCH (and have repeat visitors!!!) have a fine chance of seeing similar significant latency gains (and sites can estimate the compression rate for a given dictionary before wasting the time to ship the dictionary!).

In countries with large RTTs, the background download (of a dictionary) may be a very small price to pay for the reduced page latency that makes a product usable. SDCH sure seems like a big part of that... until we have something notably better. Consistent use of TLS (HTTPS) precludes middle-box tampering, which was perhaps the hardest obstacle to overcome in the deployment of SDCH (that I worked on). I'm hoping that recent work to allow SDCH to be used over TLS in Chrome will drive us further, and that Mozilla will get on the bandwagon.

SDCH does not solve world hunger, but it sure helps network performance and latency.  With current approaches in HTTP/2 and QUIC, to more gracefully handle header compression, a lot of the historical security problems have become manageable (and I think this is then separate from SDCH).  Given that it is no longer necessary to shard hostnames (re: HTTP/2 and QUIC allow for large number of pending fetches at a single host), I strongly suspect that SDCH could be trivially tightened up (if need be) to better respond to potential cross-site security issues (i.e., cross site may not be needed anymore!).  There may also be an issue with enhancing the identification of the dictionary, as compared with the slightly weaker use of a crypto hash currently... but that sure seems like small potatoes, compared with evolving a really valuable compression scheme... that works.

I hope Chrome doesn't unship it.

YMMV,

Jim


Opinions expressed are mine, and not those of my company.

Ryan Sleevi

Nov 23, 2016, 4:26:12 PM
to Jim Roskind, blink-dev, Ryan Sleevi, Rick Byers, Ben Maurer, Randy Smith, jar_...@roskind.com
On Wed, Nov 23, 2016 at 12:53 PM, <j...@chromium.org> wrote:

SDCH does not solve world hunger, but it sure helps network performance and latency.  With current approaches in HTTP/2 and QUIC, to more gracefully handle header compression, a lot of the historical security problems have become manageable (and I think this is then separate from SDCH).  Given that it is no longer necessary to shard hostnames (re: HTTP/2 and QUIC allow for large number of pending fetches at a single host), I strongly suspect that SDCH could be trivially tightened up (if need be) to better respond to potential cross-site security issues (i.e., cross site may not be needed anymore!).  There may also be an issue with enhancing the identification of the dictionary, as compared with the slightly weaker use of a crypto hash currently... but that sure seems like small potatoes, compared with evolving a really valuable compression scheme... that works.

 

I hope Chrome doesn't unship it.


Jim,

Thanks for chiming in. While the performance gains may be exciting, it doesn't seem to address the structural or compatibility issues with SDCH. That is, as you note, it requires a lot of hand-holding both on the server operator side (to do it right) and the client side (to show it's working), such that there's concern as to whether or not it's actually helping even Google. Further, it seems to ignore the security concerns that were raised (and for which no solution has been put forward) or the staffing issues with supporting it (e.g. those that have tried to improve it have found it hasn't improved much, and no one is presently able to step forward to maintain it and explain it as part of the web platform).

I'm not trying to suggest these aren't useful things to explore, but it also seems that if we're to be responsible stewards of the Web platform, we need to recognize when our 'experiment' has continued for a number of years without internal or external support to push it towards standardization or better security.

For example, the ability to explain SDCH in terms of Fetch seems critically important, but is extremely non-trivial, and would require rearchitecting much of the SDCH design so that it could flow through the layers for Fetch - and without cross-vendor support/interest, we can't even be sure that it's the right thing or that anyone else would be able to implement it (and we haven't really heard any interest in implementing it from other UAs).

I'm not trying to suggest we abandon SDCH as not valuable at all (even though that is what the metrics Randy mentioned do suggest, for the users overall), just that we shouldn't be shipping something with known issues, without cross-vendor interest, and without a good path towards incubation and standardization. 

Matthew Menke

Nov 23, 2016, 6:40:25 PM
to blink-dev, ben.m...@gmail.com, eri...@amazon.com, rby...@chromium.org, rsl...@chromium.org, rds...@chromium.org
One potential concern that I haven't seen raised is that SDCH dictionaries are big, and their downloads are non-cancellable.  So if you're on a 2G connection and visit a page that triggers an SDCH dictionary download, it could have significant performance implications for the next site you visit.  Reloading won't help, trying to visit another site won't help, etc.  Not sure how big an issue it is; it may not matter in practice, or it may be a significant problem.

PhistucK

Nov 24, 2016, 1:53:42 AM
to Matthew Menke, blink-dev, Ben Maurer, eri...@amazon.com, Rick Byers, Ryan Sleevi, Randy Smith
This specific concern can always be mitigated with interventions. Just as document.write for injecting external scripts does not (or soon will not) work anymore on 2G, SDCH can be omitted from the advertised compression list on 2G connections.
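A sketch of that intervention in terms of the Network Information API, where the browser exposes it (the header name and helper are illustrative):

```typescript
// Skip advertising the dictionary on 2G, per the suggested
// intervention. navigator.connection is not exposed in all browsers,
// hence the defensive access.
function maybeAdvertiseDictionary(headers: Headers, dictId: string): void {
  const effectiveType: string | undefined =
    (navigator as any).connection?.effectiveType;
  if (effectiveType === 'slow-2g' || effectiveType === '2g') {
    return; // on 2G, let responses fall back to gzip/brotli
  }
  headers.set('X-Avail-Dictionary', dictId);
}
```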


PhistucK


Kenji Baheux

Nov 25, 2016, 3:35:16 AM
to PhistucK, Matthew Menke, blink-dev, Ben Maurer, eri...@amazon.com, Rick Byers, Ryan Sleevi, Randy Smith
"For example, the ability to explain SDCH in terms of Fetch seems critically important, but is extremely non-trivial, and would require rearchitecting much of the SDCH design so that it could flow through the layers for Fetch"

Emphasis is mine; I'm curious about this particular point.
Do we have feedback showing that explaining SDCH in Fetch is critically important?
What valuable use cases would this enable?

Thanks in advance.


Ryan Sleevi

Nov 27, 2016, 1:29:34 AM
to Kenji Baheux, PhistucK, Matthew Menke, blink-dev, Ben Maurer, eri...@amazon.com, Rick Byers, Ryan Sleevi, Randy Smith
On Fri, Nov 25, 2016 at 12:35 AM, Kenji Baheux <kenji...@google.com> wrote:
"For example, the ability to explain SDCH in terms of Fetch seems critically important, but is extremely non-trivial, and would require rearchitecting much of the SDCH design so that it could flow through the layers for Fetch"

Emphasis is mine, I'm curious about this particular point.
Do we have feedback showing that explaining SDCH in Fetch is critically important?
What valuable use cases would this enable?

Thanks in advance.

The simplest answer: If any other implementor wants to come and implement, they would either be stuck reversing Chromium's implementation or attempting an ad-hoc, non-interoperable solution. That doesn't seem desirable for anyone.

A more complex, but still localized (to just the existing functionality set), answer: What interaction model should SDCH have with proxies or authentication prompts? Is that expected? Should SDCH requests be accompanied with a CORS preflight? Can cookies be set on SDCH responses? Should they be?

And then imagine the more complicated answer: Should SDCH interact with Service Workers? For example, this would allow much more robust experimentation with dictionary caching, and potentially avoid many of the sharp edges in the (underspecified) current draft. If it shouldn't interact with SW, does that make sense and follow the principle of least surprise? What does that mean for Cache entries that had SDCH dictionaries involved? Do the dictionaries get stored alongside the cached entry, or independent of it - and what form is stored in the Cache - compressed or decompressed?

As it stands, SDCH cannot be explained using any of the existing primitives, either in HTTP or in Fetch, nor does it intentionally try to introduce new primitives in its spec. In short, it's completely non-interoperable - unless you reverse what Chromium has done, which wasn't done for any significantly principled reason other than being the quickest path to experiment.

Randy Smith

Dec 1, 2016, 7:42:24 PM
to Ben Maurer, Schurman, Eric, blink-dev, Rick Byers, Ryan Sleevi
I can't speak to the maturity or challenges of Service Worker, but FWIW I don't agree with the comment that SDCH required a fair amount of optimization in how and when dictionaries were loaded into memory.  The implementation I inherited (some time ago now) unilaterally loaded dictionaries into memory in response to Get-Dictionary (while obeying a memory cap).  I added a little bit of eviction/optimization code that implements LRU with some thrashing protection (see https://codereview.chromium.org/841883002), but it's not rocket science.  And I suspect it wouldn't be relevant unless SDCH was being used on multiple sites, so I wouldn't expect benchmarking a Service Worker implementation to need it.
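For illustration, a minimal sketch (not Chromium's actual code) of that kind of memory-capped LRU store, with the thrashing protection omitted:

```typescript
// Memory-capped LRU dictionary store: load on Get-Dictionary, evict
// least-recently-used entries once past the byte cap.
class DictionaryStore {
  private entries = new Map<string, Uint8Array>(); // insertion order = recency
  private usedBytes = 0;
  constructor(private maxBytes: number) {}

  get(id: string): Uint8Array | undefined {
    const dict = this.entries.get(id);
    if (dict !== undefined) {
      // Re-insert to mark as most recently used.
      this.entries.delete(id);
      this.entries.set(id, dict);
    }
    return dict;
  }

  add(id: string, dict: Uint8Array): void {
    this.usedBytes += dict.byteLength;
    this.entries.set(id, dict);
    // Evict least-recently-used entries until back under the cap.
    for (const [oldId, old] of this.entries) {
      if (this.usedBytes <= this.maxBytes) break;
      this.entries.delete(oldId);
      this.usedBytes -= old.byteLength;
    }
  }
}
```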

(Sorry I didn't see this on the first round -- I think I wasn't on the To: line for this email, and I skimmed over quoted text assuming I had seen it before.)

-- Randy

Jim Roskind

Dec 1, 2016, 11:23:12 PM
to rsl...@chromium.org, Kenji Baheux, PhistucK, Matthew Menke, blink-dev, Ben Maurer, eri...@amazon.com, Rick Byers, Randy Smith
I've already given my longer list of arguments/explanations, so I'll restrict this posting to mostly commenting on a single paragraph (an argument that I think you've made a few times).


On Sat, Nov 26, 2016 at 10:28 PM, Ryan Sleevi <rsl...@chromium.org> wrote:

...

The simplest answer: If any other implementor wants to come and implement, they would either be stuck reverse-engineering Chromium's implementation or attempting an ad-hoc, non-interoperable solution. That doesn't seem desirable for anyone.


There were two independent client implementations of SDCH, both done based on the published SDCH proposal.  One was for Google Toolbar, and the other was in Chrome.  Both of them interoperated beautifully with a single server implementation (still actively hosted by Google).  As a result, I don't think reverse engineering is in any way needed for other implementations to proceed. The SDCH proposal was quite detailed IMO. (There was also a mod-sdch Apache module, with which both client-side implementations also interoperated wonderfully, again based on the published SDCH proposal.)

The next paragraph lists several odd questions:
 
A more complex, but still localized (to just the existing functionality set), answer: What interaction model should SDCH have with proxies or authentication prompts? Is that expected? Should SDCH requests be accompanied with a CORS preflight? Can cookies be set on SDCH responses? Should they be?

SDCH functions much the same as any other compression algorithm, such as gzip, and it *only* compresses the body, and not the header.  As a result, things like cookies and header contents are seemingly beyond the purview of SDCH.  Any questions or concerns you raise about SDCH (actually written as content-encoding: sdch,gzip) are equally applicable to gzip, and I'd be more than surprised if you argued to "unship" gzip.
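For anyone who hasn't seen it on the wire, the flow looks roughly like this - an illustrative TypeScript sketch, not the actual implementation; the header names come from the SDCH proposal, the URL and hash values are made up, and in a real browser the network stack, not page script, manages these headers (Accept-Encoding isn't settable from a page):

    // Illustrative sketch of the SDCH header flow.
    async function requestWithSdch(url: string, availableDictHash?: string) {
      const headers: Record<string, string> = {
        "Accept-Encoding": "sdch, gzip", // advertise SDCH support
      };
      if (availableDictHash) {
        // "I already hold this dictionary" on subsequent requests.
        headers["Avail-Dictionary"] = availableDictHash;
      }
      const response = await fetch(url, { headers });

      // A server may invite the client to fetch a dictionary for later use:
      //   Get-Dictionary: /dict/search.dict
      const dictPath = response.headers.get("Get-Dictionary");

      // A server that used an advertised dictionary responds with:
      //   Content-Encoding: sdch, gzip
      // where the body is the server's dictionary hash plus a VCDIFF
      // delta, gzipped on top.
      return { response, dictPath };
    }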

If you were asking questions about whether the download of an SDCH dictionary should be allowed to set cookies, I think the answer is a resounding "sure," as the dictionary arrives as content, with headers, etc., and never gets any other special handling with regard to headers.  Since the download of the dictionary is an "optimization," failure to do so (presumably in the "background") for any reason appears to be an issue of "quality of implementation."  I could probably ponder similar questions about the near-background download of various images and JS content (although there the necessity of a download seems higher).

I'm mostly confused by the questions you are posing.  They seem pretty scattered, and I'd be sad to see a list of questions, seemingly almost rhetorical, used as justification to unship a feature that has provided significant benefit to numerous large sites (as discussed by several vendors on this thread).

Ryan Sleevi

unread,
Dec 2, 2016, 3:19:28 AM12/2/16
to Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Randy Smith, Ben Maurer, Matthew Menke, blink-dev, eri...@amazon.com

On Dec 1, 2016 8:23 PM, "Jim Roskind" <j...@chromium.org> wrote:
> There were two independent client implementations of SDCH, both done based on the published SDCH proposal.  One was for Google Toolbar, and the other was in Chrome.  Both of them interoperated beautifully with a single server implementation (still actively hosted by Google).  As a result, I don't think reverse engineering is in any way needed for other implementations to proceed.

I appreciate that two clients and one server from the same company, and many of the same engineers, were able to interoperate, but you've misunderstood my concern.

How and when is an SDCH dictionary fetch made, and how does that interact with the web platform?

I appreciate that SDCH is a topic you're passionate about, but SDCH's documentation at no time explained how it worked with the rest of the web platform.

This is precisely why we try to describe networking-related features in terms of the Fetch spec - so that there can be interoperable solutions, and the behaviour observable from the server side is consistent across clients. The Toolbar and Chrome implementations were explicitly not consistent in that regard, and similarly, anyone else who wanted to implement SDCH would have to, for example, reverse-engineer Chrome to learn whether to send it through Service Workers or preconnects or proxy prompts.

> The SDCH proposal was quite detailed IMO. 

Respectfully, I disagree, but based on your follow-up I believe the disagreement may simply be a lack of familiarity with Fetch rather than a fundamental one.


> SDCH functions much the same as any other compression algorithm, such as gzip, and it *only* compresses the body, and not the header.  As a result, things like cookies and header contents are seemingly beyond the purview of SDCH.

This is not what I was talking about. Should dictionary requests send cookies? Should they set them? What if third-party cookies are disabled? Which socket pool should they use?

I am explicitly not talking about compressing cookies, I am talking about the interaction of cookies and dictionary fetches. This is part of what it means to explain something in terms of the Fetch spec.
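Concretely, in Fetch terms a dictionary fetch would have to pin down knobs like these. An illustrative TypeScript sketch - the URL and the values shown are arbitrary placeholders, which is exactly the problem, since nothing in SDCH's spec says which ones are right:

    // Every one of these parameters is observable by servers, and SDCH
    // never specifies which values a dictionary fetch should use.
    const dictionaryResponse = await fetch("/dict/search.dict", {
      credentials: "omit", // or "include"? Do dictionary fetches send/set cookies?
      mode: "cors",        // or "no-cors"? Is a CORS preflight ever required?
      cache: "default",    // how should the dictionary interact with the HTTP cache?
      redirect: "follow",  // may a dictionary fetch be redirected, and where to?
    });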

>  Any questions or concerns you raise about SDCH (actually written as content-encoding: sdch,gzip) are equally applicable to gzip, and I'd be more than surprised if you argued to "unship" gzip.
>

Not at all, as explained above.


> If you were asking questions about whether the download of an SDCH dictionary should be allowed to set cookies, I think the answer is a resounding "sure," as the dictionary arrives as content, with headers, etc., and never gets any other special handling with regard to headers.  

And where is this specified? (It isn't) And where does Chrome allow this? (It's inconsistent) And does it work with Chrome's extension model or cookie security policy? (It doesn't)

> Since the download of the dictionary is an "optimization," failure to do so (presumably in the "background") for any reason appears to be an issue of "quality of implementation."

These are not quality-of-implementation issues. They are core, underspecified interoperability issues, full of subtle edge cases. For example, if you see a Get-Dictionary header, SDCH may fail - but if you force an XHR to that dictionary, seeding the cache, subsequent resource loads will work. That is fundamentally surprising and inconsistent, and arguably not desirable. But it's also not specified as such.
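To make that edge case concrete (the URL is illustrative):

    // The surprising workaround: page script can seed the HTTP cache with
    // the dictionary itself, after which SDCH-encoded responses decode.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/dict/search.dict");
    xhr.send(); // dictionary now cached; subsequent SDCH loads work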

> I could probably ponder similar questions about the near-background download of various images and JS content (although there the necessity of a download seems higher).

Except all of these have described behaviours, or cross-browser collaboration to describe and resolve any inconsistencies.

>
> I'm mostly confused by the questions you are posing.  They seem pretty scattered, and I'd be sad to see a list of questions, seemingly almost rhetorical, used as justification to unship a feature that has provided significant benefit to numerous large sites (as discussed by several vendors on this thread).

I encourage you to read the Fetch spec, because every question I posed is answered by Fetch, not described at all by SDCH, and handled inconsistently and illogically by Chrome's current implementation.

https://fetch.spec.whatwg.org

Jochen Eisinger

unread,
Dec 9, 2016, 3:10:20 AM12/9/16
to rsl...@chromium.org, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Randy Smith, Ben Maurer, Matthew Menke, blink-dev, eri...@amazon.com
We discussed this at the Blink API owner meeting, and came to the conclusion that we support the removal:

- Websites will likely continue to work, as servers have always had to support clients without SDCH support
- We wouldn't have shipped SDCH under our current process (lack of spec / cross-vendor support)
- It seems prudent to rely on the judgement of the network team, which has considered the alternatives and decided that removal is the best option.

eri...@gmail.com

unread,
Dec 13, 2016, 1:51:57 PM12/13/16
to blink-dev, rsl...@chromium.org, j...@chromium.org, rby...@chromium.org, kenji...@google.com, phis...@gmail.com, rds...@chromium.org, ben.m...@gmail.com, mme...@google.com, eri...@amazon.com
What is the timeline?

ramk...@gmail.com

unread,
Jan 5, 2017, 1:22:30 AM1/5/17
to blink-dev
Do you have a timeline identified for this pull-out (which version of Chrome, etc.)?

rsl...@chromium.org

unread,
Jan 6, 2017, 5:08:27 PM1/6/17
to blink-dev, ramk...@gmail.com
On Wednesday, January 4, 2017 at 10:22:30 PM UTC-8, ramk...@gmail.com wrote:
Do you have a timeline identified for this pull-out (which version of Chrome, etc.)?

We expect and hope to disable SDCH support in M58, but will be meeting with the Blink API owners in the coming weeks to ensure this timing fully balances the technical and Web Platform concerns with any business and product concerns.

ramk...@gmail.com

unread,
Jan 23, 2017, 10:25:02 AM1/23/17
to blink-dev
Thanks for the update. Do you have confirmation of the version (v58)?

Chris Bentzel

unread,
Jan 26, 2017, 11:28:54 AM1/26/17
to ramk...@gmail.com, blink-dev
There is no confirmation yet. We'll make sure to post when more details are known. Thanks.

Chris Bentzel

unread,
Feb 23, 2017, 7:31:19 AM2/23/17
to Jochen Eisinger, rsl...@chromium.org, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Randy Smith, Ben Maurer, Matthew Menke, blink-dev, eri...@amazon.com
I wanted to follow up here on unship timing as well as steps forward.

For timing, we plan to disable support for SDCH in Chrome 58.

However, we've certainly heard interest in an SDCH-like approach - both in this thread and through other channels. There's potential for both data savings and latency benefits from this style of shared-dictionary compression. We're actively looking into approaches that provide the performance benefits of SDCH while being better aligned with other web standards, and hopefully on a path to standardization and wider adoption across both browser and server vendors. Some examples of possible approaches include H2 Compression Dictionaries (proposed by Cloudflare) and shared Brotli dictionaries (being pursued by the Brotli team).

For now we're happy to hear additional feedback on net...@chromium.org about forward directions, but ideally we'd find an appropriate standards body to act as the forum instead.

ben.m...@gmail.com

unread,
Feb 23, 2017, 11:58:17 AM2/23/17
to blink-dev, joc...@chromium.org, rsl...@chromium.org, j...@chromium.org, rby...@chromium.org, kenji...@google.com, phis...@gmail.com, rds...@chromium.org, ben.m...@gmail.com, mme...@google.com, eri...@amazon.com
Are there any details on either of those proposals? I'd love to make sure FB is exploring this area as you guys are thinking about specs.

I'd also love to see if we can start prototyping ideas here using service workers. This might involve exposing some primitives (e.g. providing an API for a SW to pass a dictionary to brotli), but I think those will be easier to standardize than a full-blown spec.
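As a strawman, that primitive might look something like the following TypeScript sketch - purely hypothetical: none of these APIs exist today, and the dictionary-aware DecompressionStream, the header name, and the dictionary URL are all made up:

    // Strawman only: a service worker that decodes delta-compressed
    // responses against a previously cached shared dictionary.
    self.addEventListener("fetch", (event: any) => {
      event.respondWith((async () => {
        const response = await fetch(event.request);
        if (response.headers.get("X-Dictionary-Encoded") !== "br") return response;
        const dictResponse = await caches.match("/dict/shared.dict");
        const dictionary = new Uint8Array(await dictResponse!.arrayBuffer());
        // Hypothetical: a DecompressionStream variant accepting a dictionary.
        const body = response.body!.pipeThrough(
          new (DecompressionStream as any)("br", { dictionary }),
        );
        return new Response(body, { headers: response.headers });
      })());
    });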

Ryan Sleevi

unread,
Feb 23, 2017, 2:10:27 PM2/23/17
to Ben Maurer, blink-dev, Jochen Eisinger, Ryan Sleevi, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Randy Smith, Matt Menke, eri...@amazon.com
On Thu, Feb 23, 2017 at 8:58 AM, <ben.m...@gmail.com> wrote:
Are there any details on either of those proposals. I'd love to make sure FB is exploring this area as you guys are thinking about specs.

https://datatracker.ietf.org/doc/slides-97-httpbis-sessb-vlad-compression-dictionaries/ included the slides from the Cloudflare proposal at IETF97

Chris Bentzel

unread,
Feb 28, 2017, 5:34:30 PM2/28/17
to rsl...@chromium.org, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Randy Smith, Matt Menke, eri...@amazon.com
Update: the technical untangling of SDCH is a bit more complicated than we anticipated (particularly around cached entries), so disabling will not be in 58 and will likely be in 59.

Randy Smith

unread,
Apr 4, 2017, 6:05:50 PM4/4/17
to Chris Bentzel, Ryan Sleevi, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Matt Menke, Schurman, Eric
Update: The CL disabling SDCH landed (http://codereview.chromium.org/2785493003) last Friday and has apparently stuck, so we are on track for disabling SDCH in M59.

-- Randy


Nico Weber

unread,
Aug 8, 2017, 10:54:16 AM8/8/17
to Randy Smith, Chris Bentzel, Ryan Sleevi, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Matt Menke, Schurman, Eric
Any news on this? Can we delete the sdch code now? (I need to change something in open-vcdiff again, and getting changes to that upstreamed has historically been pretty bumpy.)

Matt Menke

unread,
Aug 8, 2017, 10:57:21 AM8/8/17
to Nico Weber, Misha Efimov, Randy Smith, Chris Bentzel, Ryan Sleevi, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Schurman, Eric
[+mef]:  I think Cronet still uses SDCH?

Misha Efimov

unread,
Aug 8, 2017, 1:25:43 PM8/8/17
to Matt Menke, Nico Weber, Randy Smith, Chris Bentzel, Ryan Sleevi, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Schurman, Eric
That's correct, Cronet still supports SDCH, although we are planning to remove it soon.

Helen Li

unread,
Sep 21, 2017, 6:16:40 PM9/21/17
to Misha Efimov, Matt Menke, Nico Weber, Randy Smith, Chris Bentzel, Ryan Sleevi, Ben Maurer, blink-dev, Jochen Eisinger, Jim Roskind, Rick Byers, Kenji Baheux, PhistucK, Schurman, Eric
Update: SDCH code is deleted (crbug.com/762686) in M63.
The code-search index is a bit stale. If you see any remaining references to SDCH, please feel free to upload a CL to remove them.
