SPDY data compression


Roberto Peon

Feb 24, 2012, 6:17:13 PM
to spdy...@googlegroups.com
I'd like to propose that we remove SPDY-level compression of payload (i.e. data frame payloads) from the draft-3 spec.
It isn't implemented anywhere and it is (imho) potentially harmful. The sender of the data can always compress it (as they can and do today) without instructing SPDY to attempt to compress everything.

We should leave the sections that talk about mandatory gzip decompression alone.

None of this is referring to header compression-- header compression is in use and is useful.

-=R

William Chan (陈智昌)

Feb 24, 2012, 6:19:34 PM
to spdy...@googlegroups.com
IIRC, it was originally dropped, but then brought back for some reason, which may or may not have been well informed. Mike might remember.

In any case, I'm also for removing it.

Daniel Stenberg

Feb 24, 2012, 6:20:51 PM
to spdy...@googlegroups.com
On Fri, 24 Feb 2012, Roberto Peon wrote:

> I'd like to propose that we remove SPDY-level compression of payload (i.e.
> data frame payloads) from the draft-3 spec. It isn't implemented anywhere
> and it is (imho) potentially harmful. The sender of the data can always
> compress it (as they can and do today) without instructing SPDY to attempt
> to compress everything.

+1, drop it.

--

/ daniel.haxx.se

Costin Manolache

Feb 24, 2012, 7:27:20 PM
to spdy...@googlegroups.com
On Fri, Feb 24, 2012 at 3:17 PM, Roberto Peon <fe...@google.com> wrote:
> I'd like to propose that we remove SPDY-level compression of payload (i.e. data frame payloads) from the draft-3 spec. It isn't implemented anywhere and it is (imho) potentially harmful. The sender of the data can always compress it (as they can and do today) without instructing SPDY to attempt to compress everything.

They can, but are they doing it today? Do we have numbers on how many sites are compressing the data (at the application level)? AFAIK it's far easier to develop an app without compression, and that's what most web apps do.

IMHO it would be nice to even allow reusing the same dictionary across multiple streams. They could, for example, compress all .html together, and maybe even use a pre-set dictionary; that would give some extra savings that aren't possible with individual stream compression.
 
Costin

Costin Manolache

Feb 24, 2012, 7:53:33 PM
to spdy...@googlegroups.com
On Fri, Feb 24, 2012 at 4:27 PM, Costin Manolache <cos...@gmail.com> wrote:
> IMHO it would be nice to even allow reusing the same dictionary across multiple streams. They could, for example, compress all .html together, and maybe even use a pre-set dictionary; that would give some extra savings that aren't possible with individual stream compression.

BTW, +1 on removing it in the current form.

Costin

Hasan Khalil

Feb 24, 2012, 11:44:45 PM
to spdy...@googlegroups.com
+1, please drop DATA compression from the spec. And hopefully this time it stays dead!

    -Hasan

Patrick McManus

Feb 25, 2012, 2:51:37 PM
to spdy...@googlegroups.com
On 2/24/2012 11:44 PM, Hasan Khalil wrote:
> +1, please drop DATA compression from the spec. And hopefully this time it stays dead!

+1 also.

but this is something we should discuss again in the IETF space if the http-wg adopts SPDY. There is a higher ratio of server/proxy folks there, and this IMO is something I would largely defer to them on (by that I mean compression support, not mandatory data compression, of course).

Mike Belshe

Feb 29, 2012, 6:21:39 PM
to spdy...@googlegroups.com
Sorry for being so slow to reply. 

The reason data compression is still in is because it helps with one very specific purpose:  client uploaded data.   

Specifically, there is no way to use compression for uploaded content in HTTP/1.1 today, because the browser can't negotiate what the server will or will not support.  If we believe the world is moving more and more to RESTful APIs where large blobs of JSON or XML are being uploaded, then data compression is a good thing.  You can always punt to the next layer of the stack, of course, but that means you'll be doing your compression in javascript.
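To make the potential saving concrete, here is a rough Python sketch (the JSON payload is invented; the headers shown are just the standard HTTP ones a compressed upload would carry):

```python
import gzip
import json

# A made-up JSON blob of the kind a RESTful API upload might carry.
payload = json.dumps(
    [{"id": i, "name": "item-%d" % i, "tags": ["alpha", "beta"]} for i in range(200)]
).encode("utf-8")

body = gzip.compress(payload)

# The request would carry the compressed body plus a header telling
# the server how to decode it.
headers = {
    "Content-Type": "application/json",
    "Content-Encoding": "gzip",
    "Content-Length": str(len(body)),
}

# Repetitive JSON compresses heavily; the exact ratio varies with the data.
print(len(payload), len(body))
```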

Mike

Antonio Vicente

Feb 29, 2012, 6:33:36 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 6:21 PM, Mike Belshe <mbe...@chromium.org> wrote:
> Sorry for being so slow to reply.
>
> The reason data compression is still in is because it helps with one very
> specific purpose:  client uploaded data.
>
> Specifically, there is no way to use compression for uploaded content in
> HTTP/1.1 today, because the browser can't negotiate what the server will or
> will not support.  If we believe the world is moving more and more to
> RESTful APIs where large blobs of JSON or XML are being uploaded, then data
> compression is a good thing.  You can always punt to the next layer of the
> stack, of course, but that means you'll be doing your compression in
> javascript.
>
> Mike

Is there a feature request for exposing a gzip function in javascript?
If not, should we file such a feature request?
It'll take a while for it to be available everywhere so apps would
need to be able to fall back to a js implementation or no compression.

-antonio

p.s. Welcome back.

William Chan (陈智昌)

Feb 29, 2012, 6:39:11 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 3:21 PM, Mike Belshe <mbe...@chromium.org> wrote:
> Specifically, there is no way to use compression for uploaded content in HTTP/1.1 today, because the browser can't negotiate what the server will or will not support. If we believe the world is moving more and more to RESTful APIs where large blobs of JSON or XML are being uploaded, then data compression is a good thing. You can always punt to the next layer of the stack, of course, but that means you'll be doing your compression in javascript.

The solution for this should not be forced data compression in the client, but rather extending XHR or whatever to expose a way to request compression of the payload. I've discussed this internally at Google with some folks.

Mike Belshe

Feb 29, 2012, 6:44:03 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 3:39 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> The solution for this should not be forced data compression in the client, but rather extending XHR or whatever to expose a way to request compression of the payload. I've discussed this internally at Google with some folks.

I'm not understanding how this would work.   For HTTP, you have no way to negotiate it; so I don't see any way it could possibly work without a round trip.

With SPDY, we can force compression support from the get-go, so there is no negotiation necessary.  compression for the win!  :-)

Mike

William Chan (陈智昌)

Feb 29, 2012, 6:52:40 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 3:44 PM, Mike Belshe <mbe...@chromium.org> wrote:
> I'm not understanding how this would work. For HTTP, you have no way to negotiate it; so I don't see any way it could possibly work without a round trip.

I don't understand what you need to negotiate. The server knows that it supports gzip. It ships JS code to the browser which uses an XHR API to request the browser compress the content and send it to the server.
 

> With SPDY, we can force compression support from the get-go, so there is no negotiation necessary. compression for the win! :-)

This sucks when you upload an already compressed file. Think of Dropbox or other cloud hosted storage solutions with web interfaces where people store their pirated mp3s and porn^W^W^W^Wpersonal music and video files.
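The overhead is easy to demonstrate (a Python sketch; random bytes stand in for an already-compressed upload such as an mp3 or video file):

```python
import gzip
import os

# Random bytes are effectively incompressible, much like the output
# of a good audio/video codec or an already-zipped archive.
already_compressed = os.urandom(64 * 1024)

recompressed = gzip.compress(already_compressed)

# gzip cannot shrink it further; the framing overhead makes the
# result slightly larger, so blanket DATA compression burns CPU
# for negative savings on uploads like these.
print(len(already_compressed), len(recompressed))
```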

Mike Belshe

Feb 29, 2012, 7:01:15 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 3:52 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> I don't understand what you need to negotiate. The server knows that it supports gzip. It ships JS code to the browser which uses an XHR API to request the browser compress the content and send it to the server.

What is this XHR API you're talking about?  Something new?

So if I'm sending a POST request to a twitter API, how do I know if twitter supports compressed post data? How does twitter's server know if this request was compressed or not?

I guess you could send a content-type: application/compressed-json
 
 

> This sucks when you upload an already compressed file. Think of Dropbox or other cloud hosted storage solutions with web interfaces where people store their pirated mp3s and porn^W^W^W^Wpersonal music and video files.

The browser doesn't have to use the flag. It can use it for json data but not for binary files.

I guess I agree this isn't a huge feature of the protocol, but we could make it work right now. I'm not sure the status on when/if/ever for XHR changes.

mike

William Chan (陈智昌)

Feb 29, 2012, 7:12:32 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 4:01 PM, Mike Belshe <mbe...@chromium.org> wrote:
> What is this XHR API you're talking about?  Something new?

Vaporware, only discussed internally. I think someone should propose this addition to XHR.
 

> So if I'm sending a post request to a twitter API, how do I know if twitter supports compressed post data? how does twitter's server know if this request was compressed or not?
>
> I guess you could send a content-type: application/compressed-json

Yeah, use a Content-Type or whatever, and the server can return an error code if it doesn't like it; 415 seems appropriate. I would like to leave the determination of whether or not it's supported to some other out-of-band mechanism (in your case, I think the Twitter API should document it).
 
 
 

> The browser doesn't have to use the flag. It can use it for json data but not for binary files.
>
> I guess I agree this isn't a huge feature of the protocol, but we could make it work right now. I'm not sure the status on when/if/ever for XHR changes.

I guess I feel like we should not require POSTs with compressible content to have to be deployed over SPDY. I think this is quite fixable within a reasonable period in the web platform, and we don't need to hack in a fix for the web platform's deficiency into SPDY.

Ryan Sleevi

Feb 29, 2012, 7:30:25 PM
to spdy...@googlegroups.com
On Wed, Feb 29, 2012 at 4:12 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
>> What is this XHR API you're talking about?  Something new?
>
> Vaporware, only discussed internally. I think someone should propose this addition to XHR.

Or perhaps somewhere else. WebApps?

This has been discussed in the W3C DOMCrypt group, in the IETF's JOSE (Javascript Object Signing and Encryption) group, and in the IETF HYBI group.

Considering that there is a growing request for it in a number of different use cases, would it make sense to remove from the SPDY spec and let this be left to the application layer/web platform?


Roberto Peon

Mar 1, 2012, 2:34:37 PM
to spdy...@googlegroups.com
I'd say yes. Blindly gzipping POSTs seems to be a poor choice compared to the other options. Getting an API exposed for it in JS seems far preferable.
-=R

Matthew Steele

Mar 1, 2012, 2:53:33 PM
to spdy...@googlegroups.com
Silly question: why can't the browser just send the POST with
Content-Encoding: gzip (for appropriate content-types)? As I recall,
SPDY endpoints are required by the spec to accept gzip/deflate even if
they don't say they do, so there's no need to negotiate support with
the server first, right?
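If that recollection is right, the receiving side reduces to a simple dispatch on Content-Encoding. A rough Python sketch (the function name is illustrative; "deflate" is taken here to mean the zlib-wrapped format, as HTTP defines it):

```python
import gzip
import zlib

def decode_request_body(headers, body):
    """Decode an entity body according to its Content-Encoding.

    Sketch of what an endpoint that unconditionally accepts
    gzip/deflate would do -- no prior negotiation required.
    """
    encoding = headers.get("content-encoding", "identity").lower()
    if encoding == "gzip":
        return gzip.decompress(body)
    if encoding == "deflate":
        return zlib.decompress(body)  # zlib-wrapped deflate (RFC 1950)
    if encoding == "identity":
        return body
    raise ValueError("unsupported Content-Encoding: %s" % encoding)

# A browser could then gzip eligible POST bodies unconditionally:
raw = b"name=alice&comment=hello+world"
decoded = decode_request_body({"content-encoding": "gzip"}, gzip.compress(raw))
```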

Costin Manolache

Mar 1, 2012, 2:55:08 PM
to spdy...@googlegroups.com
Why 'blindly'? A common solution is for the server to have a list of content types that it compresses, and it's usually hidden from the user (since the server needs to make various decisions based on the other side, sometimes using the user agent, etc.).

Same for the client - you do need to send a content type, and you can choose to compress json and form data only.

Costin

Roberto Peon

Mar 1, 2012, 3:29:07 PM
to spdy...@googlegroups.com
Sure, so you can examine the file extension for some help. Or you can examine the first few bytes for magic numbers to see if it is a recognized filetype. It is all just guessing. In any of these, did we allow the application to declare that it doesn't make sense to compress? (I'm an advocate for compression by default, with the ability to disable it.)
 

> Same for the client - you do need to send a content type, and you can choose to compress json and form data only.

You're also assuming all you see is the HTTP requests. What about WebSocket? I'd guess that some people would like to compress their data before sending there as well.
Doing it at this layer solves the immediate problem and, yes, it does help to combat uncompressed-because-of-programmer-laziness issues; however, those aren't the only use cases, and the other ones are also important.
-=R



Valentin V. Bartenev

Mar 7, 2012, 6:58:45 AM
to spdy...@googlegroups.com
On Thursday 01 March 2012 03:52:40 William Chan (陈智昌) wrote:
[...]

> > With SPDY, we can force compression support from the get-go, so there is
> > no negotiation necessary. compression for the win! :-)
>
> This sucks when you upload an already compressed file. Think of Dropbox or
> other cloud hosted storage solutions with web interfaces where people store
> their pirated mp3s and porn^W^W^W^Wpersonal music and video files.

http://tools.ietf.org/html/rfc1951#page-11

According to the RFC, the deflate format may consist of uncompressed (stored) blocks of data.
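In other words, a conforming deflate stream can pass incompressible data through in stored blocks, paying only a few bytes of framing. zlib's compression level 0 demonstrates this (a Python sketch):

```python
import os
import zlib

payload = os.urandom(10_000)  # stand-in for already-compressed data

# Level 0 makes deflate emit only stored (non-compressed) blocks.
stored = zlib.compress(payload, 0)

# The overhead is just the zlib header, per-block headers, and the
# checksum -- a handful of bytes, not a re-expansion of the data.
print(len(stored) - len(payload))
```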

wbr, Valentin V. Bartenev

Bijan Vakili

Jul 10, 2013, 5:48:14 AM
to spdy...@googlegroups.com
Hi Mike,


> The reason data compression is still in is because it helps with one very specific purpose: client uploaded data.

OK; why not allow disabling it, so the agent can opt for sending uncompressed?

"Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format." [1]

"Always" is too strong; why not "by default"?

[1] "SPDY: An experimental protocol for a faster web", accessed 10 July 2013, http://www.chromium.org/spdy/spdy-whitepaper, permalink: http://web.archive.org/web/20130426110049/http://www.chromium.org/spdy/spdy-whitepaper



Bijan Vakili

Jul 10, 2013, 5:49:34 AM
to spdy...@googlegroups.com
One reason is plain text's simplicity. Sometimes, e.g. when debugging, plain is best.
