> I'd like to propose that we remove SPDY-level compression of payload (i.e.
> data frame payloads) from the draft-3 spec. It isn't implemented anywhere
> and it is (imho) potentially harmful. The sender of the data can always
> compress it (as they can and do today) without instructing SPDY to attempt
> to compress everything.
+1, drop it.
--
I'd like to propose that we remove SPDY-level compression of payload (i.e. data frame payloads) from the draft-3 spec. It isn't implemented anywhere and it is (imho) potentially harmful. The sender of the data can always compress it (as they can and do today) without instructing SPDY to attempt to compress everything.
On Fri, Feb 24, 2012 at 3:17 PM, Roberto Peon <fe...@google.com> wrote:
> I'd like to propose that we remove SPDY-level compression of payload
> (i.e. data frame payloads) from the draft-3 spec. It isn't implemented
> anywhere and it is (imho) potentially harmful. The sender of the data
> can always compress it (as they can and do today) without instructing
> SPDY to attempt to compress everything.

They can - but are they doing it today? Do we have numbers on how many sites are compressing the data (at the application level)? AFAIK it's far easier to develop an app without compression, and that's what most web apps do.

IMHO it would be nice to allow reusing the same dictionary across multiple streams. They could, for example, compress all .html - and maybe even use a pre-set dictionary, which would give some extra savings that aren't possible with per-stream compression.
+1, please drop DATA compression from the spec. And hopefully this time it stays dead!
Is there a feature request for exposing a gzip function in JavaScript?
If not, should we file one? It'll take a while for it to be available
everywhere, so apps would need to be able to fall back to a JS
implementation or to no compression.
-antonio
p.s. Welcome back.
Sorry for being so slow to reply.

The reason data compression is still in is that it helps with one very specific purpose: client-uploaded data.

Specifically, there is no way to use compression for uploaded content in HTTP/1.1 today, because the browser can't negotiate what the server will or will not support. If we believe the world is moving more and more to RESTful APIs where large blobs of JSON or XML are uploaded, then data compression is a good thing. You can always punt to the next layer of the stack, of course, but that means you'll be doing your compression in JavaScript.
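To make the upload case concrete, here is a quick sketch (the JSON payload is made up for illustration, not from the thread) of how much a typical RESTful JSON upload shrinks under deflate, the kind of win transparent SPDY-level compression would capture:

```python
import json
import zlib

# Hypothetical RESTful upload: a large-ish, repetitive JSON blob of the
# kind described above (the payload is invented for illustration).
payload = json.dumps(
    [{"id": i, "name": f"user-{i}", "active": i % 2 == 0} for i in range(1000)]
).encode("utf-8")

# Deflate at the default level, as a SPDY sender (or a browser, if an
# upload-compression API existed) might apply it.
compressed = zlib.compress(payload)

print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")
assert len(compressed) < len(payload)  # repetitive JSON compresses well
```

The exact ratio depends on the payload, but structured JSON/XML routinely compresses severalfold, which is the savings left on the table for uploads today.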
On Wed, Feb 29, 2012 at 3:21 PM, Mike Belshe <mbe...@chromium.org> wrote:
> The reason data compression is still in is that it helps with one very
> specific purpose: client-uploaded data. Specifically, there is no way to
> use compression for uploaded content in HTTP/1.1 today, because the
> browser can't negotiate what the server will or will not support. [...]
> You can always punt to the next layer of the stack, of course, but that
> means you'll be doing your compression in javascript.

The solution for this should not be forced data compression in the client, but rather extending XHR or whatever to expose a way to request compression of the payload. I've discussed this internally at Google with some folks.
On Wed, Feb 29, 2012 at 3:39 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> The solution for this should not be forced data compression in the
> client, but rather extending XHR or whatever to expose a way to request
> compression of the payload. I've discussed this internally at Google
> with some folks.

I'm not understanding how this would work. For HTTP, you have no way to negotiate it, so I don't see any way it could possibly work without a round trip.

With SPDY, we can force compression support from the get-go, so there is no negotiation necessary. Compression for the win! :-)
On Wed, Feb 29, 2012 at 3:44 PM, Mike Belshe <mbe...@chromium.org> wrote:
> I'm not understanding how this would work. For HTTP, you have no way to
> negotiate it; so I don't see any way it could possibly work without a
> round trip.

I don't understand what you need to negotiate. The server knows that it supports gzip. It ships JS code to the browser which uses an XHR API to request that the browser compress the content and send it to the server.

> With SPDY, we can force compression support from the get-go, so there is
> no negotiation necessary. compression for the win! :-)

This sucks when you upload an already-compressed file. Think of Dropbox or other cloud-hosted storage solutions with web interfaces where people store their pirated mp3s and porn^W^W^W^Wpersonal music and video files.
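The already-compressed case is easy to demonstrate; here is a small sketch (random bytes stand in for compressed media, since both are near-incompressible) showing that deflating such an upload only adds overhead:

```python
import os
import zlib

# Stand-in for an already-compressed upload (an mp3 or video file):
# compressed media has near-random byte statistics, so random bytes
# model it closely.
media = os.urandom(100_000)

recompressed = zlib.compress(media)

# Deflate finds nothing to squeeze out and falls back to stored blocks,
# so the "compressed" upload ends up at least as large as the original.
print(f"original: {len(media)}, after deflate: {len(recompressed)}")
assert len(recompressed) >= len(media)
```

This is the cost of compressing unconditionally: CPU spent on every frame, and slight inflation of exactly the large binary uploads where bytes matter most.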
On Wed, Feb 29, 2012 at 3:52 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> I don't understand what you need to negotiate. The server knows that it
> supports gzip. It ships JS code to the browser which uses an XHR API to
> request that the browser compress the content and send it to the server.

What is this XHR API you're talking about? Something new?

So if I'm sending a POST request to a twitter API, how do I know if twitter supports compressed POST data? How does twitter's server know whether this request was compressed or not? I guess you could send a content-type: application/compressed-json.

> This sucks when you upload an already-compressed file. Think of Dropbox
> or other cloud-hosted storage solutions with web interfaces where people
> store their pirated mp3s and porn^W^W^W^Wpersonal music and video files.

The browser doesn't have to use the flag. It can use it for JSON data but not for binary files. I guess I agree this isn't a huge feature of the protocol, but we could make it work right now. I'm not sure of the status on when/if/ever for XHR changes.
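The header-based signaling being discussed could look something like the following sketch (the `read_body` helper and its headers dict are hypothetical, purely to illustrate the idea): the server inflates the body only when the client flagged it, so clients that never compress keep working unchanged.

```python
import gzip
import json

def read_body(headers: dict, body: bytes) -> dict:
    """Hypothetical server-side handler: inflate the request body only
    when the client labeled it as gzip-compressed."""
    if headers.get("content-encoding") == "gzip":
        body = gzip.decompress(body)
    return json.loads(body)

raw = json.dumps({"status": "hello"}).encode("utf-8")

# A client that opted in to compressing its POST data:
compressed_req = read_body({"content-encoding": "gzip"}, gzip.compress(raw))
# A legacy client that did not:
plain_req = read_body({}, raw)

assert compressed_req == plain_req == {"status": "hello"}
```

The open problem the thread identifies remains: over plain HTTP/1.1 the client has no way to know in advance that a given server (e.g. twitter's) accepts such bodies, which is exactly what SPDY's mandatory support would have sidestepped.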
> What is this XHR API you're talking about? Something new?

Vaporware, only discussed internally. I think someone should propose this addition to XHR.

-=R
Same for the client - you do need to send a content type, and you can choose to compress JSON and form data only.

Costin
http://tools.ietf.org/html/rfc1951#page-11

According to the RFC, a deflate stream may consist of uncompressed (stored) blocks of data.

wbr, Valentin V. Bartenev
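Valentin's point can be demonstrated with zlib, whose level 0 emits exactly those stored blocks (example data is invented): the stream is valid deflate framing around uncompressed bytes, so a sender could keep the framing while skipping compression for incompressible payloads.

```python
import zlib

data = b"already compressed or incompressible payload" * 10

# Compression level 0 asks deflate to emit only "stored" (uncompressed)
# blocks - the RFC 1951 mechanism referenced above.
stored = zlib.compress(data, 0)

# Slightly larger than the input (block headers plus checksum), but it
# round-trips exactly through a normal inflater.
print(f"input: {len(data)}, stored-block stream: {len(stored)}")
assert len(stored) > len(data)
assert zlib.decompress(stored) == data
```

The per-block overhead is a few bytes, which is why stored blocks only mitigate, rather than eliminate, the cost of compressing already-compressed uploads.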