SPDY crunch down

Mike Belshe

26 May 2011, 11:55:51
to spdy-dev
A few things have been lingering for a long time, and I'm finally going to resolve them.

1) Nix FLAG_COMPRESSION on data frames.
As much as I like this feature, it's never going to get used.  I'd really like to see it implemented for POST data from the client.  However, SPDY servers will often be proxies/gateways to HTTP servers, and most HTTP servers don't implement chunked request encoding, regardless of what the HTTP/1.1 specification says.  Therefore, if clients tried to use this feature, it would create a significant scalability problem for proxy servers, one that cannot be worked around without updating all HTTP/1.1 servers.

This will become a deployment problem for real websites, and I don't see enough benefit to justify it.

Other than compressing the uplink, I don't see an advantage: on the downlink side, HTTP's compression is already good enough, and SPDY already mandates that SPDY endpoints be capable of sending compressed data.



2) Nix 101 continue.
Jim Morrison brought this up a few weeks back.  Fortunately, not many servers use it.  But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way.  We could leave it in the spec, where it would become a piece of brittleness that half the clients don't implement anyway (thus creating bad breakages all over the web), or we can nix it up front.



3) Nix Versioning
Draft 3 got ambitious about versioning for non-SSL/NPN-negotiated connections.  Right now, we have no clients and no servers that implement this versioning.  Maybe it works, or maybe it doesn't, but I don't want to leave in a feature that has zero implementation experience.  If, when this protocol is picked up by a standards body, they want to see self-descriptive versioning, it can be resurrected.  Until then, it is fairly pointless.



Let me know asap!

William Chan (陈智昌)

26 May 2011, 11:58:52
to spdy...@googlegroups.com
+1

Mark Nottingham

26 May 2011, 21:47:57
to spdy...@googlegroups.com

On 27/05/2011, at 1:55 AM, Mike Belshe wrote:

> 2) Nix 101 continue.
> Jim Morrison brought this up a few weeks back. Fortunately, not many servers use it. But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way. We could leave it in the spec, and it would become a piece of brittleness that half the clients don't implement anyway (this creating bad breakages all over the web), or we can nix it upfront.


FWIW, we implemented it in a very reasonable way in Node.js (IMO).

http://nodejs.org/docs/v0.4.8/api/http.html#event_checkContinue_
http://nodejs.org/docs/v0.4.8/api/http.html#event_continue_
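
Roughly, per those docs, it looks like this (a minimal, untested sketch; only the standard http module is used):

    var http = require('http');

    // Server side: decide whether to say "100 Continue" before the client
    // sends the request body.
    var server = http.createServer(function (req, res) {
      // Requests without "Expect: 100-continue" land here as usual.
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok\n');
    });

    server.on('checkContinue', function (req, res) {
      var length = parseInt(req.headers['content-length'] || '0', 10);
      if (length > 1000000) {
        // Refuse before the client ever sends the body.
        res.writeHead(413, { 'Content-Type': 'text/plain' });
        res.end('too big\n');
        return;
      }
      res.writeContinue();                  // client may now send the body
      req.resume();
      req.on('end', function () {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('ok\n');
      });
    });

    server.listen(8080, function () {
      // Client side: send headers first, body only after the 'continue' event.
      var req = http.request({
        host: 'localhost', port: 8080, method: 'POST', path: '/upload',
        headers: { 'Expect': '100-continue', 'Content-Length': '11' }
      }, function (res) {
        console.log('status:', res.statusCode);
        server.close();
      });
      req.on('continue', function () {
        req.end('hello world');             // 11 bytes, as declared above
      });
    });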

--
Mark Nottingham http://www.mnot.net/

Mike Belshe

27 May 2011, 02:18:26
to spdy-dev
It occurs to me that I didn't put a lot of explanation behind the why.  I actually did wrestle with how SPDY could support something like 100-continue reasonably.  Unfortunately, nothing is simple: two sets of headers for a single response?  How do you demarcate the beginning and end of the headers for each response block?  Does the client implicitly advertise "expect: 100-continue", or do clients use it optionally?

On the other hand, simply making it illegal to send an "expect: 100-continue" header is backward compatible, matches the behavior of all major browsers today, and doesn't require a bolt-on to the framing layer.  I believe there are no other cases where double header blocks are ever valid in HTTP or SPDY, so adding them for this case (which is not used) seems more likely to just introduce bugs.  Am I reading that wrong?

Mike

Greg Wilkins

30 May 2011, 03:35:14
to spdy...@googlegroups.com
On 27 May 2011 01:55, Mike Belshe <mbe...@google.com> wrote:

> 2) Nix 101 continue.
> Jim Morrison brought this up a few weeks back.  Fortunately, not many
> servers use it.  But it is a badly designed piece of HTTP, and I really
> don't see how you could implement it in any reasonable way.  We could leave
> it in the spec, and it would become a piece of brittleness that half the
> clients don't implement anyway (this creating bad breakages all over the
> web), or we can nix it upfront.
>


While 101 (and the similar 102 Processing) responses are not widely supported in HTTP clients and servers, they are definitely in real use and provide valuable semantics when they are used.

So I think it would be a pity to dump these responses.

I would have thought that SPDY, with its advanced framing, would be able to better support sending multiple responses for a given request. Perhaps you could specify that 1xx responses may be ignored by a SPDY client, so that clients that don't implement them will just silently ignore any 1xx responses sent by the server.

cheers

Mike Belshe

1 June 2011, 17:04:57
to spdy-dev
On Mon, May 30, 2011 at 12:35 AM, Greg Wilkins <gregory....@gmail.com> wrote:
On 27 May 2011 01:55, Mike Belshe <mbe...@google.com> wrote:

> 2) Nix 101 continue.
> Jim Morrison brought this up a few weeks back.  Fortunately, not many
> servers use it.  But it is a badly designed piece of HTTP, and I really
> don't see how you could implement it in any reasonable way.  We could leave
> it in the spec, and it would become a piece of brittleness that half the
> clients don't implement anyway (this creating bad breakages all over the
> web), or we can nix it upfront.
>


While 101 (and the similar 102 Processing) responses are not widely supported in HTTP clients and servers, they are definitely in real use and provide valuable semantics when they are used.

I didn't write that correctly: I said '101 continue', but I meant '100 continue'.

I don't have a problem with 101.

102 processing is similar to 100.
 

So I think it would be a pity to dump these responses.
 

I would have thought that SPDY with its advanced framing would have
been able to better support  the sending of multiple responses for a
given request.  

Certainly we can add frames.  But every new frame is a new set of work for every protocol implementer to support all the way through the stack.  If nobody is using it, and the value is marginal, then we're better off nixing it and keeping what remains really robust and solid.


 
 Perhaps you can specify that 1xx responses may be
ignored by a SPDY client, so that clients that don't implement it will
just silently ignore any responses sent by the server.

So you're proposing to allow multiple SYN_REPLY frames, where any with a 1xx status code are ignored until a non-1xx reply comes through.  I could live with that.
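
For concreteness, the client side could look something like this (a hypothetical frame-handling sketch; 'stream', its methods, and the event names are made up, not from any existing implementation):

    // Hypothetical SPDY client: tolerate multiple SYN_REPLY frames on a
    // stream, treating 1xx replies as informational and waiting for the
    // final (non-1xx) one.
    function onSynReply(stream, headers) {
      var status = parseInt(headers.status, 10);   // e.g. "100 Continue"

      if (status >= 100 && status < 200) {
        // Informational reply: surface it (e.g. to release a deferred
        // request body), but keep waiting for the real SYN_REPLY.
        stream.emit('information', status, headers);
        return;
      }

      if (stream.gotFinalReply) {
        // A second non-1xx SYN_REPLY would still be a protocol error.
        stream.resetStream('PROTOCOL_ERROR');
        return;
      }
      stream.gotFinalReply = true;
      stream.emit('response', status, headers);
    }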

Mike



cheers

Antonio Vicente

1 June 2011, 18:23:20
to spdy...@googlegroups.com
Another way to handle 100-continue might be to "server push" the 100-continue response on an associated stream.  This approach has the benefit of making it easy to ignore the 100-continue response, preserving functionality, and not losing the one-request/one-response aspect of the protocol.  A better way to get Expect: 100-like behavior in SPDY might be to encourage servers to RST the stream with an appropriate code if they are not willing to accept a POST because of its size or some other factor.

There is the question of how these various options factor into intermediaries that do HTTP-to-SPDY or SPDY-to-HTTP translation.  My take is that it doesn't really matter what we do with 100-continue as long as we decide how to handle it, by either disallowing it or specifying exactly how servers are supposed to reply to HTTP-over-SPDY requests with Expect: 100 headers.
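
A rough sketch of the RST approach on the server side (hypothetical framer API; 'session', its methods, and 'REFUSED_STREAM' stand in for whatever names and status code the spec would assign):

    // Hypothetical server hook: refuse an over-large POST up front instead
    // of relying on Expect: 100-continue.
    var MAX_POST_BYTES = 10 * 1024 * 1024;

    function onSynStream(session, streamId, headers) {
      var length = parseInt(headers['content-length'] || '0', 10);

      if (headers.method === 'POST' && length > MAX_POST_BYTES) {
        // Tell the client immediately not to send the body at all.
        session.sendRstStream(streamId, 'REFUSED_STREAM');
        return;
      }
      session.acceptStream(streamId, headers);
    }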

-antonio

Costin Manolache

1 June 2011, 19:54:11
to spdy...@googlegroups.com
On Thu, May 26, 2011 at 8:55 AM, Mike Belshe <mbe...@google.com> wrote:
A few things have been lingering for a long time and I'm finally going resolve them.

1) Nix FLAG_COMPRESSION on data frames.
As much as I like this feature, its never going to get used.  I'd really like to see it implemented for post data from the client.  However, SPDY servers will often be proxies/gateways to HTTP servers, and most HTTP servers don't implement chunked encoding, regardless of what the HTTP/1.1 specification says.  Therefore, if clients tried to use this feature, it would have a significant scalability issue for proxy servers which cannot be worked around without updating all HTTP/1.1 servers.  
 

This will become a deployment problem for real websites, and I don't see significant enough benefit to justify.

Other than compressing the uplink side, I don't see an advantage for the downlink side.  HTTP's compression is already good enough, and SPDY mandates that SPDY endpoints must be capable of sending compressed data. 

Why do servers need to implement chunked encoding? I assume POSTs include a content length, which would work fine with compression.

I'm not sure HTTP compression is so "good enough". Do you have any numbers on what percentage of the traffic that could be compressed actually is compressed? My guess was that a lot of HTTP servers don't bother with compression, and a lot of HTTP clients don't request compressed content.

 
 



2) Nix 101 continue.
Jim Morrison brought this up a few weeks back.  Fortunately, not many servers use it.  But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way.  We could leave it in the spec, and it would become a piece of brittleness that half the clients don't implement anyway (this creating bad breakages all over the web), or we can nix it upfront.



3) Nix Versioning
Draft 3 got ambitious about versioning for non-SSL/NPN negotiated versioning.  Right now, we have no clients and no servers which implement this versioning.  Maybe it works, or maybe it doesn't, but I don't want to leave in a feature which has zero implementation experience.  If, when this protocol is picked up by a standards body, if they want to see self-descriptive versioning, this could be resurrected.  Until then, it is fairly pointless.

Any news on how to support non-NPN traffic? A different port number? Tunneling over WebSockets?
 
Costin



Let me know asap!

Mike Belshe

2 June 2011, 13:29:56
to spdy-dev
On Wed, Jun 1, 2011 at 4:54 PM, Costin Manolache <cos...@gmail.com> wrote:


On Thu, May 26, 2011 at 8:55 AM, Mike Belshe <mbe...@google.com> wrote:
A few things have been lingering for a long time and I'm finally going resolve them.

1) Nix FLAG_COMPRESSION on data frames.
As much as I like this feature, its never going to get used.  I'd really like to see it implemented for post data from the client.  However, SPDY servers will often be proxies/gateways to HTTP servers, and most HTTP servers don't implement chunked encoding, regardless of what the HTTP/1.1 specification says.  Therefore, if clients tried to use this feature, it would have a significant scalability issue for proxy servers which cannot be worked around without updating all HTTP/1.1 servers.  
 

This will become a deployment problem for real websites, and I don't see significant enough benefit to justify.

Other than compressing the uplink side, I don't see an advantage for the downlink side.  HTTP's compression is already good enough, and SPDY mandates that SPDY endpoints must be capable of sending compressed data. 

Why do servers need to implement chunked encoding? I assume POSTs include a content length, which would work fine with compression.

If you're a SPDY-to-HTTP proxy, it creates a problem.  You've got compressed data from the client coming in over SPDY, and you need to uncompress it to pass it to the HTTP server.  But you don't know the uncompressed size, so you have to uncompress the whole thing, compute and send the Content-Length header, and then send the uncompressed data.  Servers can't tolerate this buffering at scale.
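
Concretely, the gateway ends up doing something like this (a rough Node sketch; the forwardPost entry point, its arguments, and the origin host are made up, only the http/zlib calls are real):

    var http = require('http');
    var zlib = require('zlib');

    // Hypothetical: called once all DATA frames for a stream have arrived,
    // with 'compressedBody' being the concatenated, still-compressed payload.
    function forwardPost(headers, compressedBody, callback) {
      // We can't start the HTTP/1.1 request until we know the uncompressed
      // size, because the origin wants Content-Length and may not accept a
      // chunked request body. So: buffer and inflate everything first.
      zlib.inflate(compressedBody, function (err, body) {
        if (err) return callback(err);

        var req = http.request({
          host: 'origin.internal',          // made-up origin server
          method: 'POST',
          path: headers.url,
          headers: {
            'content-type': headers['content-type'],
            'content-length': body.length   // only known after a full inflate
          }
        }, function (res) { callback(null, res); });

        req.end(body);                      // whole body held in memory
      });
    }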



 

I'm not sure http compression is so 'good enough' - do you have any numbers on what % of the traffic that could be compressed is actually compressed ? My guess was that a lot of http servers don't bother with compression, and a lot of http clients don't request compressed content. 

See the published web metrics; there is room to do better here.

 

 
 



2) Nix 101 continue.
Jim Morrison brought this up a few weeks back.  Fortunately, not many servers use it.  But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way.  We could leave it in the spec, and it would become a piece of brittleness that half the clients don't implement anyway (this creating bad breakages all over the web), or we can nix it upfront.



3) Nix Versioning
Draft 3 got ambitious about versioning for non-SSL/NPN negotiated versioning.  Right now, we have no clients and no servers which implement this versioning.  Maybe it works, or maybe it doesn't, but I don't want to leave in a feature which has zero implementation experience.  If, when this protocol is picked up by a standards body, if they want to see self-descriptive versioning, this could be resurrected.  Until then, it is fairly pointless.

Any news on how to support non-NPN traffic ? Different port number ? Tunneling over websockets ?   

You can always use a different port number.

I'm proposing we finish up the SPDY portion of the specification so that others can start to rely on it.  If you then want to use SPDY on a different port, that is pretty trivial.

Mike

Costin Manolache

2 June 2011, 18:23:11
to spdy...@googlegroups.com
On Thu, Jun 2, 2011 at 10:29 AM, Mike Belshe <mbe...@google.com> wrote:


On Wed, Jun 1, 2011 at 4:54 PM, Costin Manolache <cos...@gmail.com> wrote:


On Thu, May 26, 2011 at 8:55 AM, Mike Belshe <mbe...@google.com> wrote:
A few things have been lingering for a long time and I'm finally going resolve them.

1) Nix FLAG_COMPRESSION on data frames.
As much as I like this feature, its never going to get used.  I'd really like to see it implemented for post data from the client.  However, SPDY servers will often be proxies/gateways to HTTP servers, and most HTTP servers don't implement chunked encoding, regardless of what the HTTP/1.1 specification says.  Therefore, if clients tried to use this feature, it would have a significant scalability issue for proxy servers which cannot be worked around without updating all HTTP/1.1 servers.  
 

This will become a deployment problem for real websites, and I don't see significant enough benefit to justify.

Other than compressing the uplink side, I don't see an advantage for the downlink side.  HTTP's compression is already good enough, and SPDY mandates that SPDY endpoints must be capable of sending compressed data. 

Why do server need to implement chunked ? I assume posts include content length, which would work fine with compression.

If you're a SPDY-to-HTTP proxy, it creates a problem.  You've got compressed data from the client coming in over SPDY, and you need to uncompress to pass to the HTTP server.  But you don't know the uncompressed size.  So you have to uncompress the whole thing, send the content-length header, and then send the uncompressed data.  Servers can't tolerate this buffering at scale.  


It depends on how you define the Content-Length header in SPDY (i.e., whether compression is a low-level SPDY detail or something the browser needs to know about).

If the browser keeps doing what it does today - setting Content-Length to the uncompressed size of the payload - and SPDY does transparent compression, then there is no problem for a proxy; it'll forward (and maybe check) the uncompressed size.

If the browser doesn't send the Content-Length, it's the same situation as today (i.e., it will be broken on some servers).


 



 

I'm not sure http compression is so 'good enough' - do you have any numbers on what % of the traffic that could be compressed is actually compressed ? My guess was that a lot of http servers don't bother with compression, and a lot of http clients don't request compressed content. 

See the published web metrics - there is room for better here.

 

 
 



2) Nix 101 continue.
Jim Morrison brought this up a few weeks back.  Fortunately, not many servers use it.  But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way.  We could leave it in the spec, and it would become a piece of brittleness that half the clients don't implement anyway (this creating bad breakages all over the web), or we can nix it upfront.



3) Nix Versioning
Draft 3 got ambitious about versioning for non-SSL/NPN negotiated versioning.  Right now, we have no clients and no servers which implement this versioning.  Maybe it works, or maybe it doesn't, but I don't want to leave in a feature which has zero implementation experience.  If, when this protocol is picked up by a standards body, if they want to see self-descriptive versioning, this could be resurrected.  Until then, it is fairly pointless.

Any news on how to support non-NPN traffic ? Different port number ? Tunneling over websockets ?   

You can always use a different port number.

I'm proposing we finish up the SPDY portion of the specification so that others can start to rely on it.  If you want to then use SPDY on a different port, that is pretty trivial. 

Some specs include an assigned port number.

I didn't notice (or maybe failed to notice) any "SSL + NPN" requirement - did I just miss it, or is it intentional? And if it is OK for servers/browsers to implement SPDY without NPN, it would be good to provide some info.

Costin

Mike Belshe

2 June 2011, 18:55:25
to spdy-dev
On Thu, Jun 2, 2011 at 3:23 PM, Costin Manolache <cos...@gmail.com> wrote:
On Thu, Jun 2, 2011 at 10:29 AM, Mike Belshe <mbe...@google.com> wrote:


On Wed, Jun 1, 2011 at 4:54 PM, Costin Manolache <cos...@gmail.com> wrote:


On Thu, May 26, 2011 at 8:55 AM, Mike Belshe <mbe...@google.com> wrote:
A few things have been lingering for a long time and I'm finally going resolve them.

1) Nix FLAG_COMPRESSION on data frames.
As much as I like this feature, its never going to get used.  I'd really like to see it implemented for post data from the client.  However, SPDY servers will often be proxies/gateways to HTTP servers, and most HTTP servers don't implement chunked encoding, regardless of what the HTTP/1.1 specification says.  Therefore, if clients tried to use this feature, it would have a significant scalability issue for proxy servers which cannot be worked around without updating all HTTP/1.1 servers.  
 

This will become a deployment problem for real websites, and I don't see significant enough benefit to justify.

Other than compressing the uplink side, I don't see an advantage for the downlink side.  HTTP's compression is already good enough, and SPDY mandates that SPDY endpoints must be capable of sending compressed data. 

Why do server need to implement chunked ? I assume posts include content length, which would work fine with compression.

If you're a SPDY-to-HTTP proxy, it creates a problem.  You've got compressed data from the client coming in over SPDY, and you need to uncompress to pass to the HTTP server.  But you don't know the uncompressed size.  So you have to uncompress the whole thing, send the content-length header, and then send the uncompressed data.  Servers can't tolerate this buffering at scale.  


It depends on how you define the Content-Length header in SPDY (i.e., whether compression is a low-level SPDY detail or something the browser needs to know about).

If the browser keeps doing what it does today - setting Content-Length to the uncompressed size of the payload - and SPDY does transparent compression, then there is no problem for a proxy; it'll forward (and maybe check) the uncompressed size.

If the browser doesn't send the Content-Length, it's the same situation as today (i.e., it will be broken on some servers).

It's a fair point.  Alyssa/Jim/Roberto - what do you think?  Why doesn't this work?
 


 



 

I'm not sure http compression is so 'good enough' - do you have any numbers on what % of the traffic that could be compressed is actually compressed ? My guess was that a lot of http servers don't bother with compression, and a lot of http clients don't request compressed content. 

See the published web metrics - there is room for better here.

 

 
 



2) Nix 101 continue.
Jim Morrison brought this up a few weeks back.  Fortunately, not many servers use it.  But it is a badly designed piece of HTTP, and I really don't see how you could implement it in any reasonable way.  We could leave it in the spec, and it would become a piece of brittleness that half the clients don't implement anyway (this creating bad breakages all over the web), or we can nix it upfront.



3) Nix Versioning
Draft 3 got ambitious about versioning for non-SSL/NPN negotiated versioning.  Right now, we have no clients and no servers which implement this versioning.  Maybe it works, or maybe it doesn't, but I don't want to leave in a feature which has zero implementation experience.  If, when this protocol is picked up by a standards body, if they want to see self-descriptive versioning, this could be resurrected.  Until then, it is fairly pointless.

Any news on how to support non-NPN traffic ? Different port number ? Tunneling over websockets ?   

You can always use a different port number.

I'm proposing we finish up the SPDY portion of the specification so that others can start to rely on it.  If you want to then use SPDY on a different port, that is pretty trivial. 

Some specs include an assigned port number.

I didn't notice (or maybe failed to notice) any "SSL + NPN" requirement - did I just miss it, or is it intentional? And if it is OK for servers/browsers to implement SPDY without NPN, it would be good to provide some info.

I'm not sure where to put this stuff.  It's not required; the SPDY protocol is independent of that.  How it gets used, however, is a different question, and I don't think we need to lock it into this spec.  Others, either through the IETF or other vehicles, can lock down particular deployment targets when/if they need to.

For reference, HTTP's RFC 2616 is also vague about deployment; it simply states:

      "HTTP communication usually takes place over TCP/IP connections. The
   default port is TCP 80 [19], but other ports can be used. This does
   not preclude HTTP from being implemented on top of any other protocol
   on the Internet, or on other networks. HTTP only presumes a reliable
   transport; any protocol that provides such guarantees can be used;
   the mapping of the HTTP/1.1 request and response structures onto the
   transport data units of the protocol in question is outside the scope
   of this specification."

I could add something like that?

Mike

Costin Manolache

2 June 2011, 19:43:05
to spdy...@googlegroups.com
IMHO the purpose of the RFC is to enable interoperability. In SPDY's case I would add at least:
- the default port is TCP 443, with TLS + NPN, where the NPN protocol is ....
- if NPN or 443 is not available, SPDY should be used on port TBD (a registered port number is needed), with plain TLS. You may want to keep the header to indicate that SPDY is available on an alternate port if users connect to 443 without NPN (a rough client-side sketch of this fallback follows below).
- it can be implemented on top of any other protocol and port, with a separate spec to define tunneling over WebSockets or other mechanisms.
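
That fallback could look roughly like this on the client (Node tls; the NPNProtocols option and socket.npnProtocol property come from Node's NPN-era tls API and may not be present in every build, and the fallback port number is just a placeholder):

    var tls = require('tls');

    var SPDY_FALLBACK_PORT = 6121;   // placeholder; a registered port is TBD

    function connectSpdy(host, callback) {
      // First try 443 with TLS + NPN, advertising SPDY and HTTP/1.1.
      var socket = tls.connect(443, host, {
        NPNProtocols: ['spdy/2', 'http/1.1']
      }, function () {
        if (socket.npnProtocol === 'spdy/2') {
          return callback(null, socket);          // server negotiated SPDY
        }
        // No NPN support (or the server picked http/1.1): fall back to
        // plain TLS on the alternate SPDY port. A response header on 443
        // could advertise this port, as suggested above.
        socket.end();
        var fallback = tls.connect(SPDY_FALLBACK_PORT, host, {}, function () {
          callback(null, fallback);
        });
      });
    }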


Costin

James A. Morrison

2 June 2011, 21:51:16
to spdy...@googlegroups.com

I'd be OK with defining Content-Length to be the length of the original content. So with SPDY data compression, the length of the transferred data is still determined by FLAG_FIN, but the Content-Length header can optionally describe the size of the original content.
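
A rough sketch of that accounting (hypothetical frame-handling code; 'stream', 'inflate', 'sendReply', and 'endRequest' are made-up names, with 'inflate' standing in for whatever zlib context the stream's data decompression uses):

    // Count *decompressed* bytes per stream and compare them with the
    // Content-Length header once FLAG_FIN arrives.
    function onDataFrame(stream, payload, flags) {
      var data = flags.FLAG_COMPRESSION ? inflate(stream, payload) : payload;
      stream.receivedBytes += data.length;

      if (flags.FLAG_FIN) {
        var declared = parseInt(stream.headers['content-length'] || '-1', 10);
        if (declared >= 0 && declared !== stream.receivedBytes) {
          stream.sendReply(400, 'Bad Request');   // mismatch after decompression
          return;
        }
        stream.endRequest();
      }
    }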

You'll obviously need to change
"If a server receives a request where the sum of the data frame
payload lengths does not equal the size of the Content-Length header,
the server MUST return a 400 (Bad Request) error." to something like:
"... does not equal after data frame decompression ...".

--
Thanks,
Jim
http://phython.blogspot.com

Greg Wilkins

2 June 2011, 00:19:46
to spdy...@googlegroups.com
On 2 June 2011 07:04, Mike Belshe <mbe...@google.com> wrote:
>
>
> On Mon, May 30, 2011 at 12:35 AM, Greg Wilkins <gregory....@gmail.com>
> wrote:
>
> I didn't write correctly. I said '101 continue', but I meant '100
> continue'.
> I don't have a problem with 101.
> 102 processing is similar to 100.

Oops, yes - I followed you and said 101 when I meant 100.

> Certainly we can add frames. Every new frame is a new set of work for every
> protocol implementer to support all the way through the stack. If nobody is
> using it, and the value is marginal, then we're better off nixing it but
> having really robust and solid implementations.

I agree that if it is a lot of work to support 1xx, then their uses are pretty much a minority case, so perhaps nix them. But they are used, and I don't think supporting (or at least ignoring) them is too much work.

Expecting a 100 is used as a way to check credentials and other headers before starting to send a large body (almost like a preflight request). Sending 102 is used to keep connections alive while a response is being prepared.

These may both be minority use cases, but they still exist, and solutions will need to be found one way or another. I'd think that ignoring 1xx frames would be far less work for implementors than having to come up with new solutions for these minority cases.

> So you're proposing to allow multiple SYN_REPLY frames, but those with
> status-code 1xx will be ignored until a non-1xx comes through. I could live
> with that.

Exactly.

cheers
