Questions about server push


Matthew Steele

Jan 11, 2012, 2:14:09 PM
to spdy...@googlegroups.com
Hi, I'm trying to implement server push in mod_spdy, and I have a few
questions about the spec:

The requirement that a server push SYN_STREAM must be sent before the
associated stream's FLAG_FIN is somewhat of a burden for the server
implementation, as it introduces a synchronization between different
streams that may be being handled by different threads. Is this
requirement really necessary?

When sending a server push, what headers are required to be in the
SYN_STREAM frame? Can any of them be placed in a subsequent HEADERS
frame instead, or _must_ they be in the SYN_STREAM? Looking at the
draft 3 spec, it looks like the answer is scheme/host/path are
required to be in the SYN_STREAM, but that status/version (and all
other HTTP response headers) can wait for a subsequent HEADERS frame.
Is that also true in (current real-world implementations of) draft 2,
or only in draft 3? It would be convenient when implementing server
push to be allowed to quickly push a small SYN_STREAM with the bare
minimum of headers so the client knows the resource is being pushed,
and let all other headers wait for a HEADERS frame once the server
has actually processed the imaginary request.
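
To make that concrete, here's roughly the sequence I'd like to be allowed
to send, sketched in Python with made-up frame classes (none of this is
mod_spdy code; the colon-prefixed header names are the draft-3 ones):

    from dataclasses import dataclass

    @dataclass
    class SynStream:                 # hypothetical stand-in for a SYN_STREAM frame
        stream_id: int
        associated_to_stream_id: int
        unidirectional: bool
        headers: dict

    @dataclass
    class Headers:                   # hypothetical stand-in for a HEADERS frame
        stream_id: int
        headers: dict

    def announce_push(pushed_id, origin_id, scheme, host, path):
        """Send only what identifies the pushed resource, as early as possible."""
        return SynStream(
            stream_id=pushed_id,
            associated_to_stream_id=origin_id,
            unidirectional=True,
            headers={":scheme": scheme, ":host": host, ":path": path},
        )

    def complete_push_headers(pushed_id, status, version, extra_headers):
        """Send the rest of the response headers once the server has them."""
        headers = {":status": status, ":version": version}
        headers.update(extra_headers)
        return Headers(stream_id=pushed_id, headers=headers)

    # The sequence I'd like to be allowed to send:
    frames = [
        announce_push(2, 1, "https", "www.example.com", "/style.css"),
        complete_push_headers(2, "200 OK", "HTTP/1.1",
                              {"content-type": "text/css"}),
    ]
    print(frames)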

What are the semantics of a server push with a non-200 HTTP status
code? If the server pushes a 307, should the client then immediately
request the new URL? If the server pushes a 404 or a 500, should the
client just ignore it?

Speaking of redirects, suppose the client requests a URL and the
server wishes to redirect the client. Could the server reply with
the redirect code, but simultaneously initiate a push for the correct
URL (assuming it's under the same origin) to save a round trip?
What's the recommended best practice here?
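
The shape I have in mind is something like this (again just a sketch with
dicts standing in for frames; the 307 status and the URLs are arbitrary):

    def redirect_and_push(origin_stream_id, push_stream_id, host, new_path):
        """Answer the request with a redirect and, at the same time, push the
        redirect target so the client doesn't need another round trip for it."""
        reply = {                          # SYN_REPLY on the original stream
            "stream_id": origin_stream_id,
            "headers": {":status": "307 Temporary Redirect",
                        ":version": "HTTP/1.1",
                        "location": "https://" + host + new_path},
        }
        push = {                           # SYN_STREAM for the pushed target
            "stream_id": push_stream_id,
            "associated_to_stream_id": origin_stream_id,
            "headers": {":scheme": "https", ":host": host, ":path": new_path},
        }
        return [reply, push]

    print(redirect_and_push(1, 2, "www.example.com", "/new-location"))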

Thanks,
-Matthew

Mike Belshe

Jan 11, 2012, 2:28:01 PM
to spdy...@googlegroups.com
On Wed, Jan 11, 2012 at 11:14 AM, Matthew Steele <mdst...@google.com> wrote:
Hi, I'm trying to implement server push in mod_spdy, and I have a few
questions about the spec:

The requirement that a server push SYN_STREAM must be sent before the
associated stream's FLAG_FIN is somewhat of a burden for the server
implementation, as it introduces a synchronization between different
streams that may be being handled by different threads.  Is this
requirement really necessary?

It is *very* necessary.  

The problem is that without it, the lifecycle of any given stream-id is essentially forever.  What would prevent a server from sending an associated stream 10-20 seconds after the origin stream had closed?  Since there is a relationship between the streams, both the client and server must agree on what that relationship is.  And an infinite lifecycle means you keep using more and more and more and more ram.

This is especially problematic for proxies.
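
To sketch what bounded state buys you (purely illustrative bookkeeping, not
what Chrome or any real proxy does): with the rule in place, a client or
proxy can free everything it knows about a stream id the moment that stream
finishes.

    class PushBookkeeping:
        """Toy model of the per-stream state a client/proxy has to retain."""

        def __init__(self):
            self.pushable = set()   # stream ids that pushes may still reference

        def on_request_opened(self, stream_id):
            self.pushable.add(stream_id)

        def on_stream_finished(self, stream_id):
            # With the "push SYN_STREAM before the associated FLAG_FIN" rule,
            # no new push can reference this id, so its state can be freed now
            # instead of being kept around indefinitely.
            self.pushable.discard(stream_id)

        def on_push_syn_stream(self, associated_id):
            if associated_id not in self.pushable:
                return "RST_STREAM"   # push referencing a dead stream: reject it
            return "ACCEPT"

    state = PushBookkeeping()
    state.on_request_opened(1)
    assert state.on_push_syn_stream(1) == "ACCEPT"
    state.on_stream_finished(1)
    assert state.on_push_syn_stream(1) == "RST_STREAM"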



 

When sending a server push, what headers are required to be in the
SYN_STREAM frame?  Can any of them be placed in a subsequent HEADERS
frame instead, or _must_ they be in the SYN_STREAM?

They can be separate primarily to allow early notification that "this is coming".  So, making sure the URL is known to the client as soon as possible may be desirable, even if the full header block isn't ready until later.
 
 Looking at the
draft 3 spec, it looks like the answer is scheme/host/path are
required to be in the SYN_STREAM, but that status/version (and all
other HTTP response headers) can wait for a subsequent HEADERS frame.

Right
 
Is that also true in (current real-world implementations of) draft 2,
or only in draft 3?  It would be convenient when implementing server
push to be allowed to quickly push a small SYN_STREAM with the bare
minimum of headers so the client knows the resource is being pushed,
and let all other headers wait for a HEADERS frame once the server
has actually processed the imaginary request.

I believe it is true in draft 2 as well, although draft 2 is really fuzzy.  Do you really mean draft 2?  Or do you mean what does chrome do?  

<I realize this squishiness is undesirable, and future versions of the implementation must not be squishy - but the past is the past>


 

What are the semantics of a server push with a non-200 HTTP status
code?  If the server pushes a 307, should the client then immediately
request the new URL?  If the server pushes a 404 or a 500, should the
client just ignore it?

It's out of scope for the framing layer.  Further definition from the app layer (HTTP) may be necessary.  I would be fine with limiting what types of responses can be pushed.


 

Speaking of redirects, suppose the client requests a URL and the
server wishes to redirect the client.  Could the server reply with
the redirect code, but simultaneously initiate a push for the correct
URL (assuming it's under the same origin) to save a round trip?

Cool idea.
 
What's the recommended best practice here?

It all needs exploration.  There is basically no practical implementation experience here from the wild - you're pioneering!  :-)  Once you have more data, recommendations would be super.

Mike





Matthew Steele

Jan 11, 2012, 3:04:29 PM
to spdy...@googlegroups.com
Thanks for the quick reply!

On Wed, Jan 11, 2012 at 2:28 PM, Mike Belshe <mbe...@chromium.org> wrote:
> On Wed, Jan 11, 2012 at 11:14 AM, Matthew Steele <mdst...@google.com>
> wrote:
>>
>> Hi, I'm trying to implement server push in mod_spdy, and I have a few
>> questions about the spec:
>>
>> The requirement that a server push SYN_STREAM must be sent before the
>> associated stream's FLAG_FIN is somewhat of a burden for the server
>> implementation, as it introduces a synchronization between different
>> streams that may be being handled by different threads.  Is this
>> requirement really necessary?
>
> It is *very* necessary.
>
> The problem is that without it, the lifecycle of any given stream-id is
> essentially forever.  What would prevent a server from sending an associated
> stream 10-20 seconds after the origin stream had closed?  Since there is a
> relationship between the streams, both the client and server must agree on
> what that relationship is.  And an infinite lifecycle means you keep using
> more and more and more and more ram.
>
> This is especially problematic for proxies.

Okay, makes sense. If the server can send a minimal SYN_STREAM
immediately and send everything else in a later HEADERS frame, as
below, then this isn't so bad after all.

>>  Looking at the
>> draft 3 spec, it looks like the answer is scheme/host/path are
>> required to be in the SYN_STREAM, but that status/version (and all
>> other HTTP response headers) can wait for a subsequent HEADERS frame.
>
> Right

Great. That seems like the easiest way to go, then.

>> Is that also true in (current real-world implementations of) draft 2,
>> or only in draft 3?  It would be convenient when implementing server
>> push to be allowed to quickly push a small SYN_STREAM with the bare
>> minimum of headers so the client knows the resource is being pushed,
>> and let all other headers wait for a HEADERS frame once the server
>> has actually processed the imaginary request.
>
> I believe it is true in draft 2 as well, although draft 2 is really fuzzy.
>  Do you really mean draft 2?  Or do you mean what does chrome do?

Yes, I suppose I mean, "what do Chrome/Firefox do?" (I'm unsure which
draft Firefox is implementing for now).

>> What are the semantics of a server push with a non-200 HTTP status
>> code?  If the server pushes a 307, should the client then immediately
>> request the new URL?  If the server pushes a 404 or a 500, should the
>> client just ignore it?
>
> It's out of scope for the framing layer.  Further definition from the app
> layer (HTTP) may be necessary.  I would be fine with limiting what types of
> responses can be pushed.

Again, I suppose my real question is "what will Chrome/Firefox do in
this case, as of today?" I guess I can just try it and see what
happens, but I was curious what's recommended here.

>> Speaking of redirects, suppose the client requests a URL and the
>> server wishes to redirect the client.  Could the server reply with
>> the redirect code, but simultaneously initiate a push for the correct
>> URL (assuming it's under the same origin) to save a round trip?
>
> Cool idea.
>
>> What's the recommended best practice here?
>
> It all needs exploration.  There is basically no practical implementation
> experience here from the wild - you're pioneering!  :-)  Once you have more
> data, recommendations would be super.

Okay, thanks. Once I have basic server push working, maybe I'll throw
together a quick implementation of this and see how it works out.

Cheers,
-Matthew

Patrick McManus

Jan 11, 2012, 4:43:16 PM
to spdy...@googlegroups.com
On Wed, 2012-01-11 at 15:04 -0500, Matthew Steele wrote:
>
> Yes, I suppose I mean, "what do Chrome/Firefox do?" (I'm unsure which
> draft Firefox is implementing for now).
>

Firefox is on v2 right now; we'll update as the group and standards proceed.

>
> Again, I suppose my real question is "what will Chrome/Firefox do in
> this case, as of today?"

as of today we rst_stream any server push - but it is desirable to do
more. We want to work through the same kinds of questions you have been
asking.
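
So the minimum a server wants to handle today is the pushed stream being
reset out from under it at any point. A sketch (the structures here are made
up, not Firefox or mod_spdy code):

    def on_rst_stream(pushed_streams, stream_id):
        """Drop a pushed stream the client has reset and stop producing data
        for it. pushed_streams maps stream id -> per-stream server state."""
        stream = pushed_streams.pop(stream_id, None)
        if stream is not None:
            stream["aborted"] = True   # signals the worker to stop sending DATA
        return stream

    pushed = {2: {"path": "/style.css", "aborted": False}}
    on_rst_stream(pushed, 2)
    assert 2 not in pushed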

> >> Speaking of redirects, suppose the client requests a URL and the
> >> server wishes to redirect the client. Could the server reply with
> >> the redirect code, but simultaneously initiate a push for the correct
> >> URL (assuming it's under the same origin) to save a round trip?
> >
> > Cool idea.

yep.


Oliver Mattos

Jan 20, 2012, 6:02:17 PM
to spdy...@googlegroups.com
> as of today we rst_stream any server push

Any ballpark figure of when we can see basic support for this?   It would be really nice for servers to be able to recognise clients by a cookie id, know what the client has cached, and send the exact right resources to load a page with no extra roundtrips.

Patrick McManus

Jan 20, 2012, 10:17:08 PM
to spdy...@googlegroups.com, Oliver Mattos
On 1/20/2012 6:02 PM, Oliver Mattos wrote:
> as of today we rst_stream any server push

Any ballpark figure of when we can see basic support for this?   It would be really nice for servers to be able to recognise clients by a cookie id, know what the client has cached, and send the exact right resources to load a page with no extra roundtrips.

I presume your question was about Firefox and I can't give a timetable because I don't have one.

It is something I want to do, but I'm not in a rush to do it because:

a] it's diminishing returns: server push saves 1 RTT while multiplexing the dependent object requests saves N. Still, a savings of 1 is nice, I agree.

b] it's not clear that clients want servers to make bandwidth, priority, and scheduling decisions for them. Pushing stuff at the client makes it harder for the transaction stream to reflect things like the viewport or active tab, which the server has no insight into, and potentially the deprioritization of some embedded content (e.g. advertisements), which is something an add-on might want to be involved in.

c] As you note, client side cache state is a factor. I don't really think it is knowable to the server in the general case.

But all that said, I think this is desirable and we can find a way to make it work. I hope to hear about more implementation experience as standardization/refinement moves along. And of course, patches that deal with this are welcome!


Jim Roskind

Jan 23, 2012, 3:58:21 PM
to spdy-dev
@mcmanus: Excellent questions and comments about server push.

I tend to agree that such optimizing features are best performed when
there is better client-side information. Some clients do indeed value
bandwidth reduction more or less than others, so that would be nice to
take into account. Client-side cache contents are a second issue you
nicely raised.

On the other hand, in the long term, we expect Moore's Law to
consistently increase bandwidth availability for all clients, while we
don't see much flexibility in RTT (based mostly on the speed of
light :-) ).

As a result, if we "aim ahead of the puck" we should be aggressively
looking at any way we can reduce even a single RTT. Tomorrow, it sure
seems like this is where the wins will come.

In addition, I think the comparison of 1 RTT saved vs N RTT saved is
probably overly simplistic. HTTP currently "allows" for 6 parallel
connections, so maybe you'd say (for large N) an N/6 RTT reduction. Then
again, folks that have large values of N (the number of resources) often
go further and shard <sigh> across more than one domain when acquiring
resources :-(. There is also some overlap of request RTT costs with
resource acquisition costs (serialization latency can limit
transfers). Then again, maybe you'll argue that Moore's Law and
increased bandwidth will vanquish the serialization costs.
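
To put rough numbers on that (my own back-of-envelope assumptions: every
request wave costs one RTT, and sharding and serialization are ignored):

    import math

    def request_rtts(n_resources, parallel_connections=6):
        # Each "wave" of requests across the parallel connections costs ~1 RTT;
        # these are the round trips that push could, in principle, avoid.
        return math.ceil(n_resources / parallel_connections)

    for n in (6, 30, 60):
        print(n, "resources ->", request_rtts(n), "request RTTs without push")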

Bottom line: I really think server push will become progressively
more valuable. The interesting thing in the short term may be how to
harness this push feature so that clients can have a bit more control
(input suggestions?) over bandwidth value and availability.

Jim

Patrick McManus

Jan 24, 2012, 10:19:05 AM
to spdy...@googlegroups.com
On Mon, 2012-01-23 at 12:58 -0800, Jim Roskind wrote:

> Bottom line: I really think server push will become progressively
> more valuable. The interesting thing in the short term may be how to
> harness this push feature so that clients can have a bit more control
> (input suggestions?) over bandwidth value and availability.

my whiteboard idea there (completely unvetted) is some kind of
aggregate, small-sized windowing (or rate-based) limit across all the
server-pushed streams. That way small responses that are totally latency
dominated make it down without an RTT cost, most things get their headers
down immediately for cache comparisons, and even bigger responses such
as images can get some leading bytes down the pipe, and those leading
bytes are often really useful for layout decisions. A WINDOW_UPDATE from
the client at 1 RTT would promote the stream from the
aggregated-low-bandwidth set of streams onto its own normal congestion
control path... alternatively, that would be the time to RST it if the
client didn't want it, already had it, whatever.
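
In code, the whiteboard version might look something like this (completely
made up, including the budget number; nothing like it exists in any
implementation):

    AGGREGATE_PUSH_BUDGET = 16 * 1024        # made-up shared byte budget

    class PushScheduler:
        """Toy model: pushed streams share one small budget until the client
        opts a stream in (WINDOW_UPDATE) or opts it out (RST_STREAM)."""

        def __init__(self, budget=AGGREGATE_PUSH_BUDGET):
            self.budget = budget             # shared by all un-promoted pushes
            self.promoted = set()            # streams on normal flow control
            self.cancelled = set()           # streams the client refused

        def may_send(self, stream_id, nbytes):
            if stream_id in self.cancelled:
                return False                 # client RST it: never send again
            if stream_id in self.promoted:
                return True                  # per-stream flow control takes over
            if nbytes <= self.budget:
                self.budget -= nbytes        # headers / leading bytes get through
                return True                  # with no extra RTT cost
            return False                     # big stuff waits for the client

        def on_window_update(self, stream_id):
            self.promoted.add(stream_id)     # client wants it: promote at ~1 RTT

        def on_rst_stream(self, stream_id):
            self.cancelled.add(stream_id)    # client refused it / already had it
            self.promoted.discard(stream_id)

    sched = PushScheduler()
    assert sched.may_send(2, 8 * 1024)       # small pushed response fits
    assert not sched.may_send(4, 200 * 1024) # a big image has to wait
    sched.on_window_update(4)
    assert sched.may_send(4, 200 * 1024)     # ...until the client opts in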

James A. Morrison

Jan 27, 2012, 2:51:18 PM
to spdy...@googlegroups.com

I think there are two options:
1) Allow any HTTP response code, but clients should only cache 200 or
300 responses. 300 responses do not need to be followed.
2) Only allow 200 (maybe 300) responses at the server; if an application sends
something else, then the server must use RST_STREAM (a rough sketch of that
check follows below).
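
For option 2, something as simple as this on the server side would do (just a
sketch; whether any 300s belong in the allowed set is exactly the open
question):

    ALLOWED_PUSH_STATUSES = {200}            # maybe also 3xx, per option 2 above

    def vet_push_response(status_code):
        """Decide whether an application response may go out as a push."""
        if status_code in ALLOWED_PUSH_STATUSES:
            return "PUSH"
        return "RST_STREAM"                  # refuse to push anything else

    assert vet_push_response(200) == "PUSH"
    assert vet_push_response(404) == "RST_STREAM"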


--
Thanks,
Jim
http://phython.blogspot.com

Hasan Khalil

Jan 27, 2012, 3:38:15 PM
to spdy...@googlegroups.com
On Fri, Jan 27, 2012 at 2:51 PM, James A. Morrison <jim.mo...@gmail.com> wrote:
clients should only cache 200 or 300 responses.

Why? I don't see any particular need to change caching behavior specifically for server push.

James A. Morrison

Jan 27, 2012, 3:46:31 PM
to spdy...@googlegroups.com
If I recall correctly, server-pushed objects are meant to go into the
cache, so server push may have already changed the caching behaviour.

--
Thanks,
Jim
http://phython.blogspot.com
