Disallowing 100 responses


James A. Morrison

Apr 14, 2011, 1:48:00 PM4/14/11
to spdy...@googlegroups.com
Does anyone object if we disallow 100 response codes (and thus Expect:
100-continue) with SPDY? If no one objects, I'd like to get this into
the next SPDY draft. 100 response codes require multiple responses to a
single stream. This is unlike server push, since the second response is
not a related resource but the actual response.
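A minimal sketch of why this matters at the framing layer: with Expect: 100-continue, a single request produces two status lines on the same stream. The raw bytes below are a hypothetical exchange, for illustration only:

```python
# Sketch: why 100-continue means two responses on one stream.
# The raw bytes below are a hypothetical exchange, for illustration only.
raw = (
    b"HTTP/1.1 100 Continue\r\n\r\n"
    b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
)

def read_statuses(data: bytes) -> list[int]:
    """Collect every status line in the stream; 1xx responses are
    interim, so a client must keep reading until a final status."""
    statuses = []
    for block in data.split(b"\r\n\r\n"):
        if block.startswith(b"HTTP/1.1 "):
            statuses.append(int(block.split(b" ")[1]))
    return statuses

print(read_statuses(raw))  # two responses for one request: [100, 200]
```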

--
Thanks,
Jim
http://phython.blogspot.com

Dzonatas Sol

Apr 15, 2011, 11:12:12 AM4/15/11
to spdy...@googlegroups.com
Positive.

Maybe further work this so it toggles the assumption that mime-parts
automatically follow without any SYNs/ACKs.

The only thing I worry about is overruns, yet I see this concern is
already in mind with SPDY.

The previous method obviously assumed it needed to wait for the
web-browser to be ready. With this, we can assume the transport is
always ready when the connection is ready (to stream). Then the process
can follow like old-modem dial-ups to start/stop if overruns are present.

Mime-parts then make it trivial to multiplex streams by Content-Type:
"spdy-application/html" or any "spdy-*".

This is more pivotal to filter-in than filter-out. (Which is almost
putting the internet pedal to the metallic ethernet.)

James A. Morrison wrote:
> Does anyone object if we disallow 100 response codes (and thus Expect:
> 100-continue) with SPDY? If no one objects, I'd like to get this into
> the next SPDY draft. 100 response codes require multiple responses to a
> single stream. This is unlike server push, since the second response is
> not a related resource but the actual response.
>
>


--
--- https://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering, Virtual Reality, Consultant

Dzonatas Sol

Apr 15, 2011, 11:13:35 AM4/15/11
to Dzonatas Sol, spdy...@googlegroups.com
By the way, I wouldn't doubt IBM will be greatly interested if you do
that with content-types...

Mark Nottingham

Apr 28, 2011, 2:56:05 AM4/28/11
to spdy...@googlegroups.com
I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.

This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.

While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.

Why is having two responses to a request an issue?

Regards,

--
Mark Nottingham http://www.mnot.net/

Mike Belshe

Apr 28, 2011, 2:15:09 PM4/28/11
to spdy...@googlegroups.com
On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.

This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.

Thanks for chiming in, Mark -

I don't have an opinion on this yet, because I don't understand the 100-continues well enough.

Would a gateway simply suppress the 100-continue response?
 

While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.

What is the reason for a 100-continue anyway?
 

Why is having two responses to a request an issue?

It is true that we didn't address it in the HTTP layering-over-SPDY yet, so we at least need to discuss this.  But technically, the framing layer doesn't have a good way to carry these responses.  We have SYN_STREAM, SYN_REPLY, and HEADERS.  Maybe the second response would be a HEADERS frame?  We don't want to coin a new stream (e.g. SYN_REPLY) for a 100-continue.

Mike

Roberto Peon

Apr 28, 2011, 2:22:17 PM4/28/11
to spdy...@googlegroups.com
On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.

This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.

While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.

Why is having two responses to a request an issue?

The "smaller" reason is that sometimes a poor server design or poor proxy design will respond twice to a request (yes, I've seen it), and so we build defenses against that.
The "bigger" reason is that most implementations don't seem to support it, or only support it in late-life. We'd done some testing (now some years back) and found that 100-continues often were swallowed or caused negative side-effects on port 80. Having multiple responses to a single request makes the codepaths more complex too.  It was a poorly thought out addition that really never worked properly.


I do, however, share your concern about having SPDY be a subset of HTTP/1.1, which is why we're talking about this here :)
-=R

Mark Nottingham

Apr 29, 2011, 3:46:32 AM4/29/11
to spdy...@googlegroups.com
Hi Roberto,

On 29/04/2011, at 4:22 AM, Roberto Peon wrote:

> On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
> I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.
>
> This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.
>
> While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.
>
> Why is having two responses to a request an issue?
>
> The "smaller" reason is that sometimes a poor server design or poor proxy design will respond twice to a request (yes, I've seen it), and so we build defenses against that.
> The "bigger" reason is that most implementations don't seem to support it, or only support it in late-life. We'd done some testing (now some years back) and found that 100-continues often were swallowed or caused negative side-effects on port 80. Having multiple responses to a single request makes the codepaths more complex too. It was a poorly thought out addition that really never worked properly.

Yep, I've seen the same behaviours and failures. No disagreement that it's a not-so-terrifically designed feature. AIUI it was layered on late in the HTTP/1.1 process, which might be part of the reason why this is so.

Mike: The primary use case is to allow a client to check whether the server will accept a request before sending a big request body. E.g., if you send a header block with a Content-Length that's really large, the server can make a decision about whether or not to accept it. It also gives the server a chance to 401 before you send the request body, so that you avoid submitting a request, only to re-submit it later with credentials.

As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.

Mike Belshe

Apr 29, 2011, 9:55:15 AM4/29/11
to spdy...@googlegroups.com
On Fri, Apr 29, 2011 at 12:46 AM, Mark Nottingham <mn...@mnot.net> wrote:
Hi Roberto,

On 29/04/2011, at 4:22 AM, Roberto Peon wrote:

> On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
> I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.
>
> This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.
>
> While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.
>
> Why is having two responses to a request an issue?
>
> The "smaller" reason is that sometimes a poor server design or poor proxy design will respond twice to a request (yes, I've seen it), and so we build defenses against that.
> The "bigger" reason is that most implementations don't seem to support it, or only support it in late-life. We'd done some testing (now some years back) and found that 100-continues often were swallowed or caused negative side-effects on port 80. Having multiple responses to a single request makes the codepaths more complex too.  It was a poorly thought out addition that really never worked properly.

Yep, I've seen the same behaviours and failures. No disagreement that it's a not-so-terrifically designed feature. AIUI it was layered on late in the HTTP/1.1 process, which might be part of the reason why this is so.

Mike: The primary use case is to allow a client to check whether the server will accept a request before sending a big request body. E.g., if you send a header block with a Content-Length that's really large, the server can make a decision about whether or not to accept it. It also gives the server a chance to 401 before you send the request body, so that you avoid submitting a request, only to re-submit it later with credentials.

What I don't understand most about it is that the *client* is the one that wants to know if the server will reject.  But the client doesn't know if the server uses 100-continue responses or not (and the server likely does not).  So the client's only choice is to blast away, right?  
 

As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.

Interesting.  If it's not in any of the top-5 browsers, then it is effectively non-existent.  If we agree it's not a good feature, removal is a good idea.  

I can speak for chrome, which doesn't do this.

Mike

Daniel Stenberg

Apr 29, 2011, 10:06:06 AM4/29/11
to spdy...@googlegroups.com
On Fri, 29 Apr 2011, Mike Belshe wrote:

> What I don't understand most about it is that the *client* is the one that
> wants to know if the server will reject. But the client doesn't know if the
> server uses 100-continue responses or not (and the server likely does not).
> So the client's only choice is to blast away, right?

Right, the client can only wait a little while to give the server some time to
respond, and if it doesn't, it continues anyway.

>> see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is
> effectively non-existent.

I disagree. The browsers are quite possibly not the biggest users of APIs;
rather, other kinds of scripts, libraries, and tools are.

But I do agree that the feature is widely badly implemented (server-side) and
is otherwise not too widely used client-side. curl/libcurl does, however, use
it by default if the request body is larger than some threshold.
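That client-side heuristic can be sketched roughly like this (the threshold value and helper names here are assumptions for illustration, not libcurl's actual internals):

```python
# Sketch of a client-side heuristic like the one libcurl applies:
# only ask permission (Expect: 100-continue) when the body is big
# enough that a rejected upload would be expensive.
# The threshold value is illustrative, not libcurl's actual constant.

EXPECT_THRESHOLD = 1024  # bytes; assumed cutoff for this sketch

def request_headers(method: str, body: bytes) -> dict:
    """Build outgoing headers, opting into 100-continue for big bodies."""
    headers = {"Content-Length": str(len(body))}
    if method in ("POST", "PUT") and len(body) > EXPECT_THRESHOLD:
        headers["Expect"] = "100-continue"
    return headers

print(request_headers("PUT", b"x" * 10))      # small body: no Expect header
print(request_headers("PUT", b"x" * 100000))  # large body: Expect: 100-continue
```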

--

/ daniel.haxx.se

Dzonatas Sol

Apr 29, 2011, 10:36:08 AM4/29/11
to spdy...@googlegroups.com
Thank you, especially for your ear on this.

Just a little tidbit on that point of progress: most wireless (home)
devices tout encrypted initial connections, also known as closed/secure
wireless routers. With SPDY and SSL being enabled for
combined/multiplexed connections, we can tout secure mesh networks out
of current hardware trends that allow open wireless routers.

With that in mind, you can see why the browsers have become the design
monolith they are today. One has to start their browser just to select a
closed wireless router on mobile units. It puts a spin on people's
viewpoint of what is the Internet.

ISPs, of course, want to keep their star-topology. That's where
client/server assumptions have been heavily made. With SSL put into the
application layer, application by application, this is why it seemed the
browser made what exists and what-not exists. The HTML5 standard motions
to remove the weight of "embed everything into the web-browser" that has
been followed.

Now even EFF states the wireless router should be able to stay open.
This kind of change affects those assumptions about browsers and
client/servers. https://www.eff.org/deeplinks/2011/04/open-wireless-movement

Effectively, it is existent (with greater need in api.*.com without
web-browsers), yet it just lacks the common security procedures for
open wireless. People could essentially only enable secure ports on
their server, leave the wireless router open, and disallow everything
else. The proxy option is ideal here, especially being able to filter-in
mime-parts rather than filter-out traffic. With that idea I could
further get into mesh-style proxies for SPDY, yet that is 'reaching'
without, first, the above demonstrated.



James A. Morrison

Apr 29, 2011, 11:28:09 AM4/29/11
to spdy...@googlegroups.com

Does it still do this? I thought the behaviour changed a few years ago?

> --
>
>  / daniel.haxx.se

James A. Morrison

Apr 29, 2011, 11:35:27 AM4/29/11
to spdy...@googlegroups.com
On Fri, Apr 29, 2011 at 00:46, Mark Nottingham <mn...@mnot.net> wrote:
> Hi Roberto,
>
> On 29/04/2011, at 4:22 AM, Roberto Peon wrote:
>
>> On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
>> I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.
>>
>> This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.
>>
>> While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.
>>
>> Why is having two responses to a request an issue?
>>
>> The "smaller" reason is that sometimes a poor server design or poor proxy design will respond twice to a request (yes, I've seen it), and so we build defenses against that.
>> The "bigger" reason is that most implementations don't seem to support it, or only support it in late-life. We'd done some testing (now some years back) and found that 100-continues often were swallowed or caused negative side-effects on port 80. Having multiple responses to a single request makes the codepaths more complex too.  It was a poorly thought out addition that really never worked properly.
>
> Yep, I've seen the same behaviours and failures. No disagreement that it's a not-so-terrifically designed feature. AIUI it was layered on late in the HTTP/1.1 process, which might be part of the reason why this is so.
>
> Mike: The primary use case is to allow a client to check whether the server will accept a request before sending a big request body. E.g., if you send a header block with a Content-Length that's really large, the server can make a decision about whether or not to accept it. It also gives the server a chance to 401 before you send the request body, so that you avoid submitting a request, only to re-submit it later with credentials.

The spec says that the response may only be a 100-continue or a 417.
This doesn't leave much room to give a useful error.

> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.

It seems that if someone wants something similar to 100-continue, they
can easily implement it with two requests. With two requests, the full
list of response codes is available, e.g. a 302 through an SSO login
may be good for no credentials, or a 401 for bad credentials.
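A rough sketch of that two-request pattern (the `send` callback, the status handling, and all names here are hypothetical, purely to illustrate the flow):

```python
# Sketch of emulating 100-continue with two ordinary requests:
# first probe with an empty-body request, then send the real body
# only if the probe succeeds. All names here are hypothetical.

def upload_with_probe(send, url: str, body: bytes, headers: dict) -> int:
    """`send(method, url, headers, body)` returns an HTTP status code."""
    # Probe: same target, same credentials, no body.
    status = send("POST", url, {**headers, "Content-Length": "0"}, b"")
    if status in (302, 401):  # e.g. SSO redirect or bad credentials
        return status          # caller can fix auth and retry
    # Probe accepted: now send the full body.
    return send("POST", url,
                {**headers, "Content-Length": str(len(body))}, body)

# Stub "server" that rejects unauthenticated uploads:
def fake_send(method, url, headers, body):
    return 401 if "Authorization" not in headers else 200

print(upload_with_probe(fake_send, "/api", b"big payload", {}))
# 401: the big body was never sent
print(upload_with_probe(fake_send, "/api", b"big payload",
                        {"Authorization": "token"}))
# 200: probe passed, then the real upload went through
```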

Anyway, taking a step back: for HTTP, is it fully conforming for a
server to send a response before it has the entire request?
http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6 seems to
imply that it is not. However, being able to reply with a 401
before the entire request body is sent would do the same thing as
100-continue.

Dzonatas Sol

Apr 29, 2011, 10:03:45 PM4/29/11
to spdy...@googlegroups.com
James A. Morrison wrote:
>> Mike: The primary use case is to allow a client to check whether the server will accept a request before sending a big request body. E.g., if you send a header block with a Content-Length that's really large, the server can make a decision about whether or not to accept it. It also gives the server a chance to 401 before you send the request body, so that you avoid submitting a request, only to re-submit it later with credentials.
>>
>
> The spec says that the response may only be a 100-continue or a 417.
> This doesn't leave much room to give a useful error.
>
>
>> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
>>
>
> It seems that if someone wants something similar to 100-continue, they
> can easily implement it with two requests. With two requests, the full
> list of response codes is available, e.g. a 302 through an SSO login
> may be good for no credentials, or a 401 for bad credentials.
>
>
Alternatives: a hybrid initial start and query pattern of SMTP & HTTP. Or
SMTP reply codes as response codes. This simplifies into one
query/request, like ESMTP.

Or: WebSockets (HTML5), which would maybe be ideal for external proxies.

The hybrid version is not out of the picture now that DNSSEC is in
place, yet it still needs to allow fallback to HTTP/HTTPS.

I always wondered why we haven't done "esmtp:api.proxy.com:..." or
"spdy:api.proxy.com:..." in URN format, and let the proxy bundle these
together from queued HTTP/HTTPS tasks. Once bundled, the URI could look
like "esmtp:api.proxy.com:xml-ietf-spdy-" to show the mime-parts contain
xml-ietf-spdy format headers. The proxy can then carry out the queries
and return the entire mime-parts with all their appropriate responses.
The mime-parts hold the original queued http/https headers. That's like
HTTP/1.1 stream, yet the proxy allows batched, fast, and slow priorities
(which I think is part of the goal with SPDY & this thread: "to allow a
client to check whether the server will accept a request before sending
a big request body"). Note, I had IPv6-ness in mind on your thoughts and
the more speedily-reactive e-mail client/server for batches.

Mark Nottingham

Apr 30, 2011, 8:12:46 PM4/30/11
to spdy...@googlegroups.com

On 29/04/2011, at 11:55 PM, Mike Belshe wrote:

[snip]

> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is effectively non-existent. If we agree it's not a good feature, removal is a good idea.
>
> I can speak for chrome, which doesn't do this.

IIRC I have seen browsers do this, when the conditions are right (e.g., request body is large enough). Would have to dig around, though.

Cheers,

Mark Nottingham

Apr 30, 2011, 8:19:22 PM4/30/11
to spdy...@googlegroups.com

On 30/04/2011, at 1:35 AM, James A. Morrison wrote:

[...]

> The spec says that the response may only be a 100-continue or a 417.
> This doesn't leave much room to give a useful error.

RFC2616, section 14.20:

"""
A server that does not understand or is unable to comply with any of the expectation values in the Expect field of a request MUST respond with appropriate error status. The server MUST respond with a 417 (Expectation Failed) status if any of the expectations cannot be met or, if there are other problems with the request, some other 4xx status.
"""

> It seems that if someone wants something similar to 100-continue, they
> can easily implement it with two requests. With two requests, the full
> list of response codes is available, e.g. a 302 through an SSO login
> may be good for no credentials, or a 401 for bad credentials.

That's placing a really tight binding between the two requests.


> Anyway, taking a step back. For http, is it fully conforming for a
> server to send a response before it has the entire request?
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6 seems to
> imply that it is not.

Yes, it is; it happens fairly often. I don't see how section 6 implies otherwise.


> However, being able to reply with a 401
> before the entire request body is sent would do the same thing as 100 continue.

The browser will still be pumping the request down the connection until it receives and processes the error, and that leaves the connection in an unusable state.

It also doesn't help in the cases that are interesting here: using SPDY as a hop in a connection where there are also HTTP hops, and people might be using HTTP features.

Cheers,

Dzonatas Sol

May 1, 2011, 1:35:37 PM5/1/11
to spdy...@googlegroups.com, Mark Nottingham
On 04/30/2011 05:19 PM, Mark Nottingham wrote:
>
> The browser will still be pumping the request down the connection until it receives and processes the error, and that leaves the connection in an unusable state.
>

If that assumes connection headers are in default text mode (backwards
compatibility) then, yes, that is where that kind of confusion begins.
If the protocol has already switched over to XML encapsulation
(XML-IETF-HTTP-1-1, for example), then the document-break or
invalid-document can be used to recover the connection. As pivoted into XML:

<? barebones XML std built-in IETF DTD ?>
<XML><IETF>
<TLS>...
<HTTP>GET ...</HTTP>
<HTTP>PUT ...</HTTP>
<HTTP>DELETE ...</HTTP>
</TLS>
<TLS>...
<HTTP>UPDATE ...</HTTP>
</TLS>
</IETF><IETF>
...
</IETF></XML>

Being pivotal as above, HTTP header compression (and further) can be
folded into XML element compression. That implies XML streams, yet how
these delineate is implementation-specific besides the complete
document->keep-alive->next-document and invalid document->next-document.
More pivotal headers and stream-ability is key. Notice above I put TLS
tag in there where an agent service may authorize queries from multiple
account/avatars (and maybe fold in some IETF-VWRAP for assets) being
streamed together. If the connection cannot recover (even to simply
prevent spoofs) then either re-queue the entire XML'd bundle or remove
successful queries and re-queue those that did not get any response.
Errors can follow in same array sequence as the stream of queries. The
500s errors make sense if one uses 100 or 101 for XML encapsulation &
compression or mime-parts.


> It also doesn't help in the cases that are interesting here; using SPDY as a hop in a connection where there are also HTTP hops, and people might be using HTTP features.
>
>

Exactly why I continue to lean on mime-parts and Content-Type
(:xml-ietf-spdy-) for backwards compatibility. I wonder at times if the
PPP layer should implement HTTP/1.1's 101-switch-protocol to avoid
redundancy through the layers, yet does this disturb potential pivotal
areas on other networks (thinking blackboxed here, token)?

Mark Nottingham

Jun 18, 2011, 8:22:18 PM6/18/11
to spdy...@googlegroups.com

On 29/04/2011, at 6:55 AM, Mike Belshe wrote:

> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is effectively non-existent. If we agree it's not a good feature, removal is a good idea.
>
> I can speak for chrome, which doesn't do this.

I ran across more folks (who I can't identify) using Expect: 100-continue. Their use case is that they don't want to buffer a large request when proxying it to a far-away server.

I'd grant that this is a primarily "back-end" use case. However, to me that highlights a really central question for SPDY: is it aiming to be a true HTTP replacement, or is it just looking at the vanilla browser use case?

My preference would be for SPDY to aim to be a *real* HTTP replacement; lots of people are going to want to use it in back-end cases, for APIs and lots of other ways that HTTP is currently used.

If, on the other hand, SPDY is not intended to be a drop-in HTTP replacement, that needs to be very explicitly said in the draft, with an explanation of what it doesn't do.

Cheers,

Mike Benna

Jun 20, 2011, 1:44:52 PM6/20/11
to spdy...@googlegroups.com
Mark, would chunked encoding also satisfy their 100-continue use case?

Mark Nottingham

Jun 20, 2011, 3:59:28 PM6/20/11
to spdy...@googlegroups.com
No; they need positive feedback from the server that the request is acceptable before they start streaming it.

Dzonatas Sol

Jun 21, 2011, 3:52:57 PM6/21/11
to spdy...@googlegroups.com
If HTML switches protocol on footer:+1 header, then are there SPDY
footers instead of headers? There is the <footer/> tag alone in an
instance; here, spdy <footer id=000 />. Is that --- or -rw mode?


--
--- http://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering
Ag-Biotech, Virtual Reality, Consultant

Greg Wilkins

Jun 22, 2011, 11:07:30 PM6/22/11
to spdy...@googlegroups.com
I think it would be good to keep 100-continue - but it is not absolutely vital.
What is the downside of keeping it?

There is also the rarer 102 Processing, where the server can say "I'm
thinking" to stop a connection from being closed.

I'd rather see SPDY embrace multiple responses to a single request (if
it can be done without too much complexity) rather than not support
these semantics and force people to come up with other solutions.

cheers

Mike Belshe

Jun 30, 2011, 6:48:22 PM6/30/11
to spdy-dev
On Jun 22, 8:07 pm, Greg Wilkins <gr...@intalio.com> wrote:
> I think it would be good to keep 100-continue - but it is not absolutely vital.
> What is the downside of keeping it?

The downside is that it is a one-off. It requires us to break a set
of otherwise fairly clean assertions about headers and data at the
framing layer.

I love this comment in the HTTP spec (section 8):
"Because of the presence of older implementations, the protocol
allows ambiguous situations in which a client may send "Expect: 100-
continue" without receiving either a 417 (Expectation Failed) status
or a 100 (Continue) status. Therefore, when a client sends this header
field to an origin server (possibly via a proxy) from which it has
never seen a 100 (Continue) status, the client SHOULD NOT wait for an
indefinite period before sending the request body. "

In other words, we know that servers don't implement it right, so
clients need to be careful not to leave the user hanging forever. Of
course, if you time out too quickly, then it still could have been a
slow but correct server, and now you broke something else....
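The bounded wait the spec asks for might look roughly like this on the client side (a sketch; the timeout values are arbitrary):

```python
import select
import socket

def wait_for_interim_response(sock: socket.socket, timeout: float = 1.0) -> bool:
    """Wait a bounded time for the server to start responding (e.g. with
    100 Continue or an early error). Per RFC 2616 section 8, the client
    SHOULD NOT wait indefinitely: if nothing arrives, it just proceeds
    to send the request body anyway."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)

# Demo with a socket pair where the "server" is initially silent:
client, server = socket.socketpair()
print(wait_for_interim_response(client, timeout=0.1))  # False: send the body anyway
server.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")
print(wait_for_interim_response(client, timeout=0.1))  # True: read the interim status
client.close()
server.close()
```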

>
> There is also the rarer 102 processing, where the server can say "I'm
> thinking" to stop a connection from being closed.
>

Except it doesn't actually work either, for all the same reasons :-)


> I'd rather see SPDY embrace mutliple responses to a single request (if
> it can be done without too much complexity) rather than not support
> these semantics and force people to come up with other solutions.

"Less is more" is very much true with protocols.

Practically, we do need to ensure that SPDY-to-HTTP gateways work for
all branches of HTTP. So I'll put something in to accommodate 100
responses in a well-defined way.

Mike

>
> cheers

Mark Nottingham

Jun 30, 2011, 11:06:24 PM6/30/11
to spdy...@googlegroups.com
Thinking out loud - it might be addressed by a new kind of control frame which is essentially STREAM_CONTINUE. The actual headers on a 100 Continue response aren't semantically significant. This would avoid the complications of having another full response in the protocol flow.

However, that wouldn't address other 1xx status codes. Arguably, that's OK; 101 is very specific to HTTP's upgrade model (which you probably don't want to use, and anyway it's hop-by-hop, so a HTTP-to-SPDY gateway would have to terminate it regardless). 102 Processing is likewise very HTTP-over-HTCP specific, you can probably transmute that into PINGs, re-synthesise it, etc.

If you did want to include other 1xx, it could perhaps be a STREAM_STATUS, with a field for the HTTP status code.
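To make that concrete, here is one hypothetical encoding of such a STREAM_STATUS frame (the frame type number, layout, and names are invented for illustration; none of this is in the draft):

```python
import struct

# Hypothetical STREAM_STATUS control frame, invented for illustration.
# Layout (loosely modeled on SPDY v2 control frames):
#   2 bytes: 0x8002 (control bit + version)
#   2 bytes: frame type (an assumed type 99 for STREAM_STATUS)
#   4 bytes: flags (1 byte) + length (3 bytes)
#   4 bytes: stream ID
#   4 bytes: HTTP status code being relayed (e.g. 100, 102)
STREAM_STATUS = 99  # assumed, not a real SPDY frame type

def pack_stream_status(stream_id: int, http_status: int) -> bytes:
    header = struct.pack("!HHI", 0x8002, STREAM_STATUS, 8)  # flags=0, len=8
    return header + struct.pack("!II", stream_id, http_status)

def unpack_stream_status(frame: bytes) -> tuple:
    _, ftype, _ = struct.unpack("!HHI", frame[:8])
    assert ftype == STREAM_STATUS
    return struct.unpack("!II", frame[8:16])

frame = pack_stream_status(stream_id=1, http_status=100)
print(unpack_stream_status(frame))  # (1, 100)
```

This keeps the interim status out of the SYN_REPLY path entirely, which is the state-confinement property discussed below.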

YMMV, of course.

Cheers,

--
Mark Nottingham http://www.mnot.net/

Dzonatas Sol

Jun 30, 2011, 11:25:48 PM6/30/11
to spdy...@googlegroups.com
On 06/30/2011 08:06 PM, Mark Nottingham wrote:
> Thinking out loud - it might be addressed by a new kind of control frame which is essentially STREAM_CONTINUE.

Note: "down mount hypermedia" could not be realized if it was only that;
standard?

Mike Belshe

Jul 1, 2011, 7:06:07 AM7/1/11
to spdy...@googlegroups.com
Roberto - 

This guy is some sort of automated spam bot.  It's pretty good, considering, but it can't be a real person.

I've sent email to the google groups guys to restore my admin privs.

Mike

Dzonatas Sol

Jul 1, 2011, 9:12:05 AM7/1/11
to spdy...@googlegroups.com
No, I am not a spam bot. Live person, here.

FYI, "down mount hypermedia" is the haptic state movement from that
header position.

Please, be more nice towards real people.


Mike Belshe

Jul 1, 2011, 10:20:04 AM7/1/11
to spdy...@googlegroups.com
On Thu, Jun 30, 2011 at 8:06 PM, Mark Nottingham <mn...@mnot.net> wrote:
Thinking out loud - it might be addressed by a new kind of control frame which is essentially STREAM_CONTINUE. The actual headers on a 100 Continue response aren't semantically significant. This would avoid polluting the complications of having another full response in the protocol flow.

It's a good suggestion. 

To be honest, I hate the idea of adding a control frame just for this.  But, it does confine the state pretty well, and may be more resilient to partial implementations.

Dzonatas Sol

Jul 1, 2011, 1:04:22 PM7/1/11
to Mike Belshe, spdy...@googlegroups.com
I posted this part back to the list.

I am down to Earth about these matters, not a child anymore that
imagines beyond wonders of the world. Any sense of that has been killed
off due to my experience. I survived, and I can keep my dreams to me;
this is not my disability.

First, you said "Read them aloud. " I have said I am deaf and visual.
That is again, offensive. Did you lose your patience with our
differences? Again, see why we fail in court. Your common sense (court)
as a cure fails me. Notice the court still remains the same, for
"hearings". We do know "why" there are these differences in people, so no
need to react the way you did. Please, be more courteous.

By the way, did you know that the Internet, SMTP, and HTTP were built on
TTY services? Do you know how hard it is to safely build haptic devices
this way (yes, for the blind from the deaf)? By your reaction below, we
can determine something here. "Down mount hypermedia" is very relevant
in "feel" of this condition on the scale of quantum down. They're
geniuses! Where do you think I learned this from? School alone? They
didn't give up, neither do we.

There are reasons why specific bits documented work, and are still being
clarified as states. They are way ahead of you. At least Fielding
demonstrated some sense of that, too. They don't need their disabilities
probed as spam bots.

One big problem is that people misused the connection states (1xx),
simply because they go from TTY context to some device totally unrelated
to TTY, and they change it for that reason.

And, you want to talk about loss of productivity and what is relevant to
the topic, and you don't see that?

I'm not attacking you. We understand you want speedier, so bear with us,
as we allow this to work together.

Realize this is beyond passion, especially when we weigh in on the word
tangible, and what is not lost (and what is "made lost" and found).

On 07/01/2011 09:16 AM, Mike Belshe wrote:
>
> Here are a few specific recommendations, and I don't know if they
> work, or if they are good enough. It seems like you are persistent
> enough to have some passion about protocols, which is great. I hope
> these will help you.
> a) Read over your emails carefully. Read them aloud. Read them 3
> times if you need to. But figure out if they are coherent before
> sending. I'm not an idiot, and I'm not mean. But if your emails are
> incomprehensible to me, then they're not good for the list. I don't
> disagree with what you've said - I simply can't understand what you
> were even attempting to say.
> b) Make sure you have a specific point that is relevant to the
> topic. I've seen you drift far from the topics, and it is distracting
> to everyone and causes conversations to stop which should continue.
>
> Respectfully,
> Mike

Jeremy Wohl

Jul 9, 2011, 10:28:35 PM7/9/11
to spdy...@googlegroups.com
On Saturday, June 18, 2011 5:22:18 PM UTC-7, mnot wrote:

I ran across more folks (who I can't identify) using Expect: 100-continue. Their use case is that they don't want to buffer a large request when proxying it to a far-away server.

To put some names against this: Amazon S3 and its variants in Google Storage for Developers and MS Azure Blob Service use 100-continue for the above reason, redirecting large PUTs closer to disks -- or variously handling failure conditions, resource throttling, etc. with partial visibility.

As Mark says, if this is an HTTP replacement, these folks will be unhappy without something similar.

Jeremy

Mark Nottingham

Jul 9, 2011, 10:47:33 PM7/9/11
to spdy...@googlegroups.com
:) The use case I ran into was *very* similar.

Cheers,

--
Mark Nottingham http://www.mnot.net/
