--
Thanks,
Jim
http://phython.blogspot.com
Maybe further work this so it toggles the assumption that mime-parts
automatically follow without any SYNs/ACKs.
The only thing I worry about is overruns, yet I see this concern is
already in mind with SPDY.
The previous method obviously assumed it needed to wait for the
web-browser to be ready. With this, we can assume the transport is
always ready when the connection is ready (to stream). Then the process
can follow like old-modem dial-ups to start/stop if overruns are present.
Mime-parts then make it trivial to multiplex streams by Content-Type:
"spdy-application/html" or any "spdy-*".
This is more pivotal to filter-in than filter-out. (Which is almost
putting the internet pedal to the metallic ethernet.)
James A. Morrison wrote:
> Does anyone object if we disallow 100 response codes (and thus Expect:
> 100-continue) with SPDY? If no one objects, I'd like to get this into
> the next SPDY draft. 100 response codes require multiple responses to a
> single stream. This is not like server push, since the second response
> is not a related resource but the actual response.
>
>
--
--- https://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering, Virtual Reality, Consultant
I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.
This makes it difficult to make an HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.
While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.
Why is having two responses to a request an issue?
Regards,
--
Mark Nottingham http://www.mnot.net/
On 29/04/2011, at 4:22 AM, Roberto Peon wrote:
> On Wed, Apr 27, 2011 at 11:56 PM, Mark Nottingham <mn...@mnot.net> wrote:
> I have a bit of concern here, in that this makes SPDY a subset of HTTP/1.1.
>
> This makes it difficult to make a HTTP->SPDY gateway -- which is a use case that many people (including me!) get excited about.
>
> While it's true you can apply a policy decision at the gateway (e.g., 100-continue all requests, or 417 all requests), that's not great.
>
> Why is having two responses to a request an issue?
>
> The "smaller" reason is that sometimes a poor server design or poor proxy design will respond twice to a request (yes, I've seen it), and so we build defenses against that.
> The "bigger" reason is that most implementations don't seem to support it, or only support it late in life. We did some testing (now some years back) and found that 100-continues often were swallowed or caused negative side-effects on port 80. Having multiple responses to a single request makes the codepaths more complex too. It was a poorly thought out addition that really never worked properly.
Yep, I've seen the same behaviours and failures. No disagreement that it's a not-so-terrifically designed feature. AIUI it was layered on late in the HTTP/1.1 process, which might be part of the reason why this is so.
Mike: The primary use case is to allow a client to check whether the server will accept a request before sending a big request body. E.g., if you send a header block with a Content-Length that's really large, the server can make a decision about whether or not to accept it. It also gives the server a chance to 401 before you send the request body, so that you avoid submitting a request only to re-submit it later with credentials.
As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
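For concreteness (my own sketch, not anything from the drafts): the "two responses on one stream" shape looks like this on the wire, and a client parser has to keep reading past any 1xx interim head before it reaches the real response:

```python
# Illustrative sketch only; names are mine, not from RFC 2616 or SPDY.
# A 100 (Continue) is an interim response: its header block is followed
# directly by the next response's status line on the same connection,
# which is exactly the "two responses to one request" shape at issue.

def response_statuses(raw: bytes) -> list[int]:
    """Return the status code of every response head found in `raw`."""
    statuses = []
    rest = raw
    while rest.startswith(b"HTTP/"):
        head, _, rest = rest.partition(b"\r\n\r\n")
        status = int(head.split(b"\r\n", 1)[0].split()[1])
        statuses.append(status)
        if status >= 200:          # final response: stop before the body
            break
    return statuses

wire = (b"HTTP/1.1 100 Continue\r\n\r\n"
        b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
# response_statuses(wire) -> [100, 200]
```

The extra loop iteration is the "more complex codepath" Roberto mentions: every client now needs this scan-past-1xx logic, and a buggy peer that really does send two final responses looks almost identical on the wire.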
> What I don't understand most about it is that the *client* is the one that
> wants to know if the server will reject. But the client doesn't know if the
> server uses 100-continue responses or not (and the server likely does not).
> So the client's only choice is to blast away, right?
Right, the client can only wait a little while to give the server some time to
respond, and if it doesn't, it continues anyway.
>> see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is
> effectively non-existent.
I disagree. The browsers are quite possibly not the biggest users of APIs;
other kinds of scripts, libraries and tools are.
But I do agree that the feature is widely badly implemented (server-side) and
is otherwise not too widely used client-side. curl/libcurl does however use it
by default if the request body is larger than some threshold.
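To illustrate the client-side policy described here -- send Expect: only past a size threshold, wait briefly for any interim response, then continue anyway -- a rough sketch; the threshold value, timeout, and helper names are my assumptions, not curl's actual internals:

```python
import select

# Illustrative values only; curl's real cutoff and timeout live in its source.
EXPECT_THRESHOLD = 1024   # bytes of request body before Expect: is worth it
CONTINUE_TIMEOUT = 1.0    # seconds to wait for a 100 before sending anyway

def should_send_expect(body_len: int, http_version: str = "1.1") -> bool:
    """Expect: 100-continue only pays off for large bodies on HTTP/1.1."""
    return http_version == "1.1" and body_len > EXPECT_THRESHOLD

def server_spoke_up(sock, timeout: float = CONTINUE_TIMEOUT) -> bool:
    """Wait a little while for any interim response; the client proceeds
    with the body either way, exactly as described above."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)
```

The timeout-then-send-anyway shape is why swallowed 100s mostly degrade into latency rather than failure: the client never depends on the interim response arriving.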
--
Just a little tidbit on that point of progress: most wireless (home)
devices tout encrypted initial connections, also known as a
closed/secure wireless router. With SPDY and SSL being enabled for
combined/multiplexed connections, we can tout secure mesh networks out
of current hardware trends that allow open wireless routers.
With that in mind, you can see why the browsers have become the design
monolith they are today. One has to start their browser just to select a
closed wireless router on mobile units. It puts a spin on people's
viewpoint of what the Internet is.
ISPs, of course, want to keep their star topology. That's where
client/server assumptions have been heavily made. With SSL put into the
application layer, application by application, this is why it seemed the
browser defined what exists and what does not. The HTML5 standard motions to
remove the weight of "embed everything into the web-browser" that has
been followed.
Now even EFF states the wireless router should be able to stay open.
This kind of change affects those assumptions about browsers and
client/servers. https://www.eff.org/deeplinks/2011/04/open-wireless-movement
Effectively, it is existent (with greater need in api.*.com without
web-browsers), yet it just lacks the common security procedures for
open wireless. People could essentially only enable secure ports on
their server, leave the wireless router open, and disallow everything
else. The proxy option is ideal here, especially being able to filter-in
mime-parts rather than filter-out traffic. With that idea I could
further get into mesh-style proxies for SPDY, yet that is 'reaching'
without, first, the above demonstrated.
--
The spec says that the response may only be a 100-continue or a 417.
This doesn't leave much room to give a useful error.
> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
It seems that if someone wants something similar to 100-continue they
can easily implement it with two requests. With two requests the full
list of response codes is available, e.g. a 302 through an SSO login
may be good for no credentials, or a 401 for bad credentials.
Anyway, taking a step back: for HTTP, is it fully conforming for a
server to send a response before it has the entire request?
http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6 seems to
imply that it is not. However, being able to reply with a 401
before the entire request body is sent would do the same thing as 100-continue.
Or: WebSockets (HTML5), which would maybe be ideal for external proxies.
The hybrid version is not out of the picture now that DNSSEC is in
place, yet we still need to allow fallback to HTTP/HTTPS.
I always wondered why we haven't done "esmtp:api.proxy.com:..." or
"spdy:api.proxy.com:..." in URN format, and let the proxy bundle these
together from queued HTTP/HTTPS tasks. Once bundled, the URI could look
like "esmtp:api.proxy.com:xml-ietf-spdy-" to show the mime-parts contain
xml-ietf-spdy format headers. The proxy can then carry out the queries
and return the entire mime-parts with all their appropriate responses.
The mime-parts hold the original queued http/https headers. That's like
an HTTP/1.1 stream, yet the proxy allows batched, fast, and slow priorities
(which I think is part of the goal with SPDY & this thread: "to allow a
client to check whether the server will accept a request before sending
a big request body"). Note, I had IPv6-ness in mind on your thoughts and
the more speedily-reactive e-mail client/server for batches.
[snip]
> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is effectively non-existent. If we agree it's not a good feature, removal is a good idea.
>
> I can speak for chrome, which doesn't do this.
IIRC I have seen browsers do this, when the conditions are right (e.g., request body is large enough). Would have to dig around, though.
Cheers,
[...]
> The spec says that the response may only be a 100-continue or a 417.
> This doesn't leave much room to give a useful error.
RFC2616, section 14.20:
"""
A server that does not understand or is unable to comply with any of the expectation values in the Expect field of a request MUST respond with appropriate error status. The server MUST respond with a 417 (Expectation Failed) status if any of the expectations cannot be met or, if there are other problems with the request, some other 4xx status.
"""
> It seems that if someone wants something similar to 100-continue they
> can easily implement it with two requests. With two requests the full
> list of response codes is available. e.g. a 302 through an sso login
> may be good for no-credentials, or a 401 for bad credentials.
That's placing a really tight binding between the two requests.
> Anyway, taking a step back. For http, is it fully conforming for a
> server to send a response before it has the entire request?
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6 seems to
> imply that it is not.
Yes, it is; it happens fairly often. I don't see how section 6 implies otherwise.
> However, being able to reply with a 401
> before the entire request body is sent would do the same thing as 100 continue.
The browser will still be pumping the request down the connection until they receive and process the error, and that leaves the connection in an unusable state.
It also doesn't help in the cases that are interesting here; using SPDY as a hop in a connection where there are also HTTP hops, and people might be using HTTP features.
Cheers,
If that assumes connection headers are in default text mode (backwards
compatibility) then, yes, that is where that kind of confusion begins.
If the protocol has already switched over to XML encapsulation
(XML-IETF-HTTP-1-1, for example), then the document-break or
invalid-document can be used to recover the connection. As pivoted into XML:
<? barebones XML std built-in IETF DTD ?>
<XML><IETF>
<TLS>...
<HTTP>GET ...</HTTP>
<HTTP>PUT ...</HTTP>
<HTTP>DELETE ...</HTTP>
</TLS>
<TLS>...
<HTTP>UPDATE ...</HTTP>
</TLS>
</IETF><IETF>
...
</IETF></XML>
Being pivotal as above, HTTP header compression (and further) can be
folded into XML element compression. That implies XML streams, yet how
these delineate are implementation specific besides the complete
document->keep-alive->next-document and invalid document->next-document.
More pivotal headers and stream-ability is key. Notice above I put TLS
tag in there where an agent service may authorize queries from multiple
account/avatars (and maybe fold in some IETF-VWRAP for assets) being
streamed together. If the connection cannot recover (even to simply
prevent spoofs) then either re-queue the entire XML'd bundle or remove
successful queries and re-queue those that did not get any response.
Errors can follow in the same array sequence as the stream of queries. The
5xx errors make sense if one uses 100 or 101 for XML encapsulation &
compression or mime-parts.
> It also doesn't help in the cases that are interesting here; using SPDY as a hop in a connection where there are also HTTP hops, and people might be using HTTP features.
>
>
Exactly why I continue to lean on mime-parts and Content-Type
(:xml-ietf-spdy-) for backwards compatibility. I wonder at times if the
PPP layer should implement HTTP/1.1's 101-switch-protocol to avoid
redundancy through the layers, yet does this disturb potential pivotal
areas on other networks (thinking blackboxed here, token)?
> As such, it's a somewhat useful feature *if* it's well-implemented. I can see it being especially useful for api.*.com kinds of use cases.
>
> Interesting. If it's not in any of the top-5 browsers, then it is effectively non-existent. If we agree it's not a good feature, removal is a good idea.
>
> I can speak for chrome, which doesn't do this.
I ran across more folks (who I can't identify) using Expect: 100-continue. Their use case is that they don't want to buffer a large request when proxying it to a far-away server.
I'd grant that this is a primarily "back-end" use case. However, to me that highlights a really central question for SPDY: is it aiming to be a true HTTP replacement, or is it just looking at the vanilla browser use case?
My preference would be for SPDY to aim to be a *real* HTTP replacement; lots of people are going to want to use it in back-end cases, for APIs and lots of other ways that HTTP is currently used.
If, on the other hand, SPDY is not intended to be a drop-in HTTP replacement, that needs to be very explicitly said in the draft, with an explanation of what it doesn't do.
Cheers,
--
--- http://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering
Ag-Biotech, Virtual Reality, Consultant
There is also the rarer 102 Processing, where the server can say "I'm
thinking" to stop a connection from being closed.
I'd rather see SPDY embrace multiple responses to a single request (if
it can be done without too much complexity) rather than not support
these semantics and force people to come up with other solutions.
cheers
However, that wouldn't address other 1xx status codes. Arguably, that's OK; 101 is very specific to HTTP's upgrade model (which you probably don't want to use, and anyway it's hop-by-hop, so an HTTP-to-SPDY gateway would have to terminate it regardless). 102 Processing is likewise very WebDAV-specific; you can probably transmute that into PINGs, re-synthesise it, etc.
If you did want to include other 1xx, it could perhaps be a STREAM_STATUS, with a field for the HTTP status code.
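If it helps to picture it, a minimal encoding of such a hypothetical STREAM_STATUS frame might look like this; the frame type number, field widths, and layout are all invented for illustration and appear in no SPDY draft:

```python
import struct

# Invented STREAM_STATUS control frame: 1-byte frame type, 4-byte stream
# id, 2-byte HTTP status code. Type id 0x99 and the layout are my own
# placeholders, not from any SPDY draft.
FRAME_TYPE_STREAM_STATUS = 0x99

def pack_stream_status(stream_id: int, http_status: int) -> bytes:
    return struct.pack("!BIH", FRAME_TYPE_STREAM_STATUS, stream_id, http_status)

def unpack_stream_status(frame: bytes) -> tuple[int, int]:
    ftype, stream_id, http_status = struct.unpack("!BIH", frame)
    assert ftype == FRAME_TYPE_STREAM_STATUS
    return stream_id, http_status

# e.g. carrying "100 Continue" for stream 3:
# unpack_stream_status(pack_stream_status(3, 100)) -> (3, 100)
```

Carrying the bare status code keeps the 1xx out of the response path entirely, which is the attraction: the stream still has exactly one response.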
YMMV, of course.
Cheers,
--
Mark Nottingham http://www.mnot.net/
Note: "down mount hypermedia" could not be realized if it was only that;
standard?
FYI, "down mount hypermedia" is the haptic state movement from that
header position.
Please be nicer towards real people.
> <mailto:gr...@intalio.com>> wrote:
Thinking out loud - it might be addressed by a new kind of control frame, essentially STREAM_CONTINUE. The actual headers on a 100 Continue response aren't semantically significant. This would avoid the complications of having another full response in the protocol flow.
I am down to Earth about these matters, not a child anymore that
imagines beyond wonders of the world. Any sense of that has been killed
off due to my experience. I survived, and I can keep my dreams to me;
this is not my disability.
First, you said "Read them aloud." I have said I am deaf and visual.
That is, again, offensive. Did you lose your patience with our
differences? Again, see why we fail in court. Your common sense (court)
as a cure fails me. Notice the court still remains the same, for
"hearings". We do know "why" there are these differences in people, so no
need to react the way you did. Please be more courteous.
By the way, did you know that the Internet, SMTP, and HTTP were built on
TTY services? Do you know how hard it is to safely build haptic devices
this way (yes, for the blind from the deaf)? By your reaction below, we
can determine something here. "Down mount hypermedia" is very relevant
in "feel" of this condition on the scale of quantum down. They're
geniuses! Where do you think I learned this from? School alone? They
didn't give up, neither do we.
There are reasons why specific bits documented work, and are still being
clarified as states. They are way ahead of you. At least Fielding
demonstrated some sense of that, too. They don't need their disabilities
probed as spam bots.
One big problem is that people misused the connection states (1xx),
simply because they go from TTY context to some device totally unrelated
to TTY, and they change it for that reason.
And, you want to talk about loss of productivity and what is relevant to
the topic, and you don't see that?
I'm not attacking you. We understand you want speedier, so bear with us,
as we allow this to work together.
Realize this is beyond passion, especially when we weigh in on the word
tangible, and what is not lost (and what is "made lost" and found).
On 07/01/2011 09:16 AM, Mike Belshe wrote:
>
> Here are a few specific recommendations, and I don't know if they
> work, or if they are good enough. It seems like you are persistent
> enough to have some passion about protocols, which is great. I hope
> these will help you.
> a) Read over your emails carefully. Read them aloud. Read them 3
> times if you need to. But figure out if they are coherent before
> sending. I'm not an idiot, and I'm not mean. But if your emails are
> incomprehensible to me, then they're not good for the list. I don't
> disagree with what you've said - I simply can't understand what you
> were even attempting to say.
> b) Make sure you have a specific point that is relevant to the
> topic. I've seen you drift far from the topics, and it is distracting
> to everyone and causes conversations to stop which should continue.
>
> Respectfully,
> Mike