Any plans to put ASP.NET on Kayak? Maybe start with IHttpHandler and IHttpModule? I would bet that getting the basic foundation in place would motivate someone to finish the job for you. Haven't gotten that far myself in my own work, but it would be a cool milestone and might allow some people to start using it in their work environments.
Maybe just use something like Razor templates to serve HTML content more easily.
On Aug 10, 2011, at 0:03, Paul Batum <paul....@gmail.com> wrote:
> Can you elaborate on what you plan to build for the HTTP client?
The HTTP client will reuse the IHttpRequestDelegate and IHttpResponseDelegate interfaces, but with the roles reversed relative to the HTTP server interface: the user implements IHttpResponseDelegate and calls into IHttpRequestDelegate.
The client implementation will internally and transparently handle DNS resolution, manage a connection pool, and deal with request pipelining/keep-alive.
You will be able to test client/server code directly together without any sockets or IPC.
Does that give you a better picture? Let me know if I can elaborate or if I'm missing anything. :)
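To make the "reversed interfaces, no sockets" idea concrete, here is a minimal sketch. The delegate shapes are simplified to strings (Kayak's real interfaces take request/response head structures and body producers, and the exact signatures may differ from what is shown); the point is only that a test can wire a client handler straight into a server delegate:

```csharp
using System;

// Server side: consumes a request, produces a response (simplified).
interface IHttpRequestDelegate
{
    void OnRequest(string head, string body, IHttpResponseDelegate response);
}

// Client side: consumes a response (simplified).
interface IHttpResponseDelegate
{
    void OnResponse(string head, string body);
}

// A trivial "server" app that echoes the request body.
class EchoApp : IHttpRequestDelegate
{
    public void OnRequest(string head, string body, IHttpResponseDelegate response)
        => response.OnResponse("200 OK", body);
}

// A trivial client-side handler that records what came back.
class RecordingResponse : IHttpResponseDelegate
{
    public string Head, Body;
    public void OnResponse(string head, string body) { Head = head; Body = body; }
}

class Program
{
    static void Main()
    {
        // No sockets, no IPC: the "client" calls straight into the server delegate.
        IHttpRequestDelegate server = new EchoApp();
        var response = new RecordingResponse();
        server.OnRequest("GET /", "hello", response);
        Console.WriteLine(response.Head + ": " + response.Body);
    }
}
```

Because both sides speak the same pair of interfaces, the same app object can sit behind a real socket listener in production and behind a direct call in a unit test.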
This is interesting, but I wonder if each of those things should be
"optional", in the sense that there should be a way to use the client
library so that it does not attempt to provide those services.
Skipping DNS resolution should be straightforward if the user provides
an IP endpoint, but pooling and keep-alive... my gut feeling is that
there should be a way to disable them.
I would even say that they should be implemented "later", after a
stable (and already useful) HTTP client release.
This is in the "minimalist and barebones" spirit of Kayak, which for me is
a framework that strives to be a good foundation that integrates well
with whatever the user already has...
Or maybe I am not getting what "connection pooling" means in this context.
I think of a connection as an open and connected socket, and I don't
see how a connected socket can be recycled in a pool.
> You will be able to test client/server code directly together without any sockets or IPC.
Now *this* is a really cute idea!
Cheers,
Massi
On Wed, Aug 10, 2011 at 7:06 PM, Benjamin van der Veen
<b...@bvanderveen.com> wrote:
> The client implementation will internally and transparently handle DNS resolution, manage a connection pool, and deal with request pipelining/keep-alive.
Ok, now I think I get it: pooling is HTTP 1.1 specific; it happens
when the connection stays open and can be reused for other requests.
Maybe you could do it like this: first of all, model a sort of "HTTP end
point", an object that "represents the server".
You use it to make requests passing a response handler and the handler
will be called appropriately when the server answers (the usual Kayak
idiom, but as you said in reverse).
Creating an "HTTP end point" can involve DNS requests, but does not
involve creating sockets.
Then, when a request is issued, the end point creates a socket as
needed, possibly reusing an existing one for HTTP 1.1.
Of course (and this is what I was wondering about thinking of
"pooling") the socket can be reused only inside the *same* HTTP end
point.
I mean, there is no global "client side connection pool" (in contrast
to the usual pooling, like in thread pools or for "db connection
pools"), each HTTP end point is a "mini-pool" of its own.
Then an end point should expose an API to control how keep-alive is
handled and when a 1.1 connection will be closed (maybe with a "close
now" option, even without issuing a request).
The end point will of course create the socket on the fly for each
request that is issued when no socket is active.
Probably the "final" issue to fix would be how to handle multiple
requests: should the end point open more connections (sockets), or
should it serialize the requests?
In this case too, it should probably expose an API to control its behavior.
Maybe the sensible default would be opening a new socket for 1.0
requests and queuing them on the existing socket for 1.1 requests,
unless the user asks otherwise.
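The end-point idea above could be sketched roughly like this. Everything here is hypothetical: the HttpEndpoint class, the string-based request/response types, and the string stand-ins for sockets are invented for illustration and are not part of Kayak:

```csharp
using System;
using System.Collections.Generic;

// Client-side response handler (simplified to strings for the sketch).
interface IHttpResponseDelegate
{
    void OnResponse(string head, string body);
}

class RecordingResponse : IHttpResponseDelegate
{
    public string Head, Body;
    public void OnResponse(string head, string body) { Head = head; Body = body; }
}

// Hypothetical "HTTP end point": represents one server, owns its own mini-pool.
class HttpEndpoint
{
    readonly string host;
    readonly Queue<string> idleSockets = new Queue<string>(); // stand-ins for pooled connections

    public HttpEndpoint(string host)
    {
        // DNS resolution would happen here, once; no socket is created yet.
        this.host = host;
    }

    public void Request(string head, IHttpResponseDelegate response, bool keepAlive = true)
    {
        // Reuse an idle 1.1 connection if one exists, else "open" a new one.
        var socket = idleSockets.Count > 0 ? idleSockets.Dequeue()
                                           : "socket-to-" + host;
        // ... write the request on the socket, read the response ...
        response.OnResponse("200 OK", "(body)");
        if (keepAlive)
            idleSockets.Enqueue(socket); // connection stays open for the next request
        // else: the socket would be closed here (HTTP 1.0 behavior)
    }

    public void CloseNow()
    {
        // The "close now" option: drop idle connections without issuing a request.
        idleSockets.Clear();
    }
}
```

Note that the pool lives inside the endpoint, matching the observation above: there is no global client-side connection pool, just one mini-pool per server.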
Just my two cents "thinking aloud" around midnight :-)
Keep up the good work!
Massi
I like the idea of being able to do client<->server tests without any sockets or IPC, I'm just not sure if it's worth it for you to spend effort on a proper HTTP client (one that does DNS resolution, connection pooling, etc). Have you looked at the new HttpClient that we (Microsoft) have on Codeplex and NuGet (http://nuget.org/List/Packages/HttpClient)?