Hi boys and girls,
So, if you remember, a couple of days ago we talked about making the pipeline “resumable”, i.e. making it so the pipeline can stop execution partway through and simply pick up where it left off when executed a second time.
As far as I can tell, my code works and openbastard doesn’t have any issues with it, so it’s in. It’s not been integrated into the aspnet module yet; I’m waiting to get the ASP.NET integration for openbastard working before I enable it. All this is RC / RTM work; what follows is for 2.1.
Now the interesting challenge is the following. The first step of execution can simply try to match the URI (i.e. execute until KnownStages.IUriResolving or whatnot) and return, and the http module will know whether the URI is something OR should process or not.
If yes, the module does the rewrite, and the handler then executes the rest of the pipeline.
Now, if we think about async handlers, there’s a slight problem. We don’t know for sure which method will be executed until it’s time to execute the operation, and by then the operation will already have received all of its data from the request. That means the reading codec will still be synchronous. I don’t think that’s a problem, because you can stream the content yourself if and when required, for codecs that support streaming (i.e. multipart when using IEnumerable&lt;IHttpMultipartEntity&gt;, and application/octet-stream when using any stream of data).
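To make the streaming point concrete, here’s a rough sketch of what consuming a multipart body lazily could look like. The handler, the Save method, and the reduced one-member interface are all made up for illustration; IHttpMultipartEntity is the interface mentioned above, stubbed down here so the snippet stands alone.

```csharp
using System.Collections.Generic;
using System.IO;

// Stub for illustration: the real interface has more members.
public interface IHttpMultipartEntity { Stream Stream { get; } }

public class UploadHandler // hypothetical handler
{
    // The codec stays synchronous, but because it yields an enumerable,
    // each part can be consumed as you iterate instead of the whole
    // request body being buffered up front.
    public void Post(IEnumerable<IHttpMultipartEntity> parts)
    {
        foreach (var part in parts)
            Save(part.Stream); // process each part as it arrives
    }

    void Save(Stream s) { /* write to disk, etc. */ }
}
```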
Enough of the boring details. Now we have a few options when it comes to making the operation asynchronous.
Option 1: we manage the execution by queuing the request on the thread pool. This is the typical approach you see in other frameworks, where you annotate with an [Async] attribute. The issue is that the work items come from the same thread pool that ASP.NET processes requests on. You end up hitting the CPU in both cases, competing with ASP.NET for the same threads, and you can end up with thread-pool starvation, which kills ASP.NET performance completely. I just don’t think there’s a valid use case for this.
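A minimal sketch of what Option 1 amounts to, with the operation invocation and response writing replaced by stand-ins, since those names are made up:

```csharp
using System;
using System.Threading;

class Option1Sketch
{
    // Stand-ins for the real operation call and response writing.
    static object InvokeOperation() { return "result"; }
    static void WriteResponse(object r) { Console.WriteLine(r); }

    static void Main()
    {
        var done = new ManualResetEvent(false);
        // The operation is simply pushed onto the CLR ThreadPool -- the
        // same pool ASP.NET schedules requests on -- so under load the
        // "async" operations and incoming requests compete for the same
        // worker threads.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            WriteResponse(InvokeOperation());
            done.Set();
        });
        done.WaitOne();
    }
}
```

Nothing here actually frees a thread; the work just moves from one pool thread to another, which is why this buys you nothing under load.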
Option 2: we let the operation control the asynchronous call, i.e. a signature of the form IAsyncResult BeginGet(), with a matching EndGet() method. Considering the only valid reason to want async is when calling APIs that use I/O completion threads rather than thread-pool threads, this seems like the correct approach.
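As a sketch of the Option 2 shape, here’s a hypothetical handler following the standard .NET Begin/End pattern over a real I/O-bound API (the handler name, the callback plumbing, and the URL are assumptions; the framework would supply the AsyncCallback and later call EndGet):

```csharp
using System;
using System.IO;
using System.Net;

public class FeedHandler // hypothetical handler
{
    WebRequest _request;

    // The framework calls BeginGet with its own callback; the request
    // completes on an I/O completion port, not a worker thread, so no
    // thread-pool thread is blocked while waiting.
    public IAsyncResult BeginGet(AsyncCallback callback, object state)
    {
        _request = WebRequest.Create("http://example.org/feed");
        return _request.BeginGetResponse(callback, state);
    }

    // Called by the framework once the IAsyncResult signals completion.
    public string EndGet(IAsyncResult result)
    {
        using (var response = _request.EndGetResponse(result))
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}
```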
Option 3 is to create a type for async handling that somehow encapsulates the begin and end, but at that point I wonder if it isn’t simply over-engineering.
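For completeness, the Option 3 shape might look something like this; the interface name and members are entirely made up:

```csharp
using System;

// Hypothetical: instead of exposing BeginGet/EndGet directly, the
// operation returns an object that owns both halves of the async call.
public interface IAsyncOperation<T>
{
    IAsyncResult Begin(AsyncCallback callback, object state);
    T End(IAsyncResult result);
}
```

The upside would be that the pair can’t get out of sync; the downside is an extra type and interface for what Option 2 already expresses with a naming convention.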
Comments?
Seb
Well, any F# will have to stay out of the core to preserve .NET 2.0 compatibility. Other than that, I have no major issues with it; it just can’t be used for the core API.
Seb