I see that async client requests are of the form "response_ = client_.post(request_, body, callback)". This seems to indicate that the HTTP call is performed synchronously, though the body is processed asynchronously. It seems this fails to hide the latency of performing the actual HTTP request and getting back the status code; am I missing something?
--
You received this message because you are subscribed to the Google Groups "The C++ Network Library" group.
Visit this group at https://groups.google.com/group/cpp-netlib.
I see. The issue I see with this is that, in order to keep any thread from blocking, I either need a notification when something like the status code becomes available, which hides the TCP and initial HTTP handshake latency, or I need some thread blocking at some point waiting for it to become available, which seemingly defeats the purpose of using Boost.Asio.
This ties in to another difficulty I've had with the library: it requires a thread pool, which is out of line with how Boost.Asio typically operates.
I see you guys are going to rework the API. I'd suggest something like void client::post(request, callback), where the callback has a form like void (boost::system::error_code const &, status_code, response), and a method on response can fetch blocks of the body in a similar callback fashion. This would mean no thread ever blocks waiting for an HTTP response, and it falls more in line with Asio's callback style.
It would also remove the need for a specialized thread pool, which I highly recommend. People familiar with Asio understand that you don't do slow things inside a handler, and they have their own I/O dispatching mechanism, which may well be a thread pool; as it stands, the library just forces a new dependency on its users.
I'd also suggest removing the shared_ptr wrapping boost::asio::io_service and instead holding a reference, leaving the library consumer to choose how the lifetime of the io_service is managed. When I switched to using this library I had to replumb about 10 classes just to add the shared_ptr to the signature, which was purely vestigial in my case.
On Fri, Jan 22, 2016 at 2:39 PM Colin LeMahieu <clem...@gmail.com> wrote:

> I see. The issue I see with this is in order to eliminate any thread from blocking I either need a notification when something like the status code is available, which hides the tcp and initial http handshake latency, or I need to have some thread at some time blocking waiting for it to be available which seemingly defeats the purpose of using boost ASIO.

So on the client side, this is true -- but there is a way out. I was working on a helper called "when_ready(...)" which allows you to do something like this:

    when_ready(client.get(...), [](response& r, system::error_code ec) {
      // do something with r
    });

This "should just happen", but I got blocked by some other issues which I think have been fixed since. Will something like that be more acceptable to you?
> This kind of ties in to another difficulty I've had with the library how it requires a thread pool which is out of line with how boost ASIO typically operates.

You mean, for the HTTP server part, yes?

> I see you guys are going to rework the API, I'd suggest something like void client::post(request, callback) where the callback is something of the form void (boost::system::ec &, status_code, response) and a method on response can fetch blocks of the body in similar callback fashion. This would mean no thread is ever blocking on getting an http response and it falls more in line with the style of ASIO using callbacks.

It's already there. ;)
- response_ = client_.get(request_, callback)
- Perform an HTTP GET request, and have the body chunks be handled by the callback parameter. The signature of callback should be the following: void(iterator_range<char const *> const &, boost::system::error_code const &).
> It would also remove the need for a specialized threadpool which I highly recommend. I feel people familiar with ASIO understand you don't do slow things inside the handler and have their own IO dispatching mechanism which may be a thread pool but as is, it is just forcing a new dependency on the library user.

So the thread pool is meant to handle the non-networking part of the application logic in the server. This means you're isolating the network events from the non-network events, and you can write blocking code there (and tune the number of threads running concurrently). You can even do admission control in the handlers to start rejecting requests that come in while the pool is busy (admittedly that feature isn't built in, but it's easy to implement).
> I'd also suggest removing share_ptr wrapping the boost::asio::io_service and instead hold a reference leaving the library consumer to choose how the lifetime of io_service is managed. When I switched to using this library I had to replumb about 10 classes just to add the shared_ptr to the signature which was purely vestigious in my case.

Interesting. I think the reason we took a shared_ptr instead is so that we can tie the lifetime of the io_service to the operations still pending on it. So imagine you create a client object and do a post, then the client goes out of scope before the post has finished -- the shared pointer keeps the io_service alive until all the operations that need it are done. This is much easier to get wrong if we took a reference to an optional io_service -- note that not all users can provide an io_service they control, because not everyone will already be using Boost.Asio.

Thanks for the feedback Colin! Is there anything else I can help with?
On Thursday, January 21, 2016 at 9:55:36 PM UTC-6, Dean Michael Berris wrote:

> On Fri, Jan 22, 2016 at 2:39 PM Colin LeMahieu <clem...@gmail.com> wrote:
>
>> I see. The issue I see with this is in order to eliminate any thread from blocking I either need a notification when something like the status code is available, which hides the tcp and initial http handshake latency, or I need to have some thread at some time blocking waiting for it to be available which seemingly defeats the purpose of using boost ASIO.
>
> So on the client side, this is true -- but there is a way out. I was working on a helper called "when_ready(...)" which allows you to do something like this:
>
>     when_ready(client.get(...), [](response& r, system::error_code ec) {
>       // do something with r
>     });
>
> This "should just happen", but I got blocked by some other issues which I think have been fixed since. Will something like that be more acceptable to you?
>
>> This kind of ties in to another difficulty I've had with the library how it requires a thread pool which is out of line with how boost ASIO typically operates.
>
> You mean, for the HTTP server part, yes?
>
>> I see you guys are going to rework the API, I'd suggest something like void client::post(request, callback) where the callback is something of the form void (boost::system::ec &, status_code, response) and a method on response can fetch blocks of the body in similar callback fashion. This would mean no thread is ever blocking on getting an http response and it falls more in line with the style of ASIO using callbacks.
>
> It's already there. ;)
>
> - response_ = client_.get(request_, callback)
> - Perform an HTTP GET request, and have the body chunks be handled by the callback parameter. The signature of callback should be the following: void(iterator_range<char const *> const &, boost::system::error_code const &).
This is the one I use; it processes the HTTP body chunks. I was expecting something similar before the body is processed, though, where you would be given the header and status code as soon as they're available. Rather than get(...) taking a callback for body chunks, it would take a callback with the header and status code; inside that callback you could then request the body with another callback to process the body chunks. I haven't tested this: is the callback invoked if the HTTP response has no body, e.g. for a 500?
>> It would also remove the need for a specialized threadpool which I highly recommend. I feel people familiar with ASIO understand you don't do slow things inside the handler and have their own IO dispatching mechanism which may be a thread pool but as is, it is just forcing a new dependency on the library user.
>
> So the threadpool is meant to handle the non-networking part of the application logic in the server. This means you're isolating the network events from the non-network events, and you can start writing blocking code there (and tune the number of threads you have concurrently happening). You can even do admission control on the handlers to start rejecting requests that come in while the pool is busy (admittedly that feature isn't built-in but easy to implement).

Agreed, blocking inside the Asio threads is never a good idea, though by explicitly using a pool cpp-netlib requires the library user to give this guarantee in one specific way, which may not be the way the user wants. For instance, what if their HTTP server just proxies the request off to another Asio call? Maybe it doesn't do I/O processing, or already does asynchronous disk I/O inside the HTTP handler. Any of these cases makes the thread pool work against how the library user wants to operate. In the end they could always use a thread pool of their own, but it should be their choice. Sometimes the operations performed inside a callback require special thread-local setup, such as Windows COM initialization. A user may already have COM initialized in their own thread pool servicing the io_service proactor; with this new thread pool they're required to use, they now have to make sure those threads are COM-initialized, or thunk calls back over to the threads they already had running the io_service.
>> I'd also suggest removing share_ptr wrapping the boost::asio::io_service and instead hold a reference leaving the library consumer to choose how the lifetime of io_service is managed. When I switched to using this library I had to replumb about 10 classes just to add the shared_ptr to the signature which was purely vestigious in my case.
>
> Interesting. I think the reason we took a shared_ptr instead is so that we can tie the lifetime of the io_service to the operations that were still pending on that io_service. So imagine when you create a client object and you do a post, then the client goes out of scope but the post isn't finished yet -- the shared pointer keeps the io_service alive until all the operations that need it are done. This is much easier to get wrong if we took a reference to an optional io_service -- note not all users need to provide an io_service they control, because not everyone will be using Boost.Asio already.
>
> Thanks for the feedback Colin! Is there anything else I can help with?

No that's good, thanks for taking a look!
Hi Dean,
Thank you for your swift response.
I did not find the entry about "ready" in the documentation.
Do you have any plans to provide a fully asynchronous client in the future?
Best.