Multiple send messages in on_message callback function

vasakris

Oct 22, 2014, 6:17:43 AM10/22/14
to webso...@googlegroups.com
Hi

I have a single-threaded WebSocket++ application. When the server receives a message, it should send three messages in sequence, like this:

con.send("A");
sleep(5);
con.send("B");
sleep(5)
con.send("C");

But all three messages reach the client at the same time, in the same packet.

Why is it designed this way? Is it to keep messages synchronous?


Peter Thorson

Oct 22, 2014, 7:36:14 AM10/22/14
to vasakris, webso...@googlegroups.com
WebSocket++ with the Asio transport is an asynchronous application. Using sleep or any other blocking call in a handler will prevent the library from doing any work, including sending messages, receiving messages, and accepting new connections.

What happens is that the first call to send adds the message to the outgoing queue. Then the sleep causes the entire application to sit and do nothing for 5 seconds. The second send adds a second message to the outgoing queue. Then we block again. Finally, the third message is added to the queue and your handler returns. Only once the handler returns does the library itself get a chance to run. It sees that there are three messages in the queue and sends them all at once.
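To make this concrete, here is a toy model in plain standard C++ (no WebSocket++ involved; `ToyConnection` and `run_blocking_handler` are invented names for illustration). `send()` only queues, and everything queued during the handler goes out in a single write after the handler returns:

```cpp
#include <string>
#include <vector>

// Toy model of the behavior described above: send() only enqueues; the
// queued messages are written out in one batch after the handler returns.
struct ToyConnection {
    std::vector<std::string> out_queue;           // messages queued by send()
    std::vector<std::vector<std::string>> writes; // each element = one write ("packet")

    void send(const std::string& msg) { out_queue.push_back(msg); }

    // The "library" flushes everything queued so far in a single write.
    void flush() {
        if (!out_queue.empty()) {
            writes.push_back(out_queue);
            out_queue.clear();
        }
    }
};

// Mimics the handler in the question: three sends (sleeps elided, since
// sleeping would not change the outcome -- the library still cannot run).
inline std::vector<std::vector<std::string>> run_blocking_handler() {
    ToyConnection con;
    con.send("A");  // sleep(5) would go here
    con.send("B");  // sleep(5) here too
    con.send("C");
    con.flush();    // library only gets to write after the handler returns
    return con.writes;
}
```

Running this produces one write containing all three messages, which matches the single-packet behavior observed on the wire.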

If you want the behavior of sending multiple messages with a delay you have two options. One is to run your blocking logic in a separate thread. This way while your code is blocking in sleep the websocket++ network thread can still work on sending messages. A second option if you don't want threads is to use a timer or interrupt handler to yield control back to the library but schedule a followup handler to run after the library has had a chance to do some work.

The interrupt handler will call back as soon as possible. The timer handler will wait approximately the number of milliseconds you specify before calling back.

A callback with a timeout is effectively the asynchronous equivalent of a synchronous sleep. If in your case "sleep" is standing in for a blocking calculation rather than literally waiting some wall time, then you can use the interrupt handler to do one calculation, then send, then interrupt yourself to do the second one, and so on. The interrupt handler behaves like an async timer with a timeout of 0. This will ensure that the results of those calculations get sent as they are generated rather than in a batch at the end.

Note: blocking calculations in handlers will also prevent reads of incoming messages and processing of new connections. In most cases involving blocking calculations a second processing thread would be a much better idea.

Docs and signature for set_timer

Basically, in the message handler you would send message A, then set a timer for 5000ms to call a function that sends message B, which in turn sets another 5000ms timer to call a function that sends C.
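The chained-timer idiom can be sketched with a minimal single-threaded timer loop (plain standard C++ with a simulated clock standing in for Asio; `TimerLoop` and `run_chained_timers` are illustrative names, not WebSocket++ API):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Minimal single-threaded timer loop (simulated clock, not Asio) to
// illustrate the chained-timer idiom: each callback sends one message
// and schedules the next one.
struct TimerLoop {
    using Entry = std::pair<std::uint64_t, std::function<void()>>;
    struct Later {
        bool operator()(const Entry& a, const Entry& b) const { return a.first > b.first; }
    };
    std::priority_queue<Entry, std::vector<Entry>, Later> timers;
    std::uint64_t now_ms = 0;

    void set_timer(std::uint64_t delay_ms, std::function<void()> cb) {
        timers.push({now_ms + delay_ms, std::move(cb)});
    }
    void run() {
        while (!timers.empty()) {
            Entry e = timers.top();
            timers.pop();
            now_ms = e.first;  // jump the simulated clock forward
            e.second();
        }
    }
};

// Returns (send time in ms, message) pairs, showing the 5-second spacing.
inline std::vector<std::pair<std::uint64_t, std::string>> run_chained_timers() {
    std::vector<std::pair<std::uint64_t, std::string>> sent;
    TimerLoop loop;
    auto send = [&](const std::string& m) { sent.push_back({loop.now_ms, m}); };
    // The message handler sends A immediately, then chains B and C.
    send("A");
    loop.set_timer(5000, [&] {
        send("B");
        loop.set_timer(5000, [&] { send("C"); });
    });
    loop.run();
    return sent;
}
```

With the real library the same shape applies, except `set_timer` is called on the endpoint and each callback also does the actual `send` on the connection.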
--
You received this message because you are subscribed to the Google Groups "WebSocket++" group.
To unsubscribe from this group and stop receiving emails from it, send an email to websocketpp...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

vasakris

Oct 22, 2014, 12:36:55 PM10/22/14
to webso...@googlegroups.com
Thank you so much, Peter. I was struggling with creating a separate thread; the timer solution was easy and simple.

I just modified something like this

server.set_timer(1000, bind(&sendA,hdl));
server.set_timer(5000, bind(&sendB,hdl));
server.set_timer(10000, bind(&sendC,hdl));

where the sendA, sendB, and sendC functions handle sending the respective messages. It worked as expected; I am now getting separate packets :)

vasakris

Oct 30, 2014, 6:17:29 AM10/30/14
to webso...@googlegroups.com
Hi

I am facing one more problem now.

My code looks something like this:
################################################
on_msg
{
process_request(msg);
}

process_request(msg)
{
//some logic
endpoint.send(response)
response_queue.remove(response);
if(!response_queue.isempty())
process_request(msg)
}
#################################################


Hence process_request sends multiple responses and they all go in the same packet. I was thinking I would have to write something like the following. How should I go forward?
################################################

asio_service ioThread;
//run ioThread in a separate thread

on_msg
{
ioThread.set_timer(bind(&process_request(msg)));
}

process_request(msg)
{
//some logic
endpoint.send(response)
response_queue.remove(response);
if(!response_queue.isempty())
process_request(msg)
}
#################################################



Peter Thorson

Oct 30, 2014, 9:15:10 AM10/30/14
to vasakris, webso...@googlegroups.com
I guess firstly… can you confirm that when you say “same packet” you are referring to TCP packets and not WebSocket messages, correct?

If so, I am curious why it is important that your messages be sent in separate TCP packets. WebSocket++ aggregates multiple small WebSocket messages going to the same destination at the same time into single TCP packets because it drastically reduces framing overhead on the wire and the CPU time used by the endpoint.

——

If your goal is to space out responses in time (as in your original example) using timers is a good way. If your goal is to improve latency of response (i.e. if you have 10 requests to process and you want to send a response before processing the second response) then what you need is to control the total runtime of your message handler. Responses won’t be written out to the wire until after the handler that sends the message returns. Some strategies for doing this:

1. If an individual request might take a long time to process and you will have multiple connections then your only sane choice is to use a background thread for processing (see broadcast_server for an example).

2. If each request is fairly short, but 10 requests might be too long to wait then you want your handlers to process requests in batches rather than a loop that does all of them. Example:

on_msg(hdl,msg)
{
request_queue.push(msg);
process_batch(hdl);
}

on_interrupt(hdl)
{
process_batch(hdl);
}

process_batch(hdl)
{
// send some finite number of responses based on expected run time of each
// response and desired latency. Adjusting batch size will affect minimum and 
// average latency and server resource usage. This example uses a batch size 
// of one for simplicity. You should test with your data and workloads what the
// best batch size would be.
response = actually_process_request(request_queue.front());
endpoint.send(response)
request_queue.pop();
// instead of calling process_request inline which will result in extending the
// run time of the handler, we use an interrupt to yield control back to the main
// event loop but request that this connection gets its interrupt handler called
// as soon as the library has had a chance to process some network data.
if(!request_queue.empty())
endpoint.interrupt(hdl);
}
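Here is a self-contained sketch of the same pattern in plain standard C++ (a toy loop stands in for the library's event loop; `BatchServer` is an invented name, and `interrupt()` here just sets a flag the loop checks). Each `process_batch` call handles one request, and the loop gets a turn, and a separate write, between batches:

```cpp
#include <queue>
#include <string>
#include <vector>

// Plain-C++ sketch of the interrupt pattern in option 2: each handler
// invocation processes one request, sends its response, and requests an
// interrupt, so the event loop (and the writes) run between batches.
struct BatchServer {
    std::queue<std::string> request_queue;
    std::vector<std::string> pending;             // queued by send()
    std::vector<std::vector<std::string>> writes; // one element per loop turn
    bool interrupted = false;

    void send(const std::string& r) { pending.push_back(r); }
    void interrupt() { interrupted = true; }

    void process_batch() {  // batch size of one, as in the example above
        if (request_queue.empty()) return;
        send("response:" + request_queue.front());
        request_queue.pop();
        if (!request_queue.empty()) interrupt();  // ask to be called again
    }

    // Toy event loop: queue all requests, run one batch as if from on_msg,
    // then keep running the "interrupt handler", flushing pending writes
    // after each handler returns.
    std::vector<std::vector<std::string>> run(const std::vector<std::string>& reqs) {
        for (const std::string& r : reqs) request_queue.push(r);
        process_batch();            // first batch, from the message handler
        writes.push_back(pending);
        pending.clear();
        while (interrupted) {       // each interrupt turn gets its own write
            interrupted = false;
            process_batch();        // the on_interrupt handler
            writes.push_back(pending);
            pending.clear();
        }
        return writes;
    }
};
```

Because each handler return yields to the loop, three queued requests produce three separate writes instead of one coalesced write, which is exactly the latency behavior this option is after.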

3. A third option, that might be better for a busy server, especially one with lots of clients but not as many messages per client, would be to approximate a worker thread using a perpetual timer. Benefits of this method are that you can use one timer and queue for all connections which reduces overhead. Drawbacks are that requests will only be processed at specified intervals. Example:

main() {
// sometime before accepting requests
endpoint.set_timer(1000, process_batch);
}

on_msg(hdl,msg)
{
global_request_queue.push({hdl,msg});
}

process_batch() {
// process a batch of requests if there are any
if (!global_request_queue.empty()) {
std::tie(hdl,msg) = global_request_queue.front();
response = actually_process_request(msg);
endpoint.send(hdl,response);
global_request_queue.pop();
}
// reset the timer for the next interval
endpoint.set_timer(1000, process_batch);
}

Finally, if you have a really compelling reason to push out bunches of WebSocket messages on the same connection in distinct TCP packets with low latency between messages, the only way to do that is going to be me providing an option to disable message coalescing.

vasakris

Oct 31, 2014, 2:57:00 AM10/31/14
to webso...@googlegroups.com, vasanth....@gmail.com
Thank you so much. Yes, by single packet I was referring to a WebSocket packet carrying multiple message payloads.


I think for my application I should implement a background thread like the one in broadcast_server. Thank you :)