Receiving multiple replies in event bus.


赵普明

Jun 19, 2012, 4:13:47 AM
to ve...@googlegroups.com
Hi :

I'm writing a server that needs to accept a request, dispatch it to multiple remote client servers, collect their results, and send back a summary result. The whole process needs to finish within 200 ms. I came up with two solutions, but each has its own problem:

1. Have one front-end verticle and multiple beacon verticles, each communicating with one remote client server. The front end collects the results from each beacon, sums them up, and returns the summary.
2. Have one front-end verticle and a single beacon verticle that communicates with all remote client servers.

The first solution seems natural and efficient, until I found that an event bus message's reply handler is invoked only once, so all the other results are ignored.

I can emulate this dispatch/collect process using no-reply messages: the front end sends a "request" message, each beacon sends back a "response" message, and the front end either checks whether all responses have been collected or uses a timer and takes whatever has arrived by then. But this seems very complicated.
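The counting-and-timer bookkeeping described above can be sketched like this (plain illustrative JavaScript, not the vert.x API; `makeCollector` is a hypothetical helper name):

```javascript
// Hypothetical helper (not part of the vert.x API) illustrating the
// dispatch/collect bookkeeping: count responses, finish early when all
// have arrived, otherwise hand over whatever arrived when the timer fires.
function makeCollector(expected, timeoutMs, done) {
  var results = [];
  var finished = false;
  var timer = setTimeout(function () {
    if (!finished) {
      finished = true;
      done(results); // timed out: summarise the partial results
    }
  }, timeoutMs);
  return function onResponse(result) {
    if (finished) return; // ignore stragglers arriving after the deadline
    results.push(result);
    if (results.length === expected) {
      finished = true;
      clearTimeout(timer);
      done(results); // all responses collected: summarise and reply
    }
  };
}
```

Each beacon's "response" message would be fed to the returned handler; `done` runs exactly once, either with all results or with the partial set at the deadline.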

So my questions are:

1. Is there (or will there be) support for multiple-reply messaging?
2. Even if it were supported, I would still need a timeout for it :)

The second solution seems workable, but I'm not sure whether it will scale well. Our set of remote client servers is relatively stable, so keep-alive connections and a connection pool are needed. I haven't tested whether one verticle talking to many remote servers is OK for this scenario.

Which solution do you suggest? Or is there a better one?

Thanks :)

Tim Fox

Jun 19, 2012, 6:28:36 AM
to ve...@googlegroups.com
If I understand you correctly, you want to send a message out to n recipients, have them process it, then when all the results are received (or a timeout occurs), do something.

We don't support this in the event bus out of the box, but it's something that should be pretty simple to layer on top of the current API using the work queue (see the busmods manual).

Basically set up the work queue as normal, and implement your processors.

Then when sending the message, let's say you want to send it to 10 recipients (in JS):

(Off the top of my head  - probably has bugs since haven't run it)

var n = 10;
var eb = vertx.eventBus;

function sendWork(work) {
  var replies = 0; // shared by every reply handler - aren't closures cool?
  var done = false;
  for (var i = 0; i < n; i++) {
    eb.send('address-of-my-work-queue', work, function (reply) {
      if (!done && ++replies == n) {
        done = true;
        // All replies have returned - do something
      }
    });
  }
  vertx.setTimer(1000, function () {
    if (!done) {
      done = true;
      // Timeout after 1 second - do something with what has arrived so far
    }
  });
}

Something like that.

Tim Fox

Jun 19, 2012, 6:34:36 AM
to ve...@googlegroups.com
I should add - you will need to use the work-queue from master since with the one in 1.0.1.final, replies from processors won't make it back to the sender.

Tim Fox

Jun 19, 2012, 7:16:29 AM
to ve...@googlegroups.com
Actually - the multiple dispatch code could be easily rolled into the work-queue busmod so we have this functionality out of the box. I'll add a github issue

Jaime Yap

Jun 19, 2012, 5:58:44 PM
to ve...@googlegroups.com
On Tue, Jun 19, 2012 at 7:16 AM, Tim Fox <timv...@gmail.com> wrote:
Actually - the multiple dispatch code could be easily rolled into the work-queue busmod so we have this functionality out of the box. I'll add a github issue


I am interested in your opinion of the role of the work-queue busmod now that the default behavior of the event bus is to round-robin dispatch to handlers at a specific address. It seems that if the event bus behaves as advertised, and later gains extensible implementations for different load-balancing schemes, it would obviate the need for an explicit work-queue busmod.

Or is my mental model of the new event bus changes incorrect?

Thanks!
-Jaime
 
--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To view this discussion on the web, visit https://groups.google.com/d/msg/vertx/-/gCdeYGfuGTMJ.

To post to this group, send an email to ve...@googlegroups.com.
To unsubscribe from this group, send email to vertx+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/vertx?hl=en-GB.

赵普明

Jun 19, 2012, 11:01:51 PM
to ve...@googlegroups.com
Thanks Tim, I'll try that later and report back :)

On Tuesday, June 19, 2012 at 6:28:36 PM UTC+8, Tim Fox wrote:

Tim Fox

Jun 20, 2012, 2:39:41 AM
to ve...@googlegroups.com
The work queue has somewhat different semantics from simple round robin to a single subscriber.

With simple round robin, messages are sent to a subscriber whether it is ready or not.

With the work queue, a worker that is ready is available to receive a message. Once it receives a message it is no longer ready and cannot receive any more messages until that message is acked (replied to).
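A toy model of those ready/ack semantics (illustrative only, not the actual busmod code; the `WorkQueue` class and its method names are made up for the sketch):

```javascript
// Toy model of work-queue semantics: a worker receives at most one
// message at a time, and only becomes eligible for another after acking.
function WorkQueue() {
  this.pending = []; // messages waiting for a ready worker
  this.ready = [];   // workers available to take a message
}

WorkQueue.prototype.send = function (msg) {
  var worker = this.ready.shift();
  if (worker) {
    this.deliver(worker, msg);
  } else {
    this.pending.push(msg); // no ready worker: queue the message
  }
};

WorkQueue.prototype.register = function (worker) {
  var msg = this.pending.shift();
  if (msg !== undefined) {
    this.deliver(worker, msg);
  } else {
    this.ready.push(worker); // nothing queued: mark the worker ready
  }
};

WorkQueue.prototype.deliver = function (worker, msg) {
  var self = this;
  // The worker gets the message plus an ack callback; it cannot receive
  // another message until it acks, which re-registers it as ready.
  worker(msg, function ack() {
    self.register(worker);
  });
};
```

With one registered worker, a second `send` sits in `pending` until the first message is acked, which is exactly the back-pressure that plain round robin lacks.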

The work queue optionally does persistence, and will probably do multiple dispatch too (as described recently in a different thread).

I'm not a big fan of pushing too much functionality into the core. KISS. Then layer more complex stuff on top of the simple stuff using busmods.