[Eventmachine-talk] Need Example using Stomp Protocol


henry74

Jun 24, 2008, 11:29:03 PM
to eventmac...@rubyforge.org
I've been searching far and wide for an example of using the latest EventMachine and the Stomp protocol.

I would like to set up a loop which subscribes to a Stomp queue and runs a procedure when a message is added to the queue. I don't want it to block (so I'd like to use the deferrable pattern). Can someone provide an example of using the built-in Stomp protocol within an EM loop?

Thanks so much,
Henry

Aman Gupta

Jun 25, 2008, 6:22:01 PM
to eventmac...@rubyforge.org
Here's a simple stomp client example: http://p.ramaze.net/1716
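In case the paste goes away, the shape of that client is roughly the following (an untested sketch; the host, port, credentials, and queue name are placeholders):

```ruby
require 'rubygems'
require 'eventmachine'

# Minimal STOMP client using EM's built-in protocol module.
# Host/port, login details, and the queue name are placeholders.
class SimpleStomp < EventMachine::Connection
  include EM::Protocols::Stomp

  def connection_completed
    # Send the STOMP CONNECT frame once the TCP connection is up
    connect :login => 'guest', :passcode => 'guest'
  end

  def receive_msg msg
    if msg.command == "CONNECTED"
      subscribe '/queue/test'
    else
      # Called once per incoming MESSAGE frame
      p [:got, msg.header, msg.body]
    end
  end
end

EM.run {
  EM.connect 'localhost', 61613, SimpleStomp
}
```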

Out of curiosity, what server are you trying to connect to? ActiveMQ?

Aman

_______________________________________________
Eventmachine-talk mailing list
Eventmac...@rubyforge.org
http://rubyforge.org/mailman/listinfo/eventmachine-talk

henry74

Jun 25, 2008, 9:42:40 PM
to eventmac...@rubyforge.org
Thanks for the example. I'm connecting to stompserver (a simple Ruby implementation of a queue server using the Stomp protocol).

Aman Gupta

Jun 25, 2008, 10:02:32 PM
to eventmac...@rubyforge.org
I have a slightly updated version at http://p.ramaze.net/1717 with examples of how to subscribe to a topic and receive messages.

Ideally you'd want to use EM for the stomp server instead of the client, since the server needs to handle more connections and traffic... whereas the client only has one connection and can usually block while it's waiting for the next message.

  Aman Gupta

henry74

Jun 25, 2008, 10:11:28 PM
to eventmac...@rubyforge.org
I'm actually interested in using the stomp client with the deferrable pattern. The stompserver itself handles all the connections just fine - I'm not sure why I would need EM to run a stompserver, as it is quite passive and just does the following: accepts connections, requests to add data to a queue, and requests to retrieve data from a queue. I'm new to EM so I could definitely be missing something.

Logically it makes sense to use a stomp client with a callback so there is no blocking and an action can take place once a subscribed queue receives a message.  Am I thinking about it the wrong way?

Thanks for making an updated one - are there any examples using the deferrable module?

Aman Gupta

Jun 25, 2008, 10:22:26 PM
to eventmac...@rubyforge.org
Looks like I spoke too soon.. the ruby stompserver already uses EventMachine. An EM based server can handle more open connections with much better performance than its pure ruby thread/socket based counterpart.

EM has some good docs on Deferrables at http://eventmachine.rubyforge.org/files/DEFERRABLES.html
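The core idea behind Deferrable is small enough to sketch in plain Ruby. This is a simplification for illustration only, not EM's actual implementation:

```ruby
# A stripped-down illustration of the Deferrable idea: callbacks are
# stored until the operation succeeds, then fired with the result.
# Callbacks added after success fire immediately.
class TinyDeferrable
  def initialize
    @callbacks = []
    @status    = nil
  end

  def callback(&block)
    if @status == :succeeded
      block.call(*@args)   # already done: fire immediately
    else
      @callbacks << block  # not done yet: remember for later
    end
  end

  def set_deferred_status(status, *args)
    @status, @args = status, args
    @callbacks.each { |cb| cb.call(*args) } if status == :succeeded
    @callbacks.clear
  end
end

d = TinyDeferrable.new
results = []
d.callback { |msg| results << msg }        # registered before completion
d.set_deferred_status :succeeded, "hello"
d.callback { |msg| results << msg.upcase } # registered after: fires at once
results  # => ["hello", "HELLO"]
```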

The client example I posted is already non-blocking.. every time a new message arrives, receive_msg is triggered with the contents of that message. From there, you can call into your code to process the incoming message. What is your specific use-case that you think deferrable would be a good fit for? 

  Aman

henry74

Jun 25, 2008, 10:28:42 PM
to eventmac...@rubyforge.org
I'm using it for asynchronous messaging.  Message comes in, drop it on an inbound queue.  The return message is placed on an outbound queue which is being subscribed to with a callback set to send the message back to the original requester.  As soon as a message is placed in the outbound queue, it will immediately be picked up since the deferred object will "wake up" upon getting the message and call the appropriate added callback.

Using the deferrable pattern avoids blocking and affords immediate response as soon as a message is placed on the queue.

Aman Gupta

Jun 25, 2008, 11:01:15 PM
to eventmac...@rubyforge.org
I'm not quite sure I follow.. the messages are already arriving in a serial fashion, so there's no reason to put them into a specialized inbound queue. And whatever processing you need to perform on the incoming message (to generate the outgoing message) will block ruby and the event loop anyway.

  Aman  

henry74

Jun 25, 2008, 11:12:44 PM
to eventmac...@rubyforge.org
You're assuming that the loop placing messages on the queue is the same loop processing messages on the inbound queue. Consider a situation where many messages are coming in simultaneously from multiple sources. Placing them on a queue gives you several advantages:
  • You can create as many "worker" processes as you need to read messages off the queue. This solves potential scalability issues: as the number of messages increases, you can increase the number of processes pulling inbound messages off the queue and doing work on them.
  • There is no blocking, since placing a message on a queue is effectively instantaneous.
When work is completed on a particular message, the result is placed on an outbound queue. This allows responses to be sent back on a first-come, first-served basis. If a request comes in which requires a long-running process, it would be silly to block all other requests from the same requester until the original long-running process is finished. A different worker may have finished another request which came in on the inbound queue after the long-running one. Once it finishes, it drops the result on the outbound queue, which is being watched by a subscribe command that blocks until a message is received; the result is sent back even while the long-running process continues.
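In plain Ruby terms, the shape is something like this (Thread and Queue stand in for stomp clients and stompserver here, just to illustrate the flow):

```ruby
# Toy version of the inbound-queue / worker-pool / outbound-queue shape.
inbound  = Queue.new
outbound = Queue.new

# Three workers pull from the inbound queue concurrently.
workers = Array.new(3) do
  Thread.new do
    while (msg = inbound.pop) != :done
      outbound << "processed #{msg}"   # the "work" happens here
    end
  end
end

5.times { |i| inbound << i }   # producers never block on push
3.times { inbound << :done }   # one stop token per worker
workers.each(&:join)

results = []
results << outbound.pop until outbound.empty?
results.sort  # => ["processed 0", "processed 1", "processed 2", "processed 3", "processed 4"]
```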

I hope that clarifies my thinking.

Mark V

Jun 25, 2008, 11:47:36 PM
to eventmac...@rubyforge.org
On Thu, Jun 26, 2008 at 1:12 PM, henry74 <hen...@gmail.com> wrote:

If I may butt in. I'd like to understand this too.
By "worker" processes do you mean you create, say, 8 Ruby threads?
By queue do you mean an instance of the Ruby Queue class (rather than
some messaging system's queue)?
If not then you can probably disregard the following:

I had thought along similar lines in my application, except I had one
source of messages.
I expected the message processing to take a long time, so I created
Ruby threads and used these to process messages. After some wrestling I
got it to work, but discovered a bit more about Ruby green
threads, and that the suggested 'EM-way' was to only consider an
operation/action/event to be blocking if it relied on something
outside of my script and took _very_ long (seconds). So... I then
dumped the Ruby Thread/Queue idea and rewrote things so that everything
runs sequentially. Without the stats at hand, my recollection is that
the non-Thread/Queue version was noticeably faster!

I came away with the impression that if I was in your situation with
multiple sources of messages, which I will be at some point, I
should probably fire up n clients.
Is this a fair assessment of the suggested/recommended 'EM-way'?
I did make a mental note to ensure that processing message 'A' was
independent of data in any other message, which might be handled by
another instance of my client script. I'm thinking to loosen that and
only ensure a message is independent of messages coming from a
_different_ source - allowing me to have messages that depend on
earlier/later messages from the same source. Though I really think
independent messages will be easier to code for than conditionally
independent messages.
As it turns out, with hindsight, this has made my app more scalable,
since now I can readily run it among N machines.

Hopefully I haven't got things back to front :)

Mark

henry74

Jun 25, 2008, 11:59:29 PM
to mvy...@gmail.com, eventmac...@rubyforge.org
Queue in my context is the StompServer, not a Ruby Queue class. BUT queues are all the same in concept, so you can use what you want.

Worker processes would be spawned to handle incoming messages - they do not have to be Ruby green threads - you could actually use another EM loop watching the inbound queue and have it use EM.spawn to create worker processes which do work on the message.

I am not sure what you are referring to when you say "fire up n-clients". Are these "worker processes" or clients which can drop messages in the inbound queue?

With regards to independent vs. dependent messages - if you are using a single queue, then having multiple independent worker processes requires each message to be independent. If you want to process messages in order by client, then I'd suggest a queue for each client and a single worker process working on each queue. Then you do not have to figure out the ordering and reconstruction of incoming messages by user.
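A rough sketch of the per-client idea in plain Ruby (Ruby Queues standing in for stomp destinations; client names are made up):

```ruby
# One queue per client, one worker per queue: messages from the same
# client are processed strictly in order, and clients don't block each other.
client_queues = Hash.new { |h, k| h[k] = Queue.new }
log = Queue.new

clients = [:a, :b]
workers = clients.map do |name|
  q = client_queues[name]   # queue created here, in the main thread
  Thread.new do
    while (msg = q.pop) != :done
      log << [name, msg]    # record which client's message was handled
    end
  end
end

3.times { |i| client_queues[:a] << i }
3.times { |i| client_queues[:b] << i }
clients.each { |name| client_queues[name] << :done }
workers.each(&:join)

ordered = []
ordered << log.pop until log.empty?
# Per-client order is preserved regardless of interleaving:
a_order = ordered.select { |client, _| client == :a }.map(&:last)
b_order = ordered.select { |client, _| client == :b }.map(&:last)
a_order  # => [0, 1, 2]
```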

I hope that makes sense.  Don't you wish someone created a newsgroup where you could draw right into the message board?

Mark V

Jun 26, 2008, 12:11:20 AM
to henry74, eventmac...@rubyforge.org
On Thu, Jun 26, 2008 at 1:59 PM, henry74 <hen...@gmail.com> wrote:
> Queue in my context is the StompServer, not a Ruby Queue class. BUT queues
> are all the same in concept so you can use what you want.
>

OK I did have things a little back-to-front.
Thanks for the clarification.
Mark

Aman Gupta

Jun 26, 2008, 2:39:08 AM
to eventmac...@rubyforge.org
I think what you're looking for is EM.defer, which is different from a Deferrable. EM.defer uses a thread pool of 20 ruby threads for concurrent processing. Be aware, though: if you do any IO in one of these threads, all of ruby will block. Also, using threads carries a 20-40% performance penalty (because of rb_thread_select/rb_thread_schedule).
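For reference, EM.defer takes an operation proc that runs on a pool thread and a callback proc that runs back on the reactor thread with the operation's return value. A minimal sketch (the "heavy work" here is just a stand-in):

```ruby
require 'rubygems'
require 'eventmachine'

EM.run do
  # Runs in one of the pool threads; keep reactor calls out of here.
  operation = proc do
    (1..1_000).inject(:+)   # stand-in for slow, CPU-bound work
  end

  # Runs back on the reactor thread once the operation returns.
  callback = proc do |result|
    puts "result: #{result}"
    EM.stop
  end

  EM.defer(operation, callback)
end
```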

The alternative is to keep everything running in the single threaded event loop. If your processing involves network i/o (using EM::HttpClient2 to access a web service, or Asymy to access mysql), you can use Deferrable or Spawnable and avoid using threads.

Here's a simple example showing off an EM.defer and EM.spawn based stomp client worker: http://p.ramaze.net/1719. The two workers connect to StompServer and subscribe to the ThreadPoolWorker and SingleThreadWorker queues. A third stomp client connects and sends json packets to the two workers for processing.

  Aman Gupta

henry74

Jun 26, 2008, 8:55:30 AM
to eventmac...@rubyforge.org
Thanks for all the well-written and tested examples.

I actually want to avoid using ruby threads if possible so I created a class which includes the deferrable module:

class StompClient < EventMachine::Connection
  include EM::Protocols::Stomp
  include EM::Deferrable

  def receive_msg msg
    unless msg.command == "CONNECTED"
      set_deferred_status :succeeded, msg
    end
  end
end

Then, within the loop something along the lines of:

EM.run do
  EM::PeriodicTimer.new(1) do
    message = "Testing message..."
    EM.spawn do
      headers = {'id' => 'testing'}
      EM.connect 'localhost', 6000, StompClient do |c|
        c.callback do |response|
          p response.header
          puts response.body
          c.close_connection
        end
        c.connect({:login => 'user', :passcode => 'password'})
        c.subscribe("/queue/outgoing")
        c.send("/queue/incoming", message, headers)
      end
    end.notify
  end
end

This actually works fine, except it subscribes to the outgoing queue again on every pass through the loop. Ideally there would be just one connection which stays open after subscribing and blocks on receiving, but doesn't block the loop. That's where the callback comes in: it allows other work to continue while waiting for a message to arrive on the outgoing queue.
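Roughly what I have in mind instead - connect and subscribe once, keep the connection open, and let the timer only do the sending (untested sketch reusing the StompClient class above; note the deferrable only fires once as written, so it would need re-arming after each message):

```ruby
EM.run do
  # Connect and subscribe exactly once; the connection stays open.
  conn = EM.connect 'localhost', 6000, StompClient
  conn.connect :login => 'user', :passcode => 'password'
  conn.subscribe '/queue/outgoing'

  conn.callback do |response|
    p response.header
    puts response.body
    # re-arm the deferrable here instead of reconnecting
  end

  # Sending is decoupled from the subscription.
  EM::PeriodicTimer.new(1) do
    conn.send '/queue/incoming', 'Testing message...', 'id' => 'testing'
  end
end
```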

Does this make sense?

I want to avoid spawning threads and stick to the reactor pattern + deferrable to avoid the extra overhead.

Aman Gupta

Jun 26, 2008, 3:26:53 PM
to eventmac...@rubyforge.org
It basically comes down to the type of processing you're doing on
incoming packets. Without threads, any long running processing code
will block the reactor, even if you use deferrables. What is your
specific use case?

Aman


henry74

Jun 26, 2008, 3:40:22 PM
to eventmac...@rubyforge.org
I think the best way to explain the use case is to take a look at this example:

http://www.igvita.com/2008/05/27/ruby-eventmachine-the-speed-demon/

Read the section Deferrables: Concurrency without Threads

Aman Gupta

Jun 26, 2008, 4:10:15 PM
to eventmac...@rubyforge.org
In Ilya's example, the processing involves querying an http service. This can be done asynchronously inside the reactor, which makes deferrable a good fit. If instead the processing required heavy computations or other blocking calls, you would need to use EM.defer with threads. You can see that the http service actually doing the processing in Ilya's example (em-http-pool) is in fact using EM.defer with a thread pool.

  Aman Gupta 

henry74

Jun 26, 2008, 7:52:50 PM
to eventmac...@rubyforge.org
The http request actually returns a deferrable object to which he attaches a callback. Replace the http service in Ilya's example with an outbound queue. Replace the http request with a receive on a queue (which blocks). So the wait on the queue is blocking, but the object is deferrable so it returns immediately. Once something on the queue is received, the callback is triggered (passing the message back to the original requestor), and within the same callback another receive command is kicked off to wait on the queue.

I don't see what heavy computation has to do with blocking on receive for an outbound queue. If the queue is empty, it will block with a single receive. When a heavy process does complete, it will place the result on the outbound queue and it will eventually be picked up. Why would I need EM.defer threads in this situation? The only computation the loop is doing is pulling something off the outbound queue and delivering it. It seems pretty straightforward to me.

Am I missing something?

Aman Gupta

Jun 26, 2008, 11:26:05 PM
to eventmac...@rubyforge.org
Ah, I see.. so in your use-case, you're not doing any processing at
all. You're simply proxying packets between different network
services. Deferrable definitely makes the most sense then.

What are you using stompserver for? What are your experiences with it
so far?

Aman


henry74

Jun 27, 2008, 11:49:35 AM
to eventmac...@rubyforge.org
I'm working on a messaging platform which can take requests from multiple sources. For example, if I want to search for a weather report, I can make a request and get results.

I've played around with multiple ruby-based messaging architectures, and a queue-based platform appears the most scalable. DRb is not reliable, and many of the queue-based messaging solutions involve memory-based queues (using memcache), which is great for speed but not that great for reliability if you need to stop and restart.

Stompserver's beauty is being able to create queues on the fly while leveraging a standard and simple protocol. It leaves the door open to move to different MOMs if I have to.