asynchronous calls to a RabbitMQ endpoint


isaa...@gmail.com

Aug 17, 2015, 4:20:45 PM
to camunda BPM users
I'm in the process of implementing a Camunda process with asynchronous calls to a RabbitMQ endpoint.

The pattern I originally implemented in Activiti BPM is a send/receive pattern. It works as follows:

- We would like to use the same service task that connects to RabbitMQ across multiple processes without having to deploy new JARs to a Camunda installation. In that sense, we want to use that service task as a generic connector to RabbitMQ.

- A process instance calls our generic service task, which in turn publishes to a specific RabbitMQ exchange/queue (defined in a process variable); see the sketch after this list. The message to RabbitMQ includes all process variables plus the process execution ID.

- A worker micro-service, deployed somewhere else, listens to that exchange/queue and does some media-processing work.

- While the work is being done, the worker micro-service publishes status messages, with the process execution ID included, to another RabbitMQ queue called "worker status".

- A "bridge micro-service" listens to the "worker status" RabbitMQ queue and invokes REST calls to the BPM system to update the instance process variables of the relevant process instance that corresponds to a execution ID.

- The same "bridge service" listens to this RabbitMQ queue and invokes REST calls with a "signal" message to a process instance that corresponds to an execution ID, signaling that some kind of work has been completed, so the process instance can continue its regular execution.

I'm uncertain whether the same approach can be applied to Camunda and whether it's actually the best approach for implementing asynchronous communication. What do you guys recommend?

I see a potential issue with the send/receive pattern described above when we need to implement parallel tasks, where each task calls a RabbitMQ service and waits for a response in the receive step. The issue comes from the fact that the process instance has only one execution ID. So if I have a process instance with 3 receive tasks waiting for a signal, how would the process engine know which one to unblock when it receives a signal event?
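
To make that concrete, the bridge would ultimately issue something like the call below (or its REST equivalent); if the three worker messages all carry the same ID, I don't see what would make this call unambiguous:

import org.camunda.bpm.engine.RuntimeService;

public class SignalAmbiguityExample {

  // Signals whatever execution the worker reported. With three parallel receive
  // tasks that (as far as I can tell) share one id, this single call cannot tell
  // the engine which of the three waiting steps should be unblocked.
  public void onWorkerDone(RuntimeService runtimeService, String executionIdFromWorker) {
    runtimeService.signal(executionIdFromWorker);
  }
}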

Another disadvantage of the above approach is the existence of the "bridge micro-service", which has to be deployed separately. The sole function of this micro-service is to listen to a RabbitMQ queue and either update the process instance's variables or signal it to continue its regular flow.
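
To show how thin that bridge really is, it essentially boils down to two REST calls against the engine, roughly like the sketch below (placeholder base URL, variable name and payloads, no error handling, and assuming the bridge can resolve the process instance id from the id carried in the worker message):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of the two interactions the "bridge micro-service" performs against
// the Camunda REST API.
public class CamundaBridgeClient {

  private final String baseUrl; // e.g. "http://camunda-host:8080/engine-rest"

  public CamundaBridgeClient(String baseUrl) {
    this.baseUrl = baseUrl;
  }

  // Status update from the worker: set a process variable on the instance.
  // PUT /process-instance/{id}/variables/{varName}
  public void updateStatusVariable(String processInstanceId, String status) throws Exception {
    String body = "{\"value\": \"" + status + "\", \"type\": \"String\"}";
    send("PUT", baseUrl + "/process-instance/" + processInstanceId + "/variables/workerStatus", body);
  }

  // Work finished: signal the waiting execution so the process continues.
  // POST /execution/{id}/signal
  public void signalExecution(String executionId) throws Exception {
    send("POST", baseUrl + "/execution/" + executionId + "/signal", "{}");
  }

  private void send(String method, String url, String jsonBody) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod(method);
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(jsonBody.getBytes(StandardCharsets.UTF_8));
    }
    if (conn.getResponseCode() >= 300) {
      throw new IllegalStateException("Camunda REST call failed: " + conn.getResponseCode());
    }
    conn.disconnect();
  }
}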

I was looking at the camunda-bpm-camel extension to accomplish the above requirements without the need for a bridge micro-service.

In the example you have here, https://github.com/camunda/camunda-bpm-camel, you actually implemented a slightly different pattern for asynchronous communication, using a "send task" and message events to coordinate the flow of events in the process instance.

I would like to know whether there is any advantage of the pattern you describe in the example above over the send/receive pattern that I described previously, which you also highlight in this example: https://github.com/camunda/camunda-bpm-examples/tree/master/servicetask/service-invocation-asynchronous

Also, in case we can use a generic Camel route to eliminate the need for the "bridge micro-service", I would like to know if there is any way to run that Camel route in a shared Camunda deployment on Apache Tomcat. The Camel extension you have seems to be optimized for JBoss.
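
What I have in mind for such a generic route is roughly the following (Java DSL; the rabbitmq endpoint options, header names and queue names are assumptions on my side, and the processor could presumably be replaced by the extension's own camunda-bpm endpoints once I know the right URIs):

import org.apache.camel.builder.RouteBuilder;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.RuntimeService;

// Sketch of a generic route that could replace the bridge micro-service:
// consume "worker status" messages from RabbitMQ and push them into the
// engine directly.
public class WorkerStatusRoute extends RouteBuilder {

  @Override
  public void configure() {
    from("rabbitmq://rabbitmq-host:5672/worker-status?queue=worker.status")
        .process(exchange -> {
          // Assumption: the execution id travels as a message header and the
          // body carries the status; real code would parse the JSON properly.
          String executionId = exchange.getIn().getHeader("processExecutionId", String.class);
          String status = exchange.getIn().getBody(String.class);

          RuntimeService runtimeService =
              ProcessEngines.getDefaultProcessEngine().getRuntimeService();
          runtimeService.setVariable(executionId, "workerStatus", status);

          // When the worker reports completion, unblock the waiting receive step.
          if (status != null && status.contains("DONE")) {
            runtimeService.signal(executionId);
          }
        });
  }
}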

Lastly, and please correct me if I'm wrong, there might be a way to bundle a Camel route with a specific process definition deployed in a shared Camunda installation on Tomcat by adding a "camunda.cfg.xml" file to the process definition itself. Is there a way to deploy that route generically so it can be accessed by all process definitions that need to use it?

Thanks
