Hello,
I'm trying to find a way to process messages from a RabbitMQ queue and to handle back-pressure without losing events (Vert.x 3.5.3).
I've got a simple pipeline that consumes messages from a queue and pushes them to a REST API endpoint.
To push each message to the REST API, I call a Vert.x circuit breaker instance that wraps the Vert.x HTTP client.
When a message is successfully pushed to the API, an ack is returned to RabbitMQ. Otherwise, it's a nack!
When the circuit breaker is open, messages are still consumed, nacked, and then immediately consumed again. This leads to heavy CPU usage even though we already know that none of the incoming messages will be processed.
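To make the busy loop concrete, here is a minimal sketch of the ack/nack decision around the breaker. This is plain Java, not the actual Vert.x circuit breaker API, and all names (`BreakerGate`, `restCall`) are illustrative only:

```java
import java.util.function.Predicate;

// Minimal sketch of the ack/nack decision around a circuit breaker.
// Plain Java, not the Vert.x API; names are illustrative only.
class BreakerGate {
    enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int maxFailures;

    BreakerGate(int maxFailures) { this.maxFailures = maxFailures; }

    State state() { return state; }

    // Returns true -> ack the delivery, false -> nack (requeue).
    boolean process(String message, Predicate<String> restCall) {
        if (state == State.OPEN) {
            // Breaker open: the REST call is skipped entirely, so every
            // consumed message is immediately nacked and redelivered.
            // This is the tight consume/nack loop that burns CPU.
            return false;
        }
        boolean ok = restCall.test(message);
        if (ok) {
            consecutiveFailures = 0;
        } else if (++consecutiveFailures >= maxFailures) {
            state = State.OPEN;
        }
        return ok;
    }
}
```

Once the breaker trips, every redelivered message falls straight into the `OPEN` branch, which is why the consumption rate itself is the thing I want to throttle.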
My question is: does anyone have a good practice for reducing the number of messages consumed while the circuit breaker is in the open state?
Here is what I've tried so far:
Test 1: As the RabbitMQ client method basicCancel() is not exposed, I tried to stop/start the Vert.x RabbitMQ client.
It works well with one verticle instance, but it is an on/off mode (there is no way to tune the number of messages consumed, i.e. the input rate).
With 2 verticles calling client.basicConsume() on the same queue, I got the following exception:
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 801, class-id=60, method-id=80)
Test 2: Limit the number of unacked messages in flight (QoS / prefetch).
I could not change this value dynamically.
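For what it's worth, the prefetch semantics I'm relying on here can be modelled without the client: a QoS of N bounds the number of unacked deliveries in flight, so the redelivery loop is at least capped at N messages at a time. A toy model (plain Java, class and method names are made up for illustration):

```java
// Toy model of basic.qos prefetch: at most `prefetch` deliveries may be
// outstanding (consumed but not yet acked/nacked) at any moment.
// Illustrative only; the real bookkeeping lives in the broker.
class PrefetchWindow {
    private final int prefetch;
    private int unacked = 0;

    PrefetchWindow(int prefetch) { this.prefetch = prefetch; }

    // Broker-side check before delivering another message to the consumer.
    boolean canDeliver() { return unacked < prefetch; }

    void delivered() {
        if (!canDeliver()) throw new IllegalStateException("prefetch exceeded");
        unacked++;
    }

    // Both ack and nack settle the delivery and free a slot in the window.
    void settled() { unacked--; }

    int inFlight() { return unacked; }
}
```

This is exactly why a dynamically adjustable prefetch would solve my problem: shrinking the window while the breaker is open would throttle the input rate without stopping the consumer entirely.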
Test 3: Run several JVMs, each with a fixed QoS.
It works well and allows multiple consumers, but it is hard to manage the number of running instances.
Currently, I'm going with the following solution, which does solve the heavy CPU usage: one consumer, with the Vert.x RabbitMQ client stopped/started based on the circuit breaker state. But this solution does not let me tune the number of messages consumed, and the circuit breaker trips all the time.
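One variant I'm considering (my own untested idea, not something from the Vert.x docs): instead of nacking immediately while the breaker is open, delay each nack with an exponential backoff so redeliveries slow down instead of spinning. The delay schedule could be as simple as this (hypothetical helper, names are mine):

```java
// Hypothetical backoff schedule for delaying nacks while the breaker is
// open: doubles from baseMs per attempt, capped at maxMs (attempt is 0-based).
class NackBackoff {
    static long delayMillis(int attempt, long baseMs, long maxMs) {
        // Cap the shift amount to avoid overflow on large attempt counts.
        long delay = baseMs << Math.min(attempt, 20);
        return Math.min(delay, maxMs);
    }
}
```

In the verticle this delay could then be applied with vertx.setTimer(delay, id -> /* nack here */), so the consume/nack loop idles between redeliveries instead of running hot.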
Any good advice would be appreciated,
Thanks for reading