I understand the statement that messages go back into their original position "whenever possible". It's the "whenever possible" part that's ambiguous.
Assuming messages are buffered on the client (up to the prefetch count), as much of the literature (e.g. https://www.rabbitmq.com/blog/2012/05/11/some-queuing-theory-throughput-latency-and-bandwidth) seems to suggest, then sending a rejected (requeue=true) message back to the server will not put it into its original position. Sticking it back into the client-side buffer would. However, the same literature suggests rejected messages go back to the server.
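To make my mental model concrete, here is a toy simulation (plain Java, no AMQP client; the names, and the assumption that the broker reinserts a requeued message at the head of its queue, are mine):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of a broker queue plus a client prefetch buffer (no real AMQP).
public class RequeueSketch {
    public static void main(String[] args) {
        Deque<String> serverQueue = new ArrayDeque<>(List.of("m1", "m2", "m3", "m4"));
        Deque<String> clientBuffer = new ArrayDeque<>();
        int prefetch = 2;

        // Broker pushes up to `prefetch` unacked messages into the client buffer.
        while (clientBuffer.size() < prefetch && !serverQueue.isEmpty()) {
            clientBuffer.addLast(serverQueue.pollFirst());
        }
        // Client buffer now holds [m1, m2]; server holds [m3, m4].

        // basic.reject(requeue=true) on m1: the message travels back to the
        // broker, which (in this model) reinserts it at the head -- its original
        // position relative to the messages it has not yet delivered.
        String rejected = clientBuffer.pollFirst();  // m1 leaves the client
        serverQueue.addFirst(rejected);              // broker requeues at the head

        List<String> processed = new ArrayList<>();
        processed.add(clientBuffer.pollFirst());     // process + ack m2
        // Broker tops the buffer back up, redelivering m1 ahead of m3.
        while (clientBuffer.size() < prefetch && !serverQueue.isEmpty()) {
            clientBuffer.addLast(serverQueue.pollFirst());
        }
        while (!clientBuffer.isEmpty()) {
            processed.add(clientBuffer.pollFirst());
        }
        System.out.println(processed);               // [m2, m1, m3]
    }
}
```

If that model is right, "original position" can only mean original relative to the not-yet-delivered messages on the server, not relative to what is already sitting in some client's buffer.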
What am I missing? Thanks
--
You received this message because you are subscribed to the Google Groups "rabbitmq-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rabbitmq-user...@googlegroups.com.
To post to this group, send an email to rabbitm...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
If there is no buffer on the client (specifically the Java client, more specifically DefaultConsumer), then how is it assured that the consumer does not have to wait for an ack to travel back to the server, and for the next message to travel from the server, before it can start processing that message?
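For reference, this is roughly how I'm wiring it up (a sketch assuming the standard com.rabbitmq.client API; the queue name and prefetch value are placeholders):

```java
import java.io.IOException;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class ConsumerSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();
            channel.basicQos(100);   // prefetch: up to 100 unacked deliveries in flight
            boolean autoAck = false;
            channel.basicConsume("my-queue", autoAck, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                        AMQP.BasicProperties properties, byte[] body) throws IOException {
                    // If deliveries sit in a client-side buffer, this callback can
                    // fire without a per-message round trip; if there is no such
                    // buffer, where does the ack/next-message overlap come from?
                    getChannel().basicAck(envelope.getDeliveryTag(), false);
                }
            });
            Thread.sleep(1000);      // keep the connection open briefly for the example
        }
    }
}
```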
Is this post (https://www.rabbitmq.com/blog/2012/05/11/some-queuing-theory-throughput-latency-and-bandwidth) not accurate in suggesting that the consumer buffer is meant to improve consumer utilization by reducing or eliminating the wait for messages to be dispatched from the broker?
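The utilization claim can be checked with back-of-the-envelope arithmetic. A toy model (my own made-up latency and processing numbers, not measurements):

```java
// Toy throughput comparison: per-message round trips vs. a kept-full buffer.
public class PrefetchMath {
    public static void main(String[] args) {
        double rttMs = 50;      // assumed network round trip (ack out, next message in)
        double processMs = 10;  // assumed per-message processing time
        int n = 1000;

        // No client buffer (effectively prefetch = 1): the consumer idles for a
        // full round trip between every message.
        double unbuffered = n * (processMs + rttMs);

        // Deep prefetch: the broker keeps the client buffer topped up, so after
        // the first round trip the consumer processes back to back.
        double buffered = rttMs + n * processMs;

        System.out.printf("unbuffered: %.0f ms, buffered: %.0f ms, speedup: %.1fx%n",
                unbuffered, buffered, unbuffered / buffered);
    }
}
```

With these assumed numbers the buffered consumer is roughly 6x faster, which is exactly the kind of gain the blog post attributes to prefetch, so I'd like to understand where that buffer actually lives in the Java client.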