On 15 October 2014 at 00:15:47, Ryan Brown (ryank...@gmail.com) wrote:
> Secondly, we are using the network client. So that leads me to
> believe there is no real payoff to waiting for the ok. We do not
> want to use confirmed publishing as it just adds more overhead
> in publishing. However, this information raises some concern
> that possibly data could get lost if we don't do so. (although
> we have not actually seen this in practice with our current daily
> load of 17+M)
A network partition is a matter of probability. In some environments the probability
is so low that it may make sense to deliberately use riskier but higher-throughput
solutions.
17M messages in 24 hours is about 200 messages/second (17,000,000 / 86,400 ≈ 197), so your
RabbitMQ instance is probably very lightly loaded and you have plenty of throughput headroom.
Having your system in Erlang also gives you the option of switching to the direct client,
which avoids protocol serialisation overhead.
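For reference, switching is mostly a change to the connection parameters. Here's a minimal
sketch with the Erlang amqp_client; the broker node name is a placeholder, and the direct
client assumes your application runs on an Erlang node that can reach the broker over
Erlang distribution:

%% Rough sketch: same publish path, only the connection parameters change.
-module(connect_example).
-export([network/0, direct/0]).

-include_lib("amqp_client/include/amqp_client.hrl").

%% Network client: speaks AMQP over TCP, so every frame is serialised.
network() ->
    amqp_connection:start(#amqp_params_network{host = "localhost"}).

%% Direct client: talks to the broker node over Erlang distribution,
%% avoiding AMQP frame serialisation entirely. Node name is a placeholder.
direct() ->
    amqp_connection:start(#amqp_params_direct{node = 'rabbit@localhost'}).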
> Does pivotal recommend using confirmed publishing
> for applications that cannot lose messages? We are not talking
> financial transactions here. So, the occasional loss could
> be recovered/recreated. But, we can't have any significant
> amount of data loss or it would cause concern about the reliability
> of this system that has become a backbone for our entire product
> line.
You can't guarantee reliable delivery without getting confirmations, so we definitely
recommend using them.
Dealing with acks and nacks with virtually no overhead is easier in Erlang than in most
languages, because [virtually all] Erlang systems are inherently event-driven and asynchronous.
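For example, here's a rough sketch of asynchronous publisher confirms with the Erlang
amqp_client. The queue name is a placeholder, and a real publisher would also republish
nacked messages and handle channel or connection failures:

%% Minimal sketch of asynchronous publisher confirms with the Erlang client.
-module(confirm_publish_example).
-export([publish_all/1]).

-include_lib("amqp_client/include/amqp_client.hrl").

publish_all(Payloads) ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    %% Put the channel into confirm mode; acks/nacks are then delivered to
    %% this process as plain Erlang messages.
    #'confirm.select_ok'{} = amqp_channel:call(Channel, #'confirm.select'{}),
    ok = amqp_channel:register_confirm_handler(Channel, self()),
    %% Publish without blocking; delivery tags are assigned 1, 2, 3, ...
    [amqp_channel:cast(Channel,
                       #'basic.publish'{exchange = <<"">>,
                                        routing_key = <<"my-queue">>},
                       #amqp_msg{payload = P}) || P <- Payloads],
    Unconfirmed = gb_sets:from_list(lists:seq(1, length(Payloads))),
    Result = await_confirms(Unconfirmed),
    amqp_channel:close(Channel),
    amqp_connection:close(Connection),
    Result.

await_confirms(Unconfirmed) ->
    case gb_sets:is_empty(Unconfirmed) of
        true  -> ok;
        false ->
            receive
                #'basic.ack'{delivery_tag = Tag, multiple = Multiple} ->
                    await_confirms(drop(Tag, Multiple, Unconfirmed));
                #'basic.nack'{delivery_tag = Tag, multiple = Multiple} ->
                    %% The broker could not take responsibility for these
                    %% messages; a real publisher would republish or log them.
                    await_confirms(drop(Tag, Multiple, Unconfirmed))
            after 5000 ->
                {error, {confirm_timeout, gb_sets:to_list(Unconfirmed)}}
            end
    end.

%% multiple = true acknowledges every delivery tag up to and including Tag.
drop(Tag, true,  Set) -> gb_sets:filter(fun(T) -> T > Tag end, Set);
drop(Tag, false, Set) -> gb_sets:delete_any(Tag, Set).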
Several RabbitMQ plugins use publisher confirms under the hood (e.g. Shovel, Federation).
See "Acknowledgements and Confirms" on
http://www.rabbitmq.com/reliability.html
and
http://www.rabbitmq.com/confirms.html.