I believe everybody wants to make sure they have no message loss, but the documentation states that you should pick one server you trust during a partition and throw away the messages on the other servers. How would you go about recovering from a partition if you have unique messages on multiple servers?
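The only recovery I can think of is to drain the unique messages off the losing nodes before resetting them, for example with a dynamic shovel (the hostnames and queue name here are made up):

  rabbitmq-plugins enable rabbitmq_shovel
  rabbitmqctl set_parameter shovel drain-orders \
    '{"src-uri": "amqp://node-b", "src-queue": "orders", "dest-uri": "amqp://node-a", "dest-queue": "orders"}'

Is that the right approach, or is there something built in?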
How about you mirror every queue over N/2+1 nodes, where N is the total number of nodes in the cluster? If my understanding is correct, in this case you will have at least one mirror in the winning partition. So no matter what gets reset, no data is lost. Or is my understanding not correct?
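In concrete terms, something like this ha policy (a sketch; the policy name and pattern are made up, and it assumes a 3-node cluster, where N/2+1 rounds to 2):

  rabbitmqctl set_policy ha-majority "^" \
    '{"ha-mode": "exactly", "ha-params": 2, "ha-sync-mode": "automatic"}'

Setting ha-sync-mode to automatic matters here, because an unsynchronised mirror would not actually hold the messages you are trying to save.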
Replicating to a quorum of nodes is common practice with most distributed data services. Good advice.
On Fri, Jul 21, 2017 at 3:05 AM, V Z <uvzu...@gmail.com> wrote:
How about you mirror every queue over N/2+1 nodes, where N is the total number of nodes in the cluster? If my understanding is correct, in this case you will have at least one mirror in the winning partition. So no matter what gets reset, no data is lost. Or is my understanding not correct?
We do run in ignore mode.
We used to have 2 servers, but our current cluster gets torn apart, all 3 nodes at once.
I believe that would pause the entire cluster in pause_minority mode, if each node lost its connection to the other two (every node would then be in a minority).
We need to figure out what is causing it and stop it, but now that I know it can happen, I want to know how to recover from it.
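For reference, this is how the mode is selected in the classic rabbitmq.config format (ignore is the default; swap in pause_minority or autoheal to change the behaviour):

  [{rabbit, [{cluster_partition_handling, ignore}]}].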
We still have some edge cases where the nodes don't lose connectivity symmetrically.
Like 1 sees 3 but not 2,
2 sees 1 but not 3,
3 sees 2 but not 1.
Not sure how that would be handled with the pause mode.
Should the system even be able to enter such a state?
That is also a mystery to me. It has, however, happened more than once.
I think it happens when a heartbeat check runs while the network is down for just a very short moment.
Then one node sees the other as down, while the other sees the first as up.
Then they end up in some kind of weird one-way partition.
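If it is the inter-node tick check that trips on a brief outage, maybe raising net_ticktime would make detection less sensitive; a sketch, assuming the default of 60 seconds and the classic config format:

  [{kernel, [{net_ticktime, 120}]}].

That only slows detection down, of course; it would not fix whatever the network is doing.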