Network partition issue

Dev Imagicle

May 4, 2015, 10:26:31 AM
to rabbitm...@googlegroups.com
Hi all,
I have a cluster of 8 nodes running RabbitMQ 3.5.1 on Windows 2008 R2. I sometimes experience a network partition I'm not able to recover from unless I restart all nodes.
Apparently all nodes are working correctly and no network links are down.

I have two nodes which, at the same time, are not able to communicate with each other, while the other nodes can see both of them as up.

Here are the logs from the two nodes (node 4 and node 6):

Node 4

=INFO REPORT==== 3-May-2015::12:37:28 ===
rabbit on node imagiclerabbit@PVFAXAS06V down

=INFO REPORT==== 3-May-2015::12:37:31 ===
node imagiclerabbit@PVFAXAS06V down: connection_closed

=ERROR REPORT==== 3-May-2015::12:37:31 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS06V
 * We can still see imagiclerabbit@PVFAXAS01V which can see imagiclerabbit@PVFAXAS06V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS01V

=ERROR REPORT==== 3-May-2015::12:37:32 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS06V
 * We can still see imagiclerabbit@PVFAXAS07V which can see imagiclerabbit@PVFAXAS06V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS07V

=ERROR REPORT==== 3-May-2015::12:37:33 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS06V
 * We can still see imagiclerabbit@PVFAXAS03V which can see imagiclerabbit@PVFAXAS06V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS03V

=ERROR REPORT==== 3-May-2015::12:37:34 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS06V
 * We can still see imagiclerabbit@PVFAXAS05V which can see imagiclerabbit@PVFAXAS06V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS05V

=ERROR REPORT==== 3-May-2015::12:37:34 ===
Mnesia(imagiclerabbit@PVFAXAS04V): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, imagiclerabbit@PVFAXAS01V}

Node 6

=INFO REPORT==== 3-May-2015::12:37:28 ===
rabbit on node imagiclerabbit@PVFAXAS04V down

=INFO REPORT==== 3-May-2015::12:37:30 ===
node imagiclerabbit@PVFAXAS04V down: connection_closed

=ERROR REPORT==== 3-May-2015::12:37:30 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS04V
 * We can still see imagiclerabbit@PVFAXAS08V which can see imagiclerabbit@PVFAXAS04V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS08V

=ERROR REPORT==== 3-May-2015::12:37:31 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS04V
 * We can still see imagiclerabbit@PVFAXAS07V which can see imagiclerabbit@PVFAXAS04V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS07V

=ERROR REPORT==== 3-May-2015::12:37:33 ===
Partial partition detected:
 * We saw DOWN from imagiclerabbit@PVFAXAS04V
 * We can still see imagiclerabbit@PVFAXAS05V which can see imagiclerabbit@PVFAXAS04V
We will therefore intentionally disconnect from imagiclerabbit@PVFAXAS05V

=INFO REPORT==== 3-May-2015::12:37:34 ===
rabbit on node imagiclerabbit@PVFAXAS08V down

=ERROR REPORT==== 3-May-2015::12:37:37 ===
Mnesia(imagiclerabbit@PVFAXAS06V): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, imagiclerabbit@PVFAXAS05V}

It seems it all happens within a few seconds.
After that I'm not able to connect to the other nodes, where I get errors like these:

=ERROR REPORT==== 3-May-2015::14:25:19 ===
closing AMQP connection <0.17372.85> ([::1]:50659 -> [::1]:5672):
{heartbeat_timeout,running}

The management plugin doesn't work on any node.

Questions:

1) How can I reduce the sensitivity to very short link outages on Windows Server?
2) How can I replicate this on a lab test cluster?
3) Why do other nodes become unreachable/unavailable?

Thanks,
Rick




Jean-Sébastien Pédron

May 4, 2015, 11:24:57 AM
to rabbitm...@googlegroups.com
On 04.05.2015 16:26, Dev Imagicle wrote:
> Hi all,

Hi!

> It seems all happens within a few seconds.
> After that I'm not able to connect to the other nodes where I have
> errors like these:
>
> =ERROR REPORT==== 3-May-2015::14:25:19 ===
> closing AMQP connection <0.17372.85> ([::1]:50659 -> [::1]:5672):
> {heartbeat_timeout,running}
>
> The management plugin doesn't work on any nodes.
>
> 1) How can I reduce the sensitivity to very short link outages on Windows
> Server?

Here, the reason why the remote node is considered down is
"connection_closed": this means the TCP connection between nodes 4 and 6
was explicitly closed (and not closed because of a timeout while trying to
communicate with the remote node).

Do you have anything happening on the hosts or on the network between
those hosts at the same time?

Could you please post your rabbitmq.config file (if any), the entire log
file and the sasl log file?

> 2) How can i replicate this on a lab test cluster?
> 3) Why other nodes become unreacheable/unavailable?

Please provide the "normal" and sasl log files for other nodes (e.g.
node 5).

--
Jean-Sébastien Pédron
Pivotal / RabbitMQ

Dev Imagicle

May 5, 2015, 3:41:28 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hello Jean,
I've attached my RabbitMQ configuration file (it is the same on all the nodes) and the log files from nodes 4, 5 and 6.
As you can see, everything begins at 12:37:28 on nodes 4 and 6: at that time, node 4 sees node 6 as down and node 6 sees node 4 as down, while the other nodes can see both nodes 4 and 6.


On Monday, May 4, 2015 at 5:24:57 PM UTC+2, Jean-Sébastien Pédron wrote:
> On 04.05.2015 16:26, Dev Imagicle wrote:
> > Hi all,
>
> Hi!
>
> > It seems all happens within a few seconds.
> > After that I'm not able to connect to the other nodes where I have
> > errors like these:
> >
> > =ERROR REPORT==== 3-May-2015::14:25:19 ===
> > closing AMQP connection <0.17372.85> ([::1]:50659 -> [::1]:5672):
> > {heartbeat_timeout,running}
> >
> > The management plugin doesn't work on any node.
> >
> > 1) How can I reduce the sensitivity to very short link outages on Windows
> > Server?
>
> Here, the reason why the remote node is considered down is
> "connection_closed": this means the TCP connection between nodes 4 and 6
> was explicitly closed (and not closed because of a timeout while trying to
> communicate with the remote node).

This log line is from node 5.

> Do you have anything happening on the hosts or on the network between
> those hosts at the same time?

Nothing I was able to detect. Other services did not report network issues.

Rick
Logs.zip
rabbitmq.config

Dev Imagicle

May 7, 2015, 11:25:51 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hello,
Were you able to check my log files?
In my 8-node cluster I experienced a new network partition: each node was partitioned from all the others.
When the nodes are in this state I cannot get messages from mirrored queues on any node.
I set cluster_partition_handling to ignore on all nodes, because I want to fetch the messages from the mirrored queues and back them up, to avoid data loss before restarting RabbitMQ, as described here: https://aphyr.com/posts/315-call-me-maybe-rabbitmq, but sometimes basic.get hangs.
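
For clarity, this is the corresponding stanza in the classic rabbitmq.config format (a minimal sketch; ignore is what I use, while pause_minority and autoheal are the other documented values):

[
  {rabbit, [
    %% leave partitions in place until an operator intervenes
    {cluster_partition_handling, ignore}
  ]}
].
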
I'm able to simulate multiple network partitions on my lab cluster (4 nodes) by programmatically disabling and enabling the network adapter in a random manner on each node, as in the sketch below. In this environment I can always back up the messages and then requeue them after the network partition is resolved, but I'm not able to replicate what happened on the 8-node production cluster.
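
A rough sketch of the simulator (the adapter name "Ethernet" and the timing ranges are placeholders, not my real values; it has to run elevated on each lab node):

// Randomly cut and restore the network link by toggling the adapter
// through netsh, to provoke short partitions between cluster nodes.
using System;
using System.Diagnostics;
using System.Threading;

class PartitionSimulator
{
    static void Netsh(string args)
    {
        using (var p = Process.Start(new ProcessStartInfo("netsh", args) { UseShellExecute = false }))
        {
            p.WaitForExit();
        }
    }

    static void Main()
    {
        var rng = new Random();
        while (true)
        {
            // run normally for 5-60 seconds, then cut the link briefly
            Thread.Sleep(rng.Next(5000, 60000));
            Netsh("interface set interface \"Ethernet\" admin=disable");
            Thread.Sleep(rng.Next(200, 3000)); // outage of 0.2-3 seconds
            Netsh("interface set interface \"Ethernet\" admin=enable");
        }
    }
}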

Could you help me, please?

Rick

Jean-Sébastien Pédron

May 7, 2015, 12:11:33 PM
to rabbitm...@googlegroups.com
On 07.05.2015 17:25, Dev Imagicle wrote:
> Hello,
> Were you able to check my log files?

Hi!

Yes, sorry, I did that yesterday while someone on IRC had a similar
problem, but forgot to get back to you.

In your first message, you say you use RabbitMQ 3.5.1. However, the log
files you provided indicate you are using 3.5.0. The user on IRC solved
his problem by upgrading to 3.5.1. Could you please try that too?

Dev Imagicle

May 8, 2015, 3:16:01 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Oops! You're right, I have RabbitMQ 3.5.0!
Do you think my partition issue is a bug fixed in 3.5.1?
I couldn't find it mentioned in the changelog from 3.5.0 to 3.5.1.
However, I'll update my cluster and let you know if I run into this problem again.

Rick.

Jean-Sébastien Pédron

May 11, 2015, 4:36:56 AM
to rabbitm...@googlegroups.com
On 08.05.2015 09:16, Dev Imagicle wrote:
> Do you think my partition issue is a bug fixed in 3.5.1?
> I couldn't find it mentioned in changelog from 3.5.0 to 3.5.1.

As I don't know the reason why the connection is being closed, it's hard
to tell. It could be a side effect of another bug fix.

Dev Imagicle

May 18, 2015, 5:52:53 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hello Jean,
I updated my cluster to version 3.5.1, but I ran into the same issue after a network partition: I cannot access the mirrored queues to back up their messages and publish them again after the partition is resolved.
I cannot lose data!
In my .NET application I introduced operations with timeouts, to prevent the system from hanging, and I log every timed-out/failed operation; the sketch below shows the idea.
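
A minimal sketch of the timeout wrapper (RabbitOps.WithTimeout is a hypothetical helper name, and the 10-second limit in the usage comment is arbitrary):

using System;
using System.Threading.Tasks;
using RabbitMQ.Client;

static class RabbitOps
{
    // Run a blocking RabbitMQ.Client call on a worker thread and give up
    // after the timeout instead of letting the caller hang forever. Note
    // that the abandoned call keeps running in the background until the
    // broker or the connection finally answers.
    public static T WithTimeout<T>(Func<T> operation, TimeSpan timeout)
    {
        Task<T> task = Task.Run(operation);
        if (!task.Wait(timeout))
            throw new TimeoutException("AMQP operation timed out");
        return task.Result;
    }
}

// Usage: declare a queue, but never block for more than 10 seconds.
// var ok = RabbitOps.WithTimeout(
//     () => channel.QueueDeclare("ha_Retry", true, false, false, null),
//     TimeSpan.FromSeconds(10));
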
Here is an example of the errors I have in my logs:

0517 18:56:54.677 ERROR: RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=541, text="INTERNAL_ERROR", classId=0, methodId=0, cause=
   at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply()
   at RabbitMQ.Client.Impl.ModelBase.QueueDeclare(String queue, Boolean passive, Boolean durable, Boolean exclusive, Boolean autoDelete, IDictionary`2 arguments)
   at RabbitMQ.Client.Impl.ModelBase.QueueDeclare(String queue, Boolean durable, Boolean exclusive, Boolean autoDelete, IDictionary`2 arguments)

0517 18:56:59.615 ERROR: RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=404, text="NOT_FOUND - queue 'ha_Retry' in vhost 'imagicle' has crashed and failed to restart", classId=50, methodId=10, cause=
   at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply()
   at RabbitMQ.Client.Impl.ModelBase.QueueDeclare(String queue, Boolean passive, Boolean durable, Boolean exclusive, Boolean autoDelete, IDictionary`2 arguments)
   at RabbitMQ.Client.Impl.ModelBase.QueueDeclare(String queue, Boolean durable, Boolean exclusive, Boolean autoDelete, IDictionary`2 arguments)

Could you help me understand what happened?
Why do I get errors like the one in the second log excerpt (ha_Retry is a mirrored queue)? What can I do in this case to avoid message loss?

For your convenience, I attached the complete cluster logs.

Thanks for your help,
Rick
RabbitLogs.zip

Jean-Sébastien Pédron

May 18, 2015, 1:24:31 PM
to rabbitm...@googlegroups.com
On 18.05.2015 11:52, Dev Imagicle wrote:
> Hello Jean,

Hi!

> 0517 18:56:59.615 ERROR:
> RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP
> operation was interrupted: AMQP close-reason, initiated by Peer,
> code=404, text="*NOT_FOUND - queue 'ha_Retry' in vhost 'imagicle' has
> crashed and failed to restart*", classId=50, methodId=10, cause=
>
> Could you help me to understand what happened?

The root cause of this problem is the network issue. The cluster gets
confused about which nodes are part of the cluster, because you seem to
have very short network outages.

First, I see you use a 32bit version of Erlang. Does it match your
version of Windows? Is it on purpose?

Are you using "real" machines or VMs?

Dev Imagicle

unread,
May 19, 2015, 7:02:50 AM5/19/15
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hi!

> > 0517 18:56:59.615 ERROR:
> > RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP
> > operation was interrupted: AMQP close-reason, initiated by Peer,
> > code=404, text="*NOT_FOUND - queue 'ha_Retry' in vhost 'imagicle' has
> > crashed and failed to restart*", classId=50, methodId=10, cause=
> >
> > Could you help me to understand what happened?
>
> The root cause of this problem is the network issue. The cluster gets
> confused about which nodes are part of the cluster, because you seem to
> have very short network outages.

Do you mean I can't trust the system, and can't consider it robust against "very short" network outages?
How can I increase robustness to very short link outages on Windows Server?

> First, I see you use a 32bit version of Erlang. Does it match your
> version of Windows? Is it on purpose?

Yes, it is on purpose; however, I can install the 64bit version.

> Are you using "real" machines or VMs?

VMs.

Rick 

Alvaro Videla

May 19, 2015, 7:52:46 AM
to Dev Imagicle, rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hi,

On Tue, May 19, 2015 at 1:02 PM, Dev Imagicle <dev.im...@gmail.com> wrote:
> Do you mean I can't trust the system, and can't consider it robust against "very short" network outages?
> How can I increase robustness to very short link outages on Windows Server?

In asynchronous distributed systems there's the concept of failure detectors, which depend on a timeout to tell whether the other node is alive or not. Finding the right timeout for network partitions is very hard. What if you set it to 10 seconds, and your partition happens at 10.1? Do you bump it to 11, or was this just a one-off? Extend the timeout indefinitely?

Node X came back after N seconds. Was it down? Did it lose many requests, and is it now out of sync? What should we do?

In the case of RabbitMQ and Erlang you can control that parameter by using net_ticktime: https://www.rabbitmq.com/nettick.html
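
For example, in the classic rabbitmq.config format that would be something like the following sketch; 120 is only an illustration, the default being 60 seconds (note that net_ticktime belongs to the Erlang kernel application, not to rabbit):

[
  {kernel, [
    %% approximate time, in seconds, to detect an unreachable node
    {net_ticktime, 120}
  ]}
].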

Can we do better here? I'd guess we could, but it's not easy to find a solution that works for every setup, for every workload.

Regards,

Alvaro


Jean-Sébastien Pédron

May 19, 2015, 9:35:24 AM
to rabbitm...@googlegroups.com
On 19.05.2015 13:02, Dev Imagicle wrote:
> > The root cause of this problem is the network issue. The cluster gets
> > confused about which nodes are part of the cluster, because you seem to
> > have very short network outages.
>
> Do you mean I can't trust the system, and can't consider it robust
> against "very short" network outages?
> How can I increase robustness to very short link outages on Windows Server?

To add to what Alvaro said: in your case, by "very short", I mean a
subsecond network outage.

In your log files, from e.g. node A's PoV, it looks like remote node B is
gone but is back again before node A has finished handling the "node down"
event. We have some checks for that when we can, but it's not perfect.

Jean-Sébastien Pédron

May 19, 2015, 9:55:17 AM
to rabbitm...@googlegroups.com
On 19.05.2015 13:02, Dev Imagicle wrote:
> Yes, it is on purpose, however I can install the 64bit version.

I'm not sure it would change anything, but perhaps try a 64bit version of
Erlang.

If you can't determine whether there really are network outages and why,
try the Federation plugin [1] instead of plain clustering. It is more
complex to use but allows a more fine-grained architecture and is
tolerant of network instability.

[1] http://www.rabbitmq.com/federation.html

Dev Imagicle

May 19, 2015, 10:54:01 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hello Alvaro and Jean,
Thanks for your quick reply, and please excuse me: I probably didn't explain my problem properly, and maybe I didn't fully understand some RabbitMQ concepts.
In my Windows 2008 R2 cluster I've never seen a node reported down due to net_tick_timeout, but always due to connection_closed, so I suppose net_ticktime is not related to my issue.
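
As a sanity check, the effective tick time can be read at runtime on each node; this just evaluates an Erlang expression through rabbitmqctl (quoting needs adjusting for rabbitmqctl.bat on Windows):

rabbitmqctl eval 'net_kernel:get_net_ticktime().'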
 
> To add to what Alvaro said: in your case, by "very short", I mean a
> subsecond network outage.
>
> In your log files, from e.g. node A's PoV, it looks like remote node B is
> gone but is back again before node A has finished handling the "node down"
> event. We have some checks for that when we can, but it's not perfect.

Yes, you're right, it seems everything takes place in a few seconds!

This is the first event on node 4:

=INFO REPORT==== 17-May-2015::18:56:21 ===
rabbit on node imagiclerabbit@PVFAXAS07V down

=INFO REPORT==== 17-May-2015::18:56:21 ===
node imagiclerabbit@PVFAXAS07V down: connection_closed

while on node 8 I have:

=ERROR REPORT==== 17-May-2015::18:56:21 ===
Partial partition disconnect from imagiclerabbit@PVFAXAS04V

=INFO REPORT==== 17-May-2015::18:56:22 ===
rabbit on node imagiclerabbit@PVFAXAS04V down

=ERROR REPORT==== 17-May-2015::18:56:23 ===
Mnesia(imagiclerabbit@PVFAXAS08V): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, imagiclerabbit@PVFAXAS04V}

=INFO REPORT==== 17-May-2015::18:56:23 ===
node imagiclerabbit@PVFAXAS04V down: connection_closed

=INFO REPORT==== 17-May-2015::18:56:23 ===
node imagiclerabbit@PVFAXAS04V up

After that, each node disconnects from all the others, and mirrored queues are sometimes not accessible.

> If you can't determine whether there really are network outages and why,
> try the Federation plugin [1] instead of plain clustering. It is more
> complex to use but allows a more fine-grained architecture and is
> tolerant of network instability.
>
> [1] http://www.rabbitmq.com/federation.html

> Node X came back after N seconds. Was it down? Did it lose many requests, and is it now out of sync? What should we do?

My cluster_partition_handling parameter is set to ignore. In this case, when a network partition occurs, nodes should remain partitioned until an action is taken, and mirrored queues should be split across the partitions. This is what happens most of the time, but sometimes I see what I described in my previous posts; maybe those cases are related to subsecond network outages. In this scenario, when a node is detected as down but comes back almost immediately, I'd like it to still be considered partitioned.
I already considered the Federation plugin; it could be an option, but it requires me to handle additional complexity.
Can I try some other solution before moving to the Federation plugin?

Thanks,
Rick

Jean-Sébastien Pédron

May 19, 2015, 11:06:19 AM
to Dev Imagicle, rabbitm...@googlegroups.com
On 19.05.2015 16:54, Dev Imagicle wrote:
> Yes, you're right, it seems everything takes place in a few seconds!
>
> while on node 8 I have:
>
> =ERROR REPORT==== 17-May-2015::18:56:23 ===
> Mnesia(imagiclerabbit@PVFAXAS08V): ** ERROR ** mnesia_event got
> {inconsistent_database, running_partitioned_network,
> imagiclerabbit@PVFAXAS04V}

That's a perfect example. The message above is logged when
imagiclerabbit@PVFAXAS04V is already *back* (both Erlang nodes are
connected again).

> =INFO REPORT==== 17-May-2015::18:56:23 ===
> node imagiclerabbit@PVFAXAS04V down: connection_closed

But here we see RabbitMQ only receives/handles the "node down" event
now. Everything is mixed up.

The inversion of those events is still problematic in RabbitMQ. Some
progress was made in 3.5.0 but we still have room for improvements.

In fact, the partial partition detection which is triggered in your case
("Partial partition disconnect from $node") can worsen the situation in
rare cases when the network outage is very short. You probably hit one
of those...

Dev Imagicle

May 21, 2015, 4:06:10 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Hello!
Thanks for your clarification and your analysis!
I looked at the Federation plugin, but I can't figure out how to use it to emulate a mirrored queue.
Let's suppose I split my 8-machine cluster into two clusters, A and B, of 4 machines each; I create the same mirrored queue on each cluster and I federate them.
I want a message published on A to be handled by B if the whole cluster A fails but, if I understood correctly how the Federation plugin works, a message published on A is not transferred to B, as a mirrored queue would have guaranteed in a plain cluster.


> That's a perfect example. The message above is logged when
> imagiclerabbit@PVFAXAS04V is already *back* (both Erlang nodes are
> connected again).
>
> > =INFO REPORT==== 17-May-2015::18:56:23 ===
> > node imagiclerabbit@PVFAXAS04V down: connection_closed
>
> But here we see RabbitMQ only receives/handles the "node down" event
> now. Everything is mixed up.
>
> The inversion of those events is still problematic in RabbitMQ. Some
> progress was made in 3.5.0 but we still have room for improvements.
>
> In fact, the partial partition detection which is triggered in your case
> ("Partial partition disconnect from $node") can worsen the situation in
> rare cases when the network outage is very short. You probably hit one
> of those...

Have you planned to handle this event mix-up in future versions?
Can I do something to avoid/mitigate it in the meanwhile?

Rick

Jean-Sébastien Pédron

May 21, 2015, 4:42:31 AM
to rabbitm...@googlegroups.com
On 21.05.2015 10:06, Dev Imagicle wrote:
> Hello!

Hi!

> Let's suppose I split my 8-machine cluster into two clusters, A and B,
> of 4 machines each; I create the same mirrored queue on each cluster and
> I federate them.
> I want a message published on A to be handled by B if the whole cluster
> A fails but, if I understood correctly how the Federation plugin works, a
> message published on A is not transferred to B, as a mirrored queue would
> have guaranteed in a plain cluster.

You can't emulate mirroring behaviour with the Federation plugin; you
would have to adapt your workflow. The best solution would be to figure
out those network issues.

You can read more details about the federation of exchanges and
queues respectively here:
http://www.rabbitmq.com/federated-exchanges.html
http://www.rabbitmq.com/federated-queues.html
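
For reference, pointing queues at an upstream cluster only takes a parameter and a policy, along the lines of this sketch (the upstream name, URI and queue pattern are placeholders to adapt to your setup):

rabbitmq-plugins enable rabbitmq_federation
rabbitmqctl set_parameter federation-upstream cluster-a '{"uri":"amqp://user:password@cluster-a-host"}'
rabbitmqctl set_policy --apply-to queues federate-ha "^ha_" '{"federation-upstream-set":"all"}'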

> But here we see RabbitMQ only receives/handles the "node down" event
> now. Everything is mixed up.
>
> The inversion of those events is still problematic in RabbitMQ. Some
> progress was made in 3.5.0 but we still have room for improvements.
>
> In fact, the partial partition detection which is triggered in your
> case
> ("Partial partition disconnect from $node") can worsen the situation in
> rare cases when the network outage is very short. You probably hit one
> of those...
>
> Have you planned to handle this event mix-up in future versions?

Yes, it is on our TODO.

> Can I do something to avoid/mitigate it in the meanwhile?

Not really. The best thing to do is to debug these micro-disconnections.
Do you have a firewall or something else in the way which could drop idle
connections, for instance? Can you test your setup and workload on real
hardware (as opposed to VMs)?

andrew.miller

Jun 9, 2015, 9:29:54 AM
to rabbitm...@googlegroups.com, jean-se...@rabbitmq.com
Just wanted to weigh in that we are experiencing this same problem with RabbitMQ 3.4.3. Among our 8 three-node clusters, all in pause_minority mode, one of them will have a similar "partial partition" about once a week because a connection is dropped; Erlang/RabbitMQ then reconnects before the "node down" event is done being handled, and thus the cluster gets stuck in a partition and requires manual intervention.

I understand our network is at fault, but until our network team can figure out what is going on, I really would like RabbitMQ to be able to handle these cases.

Is there a GitHub issue for this?

Thanks,
Andrew

Michael Klishin

Jun 9, 2015, 10:00:09 AM
to rabbitm...@googlegroups.com, andrew.miller, jean-se...@rabbitmq.com
On 9 June 2015 at 16:29:55, andrew.miller (andrew...@sentry.com) wrote:
> Just wanted to weigh in that we are experiencing this same problem
> with RabbitMQ 3.4.3. Among our 8 3-node clusters all in pause_minority
> mode, one of them will have a similar "partial partition" about
> once a week because a connection is dropped - and then Erlang/RabbitMQ
> will reconnect before the "node down" is done being handled,
> and thus the cluster will be stuck in a partition and require manual
> intervention.
>
> I understand our network is at fault, but until our network team
> can figure out what is going on, I really would like RabbitMQ to
> be able to handle these.
>
> Is there a GitHub issue for this?

Andrew,

I don't think we have an issue for this. Can you please try 3.5.3?
If it's still the case in 3.5.x, I'd certainly investigate what we could do better,
so please file an issue for rabbitmq/rabbitmq-server.

Thank you. 
--
MK

Staff Software Engineer, Pivotal/RabbitMQ

