RabbitMQ Error Encountered: "no binding <queuename> between exchange 'amq.topic' in vhost '/'"


Zack Kozar

Mar 22, 2017, 1:31:41 PM
to rabbitmq-users
I apologize in advance for the watered-down description, as our organization is a relatively new adopter of RabbitMQ.

We encountered a scenario overnight in which our data communication was interrupted by the following RabbitMQ error:


 "no binding <queuename> between exchange 'amq.topic' in vhost '/'  


NOT_FOUND - no binding <queuename> between exchange 'amq.topic' in vhost '/' and queue 'queuename' in vhost '/'
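
For context, this NOT_FOUND message is ordinarily what the broker returns when a client issues a queue.unbind for a binding it does not know about; in this thread it also surfaced when re-registering bindings, which is what made it look like a bug. A minimal sketch that provokes the error with the Python pika client (broker address, queue name, and routing key below are hypothetical):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # A queue with no bindings on amq.topic.
    ch.queue_declare(queue="queuename", durable=True)

    try:
        # Unbinding a routing key that was never bound makes the broker
        # close the channel with a NOT_FOUND "no binding ..." error of
        # the form quoted above.
        ch.queue_unbind(queue="queuename", exchange="amq.topic",
                        routing_key="never.bound")
    except pika.exceptions.ChannelClosedByBroker as err:
        print(err)

    conn.close()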



We are running a clustered three-node setup with mirrored queues, and it does not appear that any node went down in the time frame in which the error occurred.
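
One way to double-check that, assuming the management plugin is enabled with default guest credentials (both assumptions, not details from this thread), is to ask the HTTP API for each node's running state and uptime:

    import requests

    resp = requests.get("http://localhost:15672/api/nodes",
                        auth=("guest", "guest"))
    resp.raise_for_status()
    for node in resp.json():
        # "uptime" is milliseconds since the node started; a small value
        # would indicate a recent restart.
        print(node["name"], "running:", node["running"],
              "uptime (ms):", node["uptime"])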


I was hoping someone in the community/group who has had experience with this type of issue could help me understand possible root causes, or point me at lower-level log locations that I could review.


We were able to get the system corrected by following the Stack Overflow answer quoted below:


I had the same problem and was able to fix it without having to shut down the cluster or reset the virtual host.

I had a queue with 3 routing keys bound in a cluster. I had to remove the queue while one of the nodes was down, and after that I always got the "no binding between exchange in vhost and queue" error when trying to register the routing keys again on a newly created queue with the same name.

The original queue was created as 'Durable' and the solution was to:

  • Delete the queue
  • Create a new queue with the same name but 'Transient' (non-durable)
  • Register the original 3 routing keys on the queue; the errors stopped.

As I wanted to have a durable queue, I then deleted the queue again and created a new 'Durable' queue with the same name, and this time binding the routing keys worked perfectly.

Maybe creating a new queue with a different 'Durability' type reset the old bindings that were still lingering somewhere.
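
The same workaround, sketched with the Python pika client; the queue name, routing keys, and broker address are placeholders rather than anything from the original answer:

    import pika

    ROUTING_KEYS = ["key.one", "key.two", "key.three"]

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # 1. Delete the problematic queue.
    ch.queue_delete(queue="queuename")

    # 2. Recreate it as transient (durable=False) and re-register the keys.
    ch.queue_declare(queue="queuename", durable=False)
    for key in ROUTING_KEYS:
        ch.queue_bind(queue="queuename", exchange="amq.topic",
                      routing_key=key)

    # 3. To end up with a durable queue, delete it once more and repeat
    #    the declare/bind cycle with durable=True.
    ch.queue_delete(queue="queuename")
    ch.queue_declare(queue="queuename", durable=True)
    for key in ROUTING_KEYS:
        ch.queue_bind(queue="queuename", exchange="amq.topic",
                      routing_key=key)

    conn.close()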

 

Michael Klishin

Mar 22, 2017, 2:23:04 PM
to rabbitm...@googlegroups.com
We weren't able to find the root cause of this in several months, but it seems to affect 3.6.6 and later less frequently.

--
Staff Software Engineer, Pivotal/RabbitMQ

Ankit Gupta

Oct 9, 2018, 10:01:03 AM
to rabbitmq-users
I am also facing the same issue with v3.7.3. I posted it here as well: https://groups.google.com/forum/#!topic/rabbitmq-users/LCZOxAgfnmE

Michael Klishin

Oct 10, 2018, 8:45:02 PM
to rabbitm...@googlegroups.com
We found no way to reproduce this, but believe it might have been addressed by [1], which shipped in 3.7.8.




--
MK

Staff Software Engineer, Pivotal/RabbitMQ

Bin Wang

Oct 22, 2018, 12:30:48 AM
to rabbitmq-users
I am facing the same issue on 3.7.4 with Erlang 20.3.4 in a 3-node cluster.
One of my queues has this problem with a particular routing key. I tried recreating the queue: if I declare it as transient there is no problem at all, but when it is durable the issue comes back. I also tried restarting each node, which did not help.
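
When comparing the durable and transient cases, it can help to see which bindings the broker actually holds for the queue. A small diagnostic sketch against the management HTTP API; the host, credentials, and names are assumptions:

    import requests

    # "%2F" is the default vhost "/" URL-encoded.
    resp = requests.get("http://localhost:15672/api/bindings/%2F",
                        auth=("guest", "guest"))
    resp.raise_for_status()
    for b in resp.json():
        if b["source"] == "amq.topic" and b["destination"] == "queuename":
            print(b["routing_key"], b["arguments"])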

Michael Klishin

Oct 22, 2018, 2:28:19 PM
to rabbitm...@googlegroups.com
Thanks for your observations, but have you seen the most recent response in this thread?

> We found no way to reproduce but believe it might have been addressed by [1], which shipped in 3.7.8.


Sergio Amaral

Jan 15, 2019, 1:20:56 PM
to rabbitmq-users
We have been struggling with this issue for a while now.
Here are some facts.

  • We updated one of our environments to 3.7.9 to check whether this issue had in fact been addressed, but the results were negative. It's still happening!
  • In our environment we use 3-node clusters, with queues mirrored across all nodes.
    • Our development environments are stopped during the night and turned back on in the morning (all nodes at roughly the same time).
    • We don't seem to experience the issue in production, which is up all the time, but once, when patching required bringing the cluster down during an outage, the issue appeared.
  • When a queue gets into this state, the issue persists even if the queue is deleted and recreated. I've also noticed that when we recreate the queue while the issue is present, the queue details page in the management console shows it without any bindings, not even the default one, whereas a normal queue always has at least the default binding (a programmatic version of this check follows below).
Let me know if you would like me to provide additional detail or any other diagnostic information.
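
A programmatic version of the default-binding check mentioned above: a freshly declared queue should carry at least the implicit binding from the default exchange, whose source appears as an empty string in the management API. A sketch, with the host, credentials, and queue name as assumptions:

    import requests

    resp = requests.get(
        "http://localhost:15672/api/queues/%2F/queuename/bindings",
        auth=("guest", "guest"))
    resp.raise_for_status()
    bindings = resp.json()
    # The implicit binding from the default exchange has source == "".
    print("default binding present:",
          any(b["source"] == "" for b in bindings))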

Michael Klishin

Jan 15, 2019, 3:13:36 PM
to rabbitm...@googlegroups.com
We will try to reproduce this once we get a response to our latest round of questions.

I don't think, however, that we can do much about the "shutting all nodes down at the same time" case. There is no predictable way for nodes to transfer e.g. queue master ownership when all of them are going down at the same time.

Some features in RabbitMQ 3.8.0 will refuse to work if the majority of nodes are not online, so if all nodes in your environment are shut down at the same time, you may want to reconsider this practice, or destroy the entire environment and rebuild it from scratch once it is needed again.

Sergio Amaral

Jan 15, 2019, 5:52:49 PM
to rabbitmq-users
Thanks for your responsiveness.

I do understand that the shut-down process may create some tricky race-like conditions with unpredictable outcomes. We are already running some experiments to eliminate that as a variable.
In terms of 3.8, we are already running with the pause-minority partition handling behaviour, for which I imagine the same restrictions apply, but thanks for the heads-up.

It would be interesting to know whether anyone experiences this issue where no fail-over or cluster availability problems have occurred recently.

Would logs or storage files from the nodes where this is happening be of any use to you?

chenfa...@gmail.com

Jan 22, 2019, 12:53:28 AM
to rabbitmq-users
I encountered the same issue in our environment, but I have a reproduction method, discussed in https://groups.google.com/forum/#!topic/rabbitmq-users/ble31gjFGUE. Maybe it can give you some hints.

Sergio Amaral

Mar 5, 2019, 12:30:06 PM
to rabbitmq-users
I've noticed some work has been carried out recently for https://github.com/rabbitmq/rabbitmq-server/issues/1873, which represents this issue.

Are those changes still planned to go out with 3.7.13?
Is there an official or intended due date for 3.7.13 yet?

Luke Bakken

Mar 5, 2019, 12:43:05 PM
to rabbitmq-users
Hi Sergio,

If you can reproduce this issue reliably, please give this RC a test and report back:


Thanks,
Luke

Sergio Amaral

Mar 7, 2019, 5:58:15 AM
to rabbitmq-users
Thanks.

We had the issue consistently every day: after an environment restart, some services (bindings) would randomly hit it.
I deployed the RC yesterday to one of our QA environments, and the results are good.
No trace of the issue today, whereas before it would happen at least once every single day.

Will keep monitoring and report back after a couple more days.

Luke Bakken

Mar 7, 2019, 10:55:45 AM
to rabbitmq-users
Hi Sergio,

This is great news, and we appreciate you reporting back since we could never reliably reproduce this issue in our environments.

Thanks,
Luke

Michael Klishin

Mar 14, 2019, 7:56:47 PM
to rabbitmq-users
Thank you very much for confirming; this is very helpful to the core team :)
