Re: [rabbitmq-users] rabbitmq queue declaring but no queue


Alvaro Videla

Aug 27, 2014, 4:24:58 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
Hi,

Why is the consumer using a name with the suffix "1a2477803ea749c1948e18fc519dd30c" while the queue name has the suffix "3c15f310294446f597397220532aa9a0"?

Isn't it supposed to be the same queue name?

Regards,

Alvaro


On Wed, Aug 27, 2014 at 10:21 AM, Qingchuan Hao <haoqin...@gmail.com> wrote:
I am using RabbitMQ 3.1.5 with mirrored queues.
Consumers first declare the queue and then consume from it. But what I saw in the client log was a "no queue" error.
And the RabbitMQ node log did not show the mirror being added, but the connection was indeed closed, as shown at the start of the log.
Below is a reconnect operation. There should be 5 queues synchronized, but only 2 were synchronized here. Did the slave queue crash, or is it just hanging there and stopping the queue from being created?

Here is the log of the RabbitMQ node:
=WARNING REPORT==== 21-Aug-2014::01:00:05 ===
closing AMQP connection <0.11317.0> (172.28.6.0:43595 -> 172.28.0.120:5672):
connection_closed_abruptly

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
accepting AMQP connection <0.6832.36> (172.28.6.0:46532 -> 172.28.0.120:5672)

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Adding mirror of queue 'q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90' in vhost '/' on node rabbit@rabbitmqNode1: <3008.8137.11>

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90' in vhost '/': 0 messages to synchronise

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90' in vhost '/': all slaves already synced

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90' in vhost '/': 0 messages to synchronise

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90' in vhost '/': all slaves already synced

=ERROR REPORT==== 21-Aug-2014::01:00:05 ===
connection <0.6832.36>, channel 1 - soft error:
{amqp_error,not_found,
            "no queue 'q-agent-notifier-port-update_fanout_1a2477803ea749c1948e18fc519dd30c' in vhost '/'",
            'basic.consume'}

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Adding mirror of queue 'q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0' in vhost '/' on node rabbit@rabbitmqNode1: <3008.8140.11>

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0' in vhost '/': 0 messages to synchronise

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0' in vhost '/': all slaves already synced

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0' in vhost '/': 0 messages to synchronise

=INFO REPORT==== 21-Aug-2014::01:00:05 ===
Synchronising queue 'q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0' in vhost '/': all slaves already synced

=ERROR REPORT==== 21-Aug-2014::01:00:06 ===
connection <0.6832.36>, channel 1 - soft error:
{amqp_error,not_found,
            "no queue 'q-agent-notifier-port-update_fanout_1a2477803ea749c1948e18fc519dd30c' in vhost '/'",
            'basic.consume'}


Michael Klishin

Aug 27, 2014, 9:16:07 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 27 August 2014 at 17:11:47, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > actually they are different queues, but I am wondering why the
> queue with 1a2477803ea749c1948e18fc519dd30c does not exist.
> The consumer did declare the queue. The implementation of the
> client is from OpenStack's impl_kombu.py

Queues can be exclusive (usable only by the connection that declared them, removed
when that connection closes), auto-delete, or have a TTL.

Check if any of those properties may be in effect.
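As a sketch of what those three properties look like at declare time (a pika-style `queue_declare` API is assumed here; the queue names and the 30-second TTL are invented for illustration, not details from this thread):

```python
def declare_transient_queues(channel, prefix="demo."):
    """Declare one queue per lifetime policy on an already-open,
    pika-style channel. Names and the 30 s TTL are illustrative."""
    # Exclusive: usable only by the declaring connection; removed when it closes.
    channel.queue_declare(queue=prefix + "exclusive", exclusive=True)
    # Auto-delete: removed when its last consumer is cancelled or disconnects.
    channel.queue_declare(queue=prefix + "autodelete", auto_delete=True)
    # Queue TTL: the queue itself expires after 30 s without use.
    channel.queue_declare(queue=prefix + "expiring",
                          arguments={"x-expires": 30000})
```

If any of these is in effect, a queue can vanish between one client's declare and another client's consume, which would produce exactly this kind of not_found error.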
--
MK

Staff Software Engineer, Pivotal/RabbitMQ

Michael Klishin

Aug 28, 2014, 4:44:02 AM
to Qingchuan Hao, rabbitm...@googlegroups.com


On 28 August 2014 at 07:38:40, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > BTW, will amqp.channel.basic.consume(consumer_tag,
> nowait) only wait for the consumer identified by consumer_tag,
> or for all consumers sharing this channel?

If you want to have N consumers on a channel, invoke basic_consume multiple times.
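A sketch of that pattern, assuming a pika-1.x-style channel (the helper and queue names are invented for illustration):

```python
def start_consumers(channel, queue_names, on_message):
    """Attach one consumer per queue to a single channel by calling
    basic_consume once per queue. Returns one consumer tag per queue."""
    tags = []
    for q in queue_names:
        channel.queue_declare(queue=q, auto_delete=True)
        tags.append(channel.basic_consume(queue=q,
                                          on_message_callback=on_message))
    return tags
```

Each call yields its own consumer tag, so the consumers can later be cancelled individually with basic_cancel.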

Michael Klishin

Aug 28, 2014, 4:44:56 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 07:50:27, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > Will the RabbitMQ server create queues in the order in which
> they are declared?

Operations on the same channel are guaranteed to be executed in the order RabbitMQ
receives them.

Michael Klishin

Aug 28, 2014, 6:29:39 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 14:25:43, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > Five auto-delete queues are declared sequentially, and the
> first four queues use basic_consume(nowait=true), but the
> last one uses basic_consume(nowait=false).
> The exception raised on the client comes from the
> basic_consume(nowait=false) call; that is where the
> "NOT_FOUND no queue 'q-agent-notifier-port-update_fanout_1a2477803ea749c1948e18fc519dd30c'"
> appears, indicating the server cannot find the first queue declared. But this
> queue should have been declared before q-agent-notifier-qos-update_fanout_199c21a27fd84c17b63ca5765d798b90
> and q-agent-notifier-l2population-update_fanout_3c15f310294446f597397220532aa9a0.

I'm afraid I can't suggest anything without seeing your code. It is extremely unlikely
to be a server issue. 

If you use queue names like that, you might as well consider server-named queues,
which means you have to wait for queue.declare-ok to arrive before you do basic.consume.
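That server-named pattern might look like this (a sketch assuming a pika-style synchronous channel; the helper name is invented, not code from the thread):

```python
def consume_server_named(channel, on_message):
    """Declare a server-named queue and consume only after declare-ok.

    Passing queue="" asks the broker to generate a name (e.g. "amq.gen-...").
    A synchronous queue_declare returns only once queue.declare-ok arrives,
    so the queue is known to exist before basic.consume is sent.
    """
    ok = channel.queue_declare(queue="", exclusive=True)
    name = ok.method.queue  # the broker-generated queue name
    channel.basic_consume(queue=name, on_message_callback=on_message)
    return name
```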

Michael Klishin

Aug 28, 2014, 9:15:01 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 17:13:16, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > Is an auto-delete queue less reliable than a non-auto-delete
> queue? Or does it take much more time? Does queue mirroring
> have any influence?

Auto-delete queues are deleted when the last consumer is cancelled or goes away
(if there ever was one). queue.declare-ok will not be sent until the queue
is asserted to exist or was created, mirrored or not.

Michael Klishin

Aug 28, 2014, 9:15:46 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 16:35:41, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > you can find the client code here, http://fossies.org/dox/nova-2013.2.3/impl__kombu_8py_source.html.
> The consume code is in line 649.

I don't need the client code. I need your code that does queue.declare, then basic.consume.
It is that code that is somehow problematic, not Kombu, the client it uses under the hood, or RabbitMQ.

Michael Klishin

Aug 28, 2014, 9:27:15 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 17:15:45, Michael Klishin (mic...@rabbitmq.com) wrote:
> > I don't need the client code. I need your code that does queue.declare,
> then basic.consume.
> That code is somehow problematic, not Kombu, or the client it
> uses under the hood, or RabbitMQ.

Apologies, at first I thought the file linked was Kombu source.

Please investigate why OpenStack's RPC mechanism tries to declare
a consumer on a queue that does not exist. If this only happens some of the time,
it sounds like a race condition.

Michael Klishin

Aug 28, 2014, 9:30:18 AM
to Qingchuan Hao, rabbitm...@googlegroups.com


On 28 August 2014 at 17:27:15, Michael Klishin (mic...@rabbitmq.com) wrote:
> > Please investigate why OpenStack's RPC mechanism tries to
> declare a consumer on a queue that does not exist. If this only
> happens some of the time, it sounds like a race condition.

To make debugging easier, you can run RabbitMQ connections through a protocol proxy:
http://rabbitmq.com/java-tools.html

or use Wireshark to capture the protocol methods that go over the wire.

Michael Klishin

Aug 28, 2014, 9:39:57 AM
to Qingchuan Hao, rabbitm...@googlegroups.com
On 28 August 2014 at 17:33:46, Qingchuan Hao (haoqin...@gmail.com) wrote:
> > This problem happens sometimes, and only occurs in circumstances
> where several auto-delete queues are declared on one channel.
> Can you give more explanation of the race condition?

Imagine some code declares a queue, stores the response, and goes on to declare a consumer.
While that happens, the response for another queue.declare arrives, overwriting the stored response.

(This is just one example; I'm not saying this is the exact sequence of events.)

Multiple queues and consumers on a channel are fine as long as the channel is not used
concurrently (shared between threads or similar). Again, having a protocol trace log
will provide some starting points.
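One way to picture that failure mode is a toy model (pure Python, not Kombu or pika code; the shared slot and queue names are invented): a channel-wide variable holding the last declare-ok reply, which gets overwritten when the channel is shared between threads.

```python
import threading

class RacyChannel:
    """Toy stand-in for a channel shared between threads."""
    def __init__(self):
        self.last_declare_ok = None  # BUG: one reply slot shared by all callers

ch = RacyChannel()
sync = threading.Barrier(3)
consumed = []

def declare_then_consume(name):
    sync.wait()
    ch.last_declare_ok = name     # our declare-ok "arrives"...
    sync.wait()                   # ...but so do everyone else's, meanwhile
    consumed.append((name, ch.last_declare_ok))  # read a possibly stale slot

threads = [threading.Thread(target=declare_then_consume, args=(q,))
           for q in ("q1", "q2", "q3")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Only the last writer sees its own reply; the other two would go on to
# basic.consume against a queue name that is not theirs (or no longer exists).
mismatches = [(want, got) for want, got in consumed if want != got]
print(len(mismatches))  # 2
```

The barriers force the overlap to happen on every run; in real code the same overwrite happens only occasionally, which matches the intermittent not_found errors in the thread.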