High CPU consumption

Kushagra Bindal

Feb 10, 2020, 11:54:16 AM
to rabbitm...@googlegroups.com
Hi,

While running a load test against RabbitMQ 3.8.2, I observed high CPU spikes that persist even after the load has completed.

As we discussed in our earlier exchanges, I am importing a large definitions file of around 6.1 MB. The import runs against a 3-node cluster with 8 cores per node.

I have also applied SERVER_ADDITIONAL_ERL_ARGS='+sbwt none' in the rabbitmq-env.conf file, but CPU consumption still stays at approximately 60% even after the load completes.
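
For reference, the exact line in my rabbitmq-env.conf is of this form (as I understand it, the RABBITMQ_ prefix is optional in this file, so both spellings are equivalent):

SERVER_ADDITIONAL_ERL_ARGS='+sbwt none'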

PS: The load completed at around 21:40 (as can be seen in the screenshot below), and the cluster has been idle since then while I monitor the CPU consumption.

Below is a screenshot of the CPU usage.

image.png

Can someone please guide me if anything else needs to be done?

--
Regards,
Kushagra

Luke Bakken

Feb 10, 2020, 1:28:22 PM
to rabbitmq-users
Hello,

Have you confirmed that the +sbwt none argument is actually being used? You can confirm by looking at the output of:

ps -ef | fgrep beam.smp

Please run that command and attach the output.
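
If the flag took effect, the beam.smp command line in that output will contain -sbwt none. As a quick check (a minimal sketch; adjust it if you run more than one node per host), something like this prints just the flag:

ps -ef | fgrep beam.smp | grep -o -- '-sbwt [a-z]*'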

If so, the next step would be to enable the rabbitmq_top plugin and see if any process has a high or increasing reduction count: https://github.com/rabbitmq/rabbitmq-top
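
If you haven't enabled the plugin yet, this should be all that's needed (it adds a "Top Processes" view to the management UI):

rabbitmq-plugins enable rabbitmq_top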

After that, we can discuss other options.

Thanks,
Luke

Kushagra Bindal

Feb 10, 2020, 8:20:57 PM
to rabbitm...@googlegroups.com
Hi Luke,

Please find the output of top | grep beam.smp, captured while the load was running (around 11:00 PM IST).

 6319 sssd      20   0 7166084   1.6g   4496 S 458.3  5.5 897:45.73 beam.smp                                                  
 6319 sssd      20   0 7160188   1.6g   4496 S 459.5  5.4 897:59.56 beam.smp                                                  
 6319 sssd      20   0 7177744   1.6g   4496 S 576.7  5.5 898:16.86 beam.smp                                                  
 6319 sssd      20   0 7170908   1.6g   4496 S 678.7  5.5 898:37.29 beam.smp                                                  

And here is the output when I ran your variant, top | fgrep beam.smp, this morning at 06:30 AM:
 6456 sssd      20   0 6607304   1.0g   4472 S  56.2  3.4   2309:13 beam.smp
 6456 sssd      20   0 6601920   1.0g   4472 S  58.8  3.3   2309:15 beam.smp
 6456 sssd      20   0 6604152   1.0g   4472 S  63.5  3.3   2309:17 beam.smp
 6456 sssd      20   0 6601920   1.0g   4472 S  58.1  3.3   2309:19 beam.smp

As you can see, CPU usage has come down to roughly 10% (about 60% in top's scale, where 800% would mean full utilization of all 8 cores).

I don't think this improvement is due to '+sbwt none'. When I imported my metadata (6.1 MB) at 09:30 PM IST, the queue count was 13215. Most of my queues have a TTL of 6 hours, which is why, as you can see in the screenshot below, the CPU comes down at around 3:30 AM IST.
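
For context, a 6-hour idle-queue TTL like ours can be expressed as a policy along these lines (the policy name and pattern here are placeholders, and policies apply per vhost; 21600000 ms corresponds to 6 hours):

rabbitmqctl set_policy queue-expiry ".*" '{"expires":21600000}' --apply-to queues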

image.png

I am working on enabling the rabbitmq_top plugin. Meanwhile, this data is for your reference.


--
Regards,
Kushagra

Kushagra Bindal

Feb 10, 2020, 8:40:24 PM
to rabbitm...@googlegroups.com
Please find the current screenshot of the rabbitmq_top output.

image.png

Since the load (number of queues) is not that high right now, I am going to reset RabbitMQ, run the load again, and share screenshots of the rabbitmq_top output.
--
Regards,
Kushagra

Kushagra Bindal

Feb 11, 2020, 3:15:14 AM
to rabbitm...@googlegroups.com
Hi Luke,

The final screenshots and details of the fresh run are attached to this email. There were multiple screenshots, so I have created one document per node (three in total).

Could you please take a look and help me identify the root cause?
--
Regards,
Kushagra
node2.docx
node1.docx
node3.docx

Luke Bakken

Feb 11, 2020, 10:40:32 AM
to rabbitmq-users
Hi Kushagra,

I requested the output of the following command to check for the sbwt flag - you ran the top command instead:

ps -ef | fgrep beam.smp

It looks as though your import process must enqueue a lot of data, because the top process by reductions is the queue stats collector. If CPU usage returns to normal after a period of time, I'm not sure there is actually a problem.

Kushagra Bindal

Feb 11, 2020, 10:58:09 AM
to rabbitm...@googlegroups.com
Hi Luke,

Please find the output of ps -ef | fgrep beam.smp.


rabbitmq   213    16 99 14:00 ?        03:32:20 /usr/lib64/erlang/erts-10.5.6/bin/beam.smp -W w -A 128 -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -K true -sbwt none -B i -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.8.2/ebin  -noshell -noinput -s rabbit boot -sname rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1 -boot start_sasl -config /etc/rabbitmq/rabbitmq -kernel inet_default_connect_options [{nodelay,true}] -rabbit tcp_listeners [{"auto",5672}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit lager_log_root "/data/rabbitmq/logs" -rabbit lager_default_file "/data/rabbitmq/logs/rab...@keng03-dev01-ins01-dmq83-app-1581412027-1.log" -rabbit lager_upgrade_file "/data/rabbitmq/logs/rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1_upgrade.log" -rabbit feature_flags_file "/data/rabbitmq/data/mnesia/rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1-feature_flags" -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/plugins:/usr/lib/rabbitmq/lib/rabbitmq_server-3.8.2/plugins" -rabbit plugins_expand_dir "/data/rabbitmq/data/mnesia/rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/data/rabbitmq/data/mnesia/rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1" -ra data_dir "/data/rabbitmq/data/mnesia/rabbit@keng03-dev01-ins01-dmq83-app-1581412027-1/quorum" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672 start --
root      6524  6506  0 15:52 pts/0    00:00:00 grep -F --color=auto beam.smp

Actually, CPU returns to normal once the idle queues are deleted by their TTL. As long as the queues are present, CPU stays at ~500% on the 8-core machine. Since the system is idle, this may not look like a big number, but under a real execution state the system might become unresponsive as well.

I would appreciate your thoughts and suggestions on what I should try to get this resolved.

--
Regards,
Kushagra

Luke Bakken

Feb 11, 2020, 11:06:07 AM
to rabbitmq-users
Hello,

I don't know why you see this CPU load.

After your queues expire due to TTL, if you create a new, empty queue, I'm assuming the load does not re-appear, correct?

Kushagra Bindal

Feb 11, 2020, 11:14:51 AM
to rabbitm...@googlegroups.com
Hi Luke,

These are empty queues only, which I imported into my GREEN deployment from the BLUE deployment.

My concern is that merely importing the metadata JSON into the system and leaving it idle still leads to this CPU consumption.

As I mentioned earlier, the TTL is 6 hours for most of the queues, so once the queue count drops from ~13000 to ~1300, CPU utilization drops to approximately 150%. But it starts growing again as new queues are created.

This concerns us, as it could lead to unexpected behavior under a normal load situation.

Please advise.



--
Regards,
Kushagra

Kushagra Bindal

Feb 11, 2020, 11:25:58 AM
to rabbitm...@googlegroups.com
Hi Luke,

In addition, my existing 3.6.10 installation handles this load normally; surprisingly, its CPU consumption is only about 30% on a 4-core machine.
--
Regards,
Kushagra

Kushagra Bindal

Feb 12, 2020, 9:54:23 AM
to rabbitm...@googlegroups.com
Hello,

I tried executing the load on the 8-CPU, 3-node cluster (13000 queues across 1250 vhosts) after setting the following in /etc/rabbitmq/rabbitmq-env.conf:

RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none +sbwtdcpu none +sbwtdio none"

Please let me know if there is any side effect of setting these additional flags along with +sbwt none.

Since my RabbitMQ application runs in Docker, after setting the above values I restarted the container and ran the rabbitmqadmin command to import the load.
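
The import itself is a single command of this form (the file path here is just an example):

rabbitmqadmin import /tmp/definitions.json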

After doing so, CPU usage comes down from a peak of 60% to about 1% in less than a minute.

Please confirm whether the above settings have any side effects.

--
Regards,
Kushagra

Luke Bakken

Feb 12, 2020, 11:33:38 AM
to rabbitmq-users
Hi Kushagra,

I personally was not aware of the sbwtdcpu and sbwtdio settings until you tried them out. I suspect they were added in a recent Erlang release.

This article confirms what you found - https://stressgrid.com/blog/beam_cpu_usage/

I will discuss disabling busy waiting by default with the team. At the very least, we'll update the documentation.

Nice work!
Luke


Kushagra Bindal

Feb 12, 2020, 11:56:45 AM
to rabbitm...@googlegroups.com
Hi Luke,

Thanks for your response.

Please keep me posted on the outcome of your discussion with the team. :)


--
Regards,
Kushagra

Luke Bakken

Feb 12, 2020, 2:11:11 PM
to rabbitmq-users
Hello,

Please subscribe to this GitHub issue for updates - https://github.com/rabbitmq/rabbitmq-server/issues/2243

Kushagra Bindal

Mar 6, 2020, 2:48:17 AM
to rabbitm...@googlegroups.com
Hi Luke,

Recently we observed one more odd behavior. After applying all three settings, Node 1 and Node 2 come down to a normal state, i.e. 2-3% CPU. But on Node 3, CPU consumption is still consistently very high, at roughly 30% per core.

[root@keng03-dev01-ins01-dmq79-app-1583466377-3 tmp]# top | grep beam.smp
14907 sssd      20   0 6385836 960644   4468 S 255.6  3.1 331:19.67 beam.smp
14907 sssd      20   0 6388836 962464   4468 S 288.7  3.1 331:28.36 beam.smp
14907 sssd      20   0 6378728 957780   4468 S 291.7  3.1 331:37.14 beam.smp
14907 sssd      20   0 6393104 956388   4468 S 286.4  3.1 331:45.76 beam.smp
14907 sssd      20   0 6377052 958244   4468 S 289.4  3.1 331:54.47 beam.smp
14907 sssd      20   0 6390224 964512   4468 S 292.0  3.1 332:03.26 beam.smp
14907 sssd      20   0 6395060 973204   4468 S 293.7  3.2 332:12.10 beam.smp
14907 sssd      20   0 6381216 966288   4468 S 294.7  3.1 332:20.97 beam.smp
14907 sssd      20   0 6382168 966124   4468 S 286.0  3.1 332:29.55 beam.smp
14907 sssd      20   0 6380856 966560   4468 S 296.3  3.1 332:38.47 beam.smp
14907 sssd      20   0 6386052 972088   4468 S 287.7  3.2 332:47.13 beam.smp
14907 sssd      20   0 6394548 978624   4468 S 295.3  3.2 332:56.02 beam.smp
14907 sssd      20   0 6378728 969816   4468 S 292.7  3.2 333:04.83 beam.smp
14907 sssd      20   0 6377860 969028   4468 S 295.0  3.2 333:13.68 beam.smp
14907 sssd      20   0 6386272 976176   4468 S 292.4  3.2 333:22.48 beam.smp
14907 sssd      20   0 6386052 977772   4468 S 294.4  3.2 333:31.34 beam.smp

Did you observe any such behavior in your testing? If the above properties were not being applied correctly, the same behavior should appear on all 3 nodes, yet the high CPU spikes occur only on Node 3.

Please help and advise.

Let me know if any additional details are required from my side.

--
Regards,
Kushagra

Luke Bakken

Mar 6, 2020, 10:15:55 AM
to rabbitmq-users
Hello,

Are you certain the settings are applied on Node 3? You can't tell from the top output you provided. I gave instructions earlier in this discussion on which command to run:

ps -ef | fgrep beam.smp

Are all queue masters on Node 3? Do most connections go to this node?
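
One quick way to check the distribution (a sketch, assuming the management plugin is enabled; substitute your own credentials and host) is to ask the HTTP API which node each queue's master is on:

curl -s -u guest:guest 'http://localhost:15672/api/queues?columns=name,node'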

Kushagra Bindal

Mar 6, 2020, 12:05:25 PM
to rabbitm...@googlegroups.com
I found the difference. Earlier, queue masters were only present on Node 1 and Node 2. I then corrected the value of queue_master_locator in rabbitmq.config, and now each node hosts a roughly equal number of queues, which has brought the CPU back down.
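
For reference, the entry is of this form in the classic rabbitmq.config (Erlang-term) format, shown here with min-masters, which places each new queue's master on the node currently hosting the fewest masters:

[{rabbit, [{queue_master_locator, <<"min-masters">>}]}].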

--
Regards,
Kushagra
