Supervisor had child channel_sup started ... with reason shutdown in context shutdown_error

7,617 views

Raul Kaubi

Sep 27, 2018, 10:41:22 AM
to rabbitmq-users
Hi

RabbitMQ 3.7.8
Erlang 21.0.9
CentOS 7

I get the following messages in the RabbitMQ log file:
2018-09-27 17:03:16.972 [error] <0.9092.150> Supervisor {<0.9092.150>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.9110.150>, <<"<rab...@XXXXXX.3.9110.150>">>) at undefined exit with reason shutdown in context shutdown_error

When googling, couldn't find any information about this.
Should I be worried? Or does anyone have ideas about why this happens or what the problem is?

Thanks.

Regards
Raul

Michael Klishin

Sep 27, 2018, 5:48:29 PM
to rabbitm...@googlegroups.com
It sounds like a shutdown of one of the channels on a direct connection at the wrong time
(e.g. a closing channel also ran into an exception). I don't think you should be worried; that's connection-local state.
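For readers trying to decode the report itself, here is a small sketch (the helper name and regex are illustrative, not part of any RabbitMQ tooling) that splits the supervisor line into its parts. The first argument to `amqp_channel_sup:start_link` is what identifies the connection type: `direct` means an Erlang-client direct (intra-node) connection, whereas an ordinary client connection would say `network`.

```python
import re

# Illustrative pattern matching the supervisor reports quoted in this thread.
LOG_RE = re.compile(
    r"Supervisor \{(?P<sup_pid><[\d.]+>),amqp_channel_sup_sup\} had child channel_sup "
    r"started with amqp_channel_sup:start_link\((?P<conn_type>\w+), (?P<conn_pid><[\d.]+>).*"
    r"exit with reason (?P<reason>\w+) in context (?P<context>\w+)"
)

def parse_supervisor_error(line: str) -> dict:
    """Return the interesting fields of the report, or {} if the line doesn't match."""
    m = LOG_RE.search(line)
    return m.groupdict() if m else {}

sample = ("Supervisor {<0.9092.150>,amqp_channel_sup_sup} had child channel_sup "
          "started with amqp_channel_sup:start_link(direct, <0.9110.150>, "
          "<<\"<rabbit@host.3.9110.150>\">>) at undefined "
          "exit with reason shutdown in context shutdown_error")
print(parse_supervisor_error(sample)["conn_type"])  # -> direct
```

The `conn_type` field is the quickest way to tell, when scanning a busy log, whether a report concerns a client connection or something running inside the node itself.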

--
You received this message because you are subscribed to the Google Groups "rabbitmq-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rabbitmq-user...@googlegroups.com.
To post to this group, send email to rabbitm...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


--
MK

Staff Software Engineer, Pivotal/RabbitMQ

Raul Kaubi

Sep 28, 2018, 9:28:59 AM
to rabbitm...@googlegroups.com
Hi

Quite annoying, actually. These messages appear every 3-5 minutes.
Is there something the applications that publish/consume messages have to do differently, or is it a RabbitMQ/Erlang problem?

Regards
Raul


Michael Klishin

Sep 28, 2018, 11:44:05 AM
to rabbitm...@googlegroups.com
There's not enough information to tell. It's a direct connection, so unless you use Erlang, it's not a channel used by your applications.

There can be all kinds of legitimate reasons for such exceptions that are not RabbitMQ or Erlang problems.

Luke Bakken

Sep 28, 2018, 12:00:00 PM
to rabbitmq-users
Hi Raul,

Are you using any protocols other than AMQP? Maybe MQTT?

Raul Kaubi

Oct 1, 2018, 7:26:29 AM
to rabbitmq-users
Hi

I have one application connected to this RabbitMQ cluster (publishers, consumers), and the connections are all AMQP 0-9-1 (RabbitMQ .NET client 5.1.0).

What does "unless you use Erlang" mean?

It's a direct connection so unless you use Erlang, it's not a channel used by your applications.


Regards
Raul

Michael Klishin

Oct 1, 2018, 9:33:07 AM
to rabbitm...@googlegroups.com
Unless you use the Erlang client (from an Erlang/Elixir/other BEAM-based app).


Michael Klishin

Oct 1, 2018, 9:34:29 AM
to rabbitm...@googlegroups.com
Luke's point is that some RabbitMQ plugins — namely Shovel and Federation — use the Erlang client and direct connections under the hood.

The .NET client does not and technically cannot open a direct connection, but a direct connection is what the error pretty clearly indicates. Can there be
unaccounted-for Shovels or Federation links in your environment?
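One way to check for dynamic shovels without guessing is to list the cluster's runtime parameters (dynamic shovels are stored as parameters with component `shovel`) via the management API's `/api/parameters` endpoint. A sketch under assumptions: the host, port, and credentials below are placeholders, and `find_dynamic_shovels`/`fetch_parameters` are hypothetical helper names. The filtering logic is demonstrated on a canned response:

```python
import json
from urllib.request import Request, urlopen

def find_dynamic_shovels(parameters: list) -> list:
    """Filter runtime-parameter documents down to dynamic shovel definitions."""
    return [p for p in parameters if p.get("component") == "shovel"]

def fetch_parameters(base_url="http://localhost:15672", auth_header=""):
    """Fetch all runtime parameters from the management API (placeholder host/auth)."""
    req = Request(base_url + "/api/parameters", headers={"Authorization": auth_header})
    with urlopen(req) as resp:
        return json.load(resp)

# Demonstrated on a canned response instead of a live broker:
params = [
    {"component": "federation-upstream", "vhost": "/", "name": "upstream-a"},
    {"component": "shovel", "vhost": "/", "name": "my-shovel"},
]
print([p["name"] for p in find_dynamic_shovels(params)])  # -> ['my-shovel']
```

`rabbitmqctl list_parameters -p <vhost>` per virtual host would surface the same information from the command line.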

Raul Kaubi

Oct 1, 2018, 10:06:00 AM
to rabbitm...@googlegroups.com
Hi

What do you mean by "unaccounted shovels or federation links"?

This is actually a new environment. I did import schema definitions, but other than that there should be no shovels or federation links defined. I can check this later.

Actually, these messages started to pop up in the logs after I connected my .NET application to the cluster; before that, there were no such messages in the RabbitMQ log file.

Regards
Raul

Sent from my iPhone

Michael Klishin

Oct 1, 2018, 11:15:33 AM
to rabbitm...@googlegroups.com
I mean literally "unaccounted shovels or federation links". Someone or something set up a Shovel or Federation policy
that you may or may not be aware of, possibly in a different virtual host from the one your .NET apps use.

Raul Kaubi

Oct 2, 2018, 1:22:43 AM
to rabbitmq-users
Hi

I only have the "/" virtual host.
And I checked: I do not have any shovels or federation links in use.

Regards
Raul

Michael Klishin

Oct 2, 2018, 9:16:05 AM
to rabbitm...@googlegroups.com
STOMP, AMQP 1.0 and MQTT (including their WebSocket counterparts) also use the Erlang client.

We don't guess on this list. Please post `rabbitmqctl environment` and `rabbitmq-plugins list` output.

Raul Kaubi

Oct 3, 2018, 12:52:47 AM
to rabbitmq-users
# rabbitmqctl environment
Application environment of node rabbit@dc1-rabbit1 ...
[{amqp10_client,[]},
 {amqp10_common,[]},
 {amqp_client,[{prefer_ipv6,false},{ssl_options,[]}]},
 {asn1,[]},
 {compiler,[]},
 {cowboy,[]},
 {cowlib,[]},
 {crypto,[{fips_mode,false},{rand_cache_size,896}]},
 {eldap,[]},
 {goldrush,[]},
 {inets,[]},
 {jsx,[]},
 {kernel,
     [{inet_default_connect_options,[{nodelay,true}]},
      {inet_dist_listen_max,25672},
      {inet_dist_listen_min,25672},
      {logger,
          [{handler,default,logger_std_h,
               #{config => #{type => standard_io},
                 formatter =>
                     {logger_formatter,
                         #{legacy_header => true,single_line => false}}}}]},
      {logger_level,notice},
      {logger_sasl_compatible,false}]},
 {lager,
     [{async_threshold,20},
      {async_threshold_window,5},
      {colored,false},
      {colors,
          [{debug,"\e[0;38m"},
           {info,"\e[1;37m"},
           {notice,"\e[1;36m"},
           {warning,"\e[1;33m"},
           {error,"\e[1;31m"},
           {critical,"\e[1;35m"},
           {alert,"\e[1;44m"},
           {emergency,"\e[1;41m"}]},
      {crash_log,"log/crash.log"},
      {crash_log_count,5},
      {crash_log_date,"$D0"},
      {crash_log_msg_size,65536},
      {crash_log_rotator,lager_rotator_default},
      {crash_log_size,10485760},
      {error_logger_format_raw,true},
      {error_logger_hwm,50},
      {error_logger_hwm_original,50},
      {error_logger_redirect,true},
      {extra_sinks,
          [{error_logger_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_channel_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_connection_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_mirroring_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_queue_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_federation_lager_event,
               [{handlers,[{lager_forwarder_backend,[lager_event,inherit]}]},
                {rabbit_handlers,
                    [{lager_forwarder_backend,[lager_event,inherit]}]}]},
           {rabbit_log_upgrade_lager_event,
               [{handlers,
                    [{lager_file_backend,
                         [{date,[]},
                          {file,
                              "/var/log/rabbitmq/rabbit@dc1-rabbit1_upgrade.log"},
                          {formatter_config,
                              [date," ",time," ",color,"[",severity,"] ",
                               {pid,[]},
                               " ",message,"\n"]},
                          {level,info},
                          {size,0}]}]},
                {rabbit_handlers,
                    [{lager_file_backend,
                         [{date,[]},
                          {file,
                              "/var/log/rabbitmq/rabbit@dc1-rabbit1_upgrade.log"},
                          {formatter_config,
                              [date," ",time," ",color,"[",severity,"] ",
                               {pid,[]},
                               " ",message,"\n"]},
                          {level,info},
                          {size,0}]}]}]}]},
      {handlers,
          [{lager_file_backend,
               [{date,[]},
                {file,"/var/log/rabbitmq/rab...@dc1-rabbit1.log"},
                {formatter_config,
                    [date," ",time," ",color,"[",severity,"] ",
                     {pid,[]},
                     " ",message,"\n"]},
                {level,info},
                {size,0}]}]},
      {log_root,"/var/log/rabbitmq"},
      {rabbit_handlers,
          [{lager_file_backend,
               [{date,[]},
                {file,"/var/log/rabbitmq/rab...@dc1-rabbit1.log"},
                {formatter_config,
                    [date," ",time," ",color,"[",severity,"] ",
                     {pid,[]},
                     " ",message,"\n"]},
                {level,info},
                {size,0}]}]}]},
 {mnesia,[{dir,"/var/lib/rabbitmq/mnesia/rabbit@dc1-rabbit1"}]},
 {os_mon,
     [{start_cpu_sup,false},
      {start_disksup,false},
      {start_memsup,false},
      {start_os_sup,false}]},
 {public_key,[]},
 {rabbit,
     [{auth_backends,[rabbit_auth_backend_internal,rabbit_auth_backend_ldap]},
      {auth_mechanisms,['PLAIN','AMQPLAIN']},
      {autocluster,
          [{peer_discovery_backend,rabbit_peer_discovery_classic_config}]},
      {background_gc_enabled,false},
      {background_gc_target_interval,60000},
      {backing_queue_module,rabbit_priority_queue},
      {channel_max,2047},
      {channel_operation_timeout,15000},
      {cluster_keepalive_interval,10000},
      {cluster_nodes,{[],disc}},
      {cluster_partition_handling,ignore},
      {collect_statistics,fine},
      {collect_statistics_interval,5000},
      {config_entry_decoder,
          [{cipher,aes_cbc256},
           {hash,sha512},
           {iterations,1000},
           {passphrase,undefined}]},
      {connection_max,infinity},
      {credit_flow_default_credit,{400,200}},
      {default_consumer_prefetch,{false,0}},
      {default_permissions,[<<".*">>,<<".*">>,<<".*">>]},
      {default_user,<<"guest">>},
      {default_user_tags,[administrator]},
      {default_vhost,<<"/">>},
      {delegate_count,16},
      {disk_free_limit,{mem_relative,1.0}},
      {disk_monitor_failure_retries,10},
      {disk_monitor_failure_retry_interval,120000},
      {enabled_plugins_file,"/etc/rabbitmq/enabled_plugins"},
      {fhc_read_buffering,false},
      {fhc_write_buffering,true},
      {frame_max,131072},
      {halt_on_upgrade_failure,true},
      {handshake_timeout,20000},
      {heartbeat,60},
      {hipe_compile,false},
      {hipe_modules,
          [rabbit_reader,rabbit_channel,gen_server2,rabbit_exchange,
           rabbit_command_assembler,rabbit_framing_amqp_0_9_1,rabbit_basic,
           rabbit_event,lists,queue,priority_queue,rabbit_router,rabbit_trace,
           rabbit_misc,rabbit_binary_parser,rabbit_exchange_type_direct,
           rabbit_guid,rabbit_net,rabbit_amqqueue_process,
           rabbit_variable_queue,rabbit_binary_generator,rabbit_writer,
           delegate,gb_sets,lqueue,sets,orddict,rabbit_amqqueue,
           rabbit_limiter,gb_trees,rabbit_queue_index,
           rabbit_exchange_decorator,gen,dict,ordsets,file_handle_cache,
           rabbit_msg_store,array,rabbit_msg_store_ets_index,rabbit_msg_file,
           rabbit_exchange_type_fanout,rabbit_exchange_type_topic,mnesia,
           mnesia_lib,rpc,mnesia_tm,qlc,sofs,proplists,credit_flow,pmon,
           ssl_connection,tls_connection,ssl_record,tls_record,gen_fsm,ssl]},
      {lager_default_file,"/var/log/rabbitmq/rab...@dc1-rabbit1.log"},
      {lager_extra_sinks,
          [rabbit_log_lager_event,rabbit_log_channel_lager_event,
           rabbit_log_connection_lager_event,rabbit_log_mirroring_lager_event,
           rabbit_log_queue_lager_event,rabbit_log_federation_lager_event,
           rabbit_log_upgrade_lager_event]},
      {lager_log_root,"/var/log/rabbitmq"},
      {lager_upgrade_file,"/var/log/rabbitmq/rabbit@dc1-rabbit1_upgrade.log"},
      {lazy_queue_explicit_gc_run_operation_threshold,1000},
      {log,
          [{file,[{file,"/var/log/rabbitmq/rab...@dc1-rabbit1.log"}]},
           {categories,
               [{upgrade,
                    [{file,
                         "/var/log/rabbitmq/rabbit@dc1-rabbit1_upgrade.log"}]}]}]},
      {loopback_users,[<<"guest">>]},
      {memory_monitor_interval,2500},
      {mirroring_flow_control,true},
      {mirroring_sync_batch_size,4096},
      {mnesia_table_loading_retry_limit,10},
      {mnesia_table_loading_retry_timeout,30000},
      {msg_store_credit_disc_bound,{4000,800}},
      {msg_store_file_size_limit,16777216},
      {msg_store_index_module,rabbit_msg_store_ets_index},
      {msg_store_io_batch_size,4096},
      {num_ssl_acceptors,10},
      {num_tcp_acceptors,10},
      {password_hashing_module,rabbit_password_hashing_sha256},
      {plugins_dir,
          "/usr/lib/rabbitmq/plugins:/usr/lib/rabbitmq/lib/rabbitmq_server-3.7.8/plugins"},
      {plugins_expand_dir,
          "/var/lib/rabbitmq/mnesia/rabbit@dc1-rabbit1-plugins-expand"},
      {proxy_protocol,false},
      {queue_explicit_gc_run_operation_threshold,1000},
      {queue_index_embed_msgs_below,4096},
      {queue_index_max_journal_entries,32768},
      {reverse_dns_lookups,true},
      {server_properties,[]},
      {ssl_allow_poodle_attack,false},
      {ssl_apps,[asn1,crypto,public_key,ssl]},
      {ssl_cert_login_from,distinguished_name},
      {ssl_handshake_timeout,5000},
      {ssl_listeners,[]},
      {ssl_options,[]},
      {tcp_listen_options,
          [{backlog,128},
           {nodelay,true},
           {linger,{true,0}},
           {exit_on_close,false}]},
      {tcp_listeners,[5672]},
      {trace_vhosts,[]},
      {vhost_restart_strategy,continue},
      {vm_memory_calculation_strategy,rss},
      {vm_memory_high_watermark,0.6},
      {vm_memory_high_watermark_paging_ratio,0.2}]},
 {rabbit_common,[]},
 {rabbitmq_auth_backend_ldap,
     [{anon_auth,false},
      {dn_lookup_attribute,"userPrincipalName"},
      {dn_lookup_base,"DC=XXX,DC=YYY"},
      {dn_lookup_bind,as_user},
      {group_lookup_base,none},
      {idle_timeout,300000},
      {log,false},
      {other_bind,as_user},
      {pool_size,64},
      {port,389},
      {resource_access_query,
          {for,
              [{permission,configure,
                   {in_group,
                       "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY"}},
               {permission,write,
                   {for,
                       [{resource,queue,
                            {in_group,
                                "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY"}},
                        {resource,exchange,{constant,true}}]}},
               {permission,read,
                   {for,
                       [{resource,queue,
                            {'or',
                                [{in_group,
                                     "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY"},
                                 {match,
                                     {string,"${name}"},
                                     {string,"error"}}]}},
                        {resource,exchange,
                            {in_group,
                                "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY"}}]}}]}},
      {servers,["XXX.YYY"]},
      {ssl_options,[]},
      {tag_queries,
          [{administrator,
               {in_group,
                   "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY"}},
           {management,
               {in_group,
                   "CN=et_rabbitmq_management,OU=CCC,OU=ZZZ,OU=Groups,DC=XXX,DC=YYY"}},
           {monitoring,
               {in_group,
                   "CN=et_rabbitmq_management,OU=CCC,OU=ZZZ,OU=Groups,DC=XXX,DC=YYY"}}]},
      {timeout,infinity},
      {topic_access_query,{constant,true}},
      {use_ssl,false},
      {use_starttls,false},
      {user_dn_pattern,"${username}@XXX.YYY"},
      {vhost_access_query,
          {'or',
              [{in_group,
                   "CN=Domain Admins,OU=Admin,OU=Groups,DC=XXX,DC=YYY",
                   "member"},
               {in_group,
                   "CN=et_rabbitmq_management,OU=CCC,OU=ZZZ,OU=Groups,DC=XXX,DC=YYY",
                   "member"}]}}]},
 {rabbitmq_management,
     [{cors_allow_origins,[]},
      {cors_max_age,1800},
      {http_log_dir,none},
      {listener,[{port,15672}]},
      {load_definitions,none},
      {management_db_cache_multiplier,5},
      {process_stats_gc_timeout,300000},
      {stats_event_max_backlog,250}]},
 {rabbitmq_management_agent,
     [{rates_mode,basic},
      {sample_retention_policies,
          [{global,[{605,5},{3660,60},{29400,600},{86400,1800}]},
           {basic,[{605,5},{3600,60}]},
           {detailed,[{605,5}]}]}]},
 {rabbitmq_shovel,
     [{defaults,
          [{prefetch_count,1000},
           {ack_mode,on_confirm},
           {publish_fields,[]},
           {publish_properties,[]},
           {reconnect_delay,5}]},
      {shovels,[]}]},
 {rabbitmq_shovel_management,[]},
 {rabbitmq_tracing,
     [{directory,"/var/tmp/rabbitmq-tracing"},
      {password,<<"guest">>},
      {username,<<"guest">>}]},
 {rabbitmq_web_dispatch,[]},
 {ranch,[]},
 {ranch_proxy_protocol,[{proxy_protocol_timeout,55000},{ssl_accept_opts,[]}]},
 {recon,[]},
 {sasl,[{errlog_type,error},{sasl_error_logger,false}]},
 {ssl,[]},
 {stdlib,[]},
 {syntax_tools,[]},
 {syslog,[{syslog_error_logger,false}]},
 {xmerl,[]}]
 
# rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status: * = running on rabbit@dc1-rabbit1
 |/
[  ] rabbitmq_amqp1_0                  3.7.8
[  ] rabbitmq_auth_backend_cache       3.7.8
[  ] rabbitmq_auth_backend_http        3.7.8
[E*] rabbitmq_auth_backend_ldap        3.7.8
[  ] rabbitmq_auth_mechanism_ssl       3.7.8
[  ] rabbitmq_consistent_hash_exchange 3.7.8
[  ] rabbitmq_event_exchange           3.7.8
[  ] rabbitmq_federation               3.7.8
[  ] rabbitmq_federation_management    3.7.8
[  ] rabbitmq_jms_topic_exchange       3.7.8
[E*] rabbitmq_management               3.7.8
[e*] rabbitmq_management_agent         3.7.8
[  ] rabbitmq_mqtt                     3.7.8
[  ] rabbitmq_peer_discovery_aws       3.7.8
[  ] rabbitmq_peer_discovery_common    3.7.8
[  ] rabbitmq_peer_discovery_consul    3.7.8
[  ] rabbitmq_peer_discovery_etcd      3.7.8
[  ] rabbitmq_peer_discovery_k8s       3.7.8
[  ] rabbitmq_random_exchange          3.7.8
[  ] rabbitmq_recent_history_exchange  3.7.8
[  ] rabbitmq_sharding                 3.7.8
[E*] rabbitmq_shovel                   3.7.8
[E*] rabbitmq_shovel_management        3.7.8
[  ] rabbitmq_stomp                    3.7.8
[  ] rabbitmq_top                      3.7.8
[E*] rabbitmq_tracing                  3.7.8
[  ] rabbitmq_trust_store              3.7.8
[e*] rabbitmq_web_dispatch             3.7.8
[  ] rabbitmq_web_mqtt                 3.7.8
[  ] rabbitmq_web_mqtt_examples        3.7.8
[  ] rabbitmq_web_stomp                3.7.8
[  ] rabbitmq_web_stomp_examples       3.7.8

Michael Klishin

Oct 3, 2018, 12:47:47 PM
to rabbitm...@googlegroups.com
So you claim that Shovel isn't used, but rabbitmq_shovel is enabled.

Raul Kaubi

Oct 3, 2018, 1:17:13 PM
to rabbitm...@googlegroups.com
The Shovel plugin is enabled, but I have not created any shovels, static or dynamic.

Raul

Sent from my iPhone

Michael Klishin

Oct 3, 2018, 1:27:36 PM
to rabbitm...@googlegroups.com
I'm sorry, but I don't believe in coincidences: log messages about a direct connection, and Shovel is enabled but supposedly not used.
Well, if it's not used, then consider disabling it?

Raul Kaubi

Oct 4, 2018, 5:23:33 AM
to rabbitmq-users
It was enabled just in case.

So I disabled it:
# rabbitmq-plugins disable rabbitmq_shovel rabbitmq_shovel_management
The following plugins have been configured:
  rabbitmq_auth_backend_ldap
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_tracing
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@dc1-rabbit1...
The following plugins have been disabled:
  rabbitmq_shovel
  rabbitmq_shovel_management
stopped 2 plugins. 
[  ] rabbitmq_shovel                   3.7.8
[  ] rabbitmq_shovel_management        3.7.8

[  ] rabbitmq_stomp                    3.7.8
[  ] rabbitmq_top                      3.7.8
[E*] rabbitmq_tracing                  3.7.8
[  ] rabbitmq_trust_store              3.7.8
[e*] rabbitmq_web_dispatch             3.7.8
[  ] rabbitmq_web_mqtt                 3.7.8
[  ] rabbitmq_web_mqtt_examples        3.7.8
[  ] rabbitmq_web_stomp                3.7.8
[  ] rabbitmq_web_stomp_examples       3.7.8

But I still see these messages in the log file.

2018-10-04 12:22:59.347 [error] <0.27208.1523> Supervisor {<0.27208.1523>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.27218.1523>, <<"<rab...@dc1-rabbit1.3.27218.1523>">>) at undefined exit with reason shutdown in context shutdown_error

Raul

ofer peretz

Oct 23, 2018, 3:40:42 AM
to rabbitmq-users
I'm having the same errors on my RabbitMQ cluster (4 nodes, 1 master) with NO Shovel plugin enabled.
I'm using nginx to connect between a Tornado server and RabbitMQ queues.

I didn't understand whether it's a critical issue or just a warning when a channel doesn't close properly.

2018-10-04 12:22:59.347 [error] <0.27208.1523> Supervisor {<0.27208.1523>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.27218.1523>, <<"<rabbit@dc1-rabbit1.3.27218.1523>">>) at undefined exit with reason shutdown in context shutdown_error

Michael Klishin

Oct 25, 2018, 2:42:54 PM
to rabbitm...@googlegroups.com
Please start new threads for new questions and post the actual messages from the log.

RabbitMQ logs warnings at the warning level. Not all errors are critical, but if something is logged
as a warning, it usually is one.


Raul Kaubi

Nov 9, 2018, 2:14:22 AM
to rabbitmq-users
So I have disabled Shovel, but these messages keep coming, only from the primary node.
They flood my log file every few minutes.
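To quantify flooding like this before reporting it, a sketch that counts the supervisor reports per minute in a log file (the timestamp format is assumed to match the lines quoted in this thread; `count_per_minute` is an illustrative helper, not RabbitMQ tooling):

```python
import re
from collections import Counter

# Match lines like "2018-10-04 12:22:59.347 [error] ... amqp_channel_sup_sup ...
# in context shutdown_error" and capture the timestamp down to the minute.
PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}).*amqp_channel_sup_sup.*shutdown_error"
)

def count_per_minute(lines):
    """Tally matching supervisor reports, keyed by minute."""
    counts = Counter()
    for line in lines:
        m = PATTERN.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample_log = [
    "2018-10-04 12:22:59.347 [error] <0.27208.1523> Supervisor {<0.27208.1523>,"
    "amqp_channel_sup_sup} had child channel_sup started ... in context shutdown_error",
    "2018-10-04 12:23:01.000 [info] <0.1.0> unrelated line",
    "2018-10-04 12:26:12.002 [error] <0.999.1> Supervisor {<0.999.1>,"
    "amqp_channel_sup_sup} had child channel_sup started ... in context shutdown_error",
]
print(dict(count_per_minute(sample_log)))
# -> {'2018-10-04 12:22': 1, '2018-10-04 12:26': 1}
```

A histogram like this also makes it easy to see whether the reports correlate with some periodic activity (stats collection, tracing, a reconnecting client) on the affected node.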

Regards
Raul

Michael Klishin

Nov 9, 2018, 8:42:11 AM
to rabbitm...@googlegroups.com
What is that "primary node"?

Assuming the list of enabled plugins is the same as above minus Shovel, only rabbitmq_tracing uses the Erlang client [1].

Raul Kaubi

Nov 9, 2018, 4:53:39 PM
to rabbitmq-users
Well, the node that is not a mirror.

By the way, I disabled tracing as well and restarted the rabbitmq-server service, but these messages still appear (3.7.8 and Erlang 21.0.9):
[  ] rabbitmq_tracing                  3.7.8

[  ] rabbitmq_trust_store              3.7.8
[e*] rabbitmq_web_dispatch             3.7.8
[  ] rabbitmq_web_mqtt                 3.7.8
[  ] rabbitmq_web_mqtt_examples        3.7.8
[  ] rabbitmq_web_stomp                3.7.8
[  ] rabbitmq_web_stomp_examples       3.7.8


[error] <0.14590.35> Supervisor {<0.14590.35>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.14584.35>, <<"<rab...@dc1-rabbit1.2.14584.35>">>) at undefined exit with reason noproc in context shutdown_error

Regards
Raul


Vitaliy Zhiltsov

Feb 5, 2019, 3:54:02 AM
to rabbitmq-users
Hi, Raul.

Do you have any progress with your question/problem? Have you solved it?

On Saturday, November 10, 2018 at 4:53:39 UTC+7, Raul Kaubi wrote:
[error] <0.14590.35> Supervisor {<0.14590.35>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.14584.35>, <<"<rabbit@dc1-rabbit1.2.14584.35>">>) at undefined exit with reason noproc in context shutdown_error

Raul Kaubi

Feb 5, 2019, 5:31:40 AM
to rabbitm...@googlegroups.com
Hi

Still no luck; these messages are still present in the log file.

Regards
Raul


Karim Gillani

Jun 12, 2019, 11:28:51 AM
to rabbitmq-users
Did you solve this yet? I am getting this as well, with RabbitMQ in a high-availability setup. I am using Flask-SocketIO to send messages.

To be clear, I am getting a ton of these:

Supervisor {<0.3715.745>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.3744.745>, <<"<rab...@rabbitmq-cluster-1.rabbitmq-cluster.xxx.svc.cluster.local.2.3744.745>">>) at undefined exit with reason shutdown in context shutdown_error

Any help would be appreciated.

Raul Kaubi

unread,
Jun 12, 2019, 12:23:31 PM6/12/19
to rabbitm...@googlegroups.com
Hi

Nope, this problem still exists in my environment.
I am using RabbitMQ 3.7.13.

Raul

Sent from my iPhone

Brad Smith

unread,
Jun 17, 2019, 11:13:55 PM6/17/19
to rabbitmq-users
FWIW, I see this in 3.7.15:

2019-06-18 03:09:09.147 [error] <0.21361.520> Supervisor {<0.21361.520>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.20589.422>, <<"61.91.215.194:51918 -> 100.118.0.1:15675">>) at undefined exit with reason noproc in context shutdown_error
2019-06-18 03:09:09.344 [error] <0.17936.449> Channel error on connection <0.28785.528> (112.109.93.170:49704 -> 100.118.0.1:15675, vhost: '/', user: 'agent'), channel 2:
2019-06-18 03:09:09.490 [error] <0.1372.524> Supervisor {<0.1372.524>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.5921.534>, <<"36.152.32.148:59392 -> 100.118.0.1:15675">>) at undefined exit with reason shutdown in context shutdown_error
We have around 30K connections.

Brad

Vitaliy Zhiltsov

unread,
Jun 18, 2019, 12:25:54 AM6/18/19
to rabbitmq-users
Hi. We now use RabbitMQ 3.7.14 with about 44K devices that send information every day, and in ELK I see about 400 error messages per day: "at undefined exit with reason shutdown in context shutdown_error" and "at undefined exit with reason noproc in context shutdown_error".

(attached screenshots: rabbitmq_shutdown.PNG, rabbitmq_noproc.PNG)


We added the Prometheus plugin for RabbitMQ, and it shows that internally everything is fine. Since the last update we've extended our logging, so we now know the serial numbers of the devices, but it's still unclear, because sometimes the same device closes its connection correctly and sometimes it doesn't.
June 18th 2019, 10:41:54.527 <0.29703.621> closing MQTT connection <0.29703.621> (10.24.1.99:1474 -> 172.19.0.4:8883)

June 18th 2019, 10:41:54.527 <0.942.622> Supervisor {<0.942.622>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.32156.621>, <<"10.24.1.99:1474 -> 172.19.0.4:8883">>) at undefined exit with reason noproc in context shutdown_error
June 18th 2019, 10:41:48.052 <0.29703.621> accepting MQTT connection <0.29703.621> (10.24.1.99:1474 -> 172.19.0.4:8883)
The only thing I can say for sure is that the problem is with closing connections only, because messages are published to the queue correctly and our services read them without errors.

Maybe it depends on connection speed. We're continuing to investigate.
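For reference, a minimal sketch of counting these errors per day without ELK, assuming the default RabbitMQ log format shown in the excerpts above (the sample lines below are illustrative stand-ins for /var/log/rabbitmq/rabbit@<host>.log):

```python
from collections import Counter
from io import StringIO

# Illustrative sample standing in for the real log file; the timestamp
# layout is copied from the log lines quoted in this thread.
sample_log = StringIO(
    "2019-06-18 03:09:09.147 [error] <0.21361.520> Supervisor ... "
    "at undefined exit with reason noproc in context shutdown_error\n"
    "2019-06-18 03:09:09.490 [error] <0.1372.524> Supervisor ... "
    "at undefined exit with reason shutdown in context shutdown_error\n"
    "2019-06-19 08:00:00.000 [info] <0.1.0> accepting AMQP connection\n"
    "2019-06-19 09:15:30.123 [error] <0.99.1> Supervisor ... "
    "at undefined exit with reason shutdown in context shutdown_error\n"
)

# Group matching lines by the leading date field, e.g. '2019-06-18'.
per_day = Counter(
    line.split()[0]
    for line in sample_log
    if "in context shutdown_error" in line
)
print(dict(per_day))  # {'2019-06-18': 2, '2019-06-19': 1}
```

Pointing `sample_log` at the real file instead makes it easy to see whether the error rate correlates with device activity or deployment changes.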

Brad Smith

unread,
Jun 18, 2019, 9:25:27 AM6/18/19
to rabbitmq-users

Here is some more info. The rate at which we are creating and closing connections seems odd. I may be reading this wrong though.

Brad Smith

unread,
Jun 19, 2019, 12:18:02 PM6/19/19
to rabbitmq-users
OK, so I think we have figured it out; posting here in case others run into this issue. We are using HAProxy on the frontend and had our timeouts set too low. By dumping the packets we were able to see that HAProxy would connect and then send a TCP reset rather quickly. After removing the extra timeouts and keeping only the client, server, and connect timeouts, our connection count rose and we stopped getting those errors. Also, FWIW, the churn stats were giving us a signal, but we did not know how to interpret it correctly. I hope that helps someone else in this thread.

                timeout connect 5s
                timeout client 120s
                timeout server 120s
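To illustrate the difference between the clean close a well-behaved client does and the abrupt reset a proxy sends on timeout, here is a small loopback-only sketch (not tied to HAProxy or RabbitMQ): setting SO_LINGER with a zero timeout makes close() emit a TCP RST instead of an orderly FIN, which is roughly the teardown the broker's channel supervisors were seeing:

```python
import socket
import struct

def make_pair():
    # Loopback server/client pair standing in for proxy -> broker.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    return srv, cli, conn

# Graceful close: the peer reads EOF (b''), like a clean protocol shutdown.
srv, cli, conn = make_pair()
cli.close()
clean_result = conn.recv(16)
conn.close(); srv.close()

# Abrupt close: SO_LINGER (on, 0s) makes close() send a TCP RST,
# which is roughly what a proxy-side timeout reset looks like.
srv2, cli2, conn2 = make_pair()
cli2.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
cli2.close()
try:
    conn2.recv(16)
    reset_seen = False
except ConnectionResetError:
    reset_seen = True
conn2.close(); srv2.close()

print("clean close read:", clean_result, "| reset observed:", reset_seen)
```

A clean close surfaces as EOF and lets the broker tear channels down in order; the RST interrupts that teardown mid-flight, which matches the "exit with reason shutdown/noproc in context shutdown_error" pattern in this thread.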


Michael Klishin

unread,
Jun 30, 2019, 6:28:31 PM6/30/19
to rabbitmq-users
Thank you for reporting back to the list. FTR, the churn metrics are documented in [1].



Raul Kaubi

unread,
Nov 3, 2020, 5:49:06 AM11/3/20
to rabbitmq-users
Hi

CentOS Linux release 7.8.2003 (Core)
This error is still present in 3.8.9 (Erlang 23.1.1):

2020-11-03 12:39:53.815 [error] <0.12072.1680> Supervisor {<0.12072.1680>,amqp_channel_sup_sup} had child channel_sup started with amqp_channel_sup:start_link(direct, <0.11926.1680>, <<"<rabbit....1.2.11926.1680>">>) at undefined exit with reason shutdown in context shutdown_error


Regards
Raul