RabbitMQ drops connections when idle for too long


Ricky Gunawan

Dec 8, 2021, 8:32:25 PM
to rabbitmq-users
Thank you for providing a place for discussion; I'll get straight to the point. My RabbitMQ node logs the following errors after it has been running for a long time. I run both RabbitMQ and Celery in Docker.

18:07:50.222786+00:00 [erro] <0.183.0> ** Generic server aten_detector terminating
18:07:50.222786+00:00 [erro] <0.183.0> ** Last message in was poll
18:07:50.222786+00:00 [erro] <0.183.0> ** When Server state == {state,#Ref<0.224353195.995622914.145530>,5000,0.99,
18:07:50.222786+00:00 [erro] <0.183.0>                                #{},#{}}
18:07:50.222786+00:00 [erro] <0.183.0> ** Reason for termination ==
18:07:50.222786+00:00 [erro] <0.183.0> ** {{timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}},
18:07:50.222786+00:00 [erro] <0.183.0>     [{gen_server,call,2,[{file,"gen_server.erl"},{line,239}]},
18:07:50.222786+00:00 [erro] <0.183.0>      {aten_detector,handle_info,2,[{file,"src/aten_detector.erl"},{line,109}]},
18:07:50.222786+00:00 [erro] <0.183.0>      {gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,695}]},
18:07:50.222786+00:00 [erro] <0.183.0>      {gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,771}]},
18:07:50.222786+00:00 [erro] <0.183.0>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}
18:07:50.222786+00:00 [erro] <0.183.0>
18:09:09.923311+00:00 [erro] <0.183.0>   crasher:
18:09:09.923311+00:00 [erro] <0.183.0>     initial call: aten_detector:init/1
18:09:09.923311+00:00 [erro] <0.183.0>     pid: <0.183.0>
18:09:09.923311+00:00 [erro] <0.183.0>     registered_name: aten_detector
18:09:09.923311+00:00 [erro] <0.183.0>     exception exit: {timeout,
18:09:09.923311+00:00 [erro] <0.183.0>                         {gen_server,call,
18:09:09.923311+00:00 [erro] <0.183.0>                             [aten_sink,get_failure_probabilities]}}
18:09:09.923311+00:00 [erro] <0.183.0>       in function  gen_server:call/2 (gen_server.erl, line 239)
18:09:09.923311+00:00 [erro] <0.183.0>       in call from aten_detector:handle_info/2 (src/aten_detector.erl, line 109)
18:09:09.923311+00:00 [erro] <0.183.0>       in call from gen_server:try_dispatch/4 (gen_server.erl, line 695)
18:09:09.923311+00:00 [erro] <0.183.0>       in call from gen_server:handle_msg/6 (gen_server.erl, line 771)
18:09:09.923311+00:00 [erro] <0.183.0>     ancestors: [aten_sup,<0.179.0>]
18:09:09.923311+00:00 [erro] <0.183.0>     message_queue_len: 1
18:09:09.923311+00:00 [erro] <0.183.0>     messages: [poll]
18:09:09.923311+00:00 [erro] <0.183.0>     links: [<0.180.0>]
18:09:09.923311+00:00 [erro] <0.183.0>     dictionary: []
18:09:09.923311+00:00 [erro] <0.183.0>     trap_exit: false
18:09:09.923311+00:00 [erro] <0.183.0>     status: running
18:09:09.923311+00:00 [erro] <0.183.0>     heap_size: 6772
18:09:09.923311+00:00 [erro] <0.183.0>     stack_size: 29
18:09:09.923311+00:00 [erro] <0.183.0>     reductions: 405329
18:09:09.923311+00:00 [erro] <0.183.0>   neighbours:
18:09:09.923311+00:00 [erro] <0.183.0>
18:26:20.964562+00:00 [erro] <0.180.0>     supervisor: {local,aten_sup}
18:26:20.964562+00:00 [erro] <0.180.0>     errorContext: child_terminated
18:26:20.964562+00:00 [erro] <0.180.0>     reason: {timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}}
18:26:20.964562+00:00 [erro] <0.180.0>     offender: [{pid,<0.183.0>},
18:26:20.964562+00:00 [erro] <0.180.0>                {id,aten_detector},
18:26:20.964562+00:00 [erro] <0.180.0>                {mfargs,{aten_detector,start_link,[]}},
18:26:20.964562+00:00 [erro] <0.180.0>                {restart_type,permanent},
18:26:20.964562+00:00 [erro] <0.180.0>                {significant,false},
18:26:20.964562+00:00 [erro] <0.180.0>                {shutdown,5000},
18:26:20.964562+00:00 [erro] <0.180.0>                {child_type,worker}]
18:26:20.964562+00:00 [erro] <0.180.0>
19:01:31.692249+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
19:01:31.692249+00:00 [warn] <0.316.0>
19:19:00.524269+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
19:19:00.524269+00:00 [warn] <0.316.0>
19:42:31.425828+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
19:42:31.425828+00:00 [warn] <0.316.0>
20:18:26.933895+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
20:18:26.933895+00:00 [warn] <0.316.0>
20:37:09.572991+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
20:37:09.572991+00:00 [warn] <0.316.0>
20:59:26.459775+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
20:59:26.459775+00:00 [warn] <0.316.0>
21:33:26.913175+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
21:33:26.913175+00:00 [warn] <0.316.0>
21:36:21.959884+00:00 [warn] <0.554.0> epmd does not know us, re-registering rabbit at port 25672
21:47:46.865173+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
21:47:46.865173+00:00 [warn] <0.316.0>
21:47:49.285249+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
21:47:49.285249+00:00 [warn] <0.316.0>
21:51:02.781982+00:00 [erro] <0.14530.0> ** Generic server aten_detector terminating
21:51:02.781982+00:00 [erro] <0.14530.0> ** Last message in was poll
21:51:02.781982+00:00 [erro] <0.14530.0> ** When Server state == {state,#Ref<0.224353195.995622914.164059>,5000,0.99,
21:51:02.781982+00:00 [erro] <0.14530.0>                                #{},#{}}
21:51:02.781982+00:00 [erro] <0.14530.0> ** Reason for termination ==
21:51:02.781982+00:00 [erro] <0.14530.0> ** {{timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}},
21:51:02.781982+00:00 [erro] <0.14530.0>     [{gen_server,call,2,[{file,"gen_server.erl"},{line,239}]},
21:51:02.781982+00:00 [erro] <0.14530.0>      {aten_detector,handle_info,2,[{file,"src/aten_detector.erl"},{line,109}]},
21:51:02.781982+00:00 [erro] <0.14530.0>      {gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,695}]},
21:51:02.781982+00:00 [erro] <0.14530.0>      {gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,771}]},
21:51:02.781982+00:00 [erro] <0.14530.0>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}
21:51:02.781982+00:00 [erro] <0.14530.0>
21:53:47.342950+00:00 [erro] <0.14530.0>   crasher:
21:53:47.342950+00:00 [erro] <0.14530.0>     initial call: aten_detector:init/1
21:53:47.342950+00:00 [erro] <0.14530.0>     pid: <0.14530.0>
21:53:47.342950+00:00 [erro] <0.14530.0>     registered_name: aten_detector
21:53:47.342950+00:00 [erro] <0.14530.0>     exception exit: {timeout,
21:53:47.342950+00:00 [erro] <0.14530.0>                         {gen_server,call,
21:53:47.342950+00:00 [erro] <0.14530.0>                             [aten_sink,get_failure_probabilities]}}
21:53:47.342950+00:00 [erro] <0.14530.0>       in function  gen_server:call/2 (gen_server.erl, line 239)
21:53:47.342950+00:00 [erro] <0.14530.0>       in call from aten_detector:handle_info/2 (src/aten_detector.erl, line 109)
21:53:47.342950+00:00 [erro] <0.14530.0>       in call from gen_server:try_dispatch/4 (gen_server.erl, line 695)
21:53:47.342950+00:00 [erro] <0.14530.0>       in call from gen_server:handle_msg/6 (gen_server.erl, line 771)
21:53:47.342950+00:00 [erro] <0.14530.0>     ancestors: [aten_sup,<0.179.0>]
21:53:47.342950+00:00 [erro] <0.14530.0>     message_queue_len: 1
21:53:47.342950+00:00 [erro] <0.14530.0>     messages: [poll]
21:53:47.342950+00:00 [erro] <0.14530.0>     links: [<0.180.0>]
21:53:47.342950+00:00 [erro] <0.14530.0>     dictionary: []
21:53:47.342950+00:00 [erro] <0.14530.0>     trap_exit: false
21:53:47.342950+00:00 [erro] <0.14530.0>     status: running
21:53:47.342950+00:00 [erro] <0.14530.0>     heap_size: 6772
21:53:47.342950+00:00 [erro] <0.14530.0>     stack_size: 29
21:53:47.342950+00:00 [erro] <0.14530.0>     reductions: 12150
21:53:47.342950+00:00 [erro] <0.14530.0>   neighbours:
21:53:47.342950+00:00 [erro] <0.14530.0>
21:53:59.334280+00:00 [erro] <0.180.0>     supervisor: {local,aten_sup}
21:53:59.334280+00:00 [erro] <0.180.0>     errorContext: child_terminated
21:53:59.334280+00:00 [erro] <0.180.0>     reason: {timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}}
21:53:59.334280+00:00 [erro] <0.180.0>     offender: [{pid,<0.14530.0>},
21:53:59.334280+00:00 [erro] <0.180.0>                {id,aten_detector},
21:53:59.334280+00:00 [erro] <0.180.0>                {mfargs,{aten_detector,start_link,[]}},
21:53:59.334280+00:00 [erro] <0.180.0>                {restart_type,permanent},
21:53:59.334280+00:00 [erro] <0.180.0>                {significant,false},
21:53:59.334280+00:00 [erro] <0.180.0>                {shutdown,5000},
21:53:59.334280+00:00 [erro] <0.180.0>                {child_type,worker}]
21:53:59.334280+00:00 [erro] <0.180.0>
21:57:37.790205+00:00 [erro] <0.14639.0> ** Generic server aten_detector terminating
21:57:37.790205+00:00 [erro] <0.14639.0> ** Last message in was poll
21:57:37.790205+00:00 [erro] <0.14639.0> ** When Server state == {state,#Ref<0.224353195.996409345.209853>,5000,0.99,
21:57:37.790205+00:00 [erro] <0.14639.0>                                #{},#{}}
21:57:37.790205+00:00 [erro] <0.14639.0> ** Reason for termination ==
21:57:37.790205+00:00 [erro] <0.14639.0> ** {{timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}},
21:57:37.790205+00:00 [erro] <0.14639.0>     [{gen_server,call,2,[{file,"gen_server.erl"},{line,239}]},
21:57:37.790205+00:00 [erro] <0.14639.0>      {aten_detector,handle_info,2,[{file,"src/aten_detector.erl"},{line,109}]},
21:57:37.790205+00:00 [erro] <0.14639.0>      {gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,695}]},
21:57:37.790205+00:00 [erro] <0.14639.0>      {gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,771}]},
21:57:37.790205+00:00 [erro] <0.14639.0>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}
21:57:37.790205+00:00 [erro] <0.14639.0>
22:07:19.381706+00:00 [erro] <0.14639.0>   crasher:
22:07:19.381706+00:00 [erro] <0.14639.0>     initial call: aten_detector:init/1
22:07:19.381706+00:00 [erro] <0.14639.0>     pid: <0.14639.0>
22:07:19.381706+00:00 [erro] <0.14639.0>     registered_name: aten_detector
22:07:19.381706+00:00 [erro] <0.14639.0>     exception exit: {timeout,
22:07:19.381706+00:00 [erro] <0.14639.0>                         {gen_server,call,
22:07:19.381706+00:00 [erro] <0.14639.0>                             [aten_sink,get_failure_probabilities]}}
22:07:19.381706+00:00 [erro] <0.14639.0>       in function  gen_server:call/2 (gen_server.erl, line 239)
22:07:19.381706+00:00 [erro] <0.14639.0>       in call from aten_detector:handle_info/2 (src/aten_detector.erl, line 109)
22:07:19.381706+00:00 [erro] <0.14639.0>       in call from gen_server:try_dispatch/4 (gen_server.erl, line 695)
22:07:19.381706+00:00 [erro] <0.14639.0>       in call from gen_server:handle_msg/6 (gen_server.erl, line 771)
22:07:19.381706+00:00 [erro] <0.14639.0>     ancestors: [aten_sup,<0.179.0>]
22:07:19.381706+00:00 [erro] <0.14639.0>     message_queue_len: 1
22:07:19.381706+00:00 [erro] <0.14639.0>     messages: [poll]
22:07:19.381706+00:00 [erro] <0.14639.0>     links: [<0.180.0>]
22:07:19.381706+00:00 [erro] <0.14639.0>     dictionary: []
22:07:19.381706+00:00 [erro] <0.14639.0>     trap_exit: false
22:07:19.381706+00:00 [erro] <0.14639.0>     status: running
22:07:19.381706+00:00 [erro] <0.14639.0>     heap_size: 4185
22:07:19.381706+00:00 [erro] <0.14639.0>     stack_size: 29
22:07:19.381706+00:00 [erro] <0.14639.0>     reductions: 11556
22:07:19.381706+00:00 [erro] <0.14639.0>   neighbours:
22:07:19.381706+00:00 [erro] <0.14639.0>
22:27:07.422842+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:27:07.422842+00:00 [warn] <0.316.0>
22:28:55.698130+00:00 [warn] <0.554.0> epmd does not know us, re-registering rabbit at port 25672
22:27:39.935869+00:00 [erro] <0.180.0>     supervisor: {local,aten_sup}
22:27:39.935869+00:00 [erro] <0.180.0>     errorContext: child_terminated
22:27:39.935869+00:00 [erro] <0.180.0>     reason: {timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}}
22:27:39.935869+00:00 [erro] <0.180.0>     offender: [{pid,<0.14639.0>},
22:27:39.935869+00:00 [erro] <0.180.0>                {id,aten_detector},
22:27:39.935869+00:00 [erro] <0.180.0>                {mfargs,{aten_detector,start_link,[]}},
22:27:39.935869+00:00 [erro] <0.180.0>                {restart_type,permanent},
22:27:39.935869+00:00 [erro] <0.180.0>                {significant,false},
22:27:39.935869+00:00 [erro] <0.180.0>                {shutdown,5000},
22:27:39.935869+00:00 [erro] <0.180.0>                {child_type,worker}]
22:27:39.935869+00:00 [erro] <0.180.0>
22:31:43.264388+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:31:43.264388+00:00 [warn] <0.316.0>
22:37:47.569567+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:37:47.569567+00:00 [warn] <0.316.0>
22:52:41.166680+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:52:41.166680+00:00 [warn] <0.316.0>
22:52:51.008372+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:52:51.008372+00:00 [warn] <0.316.0>
22:38:26.210015+00:00 [erro] <0.14660.0> ** Generic server aten_detector terminating
22:38:26.210015+00:00 [erro] <0.14660.0> ** Last message in was poll
22:38:26.210015+00:00 [erro] <0.14660.0> ** When Server state == {state,#Ref<0.224353195.996409345.210112>,5000,0.99,
22:38:26.210015+00:00 [erro] <0.14660.0>                                #{},#{}}
22:38:26.210015+00:00 [erro] <0.14660.0> ** Reason for termination ==
22:38:26.210015+00:00 [erro] <0.14660.0> ** {{timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}},
22:38:26.210015+00:00 [erro] <0.14660.0>     [{gen_server,call,2,[{file,"gen_server.erl"},{line,239}]},
22:38:26.210015+00:00 [erro] <0.14660.0>      {aten_detector,handle_info,2,[{file,"src/aten_detector.erl"},{line,109}]},
22:38:26.210015+00:00 [erro] <0.14660.0>      {gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,695}]},
22:38:26.210015+00:00 [erro] <0.14660.0>      {gen_server,handle_msg,6,[{file,"gen_server.erl"},{line,771}]},
22:38:26.210015+00:00 [erro] <0.14660.0>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}
22:38:26.210015+00:00 [erro] <0.14660.0>
22:58:35.842824+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:58:35.842824+00:00 [warn] <0.316.0>
22:58:46.542488+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
22:58:46.542488+00:00 [warn] <0.316.0>
23:09:09.206451+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:09:09.206451+00:00 [warn] <0.316.0>
23:09:15.735150+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:09:15.735150+00:00 [warn] <0.316.0>
23:15:09.499198+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:15:09.499198+00:00 [warn] <0.316.0>
23:15:15.307946+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:15:15.307946+00:00 [warn] <0.316.0>
23:27:18.456380+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:27:18.456380+00:00 [warn] <0.316.0>
23:27:18.456491+00:00 [warn] <0.316.0> Mnesia(rabbit@6bd709cbe2c3): ** WARNING ** Mnesia is overloaded: {dump_log,time_threshold}
23:27:18.456491+00:00 [warn] <0.316.0>
22:57:17.820894+00:00 [erro] <0.14660.0>   crasher:
22:57:17.820894+00:00 [erro] <0.14660.0>     initial call: aten_detector:init/1
22:57:17.820894+00:00 [erro] <0.14660.0>     pid: <0.14660.0>
22:57:17.820894+00:00 [erro] <0.14660.0>     registered_name: aten_detector
22:57:17.820894+00:00 [erro] <0.14660.0>     exception exit: {timeout,
22:57:17.820894+00:00 [erro] <0.14660.0>                         {gen_server,call,
22:57:17.820894+00:00 [erro] <0.14660.0>                             [aten_sink,get_failure_probabilities]}}
22:57:17.820894+00:00 [erro] <0.14660.0>       in function  gen_server:call/2 (gen_server.erl, line 239)
22:57:17.820894+00:00 [erro] <0.14660.0>       in call from aten_detector:handle_info/2 (src/aten_detector.erl, line 109)
22:57:17.820894+00:00 [erro] <0.14660.0>       in call from gen_server:try_dispatch/4 (gen_server.erl, line 695)
22:57:17.820894+00:00 [erro] <0.14660.0>       in call from gen_server:handle_msg/6 (gen_server.erl, line 771)
22:57:17.820894+00:00 [erro] <0.14660.0>     ancestors: [aten_sup,<0.179.0>]
22:57:17.820894+00:00 [erro] <0.14660.0>     message_queue_len: 1
22:57:17.820894+00:00 [erro] <0.14660.0>     messages: [poll]
22:57:17.820894+00:00 [erro] <0.14660.0>     links: [<0.180.0>]
22:57:17.820894+00:00 [erro] <0.14660.0>     dictionary: []
22:57:17.820894+00:00 [erro] <0.14660.0>     trap_exit: false
22:57:17.820894+00:00 [erro] <0.14660.0>     status: running
22:57:17.820894+00:00 [erro] <0.14660.0>     heap_size: 4185
22:57:17.820894+00:00 [erro] <0.14660.0>     stack_size: 29
22:57:17.820894+00:00 [erro] <0.14660.0>     reductions: 11592
22:57:17.820894+00:00 [erro] <0.14660.0>   neighbours:
22:57:17.820894+00:00 [erro] <0.14660.0>
23:27:18.458352+00:00 [erro] <0.180.0>     supervisor: {local,aten_sup}
23:27:18.458352+00:00 [erro] <0.180.0>     errorContext: child_terminated
23:27:18.458352+00:00 [erro] <0.180.0>     reason: {timeout,{gen_server,call,[aten_sink,get_failure_probabilities]}}
23:27:18.458352+00:00 [erro] <0.180.0>     offender: [{pid,<0.14660.0>},
23:27:18.458352+00:00 [erro] <0.180.0>                {id,aten_detector},
23:27:18.458352+00:00 [erro] <0.180.0>                {mfargs,{aten_detector,start_link,[]}},
23:27:18.458352+00:00 [erro] <0.180.0>                {restart_type,permanent},
23:27:18.458352+00:00 [erro] <0.180.0>                {significant,false},
23:27:18.458352+00:00 [erro] <0.180.0>                {shutdown,5000},
23:27:18.458352+00:00 [erro] <0.180.0>                {child_type,worker}]
23:27:18.458352+00:00 [erro] <0.180.0>
23:27:25.178477+00:00 [noti] <0.60.0> SIGTERM received - shutting down
23:27:25.178477+00:00 [noti] <0.60.0>
23:27:25.180410+00:00 [warn] <0.681.0> HTTP listener registry could not find context rabbitmq_prometheus_tls
23:27:25.189891+00:00 [info] <0.222.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping unregistration.
23:27:25.190003+00:00 [info] <0.729.0> stopped TCP listener on [::]:5672
23:27:25.190609+00:00 [erro] <0.834.0> Error on AMQP connection <0.834.0> (10.0.10.4:53930 -> 10.0.10.10:5672, vhost: '/', user: 'guest', state: running), channel 0:
23:27:25.190609+00:00 [erro] <0.834.0>  operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
23:27:25.190609+00:00 [erro] <0.863.0> Error on AMQP connection <0.863.0> (10.0.10.4:53932 -> 10.0.10.10:5672, vhost: '/', user: 'guest', state: running), channel 0:
23:27:25.190609+00:00 [erro] <0.863.0>  operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
23:27:25.190687+00:00 [erro] <0.803.0> Error on AMQP connection <0.803.0> (10.0.10.4:53928 -> 10.0.10.10:5672, vhost: '/', user: 'guest', state: running), channel 0:
23:27:25.190687+00:00 [erro] <0.803.0>  operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
23:27:25.201416+00:00 [info] <0.14715.0> Closing all connections in vhost '/' on node 'rabbit@6bd709cbe2c3' because the vhost is stopping
23:27:25.235360+00:00 [info] <0.627.0> Stopping message store for directory '/var/lib/rabbitmq/mnesia/rabbit@6bd709cbe2c3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent'
23:27:25.252534+00:00 [info] <0.627.0> Message store for directory '/var/lib/rabbitmq/mnesia/rabbit@6bd709cbe2c3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' is stopped
23:27:25.256056+00:00 [info] <0.623.0> Stopping message store for directory '/var/lib/rabbitmq/mnesia/rabbit@6bd709cbe2c3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient'
23:27:25.266608+00:00 [info] <0.623.0> Message store for directory '/var/lib/rabbitmq/mnesia/rabbit@6bd709cbe2c3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' is stopped
23:27:25.284917+00:00 [info] <0.550.0> Management plugin: to stop collect_statistics.

This also leaves Celery unable to connect to the broker.
Can you tell me how to solve this?
Below is the error from my Celery worker:

 [2021-12-08 23:27:25,209: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 618, in start
     c.loop(*c.loop_args())
   File "/usr/local/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
     next(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 362, in create_loop
     cb(*cbargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 235, in on_readable
     reader(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 217, in _read
     drain_events(timeout=0)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 523, in drain_events
     while not self.blocking_read(timeout):
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 529, in blocking_read
     return self.on_inbound_frame(frame)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 53, in on_frame
     callback(channel, method_sig, buf, None)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 536, in on_inbound_method
     method_sig, payload, content,
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 143, in dispatch_method
     listener(*args)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 664, in _on_close
     self._x_close_ok()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 679, in _x_close_ok
     self.send_method(spec.Connection.CloseOk, callback=self._on_close_ok)
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 57, in send_method
     conn.frame_writer(1, self.channel_id, sig, args, content)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 183, in write_frame
     write(view[:offset])
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 352, in write
     self._write(s)
 ConnectionResetError: [Errno 104] Connection reset by peer
 [2021-12-08 23:27:25,207: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 618, in start
     c.loop(*c.loop_args())
   File "/usr/local/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
     next(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 362, in create_loop
     cb(*cbargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 235, in on_readable
     reader(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 217, in _read
     drain_events(timeout=0)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 523, in drain_events
     while not self.blocking_read(timeout):
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 529, in blocking_read
     return self.on_inbound_frame(frame)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 53, in on_frame
     callback(channel, method_sig, buf, None)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 536, in on_inbound_method
     method_sig, payload, content,
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 143, in dispatch_method
     listener(*args)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 664, in _on_close
     self._x_close_ok()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 679, in _x_close_ok
     self.send_method(spec.Connection.CloseOk, callback=self._on_close_ok)
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 57, in send_method
     conn.frame_writer(1, self.channel_id, sig, args, content)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 183, in write_frame
     write(view[:offset])
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 352, in write
     self._write(s)
 ConnectionResetError: [Errno 104] Connection reset by peer
 [2021-12-08 23:27:25,208: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 618, in start
     c.loop(*c.loop_args())
   File "/usr/local/lib/python3.7/site-packages/celery/worker/loops.py", line 97, in asynloop
     next(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/asynchronous/hub.py", line 362, in create_loop
     cb(*cbargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 235, in on_readable
     reader(loop)
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/base.py", line 217, in _read
     drain_events(timeout=0)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 523, in drain_events
     while not self.blocking_read(timeout):
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 529, in blocking_read
     return self.on_inbound_frame(frame)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 53, in on_frame
     callback(channel, method_sig, buf, None)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 536, in on_inbound_method
     method_sig, payload, content,
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 143, in dispatch_method
     listener(*args)
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 664, in _on_close
     self._x_close_ok()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 679, in _x_close_ok
     self.send_method(spec.Connection.CloseOk, callback=self._on_close_ok)
   File "/usr/local/lib/python3.7/site-packages/amqp/abstract_channel.py", line 57, in send_method
     conn.frame_writer(1, self.channel_id, sig, args, content)
   File "/usr/local/lib/python3.7/site-packages/amqp/method_framing.py", line 183, in write_frame
     write(view[:offset])
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 352, in write
     self._write(s)
 ConnectionResetError: [Errno 104] Connection reset by peer
 [2021-12-08 23:27:25,245: WARNING/MainProcess] /usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:361: CPendingDeprecationWarning:
 In Celery 5.1 we introduced an optional breaking change which
 on connection loss cancels all currently executed tasks with late acknowledgement enabled.
 These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue.
 You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting.
 In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.

   warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)

 [2021-12-08 23:27:25,242: WARNING/MainProcess] /usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:361: CPendingDeprecationWarning:
 In Celery 5.1 we introduced an optional breaking change which
 on connection loss cancels all currently executed tasks with late acknowledgement enabled.
 These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue.
 You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting.
 In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.

   warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)

 [2021-12-08 23:27:25,243: WARNING/MainProcess] /usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py:361: CPendingDeprecationWarning:
 In Celery 5.1 we introduced an optional breaking change which
 on connection loss cancels all currently executed tasks with late acknowledgement enabled.
 These tasks cannot be acknowledged as the connection is gone, and the tasks are automatically redelivered back to the queue.
 You can enable this behavior using the worker_cancel_long_running_tasks_on_connection_loss setting.
 In Celery 5.1 it is set to False by default. The setting will be set to True by default in Celery 6.0.

   warnings.warn(CANCEL_TASKS_BY_DEFAULT, CPendingDeprecationWarning)

 [2021-12-08 23:27:26,312: CRITICAL/MainProcess] Unrecoverable error: OperationalError('[Errno 111] Connection refused')
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 173, in _connect
     host, port, family, socket.SOCK_STREAM, SOL_TCP)
   File "/usr/local/lib/python3.7/socket.py", line 752, in getaddrinfo
     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
 socket.gaierror: [Errno -5] No address associated with hostname

 During handling of the above exception, another exception occurred:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 447, in _reraise_as_library_errors
     yield
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
     return fun(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 878, in _connection_factory
     self._connection = self._establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 813, in _establish_connection
     conn = self.transport.establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
     conn.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 323, in connect
     self.transport.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 113, in connect
     self._connect(self.host, self.port, self.connect_timeout)
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 184, in _connect
     "failed to resolve broker hostname"))
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 197, in _connect
     self.sock.connect(sa)
 ConnectionRefusedError: [Errno 111] Connection refused

 The above exception was the direct cause of the following exception:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/worker.py", line 203, in start
     self.blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 365, in start
     return self.obj.start()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/connection.py", line 21, in start
     c.connection = c.connect()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 422, in connect
     conn = self.connection_for_read(heartbeat=self.amqheartbeat)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 429, in connection_for_read
     self.app.connection_for_read(heartbeat=heartbeat))
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 456, in ensure_connected
     callback=maybe_shutdown,
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 382, in ensure_connection
     self._ensure_connection(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
     self.gen.throw(type, value, traceback)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 451, in _reraise_as_library_errors
     raise ConnectionError(str(exc)) from exc
 kombu.exceptions.OperationalError: [Errno 111] Connection refused
 [2021-12-08 23:27:26,346: CRITICAL/MainProcess] Unrecoverable error: OperationalError('[Errno 111] Connection refused')
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 173, in _connect
     host, port, family, socket.SOCK_STREAM, SOL_TCP)
   File "/usr/local/lib/python3.7/socket.py", line 752, in getaddrinfo
     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
 socket.gaierror: [Errno -5] No address associated with hostname

 During handling of the above exception, another exception occurred:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 447, in _reraise_as_library_errors
     yield
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
     return fun(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 878, in _connection_factory
     self._connection = self._establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 813, in _establish_connection
     conn = self.transport.establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
     conn.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 323, in connect
     self.transport.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 113, in connect
     self._connect(self.host, self.port, self.connect_timeout)
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 184, in _connect
     "failed to resolve broker hostname"))
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 197, in _connect
     self.sock.connect(sa)
 ConnectionRefusedError: [Errno 111] Connection refused

 The above exception was the direct cause of the following exception:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/worker.py", line 203, in start
     self.blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 365, in start
     return self.obj.start()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/connection.py", line 21, in start
     c.connection = c.connect()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 422, in connect
     conn = self.connection_for_read(heartbeat=self.amqheartbeat)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 429, in connection_for_read
     self.app.connection_for_read(heartbeat=heartbeat))
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 456, in ensure_connected
     callback=maybe_shutdown,
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 382, in ensure_connection
     self._ensure_connection(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
     self.gen.throw(type, value, traceback)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 451, in _reraise_as_library_errors
     raise ConnectionError(str(exc)) from exc
 kombu.exceptions.OperationalError: [Errno 111] Connection refused
 [2021-12-08 23:27:26,350: CRITICAL/MainProcess] Unrecoverable error: OperationalError('[Errno 111] Connection refused')
 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 173, in _connect
     host, port, family, socket.SOCK_STREAM, SOL_TCP)
   File "/usr/local/lib/python3.7/socket.py", line 752, in getaddrinfo
     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
 socket.gaierror: [Errno -5] No address associated with hostname

 During handling of the above exception, another exception occurred:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 447, in _reraise_as_library_errors
     yield
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
     return fun(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 878, in _connection_factory
     self._connection = self._establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 813, in _establish_connection
     conn = self.transport.establish_connection()
   File "/usr/local/lib/python3.7/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
     conn.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/connection.py", line 323, in connect
     self.transport.connect()
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 113, in connect
     self._connect(self.host, self.port, self.connect_timeout)
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 184, in _connect
     "failed to resolve broker hostname"))
   File "/usr/local/lib/python3.7/site-packages/amqp/transport.py", line 197, in _connect
     self.sock.connect(sa)
 ConnectionRefusedError: [Errno 111] Connection refused

 The above exception was the direct cause of the following exception:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.7/site-packages/celery/worker/worker.py", line 203, in start
     self.blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 365, in start
     return self.obj.start()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 326, in start
     blueprint.start(self)
   File "/usr/local/lib/python3.7/site-packages/celery/bootsteps.py", line 116, in start
     step.start(parent)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/connection.py", line 21, in start
     c.connection = c.connect()
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 422, in connect
     conn = self.connection_for_read(heartbeat=self.amqheartbeat)
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 429, in connection_for_read
     self.app.connection_for_read(heartbeat=heartbeat))
   File "/usr/local/lib/python3.7/site-packages/celery/worker/consumer/consumer.py", line 456, in ensure_connected
     callback=maybe_shutdown,
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 382, in ensure_connection
     self._ensure_connection(*args, **kwargs)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 438, in _ensure_connection
     callback, timeout=timeout
   File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
     self.gen.throw(type, value, traceback)
   File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 451, in _reraise_as_library_errors
     raise ConnectionError(str(exc)) from exc
 kombu.exceptions.OperationalError: [Errno 111] Connection refused

I tried heartbeat=0 and other settings, but nothing helped.
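For reference, the heartbeat-related settings I tried look roughly like this (a simplified sketch, not my exact file; the broker URL is a placeholder):

  # celeryconfig.py -- simplified sketch of what I tried; the broker URL is a placeholder
  broker_url = "amqp://guest:guest@rabbitmq:5672//"

  # heartbeat=0 disables AMQP heartbeats, so the broker should not close an
  # idle connection for missing heartbeats
  broker_heartbeat = 0

  # keep retrying the broker instead of giving up after the default retry count
  broker_connection_max_retries = None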
Please help me.
Thanks

yon...@laposte.net

Dec 8, 2021, 8:51:15 PM
to rabbitm...@googlegroups.com
It seems like your container has a resource issue: the disk may be slow, or it may be short of space or file descriptors. Please monitor the running environment.
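For example, if the management plugin is enabled (the log above suggests it is), you can poll its HTTP API and watch file descriptors, memory and free disk over time. A rough sketch, assuming the default port 15672 and the guest/guest credentials shown in the log:

  # monitor_rabbit.py -- rough sketch: poll the management HTTP API and print
  # resource figures; host, port 15672 and guest/guest credentials are assumptions
  import time
  import requests

  API = "http://localhost:15672/api/nodes"

  while True:
      for node in requests.get(API, auth=("guest", "guest"), timeout=10).json():
          print(node["name"],
                "fd:", node["fd_used"], "/", node["fd_total"],
                "mem:", node["mem_used"], "/", node["mem_limit"],
                "disk_free:", node["disk_free"], "(limit:", node["disk_free_limit"], ")",
                "alarms:", node["mem_alarm"], node["disk_free_alarm"])
      time.sleep(60)

If any of those figures sit near their limits (or an alarm flag turns true) around the time of the crashes, that points at the environment rather than at the heartbeat settings.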
 

Ricky Gunawan

Dec 8, 2021, 9:59:23 PM
to rabbitmq-users
Thank you for your reply.
I run on Alibaba Cloud, with many services in a Docker stack.

So the lost-connection and connection-refused issues are caused by my resources, and not by the broker heartbeat or the Celery heartbeat?

Can you tell me which error indicates that we are short of resources?
That way, when that error shows up again, I'll know it is due to a lack of resources and not to the connection itself.

Is there a mechanism to clear the logs so things don't slow down over time? I already start the workers with --without-heartbeat --without-gossip --without-mingle.

Is there any other way?

When first deployed, everything runs normally, but after it sits unused for a while (usually overnight, when nobody is using it), the error appears.

bitfox

Dec 8, 2021, 10:03:49 PM
to rabbitm...@googlegroups.com
This is more likely a system issue, such as network communication between containers. As "yonghua" suggested, you may want to deploy monitoring for RMQ's running environment: network, disk, memory, etc. Your original log suggests there may be a storage problem.
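One quick way to confirm whether it is storage or file-descriptor pressure is to run the broker's built-in health checks from the host; a sketch, where the container name "rabbitmq" and the data path are placeholders for your own setup:

  # check_broker.py -- sketch: run RabbitMQ's built-in checks inside the container;
  # the container name "rabbitmq" and the data path are placeholders
  import subprocess

  CHECKS = [
      ["rabbitmq-diagnostics", "check_running"],
      ["rabbitmq-diagnostics", "check_local_alarms"],   # memory/disk alarms
      ["df", "-h", "/var/lib/rabbitmq"],                # free space under the node's data dir
  ]

  for check in CHECKS:
      result = subprocess.run(["docker", "exec", "rabbitmq", *check],
                              capture_output=True, text=True)
      print(" ".join(check), "->", (result.stdout or result.stderr).strip())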

Thanks

Ricky Gunawan

Dec 8, 2021, 10:13:07 PM
to rabbitmq-users
Thanks for the insight; I'll try to monitor the resources and post the result in this conversation in case others run into similar issues.

Thanks