2021-06-30 15:54:50.924 [warning] <0.273.0> Could not auto-cluster with node rab...@ip-xx-xx-32-53.xx.xxxxxx-1.xxxxx.xxxxxx: {badrpc,nodedown}
2021-06-30 15:54:50.928 [warning] <0.273.0> Could not auto-cluster with node rab...@ip-xx-xx-33-64.xx.xxxxxx-1.xxxxx.xxxxxx: {badrpc,nodedown}
2021-06-30 15:54:50.933 [warning] <0.273.0> Could not auto-cluster with node rab...@ip-xx-xx-34-62.xx.xxxxxx-1.xxxxx.xxxxxx: {badrpc,nodedown}
2021-06-30 15:54:50.933 [error] <0.273.0> Trying to join discovered peers failed. Will retry after a delay of 500 ms, 0 retries left...
2021-06-30 15:54:51.434 [warning] <0.273.0> Could not successfully contact any node of: rab...@ip-xx-xx-32-53.xx.xxxxxx-1.xxxxx.xxxxxx,rab...@ip-xx-xx-33-64.xx.xxxxxx-1.xxxxx.xxxxxx,rab...@ip-xx-xx-34-62.xx.xxxxxx-1.xxxxx.xxxxxx (as in Erlang distribution). Starting as a blank standalone node...
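For context, the retry budget in these log lines ("0 retries left", 500 ms delay) comes from the peer-discovery settings in rabbitmq.conf. A sketch of raising them, using the RabbitMQ 3.8 key names — the values below are illustrative, not a recommendation:

```ini
# RabbitMQ retries peer discovery a limited number of times (500 ms apart
# by default, matching the log above) before booting as a standalone node.
# Illustrative values: 30 attempts, 1 second apart.
cluster_formation.discovery_retry_limit = 30
cluster_formation.discovery_retry_interval = 1000
```

Raising these helps when peers are slow to come up (e.g. instances booting in parallel), at the cost of a longer worst-case startup.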
[root@{{ hostname }}01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@{{ hostname }}01.xx.xxxxxx.xxx ...
Basics
Cluster name: rabbit@{{ hostname }}01.xx.xxxxxx.xxx
Disk Nodes
rabbit@{{ hostname }}01.xx.xxxxxx.xxx
rabbit@{{ hostname }}02.xx.xxxxxx.xxx
rabbit@{{ hostname }}03.xx.xxxxxx.xxx
Running Nodes
rabbit@{{ hostname }}01.xx.xxxxxx.xxx
rabbit@{{ hostname }}02.xx.xxxxxx.xxx
rabbit@{{ hostname }}03.xx.xxxxxx.xxx
Versions
rabbit@{{ hostname }}01.xx.xxxxxx.xxx: RabbitMQ 3.8.16 on Erlang 24.0.3
rabbit@{{ hostname }}02.xx.xxxxxx.xxx: RabbitMQ 3.8.16 on Erlang 24.0.3
rabbit@{{ hostname }}03.xx.xxxxxx.xxx: RabbitMQ 3.8.16 on Erlang 24.0.3
Maintenance status
Node: rabbit@{{ hostname }}01.xx.xxxxxx.xxx, status: not under maintenance
Node: rabbit@{{ hostname }}02.xx.xxxxxx.xxx, status: not under maintenance
Node: rabbit@{{ hostname }}03.xx.xxxxxx.xxx, status: not under maintenance
Alarms
(none)
Network Partitions
(none)
Listeners
Node: rabbit@{{ hostname }}01.xx.xxxxxx.xxx, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@{{ hostname }}01.xx.xxxxxx.xxx, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@{{ hostname }}01.xx.xxxxxx.xxx, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@{{ hostname }}02.xx.xxxxxx.xxx, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@{{ hostname }}02.xx.xxxxxx.xxx, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@{{ hostname }}02.xx.xxxxxx.xxx, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Node: rabbit@{{ hostname }}03.xx.xxxxxx.xxx, interface: [::], port: 15672, protocol: http, purpose: HTTP API
Node: rabbit@{{ hostname }}03.xx.xxxxxx.xxx, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@{{ hostname }}03.xx.xxxxxx.xxx, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Feature flags
Flag: drop_unroutable_metric, state: enabled
Flag: empty_basic_get_metric, state: enabled
Flag: implicit_default_bindings, state: enabled
Flag: maintenance_mode_status, state: enabled
Flag: quorum_queue, state: enabled
Flag: user_limits, state: enabled
Flag: virtual_host_metadata, state: enabled
I am not sure whether this needs a new thread, but I am seeing some strange behavior when offloading SSL on an AWS ALB. I am redirecting port 443 to 15627 on the back end, and the targets report healthy. I also sometimes get this when going directly to the instances. Logging is set to debug, yet I cannot see any requests come in.
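For what it's worth, the Listeners output above shows the HTTP API bound on port 15672, so the ALB target group's back-end port has to match whatever the management listener is actually using. A minimal rabbitmq.conf sketch pinning it explicitly (RabbitMQ 3.8 management-plugin key names; the bind address is an assumption):

```ini
# Management (HTTP API) listener; the ALB target group must forward to
# this exact port. Bind address 0.0.0.0 is an assumption for illustration.
management.tcp.port = 15672
management.tcp.ip   = 0.0.0.0
```

If debug logs show no requests at all, comparing the ALB target port against this listener port is a cheap first check.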