MCollective server cannot connect to ActiveMQ broker


Isabell Cowan

Jan 31, 2017, 3:36:20 AM
to Puppet Users
I've been trying all day to set up MCollective on my Puppet cluster.  No matter what I do, I can't seem to get the MCollective server to connect.  The MCollective servers are running mostly on Ubuntu Xenial.  The ActiveMQ broker (5.14.3) is running on Debian Stretch.  I'm running Puppet 4.x on all nodes.  I've tried every transport connector I can think of, and they all fail to connect.  Let me dump some log files at you.

In mcollective.log I'm getting `Connection reset by peer`:

I, [2017-01-27T15:43:59.869501 #18729]  INFO -- : activemq.rb:139:in `on_ssl_connecting' Establishing SSL session with stomp+ssl://mcoll...@broker.example.com:61614
E, [2017-01-27T15:44:00.070995 #18729] ERROR -- : activemq.rb:149:in `on_ssl_connectfail' SSL session creation with stomp+ssl://mcoll...@broker.example.com:61614 failed: Connection reset by peer - SSL_connect
I, [2017-01-27T15:44:00.071371 #18729]  INFO -- : activemq.rb:129:in `on_connectfail' TCP Connection to stomp+ssl://mcoll...@broker.example.com:61614 failed on attempt 24


Oddly enough, in the ActiveMQ log, I also seem to be getting `Connection reset by peer`:

ERROR | Could not accept connection from null : {}
java.io.IOException: java.io.IOException: Connection reset by peer
    at org.apache.activemq.transport.nio.NIOSSLTransport.initializeStreams(NIOSSLTransport.java:188)[activemq-client.jar:]
    at org.apache.activemq.transport.stomp.StompNIOSSLTransport.initializeStreams(StompNIOSSLTransport.java:57)[activemq-stomp.jar:]
    at org.apache.activemq.transport.tcp.TcpTransport.connect(TcpTransport.java:543)[activemq-client.jar:]
    at org.apache.activemq.transport.nio.NIOTransport.doStart(NIOTransport.java:174)[activemq-client.jar:]
    at org.apache.activemq.transport.nio.NIOSSLTransport.doStart(NIOSSLTransport.java:462)[activemq-client.jar:]
    at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)[activemq-client.jar:]
    at org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64)[activemq-client.jar:]
    at org.apache.activemq.transport.stomp.StompTransportFilter.start(StompTransportFilter.java:65)[activemq-stomp.jar:]
    at org.apache.activemq.transport.AbstractInactivityMonitor.start(AbstractInactivityMonitor.java:169)[activemq-client.jar:]
    at org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64)[activemq-client.jar:]
    at org.apache.activemq.broker.TransportConnection.start(TransportConnection.java:1072)[activemq-broker.jar:]
    at org.apache.activemq.broker.TransportConnector$1$1.run(TransportConnector.java:218)[activemq-broker.jar:]
    at java.lang.Thread.run(Thread.java:745)[:1.8.0_111]


So they're both resetting the connection.  Huh.  Before you ask: no, there are no iptables rules, and yes, there is a route between the two nodes.  Let's take a peek at `lsof -i` on the broker just to be sure, and then I'll throw some config files at you.

java    20833 activemq   84u  IPv6  53552      0t0  TCP *:61614 (LISTEN)
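So the broker is at least listening on 61614.  For completeness, a plain-TCP check from the MCollective node (netcat here is just what I happen to have installed; any equivalent tool would do) would show whether the connection opens at all before TLS gets involved:

root@mail:~# nc -vz broker.example.com 61614

Given that both logs show the connection being accepted and then reset during the handshake, I'd expect this to succeed, but it rules out anything silently eating the SYN.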


activemq.xml:

<!DOCTYPE activemq [
  <!ENTITY keyStores SYSTEM "keyStores.xml">
]>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/>

  <broker xmlns="http://activemq.apache.org/schema/core"
          useJmx="false"
          brokerName="broker"
          dataDirectory="${activemq.base}/data">

    <persistenceAdapter>
      <kahaDB directory="${activemq.base}/data/kahadb"/>
    </persistenceAdapter>

    <sslContext>
      &keyStores;
    </sslContext>

    <transportConnectors>
      <transportConnector
        name="stomp+nio"
        uri="stomp+nio+ssl://0.0.0.0:61614?needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
    </transportConnectors>

    <plugins>
      <simpleAuthenticationPlugin>
        <users>
          <authenticationUser username="mcollective" password="password" groups="mcollective,everyone"/>
          <authenticationUser username="admin" password="password" groups="mcollective,admins,everyone"/>
        </users>
      </simpleAuthenticationPlugin>
      <authorizationPlugin>
        <map>
          <authorizationMap>
            <authorizationEntries>
              <authorizationEntry queue=">" write="admins" read="admins" admin="admins"/>
              <authorizationEntry topic=">" write="admins" read="admins" admin="admins"/>
              <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
              <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective"/>
              <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
            </authorizationEntries>
          </authorizationMap>
        </map>
      </authorizationPlugin>
    </plugins>

  </broker>

</beans>
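Side note on that connector, in case it matters: with needClientAuth=true the broker rejects the handshake if the client doesn't present a certificate the truststore trusts, and from what I've read that can surface as a reset on both ends.  One temporary isolation test I could try (a sketch only, to be reverted afterwards) is to drop the flag and see whether the handshake then completes:

      <transportConnector
        name="stomp+nio"
        uri="stomp+nio+ssl://0.0.0.0:61614?needClientAuth=false&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>

If it connects with needClientAuth=false, the problem is presumably that the broker's truststore doesn't trust the Puppet-issued client cert.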


keyStores.xml:

<sslContext
  keyStore="/etc/activemq/keystore.jks"
  keyStorePassword="password"
  trustStore="/etc/activemq/truststore.jks"
  trustStorePassword="password"/>
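For what it's worth, the store contents can be inspected with keytool to confirm the broker's key pair is in the keystore and the Puppet CA is in the truststore (the grep is only there to trim the output):

keytool -list -v -keystore /etc/activemq/keystore.jks   | grep -E 'Alias|Entry type|Owner|Issuer'
keytool -list -v -keystore /etc/activemq/truststore.jks | grep -E 'Alias|Entry type|Owner|Issuer'

The keystore should show a PrivateKeyEntry whose certificate was issued by the Puppet CA, and the truststore a trustedCertEntry for that CA.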


mcollective/server.cfg:

connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker.example.com
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = password
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = /etc/puppetlabs/puppet/ssl/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = /etc/puppetlabs/puppet/ssl/certs/mail.example.com.pem
plugin.activemq.pool.1.ssl.key = /etc/puppetlabs/puppet/ssl/private_keys/mail.example.com.pem
plugin.activemq.pool.1.ssl.fallback = 0
securityprovider = ssl
plugin.ssl_client_cert_dir = /etc/puppetlabs/mcollective/clients
plugin.ssl_server_private = /etc/puppetlabs/mcollective/server_private.pem
plugin.ssl_server_public = /etc/puppetlabs/mcollective/server_public.pem
identity = mail.example.com
factsource = yaml
plugin.yaml = /etc/puppetlabs/mcollective/facts.yaml
classesfile = /var/lib/puppet/state/classes.txt
collectives = mcollective
main_collective = mcollective
registerinterval = 600
rpcaudit = 1
rpcauditprovider = logfile
plugin.rpcaudit.logfile = /var/log/mcollective-audit.log
logger_type = file
loglevel = debug
logfile = /var/log/mcollective.log
keeplogs = 5
max_log_size = 2097152
logfacility = user
libdir = /usr/share/mcollective/plugins
daemonize = 1
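To rule out a mismatch on the client side, the cert and key that pool.1 points at can be checked against each other and against the CA (same paths as above):

root@mail:/etc/puppetlabs/puppet/ssl# openssl verify -CAfile certs/ca.pem certs/mail.example.com.pem
root@mail:/etc/puppetlabs/puppet/ssl# openssl x509 -noout -modulus -in certs/mail.example.com.pem | openssl md5
root@mail:/etc/puppetlabs/puppet/ssl# openssl rsa -noout -modulus -in private_keys/mail.example.com.pem | openssl md5

The verify line should report OK, and the two modulus digests should match if the key belongs to the cert.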



The keys/certs in keystore.jks and truststore.jks are correct, and so is the password.  The shared keys and certs are also in place.  So let's try connecting with `openssl`:

root@mail:/etc/puppetlabs/puppet/ssl# openssl s_client -connect broker.example.com:61614 -CAfile certs/ca.pem -cert certs/mail.example.com.pem -key private_keys/mail.example.com.pem
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 305 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : 0000
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1485554633
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
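If it would help, I can also turn on JSSE handshake debugging on the broker and capture what it sees during the failure; as far as I know, adding this to the ActiveMQ JVM options (via ACTIVEMQ_OPTS in /etc/default/activemq on Debian, assuming that's the file the init script reads) should do it:

ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Djavax.net.debug=ssl,handshake"

That should print, on the broker's stdout/wrapper log, exactly which step of the handshake it aborts on.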



As far as I can tell, openssl writes 305 bytes (presumably the ClientHello) and reads nothing back before errno 104, so the broker appears to drop the connection during the handshake itself, but I'm not sure how to interpret it beyond that; maybe someone else knows.  Any good guesses would be helpful, as I'm stuck.

John Gelnaw

Jan 31, 2017, 7:38:44 AM
to Puppet Users
On Tuesday, January 31, 2017 at 3:36:20 AM UTC-5, Isabell Cowan wrote:
I've been trying all day to set up MCollective on my Puppet cluster.  No matter what I do, I can't seem to get the MCollective server to connect.  The MCollective servers are running mostly on Ubuntu Xenial.  The ActiveMQ broker (5.14.3) is running on Debian Stretch.  I'm running Puppet 4.x on all nodes.  I've tried every transport connector I can think of, and they all fail to connect.  Let me dump some log files at you.

In mcollective.log I'm getting `Connection reset by peer`:

Any time I see "connection reset by peer", my first instinct is that there is some device in between that's breaking the traffic.

Unfortunately, one of the newer "tricks" is so-called intelligent firewalls that base their rules on the traffic itself rather than just ports, so the initial connection may be allowed, but the data transfer, not so much.

Have you tried running nmap from the mcollective server against port 61614 on the broker?
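Something along these lines would do it (just a sketch; the ssl-enum-ciphers script needs a reasonably recent nmap), and it also shows what the TLS layer on that port will actually negotiate:

nmap -Pn -p 61614 --script ssl-enum-ciphers broker.example.com

If the port shows up as filtered rather than open, or the results differ between a run from the broker itself and a run from the mcollective server, that points at something in the middle.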