Agent Connection Issues

Christopher Sayre

Jul 7, 2022, 1:24:57 PM
to Wazuh mailing list
Hello all, 

I have been spending the past few days trying to get Wazuh up and running in a Docker environment. I have been able to get an agent to register, but not connect and share data with the Docker environment. I can ping the Docker host from my workstation, and I have run the netcat commands to verify the ports are open.

nc -zv manager ip 1514 1515 55000

ri-prd-dck00 [manager ip] 1514 (fujitsu-dtcns) open

ri-prd-dck00 [manager ip] 1515 (ifor-protocol) open

ri-prd-dck00 [manager ip] 55000 open


But in the logs on my client device, this is what I see. 

2022/07/07 13:03:28 wazuh-agentd: INFO: Closing connection to server (managerip:1514/tcp).
2022/07/07 13:03:28 wazuh-agentd: INFO: Trying to connect to server (managerip:1514/tcp).
2022/07/07 13:03:38 wazuh-agentd: INFO: Closing connection to server (managerip:1514/tcp).
2022/07/07 13:03:38 wazuh-agentd: INFO: Trying to connect to server (managerip:1514/tcp).

I have attempted to add this to my ossec.conf to resolve this issue. 
<client>
  <server>
    <address>managerip</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>

  <enrollment>
    <enabled>yes</enabled>
    <manager_address>managerip</manager_address>
    <port>1515</port>
    <agent_name>agent</agent_name>
    <use_source_ip>no</use_source_ip>
  </enrollment>
</client>

And things still don't seem to work. 

To see whether the node is connected, I have run /var/ossec/bin/agent_control -lc on the manager container, and it does not appear to be.


Wazuh agent_control. List of available agents:
   ID: 000, Name: wazuhmanager (server), IP: 127.0.0.1, Active/Local

Running tail -f /var/ossec/logs/ossec.log, I see this.

2022/07/07 17:20:49 wazuh-authd: WARNING: Duplicate name 'agent', rejecting enrollment. Agent '002' key already exists on the manager.
2022/07/07 17:21:40 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/07 17:21:40 wazuh-authd: INFO: Received request for a new agent (agent) from: 10.255.0.2
2022/07/07 17:21:40 wazuh-authd: WARNING: Duplicate name 'agent', rejecting enrollment. Agent '002' key already exists on the manager.
2022/07/07 17:22:30 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/07 17:22:30 wazuh-authd: INFO: Received request for a new agent (agent) from: 10.255.0.2
2022/07/07 17:22:30 wazuh-authd: WARNING: Duplicate name 'agent', rejecting enrollment. Agent '002' key already exists on the manager.
2022/07/07 17:23:20 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/07 17:23:20 wazuh-authd: INFO: Received request for a new agent (agent) from: 10.255.0.2
2022/07/07 17:23:20 wazuh-authd: WARNING: Duplicate name 'agent', rejecting enrollment. Agent '002' key already exists on the manager.
2022/07/07 17:24:10 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/07 17:24:10 wazuh-authd: INFO: Received request for a new agent (agent) from: 10.255.0.2
2022/07/07 17:24:10 wazuh-authd: WARNING: Duplicate name 'agent', rejecting enrollment. Agent '002' key already exists on the manager.
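The repeated "Duplicate name" warnings above come from the manager's key store. If it helps, here is a small sketch of how the collision can be spotted in client.keys, assuming the usual one-line-per-agent format of `<id> <name> <ip> <key>` (the sample entries below are illustrative, not taken from this deployment):

```shell
# client.keys holds one "<id> <name> <ip> <key>" line per enrolled agent.
# Print the id+name pairs whose name field repeats, i.e. the collisions wazuh-authd rejects.
# On a real manager, replace the sample with: awk '{print $1, $2}' /var/ossec/etc/client.keys
keys='001 agent any 3fa2c0...
002 agent any 91bd7e...'
echo "$keys" | awk '{print $1, $2}' | sort -k2 | uniq -f1 -D
# prints both colliding entries: "001 agent" and "002 agent"
```

The `uniq -f1` skips the ID column, so only the agent name is compared when looking for duplicates.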

Any ideas as to how to get the agent to register and stay connected?

Carlos Dams

Jul 7, 2022, 3:33:46 PM
to Wazuh mailing list
Hi Christopher,
Thanks for using Wazuh! I will help you with this issue.

It seems the agent was able to register (port 1515/TCP) but is somehow not able to establish a connection with the manager (port 1514).
More information about the ports here: Required ports

Let's make sure that port is reachable from your agent's host. If possible, run telnet WazuhManagerIPAddress 1514 from the agent host.
If the port is open you should get a Connected to WazuhManagerIPAddress message; if you do not, then we know where to focus the troubleshooting.
Bear in mind you might need to install telnet on your system.
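If telnet is not installed, the same reachability check can be sketched with bash's built-in /dev/tcp redirection (a minimal sketch; MANAGER_IP is a placeholder, not a value from this thread):

```shell
#!/usr/bin/env bash
# Probe the Wazuh ports over plain TCP using bash's /dev/tcp, so no telnet/nc is needed.
MANAGER_IP=${1:-127.0.0.1}   # placeholder: pass the manager's real IP as the first argument
for port in 1514 1515 55000; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/${MANAGER_IP}/${port}" 2>/dev/null; then
    echo "port ${port} open"
  else
    echo "port ${port} closed or filtered"
  fi
done
```

As with telnet, a successful TCP handshake only proves the port accepts connections; the agent can still be reset afterwards, as the logs above show.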

Also, from the Wazuh manager, run this command: /var/ossec/bin/agent_control -l and paste the output here; it will let us know whether the agent is listed.
More info: agent_control

To understand more about your environment:
- What Wazuh Manager version are you running? What is the Operating system of the host?
- What Wazuh Agent version are you running? What is the Operating system of the host?

Let me know your findings.

Christopher Sayre

Jul 15, 2022, 2:27:24 PM
to Wazuh mailing list
For those who might run into this issue, here is the post history that I had with Carlos.

Carlos Dams  8 days ago
Hi @Chris Sayre, I answered you in Google Groups.
Chris Sayre  8 days ago
Just saw the message. Working on this now.
Chris Sayre  8 days ago
@Carlos Dams - if it is easier we can work on here. I will copy everything over to the google group as a resource after we get things completed.
Chris Sayre  8 days ago
Telnet results:
telnet wazuhmanager.host.edu 1515
Trying 172.21.12.122...
Connected to dockerhost.
Escape character is '^]'.
Connection closed by foreign host.
Chris Sayre  8 days ago
The connection opened and then closed pretty much immediately.
Chris Sayre  8 days ago
Wazuh agent_control. List of available agents:
   ID: 000, Name: wazuhmanager (server), IP: 127.0.0.1, Active/Local
   ID: 002, Name: agent, IP: any, Never connected
Carlos Dams  8 days ago
About this: The connection opened and then closed pretty much immediately.
Did it close without interaction? You didn't press Ctrl+C or Ctrl+]?
Chris Sayre  8 days ago
It closed without any intervention.
To understand more about your environment: - What Wazuh Manager version are you running? What is the Operating system of the host? - What Wazuh Agent version are you running? What is the Operating system of the host?
  1. We are running 4.3.5, revision 4306, in a Docker environment. The Docker system is running RHEL 7.8.
  2. The latest version, as far as I can tell, on my Mac and Windows machines. On the Linux system I have tried, I am running 4.3.5. The same issue has happened on all 3 systems.
Carlos Dams  8 days ago
Can you try stopping the Wazuh agent and then trying the telnet again? If it closes without intervention, then the issue is most likely there. You can even try from a host other than the Wazuh agent's.
Chris Sayre  8 days ago
I stopped the service and re-ran the telnet connection, the same behavior happened again. (edited) 
Chris Sayre  8 days ago
I just tried from another system, and the same thing happened, with it connecting and then disconnecting. (edited) 
Carlos Dams  8 days ago
Let's start checking if the port is open in the firewall.
iptables-save | grep 1514
Run it from your Wazuh Manager host (edited) 
Chris Sayre  8 days ago
[root@RI-PRD-DCK00 data]# iptables-save | grep 1514
-A INPUT -s 172.21.4.0/22 -p tcp -m tcp --dport 1514 -j ACCEPT
-A INPUT -s 172.21.12.0/23 -p tcp -m tcp --dport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m state --state RELATED,ESTABLISHED -m tcp --sport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j DNAT --to-destination 172.18.0.2:1514
Carlos Dams  8 days ago
I will investigate this and let you know what to test. Just to check one last thing, could you run ss -ltnu on your Wazuh manager host and let me know the result?
Chris Sayre  8 days ago
That ss command is not in the docker container. (edited) 
Carlos Dams  8 days ago
It should be available on the RHEL host. Did you follow this documentation: https://documentation.wazuh.com/current/deployment-options/docker/index.html
Chris Sayre  8 days ago
Sorry, I thought you wanted me to run that in the docker container.
Chris Sayre  8 days ago
[root@RI-PRD-DCK00 data]# ss -ltnu
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 *:44560 *:*
udp UNCONN 0 0 ...
Chris Sayre  8 days ago
I did follow that documentation.
Carlos Dams  8 days ago
Hi @Chris Sayre, I am still checking what can be happening here. Did you create the following rules yourself, or were they created by the Docker deployment?
[root@RI-PRD-DCK00 data]# iptables-save | grep 1514
-A INPUT -s 172.21.4.0/22 -p tcp -m tcp --dport 1514 -j ACCEPT
-A INPUT -s 172.21.12.0/23 -p tcp -m tcp --dport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m state --state RELATED,ESTABLISHED -m tcp --sport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j DNAT --to-destination 172.18.0.2:1514
To understand the environment:
  • What Virtualization software are you using?
  • What is the networking mode/network connection type?
  • What network devices are in between the manager and agent?
If this is a new deployment, it might be simpler to deploy a new one and see if the issue is reproduced (edited) 
Carlos Dams  8 days ago
By the way, I just noticed you telnetted port 1515 instead of 1514 (telnet wazuhmanager.host.edu 1515 here). Can you test it with port 1514 instead? It is the one we must focus on: telnet wazuhmanager.host.edu 1514 (edited)
Carlos Dams  8 days ago
The behavior of Connection closed by foreign host with port 1515 is expected
Chris Sayre  7 days ago
I had tried both ports.
Chris Sayre  7 days ago
The same issues happened
Chris Sayre  7 days ago
I created these rules.
-A INPUT -s 172.21.4.0/22 -p tcp -m tcp --dport 1514 -j ACCEPT
-A INPUT -s 172.21.12.0/23 -p tcp -m tcp --dport 1514 -j ACCEPT
Docker created these:
Chris Sayre  7 days ago
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m state --state RELATED,ESTABLISHED -m tcp --sport 1514 -j ACCEPT
-A DOCKER-INGRESS -p tcp -m tcp --dport 1514 -j DNAT --to-destination 172.18.0.2:1514
Chris Sayre  7 days ago
To understand the environment:
  • What Virtualization software are you using?
The Docker server is virtualized using VMware; I assume that is what you are asking.
  • What is the networking mode/network connection type?
Not sure what you are asking for this one.
  • What network devices are in between the manager and agent?
There are switches, no firewalls.
Carlos Dams  7 days ago
Hi @Chris Sayre ,
About the rules you created: the Docker installation already exposes the ports on the host. Did you create them before or after you enrolled the agent?
I would like to understand the environment where you are running Docker and where all hosts are running.
  • What Virtualization software are you using?
Docker server is virtualized using vmware, I assume that is what you are asking.
Yes, that is what I meant, so you have the docker deployment on a RHEL VM which is on VMware, right?
  • What is the networking mode/network connection type?
Not sure what you are asking for this one.
Is this RHEL virtual machine on VMware using NAT? Bridged? More info here.
Carlos Dams  7 days ago
If you are monitoring physical hosts in your network, it might be simpler to have the VM configured with Bridged networking.
Chris Sayre  7 days ago
I created them before I registered the agent. The RHEL system is connected to a Distributed vswitch.
Chris Sayre  7 days ago
Is there a way to bridge the network in a docker swarm?
Carlos Dams  7 days ago
I am not sure; I would have to search for it. According to this answer, it is not possible.
In case it helps, I am sharing the default rules that the Wazuh Docker deployment creates, but note this is in a LAN (network 192.168.x.x) where the VMs are using bridged networking:
[root@wazuhRedhatSmaill single-node]# iptables-save | grep 1514
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p tcp -m tcp --dport 1514 -j MASQUERADE
-A DOCKER ! -i br-ffa002bdec98 -p tcp -m tcp --dport 1514 -j DNAT --to-destination 172.18.0.2:1514
-A DOCKER -d 172.18.0.2/32 ! -i br-ffa002bdec98 -o br-ffa002bdec98 -p tcp -m tcp --dport 1514 -j ACCEPT
[root@wazuhRedhatSmaill single-node]# iptables -L -n | grep 1514
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:1514
When you tried connecting another agent, you didn't add the following to the ossec.conf of the agents, right? You shouldn't, because you already have a Wazuh agent registered with the name "agent"; just specify the IP address of the Wazuh manager RHEL host when installing the Wazuh agent.
<client>
  <server>
    <address>managerip</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
  <enrollment>
    <enabled>yes</enabled>
    <manager_address>managerip</manager_address>
    <port>1515</port>
    <agent_name>agent</agent_name>
    <use_source_ip>no</use_source_ip>
  </enrollment>
</client>
(edited)
Chris Sayre  3 days ago
@Carlos Dams sorry for the delay.
Chris Sayre  3 days ago
Here is what the first part of my ossec.conf file looks like.
<ossec_config>
  <client>
    <server>
      <address>172.21.12.122</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <enrollment>
      <enabled>yes</enabled>
      <manager_address>172.21.12.122</manager_address>
      <port>1515</port>
      <agent_name>agent</agent_name>
      <use_source_ip>no</use_source_ip>
    </enrollment>
    <config-profile>darwin, darwin21, darwin21.4</config-profile>
    <notify_time>10</notify_time>
    <time-reconnect>60</time-reconnect>
    <auto_restart>yes</auto_restart>
    <crypto_method>aes</crypto_method>
  </client>
Chris Sayre  3 days ago
I also did a pcap, and it didn't really give me too much information.
Chris Sayre  3 days ago
I did see a reset flag in the last packet.
Carlos Dams  3 days ago
Hi @Chris Sayre,
Thanks for the information provided; that ossec.conf is from one of your Wazuh agents, isn't it?
I am wondering whether, when you tried to connect different Wazuh agents, you used the same name "agent". That would make it fail, since there is already an agent with that name in the Wazuh manager client.keys file and it wouldn't be accepted. We could try deleting the agent and enrolling it again with a different name to check if the connection is successful. Before doing so, make a backup of the client.keys of the manager and the agent:
  1. On the Wazuh manager and the Wazuh agent, back up the client.keys file in /var/ossec/etc/client.keys
  2. Stop the Wazuh agent
  3. Remove the agent using one of the options from the article, for example from the Wazuh Web API:
     • Wazuh app → Tools → API Console
     • Use DELETE /agents, for example: DELETE /agents?older_than=0s&agents_list=002&status=all; this would remove the agent with ID 002, which should be the one with the issue
  4. Start the Wazuh agent again, making sure it has a different name under <agent_name>
Let me know if this fixes the issue or if you still get the same messages in /var/ossec/logs/ossec.log of manager and agent
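For reference, the DELETE request from the API Console step above can be written out as a full URL (a sketch only: the manager address is a placeholder, and sending it with curl would additionally need a JWT token obtained from POST /security/user/authenticate):

```shell
# Assemble the agent-removal request quoted above (agent ID 002).
# MANAGER_IP_HERE is a placeholder; the Wazuh API listens on port 55000 by default.
AGENT_ID=002
URL="https://MANAGER_IP_HERE:55000/agents?older_than=0s&agents_list=${AGENT_ID}&status=all"
echo "DELETE ${URL}"
# e.g. curl -sk -X DELETE -H "Authorization: Bearer $TOKEN" "$URL"
```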
Chris Sayre  3 days ago
Thanks for the information provided, so that ossec.conf is from one of your Wazuh agents, isn't it?
Yes it is.
Chris Sayre  3 days ago
I added the agent name in there.
Chris Sayre  3 days ago
But I can try again.
Chris Sayre  3 days ago
@Carlos Dams Looks like the same thing is happening.
Chris Sayre  3 days ago
2022/07/12 14:59:30 wazuh-modulesd:control: INFO: Starting control thread.
2022/07/12 14:59:30 sca: INFO: Loaded policy '/Library/Ossec/ruleset/sca/sca_unix_audit.yml'
2022/07/12 14:59:30 sca: INFO: Starting Security Configuration Assessment scan.
2022/07/12 14:59:30 sca: INFO: Starting evaluation of policy: '/Library/Ossec/ruleset/sca/sca_unix_audit.yml'
2022/07/12 14:59:31 wazuh-modulesd:syscollector: INFO: Module started.
2022/07/12 14:59:31 wazuh-modulesd:syscollector: INFO: Starting evaluation.
2022/07/12 14:59:32 wazuh-modulesd:syscollector: INFO: Evaluation finished.
2022/07/12 14:59:48 wazuh-agentd: INFO: (1410): Reading authentication keys file.
2022/07/12 14:59:48 wazuh-agentd: INFO: Using AES as encryption method.
2022/07/12 14:59:48 wazuh-agentd: INFO: Trying to connect to server (172.21.12.122:1514/tcp).
2022/07/12 14:59:58 wazuh-agentd: INFO: Closing connection to server (172.21.12.122:1514/tcp).
2022/07/12 14:59:58 wazuh-agentd: INFO: Trying to connect to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:08 wazuh-agentd: INFO: Closing connection to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:08 wazuh-agentd: INFO: Trying to connect to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:18 wazuh-agentd: INFO: Closing connection to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:18 wazuh-agentd: INFO: Trying to connect to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:28 wazuh-agentd: INFO: Closing connection to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:28 wazuh-agentd: INFO: Trying to connect to server (172.21.12.122:1514/tcp).
2022/07/12 15:00:28 wazuh-agentd: INFO: Requesting a key from server: 172.21.12.122
2022/07/12 15:00:28 wazuh-agentd: INFO: No authentication password provided
2022/07/12 15:00:28 wazuh-agentd: INFO: Using agent name as: chrismac
2022/07/12 15:00:28 wazuh-agentd: INFO: Waiting for server reply
2022/07/12 15:00:28 wazuh-agentd: ERROR: Duplicate agent name: chrismac (from manager)
2022/07/12 15:00:28 wazuh-agentd: ERROR: Unable to add agent (from manager)
Chris Sayre  3 days ago
root@wazuhmanager:/var/ossec/bin# tail -f ../logs/ossec.log
2022/07/12 18:59:28 wazuh-authd: INFO: Received request for a new agent (chrismac) from: 10.255.0.2
2022/07/12 18:59:28 wazuh-authd: INFO: Agent key generated for 'chrismac' (requested by any)
2022/07/12 18:59:34 wazuh-remoted: INFO: (1409): Authentication file changed. Updating.
2022/07/12 18:59:34 wazuh-remoted: INFO: (1410): Reading authentication keys file.
2022/07/12 19:00:28 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/12 19:00:28 wazuh-authd: INFO: Received request for a new agent (chrismac) from: 10.255.0.2
2022/07/12 19:00:28 wazuh-authd: WARNING: Duplicate name 'chrismac', rejecting enrollment. Agent '003' key already exists on the manager.
2022/07/12 19:01:18 wazuh-authd: INFO: New connection from 10.255.0.2
2022/07/12 19:01:18 wazuh-authd: INFO: Received request for a new agent (chrismac) from: 10.255.0.2
2022/07/12 19:01:18 wazuh-authd: WARNING: Duplicate name 'chrismac', rejecting enrollment. Agent '003' key already exists on the manager.
(edited)
Carlos Dams  3 days ago
Hi @Chris Sayre,
I am still evaluating this. As you enrolled the agent again with a different name but the communication keeps failing, I think the issue is somewhere in the network connection to port 1514.
You confirmed that with telnet wazuhmanager.host.edu 1514 you get the same Connection closed by foreign host almost immediately, right?
Is it possible for you to use a machine on the same network as the Wazuh manager (RHEL host) and try connecting with a Wazuh agent, or at least try a telnet to port 1514 too? Since I see that the Wazuh manager and the Wazuh agent are on different networks (172.21.12.122 and 10.255.0.2 respectively), that would rule out any physical/virtual device in the middle.
Also, just to make sure, were there any modifications to the Wazuh Docker files, or any configuration that you think is important to point out? (edited)
Chris Sayre  2 days ago
I changed some of the names of the hosts, but I went back through the files and updated them.
Chris Sayre  2 days ago
If you would like, @Carlos Dams, I can share my docker compose file and whichever config files you would like to see.
Carlos Dams  1 day ago
Hi @Chris Sayre,
I can take a look, but it may be simpler to download the files again from the Wazuh site and run a new Docker container.
I was thinking that the application is probably not actually listening on port 1514 for IPv4, but that would be very strange since the agents were able to enroll.
For example, when you shared the result of the command ss -ltnu in this comment, I noticed there are no applications listening on ports 1514 and 1515 for IPv4; the ones listed are for IPv6, based on the [::]. However, I wanted to move forward because you shared the result of nc -zv manager ip 1514 1515 55000, which made me think there are applications listening on IPv4 sockets there. Let's do the following:
On the Wazuh manager host (RHEL), execute this command, which will list only IPv4 sockets: ss -ltn4
Would you also do this, from my previous comment?
Is it possible for you to use a machine on the same network as the Wazuh manager (RHEL host) and try connecting with a Wazuh agent, or at least try a telnet to port 1514 too? Since I see that the Wazuh manager and the Wazuh agent are on different networks (172.21.12.122 and 10.255.0.2 respectively), that would rule out any physical/virtual device in the middle.
(edited)
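Since the point of running ss -ltn4 here is only to see whether anything is bound to the Wazuh ports on IPv4, the output can be filtered down. A small sketch with illustrative sample input (on the manager host you would pipe the real `ss -ltn4` output into the awk instead):

```shell
# Field 4 of `ss -ltn4` output is Local Address:Port; keep only rows on the Wazuh ports.
sample='LISTEN 0 128 *:1514 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 172.21.12.122:2377 *:*'
echo "$sample" | awk '$4 ~ /:(1514|1515|55000)$/ {print "listening on", $4}'
# prints: listening on *:1514
```

An empty result would mean no IPv4 listener on 1514/1515/55000 at all, matching the suspicion above.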
Chris Sayre  1 day ago
[root@RI-PRD-DCK00 ~]# ss -ltn4
State  Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN 0      128    172.21.12.122:2377   *:*
LISTEN 0      128    172.21.12.122:7946   *:*
LISTEN 0      128    *:111                *:*
LISTEN 0      128    *:47604              *:*
LISTEN 0      64     *:39861              *:*
LISTEN 0      128    *:22                 *:*
LISTEN 0      100    127.0.0.1:25         *:*
LISTEN 0      128    *:10050              *:*
Chris Sayre  1 day ago
I do not think this is including the docker containers.
Chris Sayre  1 day ago
I know for a fact that we have stuff in docker running on 80 and 443 and those do not show up there.
Chris Sayre  1 day ago
Would you do this from my previous comment?
Is it possible that you use a machine in the same network of the Wazuh Manager (RHEL host) and try connecting with a Wazuh Agent? or at least try a telnet to port 1514 too? since I see that the Wazuh manager and the Wazuh Agent are in different networks (172.21.12.122 and 10.255.0.2 respectively), that would rule out any physical/virtual device in the middle.
Do you know of a Docker container that has Wazuh built in that I can try? (edited)
Chris Sayre  1 day ago
[root@RI-PRD-DCK00 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25fc0b28305c immauss/scannable:latest "/bin/bash /entrypoi…" About an hour ago Up About an hour  openvas_scannable.1.o0y4a4hl7vmlbewokga8fs9n6
923674b44f4d portainer/agent:latest "./agent" 6 days ago Up 6 days  portainer_agent.do5u8mz2yxmqa2xuomdp1jv64.rxiz7kig216a9l928p47a31k1
c4fda8ef89b4 wazuh/wazuh-manager:4.3.5 "/init" 6 days ago Up 6 days 1514-1516/tcp, 514/udp, 55000/tcp wazuh_wazuhmanager.1.v7ehetplz8niyg885dzjtu9yu
5dbd36c3aff0 guacamole/guacamole:latest "/opt/guacamole/bin/…" 6 days ago Up 6 days 8080/tcp Guacamole_guacamole.1.z52vq7flt0ut5gfkyxcy07mhx
621b63fbb0ae oxidized/oxidized:latest "/sbin/my_init" 6 days ago Up 6 days 8888/tcp oxidized_oxidized.1.td2v344vi508be2duu0a64iz0
9a7291dc4633 traefik:v2.1.0 "/entrypoint.sh trae…" 6 days ago Up 6 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp traefik_traefik.1.yfx1ol6veujucfvi5jgq9s47t
1e0b16f9e06b portainer/portainer:latest "/portainer -H tcp:/…" 6 days ago Up 6 days 9000/tcp portainer_portainer.1.35zbedejlacz4uze09jsrue5l
6cbfc2b94992 guacamole/guacd:latest "/bin/sh -c '/usr/lo…" 6 days ago Up 6 days (healthy) 4822/tcp Guacamole_guacd.1.54r9tajddjp6fcz3h34xedntn
ea960b1fbf12 postgres:14.2 "docker-entrypoint.s…" 6 days ago Up 6 days 5432/tcp Guacamole_postgres.1.uj8326mkigf5zrujp0f29s7d8
Chris Sayre  1 day ago
The Docker version is quite a bit old; I am going to update.
Chris Sayre  1 day ago
OK. I got docker updated.
Chris Sayre  1 day ago
I will try to start from scratch tomorrow.
Carlos Dams  1 day ago
Hi Chris, sorry for the late reply.
The output of the ss command should also include the Wazuh ports from Docker. I am not sure about the other applications; there seems to be a similar discussion here, and probably EXPOSE and the way the containers are run has something to do with it.
I am providing here the output of ss in a lab environment:
[root@wazuhRedhatSmaill ~]# ss -ltn4
State  Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0      128    *:55000            *:*
LISTEN 0      100    127.0.0.1:25       *:*
LISTEN 0      128    *:443              *:*
LISTEN 0      128    *:1514             *:*
LISTEN 0      128    *:1515             *:*
LISTEN 0      128    *:9200             *:*
LISTEN 0      128    *:22               *:*
I am not sure if I understood this "Do you know of a docker container that has wazuh built in that I can try?" - here is the documentation to deploy Wazuh in Docker
Here is what I get when running docker ps
[root@wazuhRedhatSmaill ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
def4c4cd51a4 wazuh/wazuh-dashboard:4.3.5 "/entrypoint.sh" 6 days ago Up 10 hours 443/tcp, 0.0.0.0:443->5601/tcp, :::443->5601/tcp single-node_wazuh.dashboard_1
3191b4a0e759 wazuh/wazuh-manager:4.3.5 "/init" 6 days ago Up 10 hours 0.0.0.0:1514-1515->1514-1515/tcp, :::1514-1515->1514-1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp, 1516/tcp single-node_wazuh.manager_1
72f261444c46 wazuh/wazuh-indexer:4.3.5 "/entrypoint.sh open…" 6 days ago Up 10 hours 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp single-node_wazuh.indexer_1
Let me know; I'm glad to keep helping here too. We just have to be patient with the troubleshooting, and eventually it will work.
Chris Sayre  1 day ago
It's all good, @Carlos Dams. Here is the error that I am getting when I try to deploy the stack:
Deployment error
Ignoring unsupported options: links, restart, ulimits
failed to create service wazuh2_wazuh2.indexer: Error response from daemon: rpc error: code = InvalidArgument desc = name must be valid as a DNS name component
Chris Sayre  24 hours ago
@Carlos Dams I was just thinking about the network path that the traffic is taking, and I looked at the Nginx logs and I am seeing this:
Chris Sayre  24 hours ago
2022/07/14 18:28:49 [error] 31#31: *1 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94
2022/07/14 18:28:59 [error] 31#31: *3 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94
2022/07/14 18:29:09 [error] 31#31: *5 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94
2022/07/14 18:29:19 [error] 31#31: *7 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94
[... the same error repeats every 10 seconds through 2022/07/14 18:34:01, alternating between upstreams "10.0.2.30:1515" and "10.0.2.31:1515" ...]
0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:34:11 [error] 31#31: *65 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:34:21 [error] 31#31: *67 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:34:31 [error] 31#31: *69 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:34:41 [error] 31#31: *71 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:34:51 [error] 31#31: *73 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:35:11 [error] 31#31: *75 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:35:21 [error] 31#31: *77 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:35:31 [error] 31#31: *79 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 
10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:35:41 [error] 31#31: *81 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:35:51 [error] 31#31: *83 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:36:01 [error] 31#31: *85 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:36:11 [error] 31#31: *87 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:36:21 [error] 31#31: *89 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:36:31 [error] 31#31: *91 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.31:1515", bytes from/to client:94/0, bytes from/to upstream:0/94 2022/07/14 18:36:41 [error] 31#31: *93 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.255.0.2, server: 0.0.0.0:1514, upstream: "10.0.2.30:1515", bytes from/to client:94/0, bytes from/to upstream:0/94
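Notably, every entry in the log above shows nginx listening on port 1514 (the agent data port) but proxying to the managers' port 1515 (the enrollment/authd port), which would explain the resets: agent traffic arrives at wazuh-authd, which drops it. A minimal sketch of what the intended stream section might look like, assuming a standard nginx load balancer in front of a Wazuh cluster (the upstream IPs are taken from the log; the upstream names and file layout are assumptions, not the thread's actual config):

    # nginx.conf (stream section) -- illustrative sketch only
    stream {
        upstream agent_data {          # wazuh-remoted on both nodes
            server 10.0.2.30:1514;
            server 10.0.2.31:1514;
        }
        upstream agent_enrollment {    # wazuh-authd: master node only
            server 10.0.2.30:1515;
        }
        server {
            listen 1514;
            proxy_pass agent_data;     # 1514 -> 1514, not 1515
        }
        server {
            listen 1515;
            proxy_pass agent_enrollment;
        }
    }

The key point is that each listener must forward to the same service's port on the backends; mixing 1514 and 1515 produces exactly the connection resets shown.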
Carlos Dams  19 hours ago
Hi @Chris Sayre,
Regarding the following error: are you running the commands from the documentation, for example with docker-compose?
Deployment error
Ignoring unsupported options: links, restart, ulimits
failed to create service wazuh2_wazuh2.indexer: Error response from daemon: rpc error: code = InvalidArgument desc = name must be valid as a DNS name component

I will check on a lab environment for the nginx part; I'm not sure what that could mean.
Chris Sayre  6 hours ago
@Carlos Dams I am using portainer to deploy and manage our docker containers.
Chris Sayre  5 hours ago
@Carlos Dams a lot of progress over the last hour.
Chris Sayre  5 hours ago
I got the single node deployment up and running.
Chris Sayre  5 hours ago
I am going to try and redeploy the multinode.
Chris Sayre  25 minutes ago
So I guess, I just screwed something up in my initial docker file.
Chris Sayre  25 minutes ago
Everything seems to be working now.
Carlos Dams  23 minutes ago
Hi @Chris Sayre,
Glad the single node deployment worked. Did you try connecting an agent?
About "I am using portainer to deploy and manage our docker containers": I recommend using the docker-compose command as stated in Wazuh's documentation; however, allow me some time to investigate whether it is OK to use docker stack deploy.
As for your last message, that is really good news! Did you use the same docker stack deploy command?
Chris Sayre  22 minutes ago
"This stack will be deployed using the equivalent of the docker stack deploy command."
Chris Sayre  22 minutes ago
This is what portainer has written on their site.
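For reference, the docker stack deploy flow that Portainer is emulating looks roughly like this on the command line (the stack name "wazuh" here is an assumption for illustration, not from the thread):

    # Enable swarm mode on the host if it isn't already
    docker swarm init

    # Deploy the compose file as a swarm stack named "wazuh"
    docker stack deploy -c docker-compose.yml wazuh

    # Check that the services came up
    docker stack services wazuh

Note that docker stack deploy runs the services in swarm mode, which enforces stricter service-name rules than plain docker-compose.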
Chris Sayre  21 minutes ago
Things seem to be working now.
Chris Sayre  21 minutes ago
I have 3 agents connected.
Chris Sayre  21 minutes ago
Working on getting a linux agent connected now.
Carlos Dams  20 minutes ago
Gotcha, that's the beauty of Docker, haha. It seems easier to start from scratch than to troubleshoot.
Chris Sayre  18 minutes ago
Yeah, I agree. There were a few edits that I had to make to the compose file. Were you able to get a compose file with a . in a service name to deploy?
Chris Sayre  13 minutes ago
Here is what was causing me all of the issues.
Chris Sayre  13 minutes ago
# Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2)
version: '3.7'
services:
  wazuh.master:
Chris Sayre  13 minutes ago
The name of the service has a . in it.
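Under swarm mode, each service name becomes a DNS name component (prefixed with the stack name, e.g. wazuh2_wazuh2.indexer), and a dot is not valid there, which is what the "name must be valid as a DNS name component" error was complaining about. A hedged sketch of the rename (only the service key is shown; the rest of the compose file stays unchanged, and "wazuh-master" is an illustrative choice, not the thread's actual fix):

    # Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2)
    version: '3.7'
    services:
      wazuh-master:    # renamed from wazuh.master for swarm compatibility
        ...

Any container_name or hostname references to the old dotted name elsewhere in the file would need the same rename.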
Carlos Dams  13 minutes ago
I have not tried it but seems it is not possible: https://stackoverflow.com/a/50571512
Chris Sayre  11 minutes ago
OK. Should I submit a PR to correct the docker compose files?
New
Carlos Dams  4 minutes ago
It was failing because it was being run as part of a swarm; a service name containing a dot (.) is valid when running a single container.
Chris Sayre  < 1 minute ago
Thank you for all of the help.
