Best Practice Road Warrior Scenario and DMZ


Erik Vetters

Mar 1, 2018, 8:50:31 AM
to Wazuh mailing list
Hello Wazuh List,

I have a simple question regarding road warriors: I have not found anything covering a scenario where many users are often outside the internal
network.

Is it better to place the whole Wazuh/Elastic Stack server in the DMZ and let the agents communicate from inside and outside the network, even
though that leaves all the data in the DMZ?

Or should I place the Wazuh manager in the DMZ, keep the Elastic Stack in the internal network, and handle the communication over Filebeat?

From what I have read in the documentation, that resembles a clustered setup and should be possible.

What do you think? What is the best practice for this scenario?

Kind regards,
Erik


Kat

Mar 1, 2018, 3:39:22 PM
to Wazuh mailing list
I did something a little different.
I have a 2 manager cluster, but I keep one inside AWS and one inside the corp net.
(some might say put your managers in AWS, but that leaves all eggs in one basket)
They are connected in a cluster via a single OpenVPN point-to-point tunnel.
(This avoids unnecessary port/server exposure to the internet; only the AWS instance is exposed for agent communication.)
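
A static-key OpenVPN point-to-point tunnel along those lines can be sketched roughly as follows (all addresses, hostnames, and paths are illustrative placeholders, not the actual setup):

```text
# Corp-side /etc/openvpn/wazuh-p2p.conf (illustrative sketch)
dev tun
proto udp
remote aws-manager.example.com 1194   # placeholder AWS endpoint
ifconfig 10.8.0.2 10.8.0.1            # local / remote tunnel addresses
secret /etc/openvpn/static.key        # pre-shared static key
keepalive 10 60
persist-key
persist-tun
```

The AWS side would mirror this with the ifconfig addresses swapped and no remote line, simply listening on UDP/1194.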

Now, when a user is on the internal network, they hit the internal manager. No issues.
When users head home or hit the road, they connect to the AWS manager.

I've only been testing it for about 1-2 months with about 50-70 agents, but so far I have not had any real issues.

-Kat

Robert H

Mar 23, 2018, 1:41:52 PM
to Wazuh mailing list
I hope I can join this thread. I'm preparing for a similar situation with 400 remote users. My impression is that they will always be external, but in case some do roam I'm curious how you have this working.

My initial plan is to put one manager in the DMZ and have that manager contact the other managers via the cluster. This would be via an FQDN to identify the manager.

@ Kat,
I'm curious how you set up the agents to connect to either the internal or external manager. Did you modify the ossec.conf on the agents to add two managers' information, or do you use a single FQDN that resolves to the DMZ manager when outside the network and to the internal manager when inside?

Regards,
Robert

Erik Vetters

Mar 26, 2018, 8:31:25 AM
to Wazuh mailing list
Hi,

I'm just starting with the internal agents and a single manager. Later I want to set up a cluster: one manager internal (the one already in place) and one in the DMZ.

Regarding the FQDN, I guess split DNS could be an option: internally the name resolves to the internal manager, and when agents are external and use "other" DNS servers, it resolves to the DMZ manager. I don't know for sure whether this works, but it should be possible.
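
A split-DNS setup like that can be sketched with BIND views, for example (zone names, file paths, and addresses here are placeholders):

```text
// named.conf (illustrative sketch): internal clients resolve the manager
// name to the internal address; everyone else gets the DMZ address.
view "internal" {
    match-clients { 10.0.0.0/8; };          // placeholder internal range
    zone "company.net" {
        type master;
        file "zones/company.net.internal";  // wazuh.company.net -> internal IP
    };
};

view "external" {
    match-clients { any; };
    zone "company.net" {
        type master;
        file "zones/company.net.external";  // wazuh.company.net -> DMZ/public IP
    };
};
```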

Kind regards,
Erik

migue...@wazuh.com

Apr 15, 2018, 2:11:32 PM
to Wazuh mailing list
Hi,

You can set up a cluster of Wazuh managers to give some roaming capability to your infrastructure. The cluster basically consists of two types of nodes:
  1. Master node: in charge of controlling the cluster itself, by synchronizing tasks and information related to:
    • Agent registration/deletion.
    • Rules, decoders, and CDB lists.
    • Agent grouping.
    • Centralized configuration.
  2. Client node: client nodes receive all files, configurations, and keys from the master node.
Both types of nodes can be used to fetch and process events from agents. This kind of architecture can give you HA and load balancing between managers and agents, but it can also be used for roaming purposes.
Consider the following architecture as an example:
  1. The Wazuh manager client node on the outside.
  2. The Wazuh manager master node in your internal network.
  3. The Elastic Stack cluster on your internal network.
  4. A VPN tunnel, or the TCP/1516 and TCP/5000 ports open, to permit communication between the managers and the Filebeat/Logstash nodes.

All the communication between the managers can be done via the Wazuh cluster (by default it uses port TCP/1516 with a secure transport), and the events can be forwarded from the managers to a Logstash appliance in the internal network (the default listen port is TCP/5000, and the events can be sent over a secure transport as well). Wazuh agents can be pointed to multiple Wazuh managers and will switch when the current manager is not reachable:
<client>
  <server>
    <address>internal.company.net</address>
    <port>1514</port>
    <protocol>udp</protocol>
  </server>
  <server>
    <address>external.example.net</address>
    <port>1514</port>
    <protocol>udp</protocol>
  </server>
</client>
The above sets the agent to use internal.company.net as the first manager; if the agent can't connect to it, it will continue with the next one, external.example.net. Since the client.keys file is synchronized among the Wazuh managers in the cluster, the agent should be able to connect to any manager that is reachable. If it is on the internal network and the internal manager is reachable, it will connect to that one; otherwise it will try the external manager.

Another architecture can involve opening other ports to the outside. In this case, the manager on the internal network will absorb all the load:
  1. The Wazuh manager in the internal network (possibly in a DMZ).
  2. The Elastic Stack cluster on the internal network, or in a DMZ as well.
  3. The UDP or TCP 1514 port open for agent/manager communication (this port should probably be DNATed to the DMZ manager address).
  4. The communication between the manager and the Elastic Stack done via Filebeat with a secure transport.
In this case, you use the same agent configuration as above, but point to the internal DNS manager host and the externally DNATed DNS host. The agents should switch managers automatically depending on which one is reachable.
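
The DNAT in point 3 could look something like this on a Linux perimeter firewall (a sketch only; 203.0.113.10 and 192.0.2.10 are placeholder public and DMZ addresses):

```shell
# Forward agent traffic arriving on the public address to the DMZ manager
iptables -t nat -A PREROUTING -d 203.0.113.10 -p udp --dport 1514 \
         -j DNAT --to-destination 192.0.2.10:1514
```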

Also, consider using the "any" keyword in the IP address definition when registering the agent, since the agent can connect from multiple source addresses.
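
For illustration, a client.keys entry registered with "any" instead of a fixed source IP looks like this (the agent name is a placeholder, and <agent_key> stands in for the real key):

```text
# /var/ossec/etc/client.keys (illustrative entry)
001 roaming-laptop-01 any <agent_key>
```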

In addition, we've detected some bugs in the Wazuh cluster; new fixes and improvements will be released in the coming weeks.

I hope this helps.

C. L. Martinez

Apr 16, 2018, 7:44:45 AM
to migue...@wazuh.com, Wazuh mailing list
A really interesting thread.

But I have some questions here:

a) How do you configure the client nodes? For example, in the previous sample the master configuration is:

<cluster>
  <name>test_cluster</name>
  <node_name>manager_01</node_name>
  <node_type>master</node_type>
  <key>ugdtAnd7Pi9myP7CVts4qZaZQEQcRYZa</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>172.17.0.2</node>
    <node>172.17.0.3</node>
  </nodes>
  <hidden>no</hidden>
</cluster>
What is the correct config for nodes 172.17.0.2 and 172.17.0.3? Something like:

<cluster>
  <name>test_cluster</name>
  <node_name>node_client_01</node_name>
  <node_type>client</node_type>
  <key>ugdtAnd7Pi9myP7CVts4qZaZQEQcRYZa</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <hidden>no</hidden>
</cluster>

But where do you configure which node is the master?

b) To configure the ELK integration, does only the master node need to be configured to use wazuh-api, or do all cluster nodes need to be configured? If the latter, how do you configure the Wazuh app to connect to the API?

Thanks.



Miguelangel Freitas

Apr 16, 2018, 4:52:50 PM
to C. L. Martinez, Wazuh mailing list
Hi,

It's necessary to set all of the cluster nodes' IP addresses inside the <cluster> tag, on the master and the clients as well. For example:

For a master node:

<cluster>
  <name>test_cluster</name>
  <node_name>manager_master_01</node_name>
  <node_type>master</node_type>
  <key>ugdtAnd7Pi9myP7CVts4qZaZQEQcRYZa</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>172.17.0.2</node>
    <node>172.17.0.3</node>
  </nodes>
  <hidden>no</hidden>
</cluster>

For a client node:

<cluster>
  <name>test_cluster</name>
  <node_name>manager_client_01</node_name>
  <node_type>client</node_type>
  <key>ugdtAnd7Pi9myP7CVts4qZaZQEQcRYZa</key>
  <interval>2m</interval>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>172.17.0.2</node>
    <node>172.17.0.3</node>
  </nodes>
  <hidden>no</hidden>
</cluster>

The nodes will assume the role specified in the <node_type> tag; please take a look here: https://documentation.wazuh.com/current/user-manual/manager/wazuh-cluster.html#use-case-deploying-a-wazuh-cluster

As for the API, you should install and configure it on the master node of the cluster; the master node maintains accurate data about itself and the client nodes in terms of agent status. In addition, you should point the Wazuh app to the Wazuh API on the master node.

I hope this helps.

Best Regards,

Miguelangel Freitas


C. L. Martinez

Apr 19, 2018, 5:51:10 AM
to Miguelangel Freitas, Wazuh mailing list
Hi Miguel,

I have configured everything as you told me, but in Kibana there are no client node alerts, only those of the master... And the master hasn't registered any alerts from the client nodes.

Miguelangel Freitas

Apr 19, 2018, 11:21:30 AM
to C. L. Martinez, Wazuh mailing list
Hi C.L.

I'm assuming you are not getting the alerts from the Wazuh manager client node into Kibana. In this scenario you may also need to configure Filebeat to forward the events to Logstash: the cluster keeps some files and status synchronized between the master and client nodes, but not the generated alerts. Please take a look at the Filebeat installation documentation for RPM-based and DEB-based distros to configure Filebeat on the master and the client nodes as well.
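
For reference, a minimal filebeat.yml along these lines might look like the sketch below (the Logstash hostname and certificate path are placeholders for your environment; the alerts path is the default Wazuh location):

```yaml
filebeat:
  prospectors:
    - input_type: log
      paths:
        - "/var/ossec/logs/alerts/alerts.json"

output:
  logstash:
    # Placeholder host: the internal Logstash appliance listening on TCP/5000
    hosts: ["logstash.internal.company.net:5000"]
    ssl:
      # Placeholder path to the certificate used to secure the transport
      certificate_authorities: ["/etc/filebeat/logstash.crt"]
```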


I hope it helps.

Best Regards,

Miguelangel Freitas

