Installing Wazuh in cluster, and Elasticsearch


Alvaro Victoriano

Sep 19, 2019, 1:10:33 AM
to Wazuh mailing list
Hello Wazuh Team.

I have a question please.

Is it possible to install Wazuh in cluster and Elasticsearch in cluster as well, according to your docs here? https://documentation.wazuh.com/3.10/installation-guide/installing-elastic-stack/configure-elasticsearch-cluster.html

And I have a doubt about the username and password of Elastic in this step: what would they be?
Is it possible to enable X-Pack security for each node?

output.elasticsearch:
        hosts: ['http://<elasticsearch_ip_node1>:9200','http://<elasticsearch_ip_node2>:9200','http://<elasticsearch_ip_node3>:9200']
        loadbalance: true


Thank you
Message has been deleted

Pedro de Castro

Sep 25, 2019, 11:48:09 AM
to Wazuh mailing list
Hi Alvaro,

I am not sure I understand your question; let me know if you need further clarification.

Regarding Wazuh and Elasticsearch clusters: yes, they can work together. You can have a Wazuh cluster (it provides high availability and supports a larger agent load) and an Elasticsearch cluster (it provides high availability and improves speed and overall performance).
You can find more info about building an Elasticsearch cluster in their documentation (adding nodes to an Elasticsearch cluster).
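Once all the nodes are up, whether they actually formed a single cluster can be checked with Elasticsearch's `_cluster/health` API (a sketch; replace `<elasticsearch_ip>` with any node's address):

```shell
# "number_of_nodes" in the response should match the number of nodes you configured
curl -s 'http://<elasticsearch_ip>:9200/_cluster/health?pretty'
```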

X-Pack plugins are shipped by default on all nodes (at least the Security plugin); you need to enable the plugin on all the nodes by modifying the elasticsearch.yml configuration file.
The passwords are auto-generated: you will need to run a CLI tool that guides you through the next steps.
Here you can find more info about setting up the X-Pack Security plugin.
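As a minimal sketch of those steps (exact paths can vary per package and version; check the X-Pack documentation for your release):

```shell
# 1. In /etc/elasticsearch/elasticsearch.yml on every node, enable the Security plugin:
#      xpack.security.enabled: true
# 2. Restart Elasticsearch, then generate passwords for the built-in users
#    (elastic, kibana, ...) with the interactive CLI:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```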

I hope it helps, regards,
Pedro.

Alvaro Victoriano

Sep 27, 2019, 2:59:57 AM
to Wazuh mailing list
Thank you so much Carlos.

Thank you so much, Pedro; yes, you are right overall. You understood what I want, and sorry if the explanation was a bit short.

Yes, currently I have Wazuh installed in a cluster: I have 1 master and 3 workers. I was reviewing your docs, saw the new docs about Elastic in cluster, and thought I could install additional Elastic nodes where the Wazuh workers are already installed. So this was my doubt, and about the X-Pack security, whether it should be added to all Filebeats, as Carlos mentioned.

Thank you again, Pedro and Carlos

Alvaro Victoriano

Sep 27, 2019, 3:15:36 PM
to Wazuh mailing list
I have another doubt, please.

I am using an SSL CA for the communication between Filebeat and Elasticsearch, and the X-Pack plugin is already enabled on the Filebeat nodes.

As I will have more than one Elastic node, should I use the same certificate Filebeat is using for all the Elasticsearch nodes? I mean, copy the same certificate to the Elastic nodes, and so on?
Or how should it be, since I am going to configure a cluster?


nano /etc/filebeat/filebeat.yml

output.elasticsearch:
        hosts: ['http://<elasticsearch_ip_node1>:9200','http://<elasticsearch_ip_node2>:9200','http://<elasticsearch_ip_node3>:9200']
        loadbalance: true

output.elasticsearch.username: elastic
output.elasticsearch.password: password


In the previous configuration, if I am going to generate a password for each Elastic node, how should the user and password be set in the configuration of each Filebeat node?

Thank you

Juan Carlos Rodríguez

Oct 1, 2019, 5:02:29 AM
to Wazuh mailing list
Hi Alvaro,

Let me get this straight:

You can generate a certificate for each node, but if there are not too many nodes, I recommend copying the same certificate to all of them.

On the other hand, the password is the same for the whole Elastic cluster, because it is configured on the master node and replicated to the rest. So in all the Filebeat files you will have to enter the same Elastic password.
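So each Wazuh node's `/etc/filebeat/filebeat.yml` would carry the same credentials, along these lines (`elastic` is the built-in superuser; a less privileged writer user would also work):

```yaml
output.elasticsearch.username: "elastic"
output.elasticsearch.password: "<the_same_password_on_every_filebeat>"
```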

Please tell me if this was your doubt; if you have any other, I will try to resolve it.

Hope this helps.

Regards,
Juan Carlos

Alvaro Victoriano

Oct 7, 2019, 3:12:27 AM
to Wazuh mailing list
Hello Juan.

I have tried; sorry to tell you it didn't work.
I have a doubt about the master node: as you said the password is replicated to the other nodes, how is it defined if all the configurations are the same on all nodes?


I would like to share the configurations; maybe they will help.
I have 3 Wazuh workers and 1 master.

I set up SSL communication according to your docs:

instances:
    - name: "wazuh-master-ELK"
      ip:
        - "10.0.0.1"
    - name: "worker1"
      ip:
        - "10.0.0.2"
    - name: "worker1"
      ip:
        - "10.0.0.3"
    - name: "worker3"
      ip:
        - "10.0.0.3"


and I ran /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive for ELK


About the cluster:

cluster.name: elastic-cluster
node.name: <node_name>
network.host: <elasticsearch_ip>
discovery.seed_hosts:
 - <elasticsearch_ip_node1>
 - <elasticsearch_ip_node2>
 - <elasticsearch_ip_node3>
cluster.initial_master_nodes:
 - <master_node_1>
 - <master_node_2>
 - <master_node_3>

About cluster.initial_master_nodes: is it just a name?


Thanks


José Antonio Sánchez Robles

Oct 17, 2019, 1:01:02 PM
to Wazuh mailing list
Hi Álvaro Victoriano,

Sorry for the late response. First of all, this is the link to configure X-Pack with Wazuh in our documentation.

In the first step, you generated the certificates with the file `instances.yml`.

Your example:
instances:
    - name: "wazuh-master-ELK"
      ip:
        - "10.0.0.1"
    - name: "worker1"
      ip:
        - "10.0.0.2"
    - name: "worker1"
      ip:
        - "10.0.0.3"
    - name: "worker3"
      ip:
        - "10.0.0.3"

You say you have a Wazuh manager, three Wazuh workers, and three Elasticsearch nodes, so you need to generate certificates for four Filebeat instances (one for each Wazuh instance), three for the Elasticsearch nodes and maybe one more if you use Kibana.


instances:
    - name: "wazuh-manager"
      ip:
        - "10.0.0.1"
    - name: "wazuh-worker-1"
      ip:
        - "10.0.0.2"
    - name: "wazuh-worker-2"
      ip:
        - "10.0.0.3"
    - name: "wazuh-worker-3"
      ip:
        - "10.0.0.4"
    - name: "es-node-1"
      ip:
        - "10.0.0.5"
    - name: "es-node-2"
      ip:
        - "10.0.0.6"
    - name: "es-node-3"
      ip:
        - "10.0.0.7"
    - name: "kibana"
      ip:
        - "10.0.0.8"

Change the IPs and the names, and on one of your Elasticsearch nodes generate the certificates with this command:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert ca --pem --in instances.yml --out certs.zip
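Note that `elasticsearch-certutil` puts the CA and the per-instance certificates inside the zip, so it has to be extracted first; assuming `unzip` is installed and you run it in the directory where `certs.zip` was written:

```shell
# Creates ./ca/ and one ./<instance-name>/ directory per entry in instances.yml
unzip certs.zip -d ./
```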

Configure first your Elasticsearch nodes:
mkdir /etc/elasticsearch/certs/ca -p
cp ca/ca.crt /etc/elasticsearch/certs/ca
cp <es-node-x>/<es-node-x>.* /etc/elasticsearch/certs # Change the <es-node-x>
chown -R elasticsearch: /etc/elasticsearch/certs
chmod -R 770 /etc/elasticsearch/certs

Add the proper settings per Elastic node:
# Transport layer
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/<es-node-x>.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/<es-node-x>.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]

# HTTP layer
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/certs/<es-node-x>.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/<es-node-x>.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca/ca.crt" ]

Restart the service:
systemctl restart elasticsearch

Now it is time to configure Filebeat on the Wazuh instances. Create the directory /etc/filebeat/certs, then copy the certificate authority, the certificate, and the key there.
mkdir /etc/filebeat/certs/ca -p
cp ca/ca.crt /etc/filebeat/certs/ca
cp wazuh-manager/wazuh-manager.crt /etc/filebeat/certs
cp wazuh-manager/wazuh-manager.key /etc/filebeat/certs
chmod 770 -R /etc/filebeat/certs

Add the proper settings in /etc/filebeat/filebeat.yml

output.elasticsearch.hosts: ['<es-node-ip:9200>', '<es-node-ip:9200>', '<es-node-ip:9200>']
output.elasticsearch.protocol: https
output.elasticsearch.loadbalance: true
output.elasticsearch.ssl.certificate: "/etc/filebeat/certs/wazuh-manager.crt"
output.elasticsearch.ssl.key: "/etc/filebeat/certs/wazuh-manager.key"
output.elasticsearch.ssl.certificate_authorities: ["/etc/filebeat/certs/ca/ca.crt"]

In the previous answers you showed your Filebeat configuration; change it to look like our example, since using that multilevel layout for output.elasticsearch throws an error.

nano /etc/filebeat/filebeat.yml

output.elasticsearch:
        hosts: ['http://<elasticsearch_ip_node1>:9200','http://<elasticsearch_ip_node2>:9200','http://<elasticsearch_ip_node3>:9200']
        loadbalance: true

Test your Filebeat configuration using `filebeat test output` and restart the service if it works.
systemctl restart filebeat

When your Elasticsearch nodes and Filebeat instances are configured, enable X-Pack security on all of your Elastic nodes by adding this line to the `/etc/elasticsearch/elasticsearch.yml` files:
xpack.security.enabled: true

And restart all nodes
systemctl restart elasticsearch

Generate credentials for all the Elastic Stack pre-built roles and users
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Set up credentials for Filebeat. Add the following two lines on all Filebeat instances, in the files `/etc/filebeat/filebeat.yml`:
output.elasticsearch.username: "elastic"
output.elasticsearch.password: "<your_password>"

And restart Filebeat:
systemctl restart filebeat

If everything has worked, your cluster should now be running with SSL encryption and credentials.
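A quick way to verify both the TLS setup and the credentials from any Wazuh node is a manual HTTPS request (placeholders as in the configuration above):

```shell
# Expect cluster health JSON with "status": "green" or "yellow"
curl -s --cacert /etc/filebeat/certs/ca/ca.crt \
     -u elastic:<your_password> \
     'https://<es-node-ip>:9200/_cluster/health?pretty'
```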

I hope this helps.
Please tell me if this was your doubt; if you have any other, I will try to resolve it.

Regards,
Jose


Alvaro Victoriano

Oct 21, 2019, 4:13:38 AM
to Wazuh mailing list


Hello Jose.

Thank you so much for this great clarification.

About the certificates: I have already done it, and it is working fine with this infrastructure:

instances:
   - name: "wazuh-master-ELK"
     ip:
       - "10.0.0.1"
   - name: "worker1"
     ip:
       - "10.0.0.2"
   - name: "worker1"
     ip:
       - "10.0.0.3"
   - name: "worker3"
     ip:
       - "10.0.0.4"


About the infrastructure: I don't have 3 Elastic nodes; I am going to use the same Wazuh workers to install the 3 Elastic nodes, so it's going to be like this:

instances:
   - name: "wazuh-master-ELK"
     ip:
       - "10.0.0.1"
   - name: "worker1 & ELK"
     ip:
       - "10.0.0.2"
   - name: "worker1   & ELK"
     ip:
       - "10.0.0.3"
   - name: "worker3  & ELK"
     ip:
       - "10.0.0.4"

I think it's viable, right?
As the certificates are signed for each node by its IP, I don't need to generate new certificates for the Elastic nodes, but can use the ones already generated for each worker, which Filebeat is already using, correct?

This way, the point is to change the configurations that already exist.

So I am just going to copy the certificate on each worker node from the Filebeat directory /filebeat/certs to /elasticsearch/certs, then add the new configuration to Elastic.


The credentials for Elastic I already generated with the first infrastructure, and X-Pack is activated; will that cause a problem now, or will it replicate automatically to the new Elastic nodes?


I will show in a very simple way what I have done with 2 Elastic nodes and the Wazuh manager, without SSL and X-Pack; please tell me if there is any error in this.

Node01, ELK
transport.host: localhost
transport.tcp.port: 9300
http.port: 8200

cluster.name: new-cluster
node.name: node01
network.host: 192.168.0.24
discovery.seed_hosts:
 - 192.168.0.24
 - 192.168.0.25

cluster.initial_master_nodes:
 - node01
 - node02


Node02, ELK
transport.host: localhost
transport.tcp.port: 9300
http.port: 8200

cluster.name: new-cluster
node.name: node02
network.host: 192.168.0.25
discovery.seed_hosts:
 - 192.168.0.24
 - 192.168.0.25

cluster.initial_master_nodes:
 - node01
 - node02


For each Filebeat on each node:
output.elasticsearch:
        loadbalance: true

Kibana
I have a question, please:
With this same configuration on all Elastic nodes, all of them are going to be master nodes, right? So I can make changes from any of them?



Regards.

José Antonio Sánchez Robles

Oct 21, 2019, 10:37:37 AM
to Wazuh mailing list
Hi Álvaro,

Yes, if you use the same server for two services, you only need one certificate per IP. I think it's viable, too.

In the configuration files of your nodes, you set transport.host to localhost; when the other nodes, Kibana, or Filebeat try to connect to them, they will reject the requests.
Remove this setting or change it to the local or public IP of the server.
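For example, on node01 from the configuration above, the transport section would become (a sketch; the rest of the file stays as it is):

```yaml
# Bind the transport layer to the node's reachable address instead of localhost
transport.host: 192.168.0.24
transport.tcp.port: 9300
```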

Then, add the certificates and enable X-Pack on all Elasticsearch nodes.

About your question ("With this same configuration in all Elastic, all of them are going to be master nodes, right? So I can make changes from any node?"):
Yes, this config makes all the nodes master-eligible.
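Only one of the master-eligible nodes is elected as the acting master at a time; which one can be checked with the `_cat/master` API, for example (add `-u elastic:<password>` once X-Pack security is enabled):

```shell
# Prints the id, host, ip and node name of the currently elected master
curl -s 'http://192.168.0.24:9200/_cat/master?v'
```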

Alvaro Victoriano

Oct 24, 2019, 3:02:49 PM
to Wazuh mailing list

Hi Jose.

I have one more error.

After setting everything up, ELK works fine, but I am getting this error in the Wazuh app. I tried to remove it completely and reinstall it, but it didn't work.

Could you help me with this please?

Thanks


Screenshot.png
Screenshot (1).png

José Antonio Sánchez Robles

Oct 25, 2019, 2:56:04 AM
to Wazuh mailing list
Hi Álvaro,

What versions of the Wazuh app and Elasticsearch did you install?


Alvaro Victoriano

Oct 25, 2019, 10:49:28 AM
to Wazuh mailing list
Hi Jose.

ELK 7.3.2
Wazuh 3.10.2

Alvaro Victoriano

Nov 18, 2019, 7:03:04 PM
to Wazuh mailing list
Hello.

Everything is fine with the configurations.

I could discover the main problem: it is all about Elasticsearch. When I set up Elastic in cluster, it was after starting Elastic for the first time, so an independent UUID had already been generated for each node, and each one had its own localhost as a cluster.

So the solution is to start Elastic for the first time with the cluster configuration on all nodes, or to remove the entire /var/lib/elasticsearch/nodes directory and start the cluster.
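In other words, each node had already bootstrapped its own single-node cluster state. A rough sketch of the reset (warning: this deletes all local index data on the node, so only do it when that data is disposable):

```shell
systemctl stop elasticsearch
rm -rf /var/lib/elasticsearch/nodes   # discard the standalone cluster UUID and state
systemctl start elasticsearch
```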

Pedro, Juan and Jose, thank you so much for your help.

1.png