Changing SecretPassword Into MyOwn Password in Wazuh Docker Not Working


DUCARROZ Birgit

Jun 28, 2021, 6:30:11 AM
to Wazuh mailing list
Hi,

I am trying to change SecretPassword to my own password, but it is not
working (see the error below).

I applied point 2.3 "Use a secure password for the admin user on
Elasticsearch" of your doc
https://documentation.wazuh.com/current/docker/wazuh-container.html, but
isn't something missing from the doc?


The internal_users.yml file contains the password hashes for admin,
kibanaserver, kibanaro, logstash, readall and snapshotrestore, while in
the production-cluster.yml there are only three places to set a
password:

wazuh-master:
  environment:
    - ELASTIC_PASSWORD=SecretPassword

wazuh-worker:
  environment:
    - ELASTIC_PASSWORD=SecretPassword

kibana:
  environment:
    - ELASTICSEARCH_PASSWORD=SecretPassword


I changed this password and replaced the hash in my local
~/script/config/wazuh/internal_users.yml:
-rw-rw-r-- 1 elasticsearch root 1281 Jun 25 09:38 internal_users.yml


volumes:
  - /volume/config/wazuh/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml


User rights of internal_users.yml:
==================================
-rw-rw-r-- 1 elasticsearch root 1137 Jun 25 10:39 internal_users.yml


ERROR:
======
elasticsearch_1 | [2021-06-25T07:43:30,966][WARN ][c.a.o.s.a.BackendRegistry] [elasticsearch] Authentication finally failed for admin from 172.18.0.5:48492
kibana_1 | {"type":"log","@timestamp":"2021-06-25T07:43:30Z","tags":["error","elasticsearch","data"],"pid":48,"message":"[ResponseError]: Response Error"}
kibana_1 | {"type":"log","@timestamp":"2021-06-25T07:43:33Z","tags":["error","elasticsearch","data"],"pid":48,"message":"[ResponseError]: Response Error"}

Can someone please help me?
Thank you a lot,
Regards
Birgit

Alfonso Ruiz-Bravo

Jun 28, 2021, 8:46:33 AM
to Wazuh mailing list
Hello Birgit!

I think there is a bit of confusion, perhaps because I mixed up some terms. Let's separate the process in 2.3 "Use a secure password for the admin user on Elasticsearch" into two parts:

First: change the admin password in Open Distro.

At this point you change the admin user's password for Open Distro. To do this you have to modify the https://github.com/wazuh/wazuh-docker/blob/master/production_cluster/elastic_opendistro/internal_users.yml file: create a hash for the new password and replace the old one. Example:

- Old

admin:
  hash: "$2y$12$K/SpwjtB.wOHJ/Nc6GVRDuc1h0rM1DfvziFRNPtk27P.c4yDr9njO"
  reserved: true
  backend_roles:
  - "admin"
  description: "Demo admin user"
. . .

- New hash: $2y$12$sVNXzTVkocrGNsDw5Lo32OQHpzYr8JmCn8F8alzyQmKWZ6R2Q.75u

admin:
  hash: "$2y$12$sVNXzTVkocrGNsDw5Lo32OQHpzYr8JmCn8F8alzyQmKWZ6R2Q.75u"
  reserved: true
  backend_roles:
  - "admin"
  description: "Demo admin user"
. . .
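For reference, the new hash can be generated with the hash tool shipped inside the Open Distro image (this is the same command Birgit uses later in the thread; the image tag may differ in your setup):

```
docker run --rm -ti amazon/opendistro-for-elasticsearch:1.12.0 \
  bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
```

The tool prompts for the cleartext password and prints the bcrypt hash to paste into internal_users.yml.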

You will then have to mount this file as described in the guide so that it is present in the container. You can test whether this worked by accessing the Open Distro container and checking the contents:
> cat plugins/opendistro_security/securityconfig/internal_users.yml

I believe you have done this part correctly; I think the confusion arises in the https://github.com/wazuh/wazuh-docker/blob/master/production-cluster.yml file.

If you have done the first part correctly, the Open Distro security index will contain the admin user with its new password. This user is used by the Wazuh managers so that their Filebeat instances can ship alerts, and also by Kibana to access the Open Distro indexes and display visualizations, dashboards, etc. For this you have to enter the new password in the following fields:
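The fields themselves seem to have been omitted above; judging from the examples later in the thread, they are presumably these environment entries in production-cluster.yml:

```yaml
# Sketch: put the new cleartext password in these three entries.
services:
  wazuh-master:
    environment:
      - ELASTIC_PASSWORD=SecretPassword        # <- new password here
  wazuh-worker:
    environment:
      - ELASTIC_PASSWORD=SecretPassword        # <- new password here
  kibana:
    environment:
      - ELASTICSEARCH_PASSWORD=SecretPassword  # <- new password here
```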


If you have done these steps correctly, then the error lies elsewhere and I apologize for not understanding you correctly. In that case we would have to inspect some configuration files in the containers and perform some checks against the Open Distro API.

Do not hesitate to contact us with any questions you may have. 

Best regards,

Alfonso Ruiz-Bravo

Birgit Ducarroz

Jun 29, 2021, 6:40:41 AM
to Wazuh mailing list
Hi Alfonso,

First of all, thank you for your detailed answer. Your documentation is not confusing, and I don't find that you have mixed up terms.
For the first problem:

I checked my files twice, and I believe I have done exactly what you described. This is why I am confused and don't understand why it is not working. I now actually get a different error (see the end of this post):

I confirm that the new password works when logging into https://myhost.ch:9200 as admin,
but I get "unable to connect" (a blank browser page) when opening https://myhost.ch:5601 in the browser.

The only difference between my production-cluster.yml and the official one at https://github.com/wazuh/wazuh-docker/blob/master/production-cluster.yml is that I deleted the elasticsearch-2: and elasticsearch-3: sections. So in my case the externally mounted
internal_users.yml file appears only once, under

elasticsearch:
(...)
volumes:
  - /volume/config/wazuh/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml

(and not 3 times as in the official file).

I confirm having changed the hash for the demo admin user in my internal_users.yml and the cleartext password in my production-cluster.yml.
I logged into the elasticsearch container and can confirm that it picks up the internal_users.yml located on the external volume.
I also checked that the ownership of this file on my local host matches the user inside the elasticsearch Docker container (user 1000).

Where else would you look to find the bug?


ERROR:
======
wazuh-worker_1   | 2021-06-29T10:21:34.622Z    ERROR    [publisher_pipeline_output]    pipeline/output.go:154    Failed to connect to backoff(elasticsearch(https://diuf.unifr.ch:9200)): Get "https://diuf.unifr.ch:9200": context deadline exceeded
wazuh-worker_1   | 2021-06-29T10:21:34.622Z    INFO    [publisher_pipeline_output]    pipeline/output.go:145    Attempting to reconnect to backoff(elasticsearch(https://diuf.unifr.ch:9200)) with 1 reconnect attempt(s)
wazuh-worker_1   | 2021-06-29T10:21:34.622Z    INFO    [publisher]    pipeline/retry.go:219    retryer: send unwait signal to consumer
wazuh-worker_1   | 2021-06-29T10:21:34.622Z    INFO    [publisher]    pipeline/retry.go:223      done
kibana_1         | Wazuh alerts template not loaded - sleeping.
kibana_1         | Wazuh alerts template not loaded - sleeping.
wazuh-master_1   | 2021-06-29T10:21:39.813Z    ERROR    [publisher_pipeline_output]    pipeline/output.go:154    Failed to connect to backoff(elasticsearch(https://diuf.unifr.ch:9200)): Get "https://diuf.unifr.ch:9200": context deadline exceeded

For the second problem - this is what I am missing in your doc, and it will be my next question once the first problem is solved:
Where else do I have to change the other SecretPasswords contained in internal_users.yml? (I mean: which files do I have to change for the hashes of the demo users kibanaserver, kibanaro, logstash, readall and snapshotrestore?)
And where else must I change the default API_PASSWORD=MyS3cr37P450r.*- if I change it in my production-cluster.yml?

Kind regards,
Birgit

Alfonso Ruiz-Bravo

Jun 29, 2021, 8:28:18 AM
to Birgit Ducarroz, Wazuh mailing list
Hello Birgit,

- Browser Error:

For browser access, have you tried https://myhost.ch or https://myhost.ch:443? The Nginx container listens on port 443 and should redirect to Kibana (you will still have to accept the certificates if they are self-signed).

We can make sure that Open Distro is in a good state by running the following query:
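The query itself seems to be missing above; based on the check Birgit runs later in the thread, it is presumably the cluster health endpoint:

```
curl -k -u admin https://<ELASTIC_ADDRESS>:9200/_cluster/health?pretty
```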


If the cluster status is OK, we know that the container is OK. 

- Filebeat ERROR:

This may be due to a configuration error. In the wazuh-worker container, can you try to run this command?

filebeat test output

This will show whether Filebeat on the worker can reach its destination correctly. To do so, it checks these settings (/etc/filebeat/filebeat.yml):

output.elasticsearch:
  hosts: ['$ELASTICSEARCH_URL']
  username: $ELASTIC_USERNAME
  password: $ELASTIC_PASSWORD
  ssl.verification_mode: $FILEBEAT_SSL_VERIFICATION_MODE
  ssl.certificate_authorities: ['$SSL_CERTIFICATE_AUTHORITIES']
  ssl.certificate: $SSL_CERTIFICATE
  ssl.key: $SSL_KEY

Note that I have put the variables, not their actual values. These variables are the ones you will find in the production-cluster.yml file:

. . .
  wazuh-worker:
    image: wazuh/wazuh-odfe:4.3.0
    hostname: wazuh-worker
    restart: always
    environment:
      - ELASTICSEARCH_URL=https://elasticsearch:9200
      - ELASTIC_USERNAME=admin
      - ELASTIC_PASSWORD=SecretPassword
      - FILEBEAT_SSL_VERIFICATION_MODE=full
      - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/root-ca.pem
      - SSL_CERTIFICATE=/etc/ssl/filebeat.pem
      - SSL_KEY=/etc/ssl/filebeat.key
    volumes:
. . .

In this case, the filebeat.yml should look like this:

output.elasticsearch:
  hosts: ['https://elasticsearch:9200']
  username: admin
  password: SecretPassword
  ssl.verification_mode: full
  ssl.certificate_authorities: ['/etc/ssl/root-ca.pem']
  ssl.certificate: /etc/ssl/filebeat.pem
  ssl.key: /etc/ssl/filebeat.key

Remember to also modify the ELASTICSEARCH_PASSWORD environment variable in your Wazuh and Kibana containers.


- Internal  users (admin, kibanaserver, kibanaro, logstash, readall and snapshotrestore)

As the Wazuh Docker repository is configured, you only need to change the admin user's hash, as it is the one used to connect Filebeat and Kibana to Open Distro. You do not need to change the other users' hashes unless you are going to use them.

Now, as I said, both Kibana and the Filebeat instances on the Wazuh managers must be able to communicate with Open Distro, so you have to modify the ELASTIC_PASSWORD (Wazuh) and ELASTICSEARCH_PASSWORD (Kibana) environment variables in your containers. Examples:

- Case 1:

User: admin
Pass: my_new_pass
The my_new_pass password hash is set in Open Distro's internal_users.yml for the admin user. 

Your production-cluster.yml file should look like this:

services:
  wazuh-master:
. . .
    environment:
      - ELASTICSEARCH_URL=https://elasticsearch:9200
      - ELASTIC_USERNAME=admin
      - ELASTIC_PASSWORD=my_new_pass
. . .
  wazuh-worker:
. . .
    environment:
      - ELASTICSEARCH_URL=https://elasticsearch:9200
      - ELASTIC_USERNAME=admin
      - ELASTIC_PASSWORD=my_new_pass
. . .
  kibana:
. . .
    environment:
      - ELASTICSEARCH_USERNAME=admin
      - ELASTICSEARCH_PASSWORD=my_new_pass


- Case 2:

User: my_new_user
Pass: my_new_pass
The my_new_pass password hash is set in Open Distro's internal_users.yml for the my_new_user user.

Your production-cluster.yml file should look like this:

services:
  wazuh-master:
. . .
    environment:
      - ELASTICSEARCH_URL=https://elasticsearch:9200
      - ELASTIC_USERNAME=my_new_user
      - ELASTIC_PASSWORD=my_new_pass
. . .
  wazuh-worker:
. . .
    environment:
      - ELASTICSEARCH_URL=https://elasticsearch:9200
      - ELASTIC_USERNAME=my_new_user
      - ELASTIC_PASSWORD=my_new_pass
. . .
  kibana:
. . .
    environment:
      - ELASTICSEARCH_USERNAME=my_new_user
      - ELASTICSEARCH_PASSWORD=my_new_pass


- Change API_PASSWORD

You can change it directly in the environment variable:

  wazuh-master:
. . .
    environment:
 . . .
      - API_PASSWORD=MyS3cr37P450r.*-

Or you could use the Wazuh API:
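The API example seems to be missing above. A hypothetical sketch of what it might look like with the Wazuh 4.x API (the host, credentials and numeric user ID are illustrative; check the endpoint paths against the API reference for your version):

```shell
# 1) Obtain a JWT token, 2) list users to find the right ID,
# 3) update that user's password. All values here are placeholders.
HOST="https://wazuh-master:55000"
TOKEN=$(curl -s -k -u 'acme-user:MyS3cr37P450r.*-' \
  -X GET "$HOST/security/user/authenticate?raw=true")
curl -k -X GET "$HOST/security/users" -H "Authorization: Bearer $TOKEN"
curl -k -X PUT "$HOST/security/users/1" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"password": "NewS3cr37P450r.*-"}'
```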



Do not hesitate to contact us with any questions you may have. 

Best regards,

Alfonso Ruiz-Bravo
Cloud computing engineer
Wazuh - The Open Source Security Platform



Birgit Ducarroz

Jul 2, 2021, 4:03:07 AM
to Wazuh mailing list
Holà Alfonso,

Again thank you for your detailed response.
Below my answers:
1) https://myhost.ch or https://myhost.ch:443 --> browser error: unable to connect.

2) curl -k -u admin https://myhost.ch:9200/_cluster/health?pretty
Enter host password for user 'admin':
{
  "cluster_name" : "wazuh-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 2,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 66.66666666666666
}

3) [root@wazuh-worker /]# filebeat test output
elasticsearch: https://myhost.ch:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: xxx.xx.xx.xx
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.0

4 ) [root@wazuh-worker /]# cat /etc/filebeat/filebeat.yml

# Wazuh - Filebeat configuration file
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false

setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.template.overwrite: true
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ['https://myhost.ch:9200']
  username: 'admin'
  password: 'test'
  ssl.verification_mode: full
  ssl.certificate_authorities: ['/etc/ssl/myhost.ch.crt']
  ssl.certificate: '/etc/ssl/myhost.ch.crt'
  ssl.key: '/etc/ssl/myhost.ch.key'

--> Comparing my production-cluster.yml with yours, I see that my ELASTICSEARCH_URL is https://myhost.ch:9200 and not https://elasticsearch:9200. I changed it to elasticsearch:9200, but then I get a new error:
elasticsearch_1  | Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
elasticsearch_1  |     at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
kibana_1         | Wazuh alerts template not loaded - sleeping.

So I guess there is a communication problem between the ELASTICSEARCH_URL and the SSL certificate, which expects the URL to be myhost.ch. But how can I correct that?


5) "Remember to also modify the ELASTICSEARCH_PASSWORD environment variable from your Wazuh and Kibana containers." --> This has been done and is ok.

6) Internal  users (admin, kibanaserver, kibanaro, logstash, readall and snapshotrestore) --> ok I understand.

Regards,
Birgit




Birgit Ducarroz

Jul 2, 2021, 4:35:02 AM
to Wazuh mailing list
I just wanted to add this: my whole configuration works when using the standard password.

Birgit Ducarroz

Jul 2, 2021, 5:17:04 AM
to Wazuh mailing list
I have another guess:
Might it be possible that the generated hash is wrong?

I use docker run --rm -ti amazon/opendistro-for-elasticsearch:1.12.0 bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh

Here are the versions I use in the production-cluster.yml:
  wazuh-master:
    image: wazuh/wazuh-odfe:4.1.2

  wazuh-worker:
    image: wazuh/wazuh-odfe:4.1.2

  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.12.0

  kibana:
    image: wazuh/wazuh-kibana-odfe:4.1.2

  nginx:
    image: nginx:stable

I mean, I think it is just not possible that SecretPassword with its standard hash "$2y$12$K/SpwjtB.wOHJ/Nc6GVRDuc1h0rM1DfvziFRNPtk27P.c4yDr9njO"

works, but

for example TestPass with a generated hash $2y$12$wFpoIjUvId2AMhJ0pzw.MuAxA/21BzLzZrp4TDrQS7MCnyJV.ShcW

does not?

Birgit Ducarroz

Jul 2, 2021, 6:03:26 AM
to Wazuh mailing list
I have now discovered the following:
I generated a new hash for SecretPassword:
hash: "$2y$12$zCQv3/hS6Zj3swfZOQbdeOX.f74kSkxnRM5O2.aM0sejI./8mQXaq"
--> it works.

So I assume the problem is not the generated hash; rather, the cleartext TestPassword is not picked up anywhere in the configuration. Why, and where?

Steps to reproduce:
================

- deleting all docker containers and images
- docker volume prune (all volumes pruned)

I changed SecretPassword in 3 places in production-cluster.yml:

:1,$s/SecretPassword/TestPassword/g
cat production-cluster.yml |grep Password

      - ELASTIC_PASSWORD=TestPassword
      - ELASTIC_PASSWORD=TestPassword
      - ELASTICSEARCH_PASSWORD=TestPassword
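As a side note, the interactive vim substitution above can also be done non-interactively with sed; a self-contained sketch on a throwaway file (the file path and contents are illustrative):

```shell
# Create a throwaway compose fragment so the example is self-contained,
# then apply the same SecretPassword -> TestPassword substitution with sed.
printf '      - ELASTIC_PASSWORD=SecretPassword\n      - ELASTICSEARCH_PASSWORD=SecretPassword\n' > /tmp/cluster-fragment.yml
sed -i 's/SecretPassword/TestPassword/g' /tmp/cluster-fragment.yml
# Show the substituted lines, as the grep in the thread does.
grep Password /tmp/cluster-fragment.yml
```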


docker run --rm -ti amazon/opendistro-for-elasticsearch:1.12.0 bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
[Password:] --> entering TestPassword
$2y$12$FDgsEKSxYWBmANqfatcX9OK3GzXSGLeZxSKKBqgw/blLWCrElqi8G

cat /volume/config/wazuh/internal_users.yml | grep Elqi8G
  hash: "$2y$12$FDgsEKSxYWBmANqfatcX9OK3GzXSGLeZxSKKBqgw/blLWCrElqi8G"

docker-compose -f  /wazuh-docker/production-cluster.yml up

Entering into wazuh-worker to check the password:
[root@wazuh-worker /]# cat /etc/filebeat/filebeat.yml

# Wazuh - Filebeat configuration file
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false

setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.template.overwrite: true
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ['https://myhost.ch:9200']
  username: 'admin'
  password: 'TestPassword'

  ssl.verification_mode: full
  ssl.certificate_authorities: ['/etc/ssl/myhost.ch.crt']
  ssl.certificate: '/etc/ssl/myhost.ch.crt'
  ssl.key: '/etc/ssl/myhost.ch.key'

Error:
====
kibana_1         | {"type":"log","@timestamp":"2021-07-02T09:58:34Z","tags":["error","elasticsearch","data"],"pid":50,"message":"[ResponseError]: Response Error"}
elasticsearch_1  | [2021-07-02T09:58:34,330][WARN ][c.a.o.s.a.BackendRegistry] [elasticsearch] Authentication finally failed for admin from 172.18.0.5:42754

Thanks again for any suggestion.
Have a nice week-end!
Regards,
Birgit

Alfonso Ruiz-Bravo

Jul 2, 2021, 7:08:53 AM
to Birgit Ducarroz, Wazuh mailing list
Hello Birgit!

- Certificate

As you say, there may be a problem with the certificates, but if you have the same ones in all the containers, you should get the same error in all of them. Since you changed the URL of the Open Distro/Elasticsearch container, it is possible that they are failing because of the Common Name of the certificate: your certificate expects the CN you set, but the container has the name elasticsearch.

You can either change the container name to match the CN of your new certificate, create new certificates that match the elasticsearch CN, or try the default certificates created by Wazuh.


- Generated hash is wrong

This is strange; I doubt that the hash is incorrect. The process you followed should be right. Let's perform a series of checks:

Steps to reproduce:
================

- deleting all docker containers and images
- docker volume prune (all volumes pruned)

I changed SecretPassword in 3 places in production-cluster.yml:

:1,$s/SecretPassword/TestPassword/g
cat production-cluster.yml |grep Password

      - ELASTIC_PASSWORD=TestPassword
      - ELASTIC_PASSWORD=TestPassword
      - ELASTICSEARCH_PASSWORD=TestPassword



1º. 

It looks like the credentials are working, judging from the curl to the Open Distro cluster health you performed and the filebeat test output command. No errors can be seen there.

First, let's check whether alerts are arriving from the Wazuh managers to Open Distro.

Run the following query:

curl -k -u admin https://<ELASTIC_ADDRESS>:9200/_cat/indices?s=index

This query will list all Open Distro indexes. We will look at the Wazuh alert indexes.

 Then restart each Wazuh manager running the following command:

service wazuh-manager restart

Wait a few minutes and check the indexes again:

curl -k -u admin https://<ELASTIC_ADDRESS>:9200/_cat/indices?s=index

The last Wazuh alert index should have increased in size. If so, then Wazuh+Filebeat is working properly. If, on the other hand, the number of documents in the index has not increased, the alerts are not arriving and there is a communication problem.



It seems that the main error comes from the communication between Kibana and Open Distro. The first step is to check whether the Open Distro container is accessible from the Kibana container. To do so, access the Kibana container and execute the following sample queries:

curl -k -u admin https://<ELASTIC_ADDRESS>:9200/_cat/indices?s=index
curl -k -u admin https://<ELASTIC_ADDRESS>:9200/_cluster/health?pretty

If the communication is correct, you should receive the responses without any problem. If there is an error, the communication between the containers is failing. If communications are OK, let's check the credentials.

In Filebeat you have already been able to verify that the new credentials are in place.

(you wrote)

cat production-cluster.yml |grep Password
      - ELASTIC_PASSWORD=TestPassword
      - ELASTIC_PASSWORD=TestPassword
      - ELASTICSEARCH_PASSWORD=TestPassword

. . .

[root@wazuh-worker /]# cat /etc/filebeat/filebeat.yml
. . .
  password: 'TestPassword'
. . .

Could you now check whether they have also changed for Kibana, in the /usr/share/kibana/config/kibana.yml file? Do not be surprised if you do not find the credential settings in the file, as they may be stored in a keystore.
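If they are in a keystore, the bundled keystore tool can list the stored keys (a sketch; the path is the usual one in the Kibana Docker image):

```
/usr/share/kibana/bin/kibana-keystore list
```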

When you can, perform these checks.

I hope you also have a good weekend and I am sorry for not making faster progress with your problem.

Best regards,

Alfonso Ruiz-Bravo
Cloud computing engineer
Wazuh - The Open Source Security Platform

Birgit Ducarroz

Jul 5, 2021, 5:12:36 AM
to Wazuh mailing list
Hi Alfonso,

Again, thank you for your answer and the time you invested.
I tried again this morning and it works. I am unable to say what happened, since the configuration file is the same as last Friday.
But for all users who might run into the same problem, I am posting below my production-cluster.yml, which currently works.

Just two other questions, by the way:
1) The very last "volumes" section (below the nginx section) - is this meant to put the config on the local host? For example:
volumes:
 (...)
 ossec-etc:/volume/config/ossec/etc       ?
 (...)
  worker-ossec-etc:
 (...)

--> I would now like to keep all the important configuration files on my local host. Or must I just use this file:
    - ./production_cluster/wazuh_cluster/wazuh_manager.conf:/wazuh-config-mount/etc/ossec.conf
?

2) Is it possible to run the containers that run as root by default as a non-root user? If so, how must I configure this? And if it is not possible, what is best practice to keep a hacker out of my host in case one of the Wazuh containers is compromised? (For example, regularly destroying and recreating the Wazuh containers to pick up security updates?)

production-cluster.yml
==================
# Wazuh App Copyright (C) 2021 Wazuh Inc. (License GPLv2)
version: '3.7'

services:
  wazuh-master:
    image: wazuh/wazuh-odfe:4.1.2
    hostname: wazuh-master
    restart: always
    ports:
      - "1515:1515"
      - "514:514/udp"
      - "55000:55000"
    environment:
      - ELASTICSEARCH_URL=https://myhost.ch:9200
      - ELASTIC_USERNAME=admin
      - ELASTIC_PASSWORD=hooray-it-works
      - FILEBEAT_SSL_VERIFICATION_MODE=full
      - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/myhost.ch.crt
      - SSL_CERTIFICATE=/etc/ssl/myhost.ch.crt
      - SSL_KEY=/etc/ssl/myhost.ch.key
      - API_USERNAME=acme-user
      - API_PASSWORD=MyS3cr37P450r.*-
    volumes:
      - ossec-api-configuration:/var/ossec/api/configuration
      - ossec-etc:/var/ossec/etc
      - ossec-logs:/var/ossec/logs
      - ossec-queue:/var/ossec/queue
      - ossec-var-multigroups:/var/ossec/var/multigroups
      - ossec-integrations:/var/ossec/integrations
      - ossec-active-response:/var/ossec/active-response/bin
      - ossec-agentless:/var/ossec/agentless
      - ossec-wodles:/var/ossec/wodles
      - filebeat-etc:/etc/filebeat
      - filebeat-var:/var/lib/filebeat
      - /volume/config/certs/OU/myhost.ch.crt:/etc/ssl/myhost.ch.crt
      - /volume/config/certs/OU/myhost.ch.key:/etc/ssl/myhost.ch.key
      - ./production_cluster/wazuh_cluster/wazuh_manager.conf:/wazuh-config-mount/etc/ossec.conf

  wazuh-worker:
    image: wazuh/wazuh-odfe:4.1.2

    hostname: wazuh-worker
    restart: always
    environment:
      - ELASTICSEARCH_URL=https://myhost.ch:9200
      - ELASTIC_USERNAME=admin
      - ELASTIC_PASSWORD=hooray-it-works
      - FILEBEAT_SSL_VERIFICATION_MODE=full
      - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/myhost.ch.crt
      - SSL_CERTIFICATE=/etc/ssl/myhost.ch.crt
      - SSL_KEY=/etc/ssl/myhost.ch.key
    volumes:
      - worker-ossec-api-configuration:/var/ossec/api/configuration
      - worker-ossec-etc:/var/ossec/etc
      - worker-ossec-logs:/var/ossec/logs
      - worker-ossec-queue:/var/ossec/queue
      - worker-ossec-var-multigroups:/var/ossec/var/multigroups
      - worker-ossec-integrations:/var/ossec/integrations
      - worker-ossec-active-response:/var/ossec/active-response/bin
      - worker-ossec-agentless:/var/ossec/agentless
      - worker-ossec-wodles:/var/ossec/wodles
      - worker-filebeat-etc:/etc/filebeat
      - worker-filebeat-var:/var/lib/filebeat
      - /volume/config/certs/OU/myhost.ch.crt:/etc/ssl/myhost.ch.crt
      - /volume/config/certs/OU/myhost.ch.key:/etc/ssl/myhost.ch.key
      - ./production_cluster/wazuh_cluster/wazuh_worker.conf:/wazuh-config-mount/etc/ossec.conf

  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:1.12.0
    hostname: elasticsearch
    restart: always
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - elastic-data-1:/usr/share/elasticsearch/data
      - /volume/config/certs/OU/myhost.ch.crt:/usr/share/elasticsearch/config/myhost.ch.crt
      - /volume/config/certs/OU/myhost.ch.key:/usr/share/elasticsearch/config/myhost.ch.key
      - ./production_cluster/elastic_opendistro/elasticsearch-node1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /volume/config/wazuh/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
  kibana:
    image: wazuh/wazuh-kibana-odfe:4.1.2
    hostname: kibana
    restart: always
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_USERNAME=admin
      - ELASTICSEARCH_PASSWORD=hooray-it-works
      - SERVER_SSL_ENABLED=true
      - SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/myhost.ch.crt
      - SERVER_SSL_KEY=/usr/share/kibana/config/myhost.ch.key
      - WAZUH_API_URL="https://wazuh-master"
      - API_USERNAME=acme-user
      - API_PASSWORD=MyS3cr37P450r.*-
    volumes:
      - /volume/config/certs/OU/kibana/myhost.ch.crt:/usr/share/kibana/config/myhost.ch.crt
      - /volume/config/certs/OU/kibana/myhost.ch.key:/usr/share/kibana/config/myhost.ch.key

    depends_on:
      - elasticsearch
    links:
      - elasticsearch:elasticsearch
      - wazuh-master:wazuh-master

  nginx:
    image: nginx:stable
    hostname: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
      - "1514:1514"
    depends_on:
      - wazuh-master
      - wazuh-worker
      - kibana
    links:
      - wazuh-master:wazuh-master
      - wazuh-worker:wazuh-worker
      - kibana:kibana
    volumes:
      - ./production_cluster/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      -  /volume/config/certs/OU:/etc/nginx/ssl:ro

volumes:
  ossec-api-configuration:
  ossec-etc:
  ossec-logs:
  ossec-queue:
  ossec-var-multigroups:
  ossec-integrations:
  ossec-active-response:
  ossec-agentless:
  ossec-wodles:
  filebeat-etc:
  filebeat-var:
  worker-ossec-api-configuration:
  worker-ossec-etc:
  worker-ossec-logs:
  worker-ossec-queue:
  worker-ossec-var-multigroups:
  worker-ossec-integrations:
  worker-ossec-active-response:
  worker-ossec-agentless:
  worker-ossec-wodles:
  worker-filebeat-etc:
  worker-filebeat-var:
  elastic-data-1:

Kind regards,
Birgit

Alfonso Ruiz-Bravo

Jul 5, 2021, 5:55:23 AM
to Birgit Ducarroz, Wazuh mailing list
Hello Birgit!!

I am glad that your environment is now operational, and I thank you for sharing your configuration with the community. Thanks to actions like these, we grow in the right direction.

1) This is due to the Docker Compose file format. An entry under the top-level volumes key can be left empty, in which case it uses the default driver configured by the Engine (in most cases, the local driver): https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference
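In other words, the bare names at the bottom of the file declare Docker-managed named volumes, while host paths are bind mounts. A minimal sketch of the difference (paths illustrative, the bind-mount line is taken from the compose file above):

```yaml
services:
  wazuh-master:
    volumes:
      # named volume: stored under Docker's own data directory
      - ossec-etc:/var/ossec/etc
      # bind mount: a file kept on the local host
      - ./production_cluster/wazuh_cluster/wazuh_manager.conf:/wazuh-config-mount/etc/ossec.conf

volumes:
  ossec-etc:   # empty entry -> default (local) driver
```

So to keep a configuration file on your local host, use a bind mount as in the second line; the named volumes live under Docker's own storage.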
 
2) To run Wazuh properly, whether in containers or not, you need to run it as root. This is due to how Wazuh is designed and the level of permissions its services need. Right now it is not possible to run Wazuh without being root, but it is an option we have to research, because for security reasons it is advisable and the community asks us about it frequently.

Here you have the Docker security documentation: https://docs.docker.com/engine/security/

You can also find security best-practice guides for Docker online; try to follow them as closely as possible.

Finally, I think it would be interesting for you to use Wazuh to monitor the host where your containers run: https://documentation.wazuh.com/current/docker-monitor/monitoring_containers_activity.html

I hope this information is helpful to you.

Best regards,

Alfonso Ruiz-Bravo
Cloud computing engineer
Wazuh - The Open Source Security Platform

Birgit Ducarroz

Jul 9, 2021, 4:15:32 AM
to Wazuh mailing list
Hi Alfonso,

Thank you for the hints and the links, and sorry for my late response - I had some other things in the pipeline.
I will check the documentation, and I will be happy once non-root containers become available :-)

Kind regards and have a nice week-end!
Birgit