Logs deletion, new Docker 4.5.3 version, persistence and problems


Tech Master

Oct 15, 2023, 12:41:11 PM
to Wazuh | Mailing List
Hi everyone.
My Ubuntu VM (100 GB) ran out of disk space.
It was running Wazuh Docker 4.3.9.
Since there was no free space left, docker-compose down didn't work either.
I had to run:

sudo su
cd /var/lib/docker/volumes/single-node_wazuh_logs/_data
rm -rf ./archives/2022
rm -rf ./alerts/2022

I should say up front that I would like to use log rotation and save indices and logs on S3, but for now I haven't been able to set that up.

Then I ran docker-compose down.
I cloned the new release: git clone https://github.com/wazuh/wazuh-docker.git ./wazuh-docker-v4.5.3 -b v4.5.3 --depth=1
I edited docker-compose.yml, setting the indexer admin password to the one from the previous version.
I also increased the memory: - "OPENSEARCH_JAVA_OPTS=-Xms4g -Xmx4g"
I copied the certificates folder created by the previous version, preserving the permissions on directories and files.
I copied the internal_users.yml file from the previous version, containing the hash of the admin password.

Then:
docker-compose up

wazuh.dashboard_1 | FATAL {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"},"status":400}

wazuh.dashboard_1 | An OpenSearch Dashboards keystore already exists. Overwrite? [y/N] Created OpenSearch Dashboards keystore in /usr/share/wazuh-dashboard/config/opensearch_dashboards.keystore
wazuh.dashboard_1 | Wazuh APP already configured


What is causing the problem?
Is it related to the logs I deleted manually, or to some persistence error?

Tech Master

Oct 15, 2023, 12:51:48 PM
to Wazuh | Mailing List
I took a snapshot of the VM before performing those steps, so if the procedure was incorrect I can restore the snapshot and perform the correct steps.

Thank you

Tech Master

Oct 15, 2023, 5:00:08 PM
to Wazuh | Mailing List
Furthermore, I did not follow the steps below (in the previous version, 4.3.9, they were not part of the Docker deployment guide). I don't know whether they could be related to the problem.

Applying the changes
Start the deployment stack.
docker-compose up -d
Run docker ps and note the name of the first Wazuh indexer container. For example, single-node-wazuh.indexer-1, or multi-node-wazuh1.indexer-1.
Run docker exec -it <WAZUH_INDEXER_CONTAINER_NAME> bash to enter the container. For example:
docker exec -it single-node-wazuh.indexer-1 bash
Set the following variables:
export INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=$INSTALLATION_DIR/certs/root-ca.pem
KEY=$INSTALLATION_DIR/certs/admin-key.pem
CERT=$INSTALLATION_DIR/certs/admin.pem
export JAVA_HOME=/usr/share/wazuh-indexer/jdk
Wait for the Wazuh indexer to initialize properly.
The waiting time can vary from two to five minutes. It depends on the size of the cluster, the assigned resources, and the speed of the network. Then, run the securityadmin.sh script to apply all changes.

Stuti Gupta

Oct 16, 2023, 12:32:36 AM
to Wazuh | Mailing List
Hi Team,
Hope you are doing well today, and thank you for using Wazuh.
Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"
The logs indicate that you have reached the shard limit. There are two possible solutions:
  • Increase the shard limit.
  • Reduce the number of shards.
Increase the shard limit:
This option will quickly solve the problem, but it is not advisable in the long run, as it will bring more problems in the future. However, this guide explains how to do it in case it is needed.
The setting responsible for this limit is: cluster.max_shards_per_node (this is the cluster-wide cap the validation error refers to, not cluster.routing.allocation.total_shards_per_node, which controls per-node allocation).
It is possible to change the setting using the Wazuh indexer API. You can either use the Dev Tools option within the Management section in the Wazuh dashboard:
PUT _cluster/settings { "persistent" : { "cluster.max_shards_per_node" : 1200 } }
or curl the API directly from a terminal:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "persistent" : { "cluster.max_shards_per_node" : 1200 } } '
Reduce the number of shards:
Reaching the shard limit means no retention policies are applied to the environment. This could lead to storing data forever and cause failures in the system.
It is necessary to delete old indices to reduce the number of shards. First, check which indices are stored in the environment; the following API call can help:
GET _cat/indices
Then delete the indices that are no longer needed, starting with the oldest. Bear in mind that deleted data cannot be retrieved unless there are backups, either via snapshots or Wazuh alert backups.
The API call to delete indices is:
DELETE <index_name>
We always recommend this option.
Finally, restart all the Wazuh components once again. To learn more about shards, please refer to https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
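As a sketch of the deletion step, the snippet below selects old daily indices from a _cat/indices listing and shows how each one would then be deleted. The 2022 date pattern, index names, and admin:admin credentials are examples to adapt to your environment:

```shell
# Select the 2022 daily Wazuh indices from an index listing read on stdin.
select_old_indices() {
  grep -E '^wazuh-(alerts|archives)-4\.x-2022\.'
}

# Against a live cluster (remove the leading echo to actually delete):
# curl -sk -u admin:admin "https://localhost:9200/_cat/indices?h=index" \
#   | select_old_indices \
#   | while read -r idx; do
#       echo curl -sk -u admin:admin -X DELETE "https://localhost:9200/$idx"
#     done

# Example run with a sample listing:
printf 'wazuh-alerts-4.x-2022.01.15\nwazuh-alerts-4.x-2023.10.01\nwazuh-archives-4.x-2022.06.30\n' \
  | select_old_indices
```

Dry-running the filter first, as above, avoids deleting current indices by mistake.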

Solution 2: You can increase the disk space, or you can clear the cache with the following command:
sync; echo 3 > /proc/sys/vm/drop_caches
Solution 3: To solve the issue, you can follow these steps to remove the unassigned shards or allocate them properly:
1. Check the cluster health:
curl -XGET -k -u admin:admin "https://localhost:9200/_cluster/health"
The output shows useful information such as the cluster name, cluster status, number of nodes, active primary shards, active shards, relocating shards, and unassigned shards.
2. Check all unassigned shards. You can list the unassigned shards and their current state with:
curl -XGET -k -u admin:admin "https://localhost:9200/_cat/shards?h=index,shard,state,prirep,unassigned.reason" | grep UNASSIGNED
3. Delete or allocate the unassigned shards. To delete them all:
curl -XGET -k -u admin:admin "https://localhost:9200/_cat/shards" | grep UNASSIGNED | awk '{print $1}' | xargs -i curl -XDELETE "https://localhost:9200/{}"
Here we grep all the UNASSIGNED shards and feed the output to awk to extract the index names, which xargs then passes to curl to delete them.
Alternatively, you can allocate the shards. The reroute command allows manual changes to the allocation of individual shards in the cluster: a shard can be moved from one node to another explicitly, an allocation can be cancelled, and an unassigned shard can be explicitly allocated to a specific node.
curl -X POST -k -u admin:admin "https://localhost:9200/_cluster/reroute?metric=none"
https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html
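Before pointing that delete pipeline at a live cluster, the text-processing part can be checked on its own. The sample _cat/shards lines below are made up for illustration:

```shell
# Extract the index names of UNASSIGNED shards from _cat/shards output
# (columns: index shard prirep state [unassigned.reason]).
unassigned_indices() {
  grep UNASSIGNED | awk '{print $1}' | sort -u
}

# Example run with sample _cat/shards lines:
printf '%s\n' \
  'wazuh-alerts-4.x-2022.01.15 0 p UNASSIGNED INDEX_CREATED' \
  'wazuh-alerts-4.x-2023.10.01 0 p STARTED' \
  | unassigned_indices
```

Each name this prints is what the pipeline would pass to curl -XDELETE "https://localhost:9200/<index>".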

Solution 4: Since storage space has a cost and a limit, you may have to delete old data to ensure you can maintain the retention period that you need.
With the default configuration, alerts generated by Wazuh are sent to a daily Elasticsearch index named wazuh-alerts-4.x-YYYY.MM.DD. You can create policies that govern the lifecycle of the indices based on different phases.
Four phases can be defined in a lifecycle policy:
  • Hot phase: for recent data that is actively accessed.
  • Warm phase: data that you may wish to access, but less often.
  • Cold phase: similar to the warm phase; you may also freeze indices to reduce overhead.
  • Delete phase: data that reaches this phase is deleted.
You can follow the steps mentioned in this document: https://wazuh.com/blog/wazuh-index-management/
You can also take snapshots of the indices, which automatically back up your Wazuh indices to local or cloud-based storage, and restore them at any given time. To do so, please refer to https://wazuh.com/blog/index-backup-management/
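As a concrete starting point, an OpenSearch ISM policy that deletes alert indices once they are 90 days old might look like the request below. The policy name, the 90d age, and the index pattern are examples, not values taken from the Wazuh guide:

```shell
# Sketch of an ISM policy: indices matching wazuh-alerts-* start in "hot"
# and transition to a "delete" state after 90 days.
curl -sk -u admin:admin -X PUT "https://localhost:9200/_plugins/_ism/policies/delete_old_wazuh_alerts" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "description": "Delete Wazuh alert indices after 90 days",
    "default_state": "hot",
    "states": [
      { "name": "hot",
        "actions": [],
        "transitions": [ { "state_name": "delete", "conditions": { "min_index_age": "90d" } } ] },
      { "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": [] }
    ],
    "ism_template": [ { "index_patterns": ["wazuh-alerts-*"], "priority": 1 } ]
  }
}'
```

The ism_template block attaches the policy automatically to newly created indices that match the pattern; existing indices have to be attached manually.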

Hope this helps.

Tech Master

Oct 17, 2023, 4:46:24 AM
to Wazuh | Mailing List
Many thanks,
I followed Solution 1: reducing the number of shards.

I solved it, and now I can access the Wazuh GUI.

I have to look carefully:
https://wazuh.com/blog/wazuh-index-management/
https://wazuh.com/blog/index-backup-management/
An update to those articles covering the new interface and OpenSearch would be appreciated.
As soon as I do some tests I will write.

Furthermore, regarding the storage problems related to alerts and archive logs in:
/var/lib/docker/volumes/single-node_wazuh_logs/_data
I was asking for information on deleting older files and making external backups to S3.

Stuti Gupta

Oct 17, 2023, 11:13:40 PM
to Wazuh | Mailing List
For restoring the data, you can refer to https://wazuh.com/blog/index-backup-management/
You can also refer to this guide for backups and restores: https://documentation.wazuh.com/current/user-manual/files-backup/index.html

Tech Master

Jan 3, 2024, 2:32:49 PM
to Wazuh | Mailing List
Hi,
Over the Christmas period I read, studied, and ran some tests.
In one of my labs I created a new Ubuntu Server 22.04 VM and installed Wazuh Docker single node v4.7.1.
- Enabled the Wazuh archives: logall and logall_json in ossec_config on the Wazuh server/manager.
- Visualized the events on the dashboard:
On the manager:
I edited /var/lib/docker/volumes/single-node_filebeat_etc/_data/filebeat.yml
archives:
   enabled: true
On the indexer:
Stack Management > Index patterns > Create index pattern: wazuh-archives-*


The Wazuh alert and archive files, and the alert and archive indices, all take up storage.

I read carefully:

ISM/ILM - Index life management:
https://documentation.wazuh.com/current/user-manual/wazuh-indexer/index-life-management.html#index-life-management
https://wazuh.com/blog/wazuh-index-management/

SM - Snapshot Management:
https://wazuh.com/blog/index-backup-management/


I want to solve 2 problems:

1) The alerts and archives files present on the Wazuh server/manager could be used for forensic and compliance purposes.
The .json files were sent to the indexer by Filebeat anyway, so the index is fine.
I would need (I think) a cron job to move the files to S3 storage (setting immutability for compliance reasons), freeing up space on the server.
Do you have cron examples for this purpose?
From what I've read, s3fs seems better than rclone; what do you recommend?
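For reference, this is the kind of cron script I have in mind. It assumes the aws CLI is installed and configured; the 90-day threshold, bucket name, and paths are my own examples:

```shell
#!/bin/sh
# Cron sketch: copy Wazuh alert/archive files older than N days to S3,
# then remove them locally. Bucket, paths, and threshold are examples.

# List regular files under directory $1 modified more than $2 days ago.
select_old_files() {
  find "$1" -type f -mtime +"$2"
}

# Intended cron body (commented out; requires a configured aws CLI):
# DATA_DIR=/var/lib/docker/volumes/single-node_wazuh_logs/_data
# BUCKET=s3://my-wazuh-backup   # hypothetical bucket name
# for d in alerts archives; do
#   select_old_files "$DATA_DIR/$d" 90 | while read -r f; do
#     rel=${f#"$DATA_DIR"/}                     # keep layout relative to DATA_DIR
#     aws s3 cp "$f" "$BUCKET/$rel" && rm -f "$f"
#   done
# done
```

The file is removed only if the upload succeeds (&&), so a failed copy never loses data.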

2) On the indexer, the alert and archive indices must be deleted via index lifecycle management (it seems quite simple).
But before deleting them, I would like to make a backup with one-year retention on S3 storage.
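For point 2, my plan (untested, pending the plugin install below) is to register an S3 snapshot repository and snapshot the archive indices before deleting them. The repository name, bucket, and region are placeholders:

```shell
# Register an S3 bucket as a snapshot repository (needs the repository-s3 plugin):
curl -sk -u admin:admin -X PUT "https://localhost:9200/_snapshot/wazuh-s3-backup" \
  -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": { "bucket": "my-wazuh-snapshots", "region": "eu-west-1" }
}'

# Snapshot the archive indices before deleting them:
curl -sk -u admin:admin -X PUT \
  "https://localhost:9200/_snapshot/wazuh-s3-backup/archives-2023?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{ "indices": "wazuh-archives-*" }'
```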


I tried installing the S3 plugin directly in the indexer container, but I ran into a problem:

wazuh-indexer@wazuh:~/plugins$ /usr/share/wazuh-indexer/bin/opensearch-plugin install repository-s3
/usr/share/wazuh-indexer/bin/opensearch-env: line 108: cd: /etc/wazuh-indexer: No such file or directory
-> Installing repository-s3
-> Downloading repository-s3 from opensearch
[========================================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.RuntimePermission getClassLoader
* java.lang.reflect.ReflectPermission suppressAccessChecks
* java.net.NetPermission setDefaultAuthenticator
* java.net.SocketPermission * connect,resolve
* java.util.PropertyPermission opensearch.allow_insecure_settings read,write
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
-> Failed installing repository-s3
-> Rolling back repository-s3
-> Rolled back repository-s3
Exception in thread "main" java.nio.file.FileSystemException: /usr/share/wazuh-indexer/plugins/.installing-4390580237951055143 -> /usr/share/wazuh-indexer/plugins/repository-s3: Directory not empty
         at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
         at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
         at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:416)
         at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:266)
         at java.base/java.nio.file.Files.move(Files.java:1432)
         at org.opensearch.plugins.InstallPluginCommand.movePlugin(InstallPluginCommand.java:920)
         at org.opensearch.plugins.InstallPluginCommand.installPlugin(InstallPluginCommand.java:897)
         at org.opensearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:276)
         at org.opensearch.plugins.InstallPluginCommand.execute(InstallPluginCommand.java:250)
         at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104)
         at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
         at org.opensearch.cli.MultiCommand.execute(MultiCommand.java:104)
         at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
         at org.opensearch.cli.Command.main(Command.java:101)
         at org.opensearch.plugins.PluginCli.main(PluginCli.java:60)

How can I solve it?
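From the stack trace, the move into plugins/repository-s3 fails with "Directory not empty", so a leftover directory from a previous failed attempt may be blocking it. What I intend to try (inside the indexer container) is:

```shell
# Remove the leftover plugin directory from the earlier failed install, then retry:
rm -rf /usr/share/wazuh-indexer/plugins/repository-s3
/usr/share/wazuh-indexer/bin/opensearch-plugin install repository-s3
```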
