Hi Adil
To solve the storage issue you can refer to the following solutions:
By default, the Wazuh server retains logs indefinitely and does not delete them automatically. However, you can delete these logs manually or automatically according to your legal and regulatory requirements. To automatically delete alerts and archives older than 7 days, run crontab -e (as root) and paste the following lines:
0 0 * * mon find /var/ossec/logs/alerts/ -type f -mtime +7 -exec rm -f {} \;
0 0 * * mon find /var/ossec/logs/archives/ -type f -mtime +7 -exec rm -f {} \;
These jobs run every Monday at 00:00 and delete files under alerts/ and archives/ that are older than 7 days. Bear in mind that archive files can grow very large.
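Before wiring the command into cron, you can do a dry run with -print instead of -exec rm to see which files would be removed. This sketch uses a throwaway directory with hypothetical file names, so it is safe to run anywhere; on the server you would point find at /var/ossec/logs/alerts/ instead:

```shell
#!/bin/sh
# Dry run of the retention find command against a temporary directory
# (hypothetical files standing in for rotated Wazuh logs).
tmpdir=$(mktemp -d)
touch -d "10 days ago" "$tmpdir/old-alerts.json"   # older than 7 days -> would be deleted
touch "$tmpdir/alerts.json"                        # fresh file -> kept
# -print lists matches; swap in -exec rm -f {} \; once the list looks right
find "$tmpdir" -type f -mtime +7 -print
rm -rf "$tmpdir"
```

Only the file older than 7 days is listed, confirming the filter before any real deletion happens.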
You can also take snapshots of the indices to automatically back up your Wazuh indices to local or cloud-based storage and restore them at any time. To do so, please refer to:
https://wazuh.com/blog/index-backup-management
https://wazuh.com/blog/wazuh-index-management/
For an all-in-one deployment where the manager and indexer run on one server, please delete indices and apply the other solutions as well:
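As a minimal sketch of the snapshot approach (the repository name wazuh_backup and the path /mnt/wazuh-snapshots are hypothetical, and the path must also be listed under path.repo in the indexer's opensearch.yml), you would register a filesystem repository and then create a snapshot:

```
PUT _snapshot/wazuh_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/wazuh-snapshots"
  }
}

PUT _snapshot/wazuh_backup/backup-1?wait_for_completion=true
```

The blog posts above cover restoring from such snapshots and automating them.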
Delete the indices manually: It is necessary to delete old indices if they are no longer in use. To check which indices are stored in the environment, the following API call can help:
GET _cat/indices
Then, delete the indices that are not needed, or the oldest ones. Bear in mind that deleted data cannot be retrieved unless you have backups, either via snapshots or Wazuh alerts backups.
The API call to delete indices is:
DELETE <index_name>
Or the equivalent CLI command:
# curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can use wildcards (*) to delete multiple indices in one query.
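For example, to remove all alerts indices from a given month in one request (the 2023.01 date pattern is hypothetical; adjust it to the indices you actually want to drop):

```
# curl -k -u admin:admin -XDELETE "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-2023.01.*"
```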
Once you have made enough room on the disk, you can keep the problem from recurring with index management policies.
Index management policies: To minimize storage usage, you can enable retention policies according to your needs, as follows.
Regarding retention policies, keep in mind that both the Wazuh server and the Wazuh indexer store data, so both need to be configured to auto-maintain the relevant data using retention policies. It's worth mentioning that, by default, neither of them has a retention policy applied, so they do not delete old/unnecessary data after deployment.
In the Wazuh indexer, you set how many days you want to keep data in the hot state (fast-access data that requires more RAM), the cold state (slower-access data that requires less RAM), and the deletion state. An example would be 30 days before moving hot data to the cold state and 360 days before moving data to the deletion state.
After creating the retention policy, you must apply it to the existing indices (wazuh-alerts-* and/or wazuh-archives-*) and also add the Wazuh template to it, so that new indices (which are created every day) are covered by the retention policy as well. All of this is explained in our documentation.
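A minimal sketch of such a policy using the indexer's ISM plugin (the policy name wazuh_retention is hypothetical, and the 30d/360d ages mirror the example above; tune both to your requirements). The ism_template section is what attaches the policy to newly created wazuh-alerts-* indices:

```
PUT _plugins/_ism/policies/wazuh_retention
{
  "policy": {
    "description": "Example retention: cold after 30 days, delete after 360 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "cold",
        "actions": [ { "read_only": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "360d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*"], "priority": 100 }
    ]
  }
}
```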
For that, you can follow:
https://documentation.wazuh.com/current/user-manual/wazuh-indexer/index-life-management.html
Fine-tune rules: The Wazuh indexer node should have a minimum of 4 GB of RAM and 2 CPU cores, but 16 GB of RAM and 8 CPU cores are recommended. The amount of data to store depends on the generated alerts per second (APS). If log volume is higher than expected, we recommend examining the agent logs or syslog to pinpoint the specific events or event types contributing to it. Analyzing these logs helps detect anomalies or patterns; use that information to fine-tune Wazuh rules and filters to focus on the most relevant events and reduce false positives.
https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
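As a sketch of this kind of fine-tuning, a local rule can silence a noisy event by overriding it with level 0. The rule ID 100100, the source IP, and the parent SID 5710 (sshd login attempt with a non-existent user) are illustrative; adapt them to whatever events are flooding your environment. This would go in /var/ossec/etc/rules/local_rules.xml on the manager:

```xml
<group name="local,syslog,">
  <!-- Hypothetical example: drop "non-existent user" sshd alerts coming
       from a trusted vulnerability scanner so they stop filling the index. -->
  <rule id="100100" level="0">
    <if_sid>5710</if_sid>
    <srcip>10.0.0.5</srcip>
    <description>Ignored: sshd probe from trusted scanner.</description>
  </rule>
</group>
```

Restart the manager after editing local rules so the change takes effect.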
Add a manager worker node: Additionally, you can upscale your environment by adding a worker node. For that, you can refer to:
https://documentation.wazuh.com/current/user-manual/upscaling/index.html
Hope this helps