The Wazuh indexer data has grown to around 68GB.
Vulnerability Detection is enabled in our environment. To reduce load, we already changed the vulnerability feed update interval from 60 minutes to 24 hours.
However, even after this change, the indexer size continues to grow. This indicates that vulnerability processing and indexing are still generating a high volume of data due to the number of agents and continuous inventory updates.
This is not a disk issue. Further tuning of Vulnerability Detection and index retention is required.
Hi Muhammad Ali Khan,
You should start by deleting agents that are no longer in use or no longer connected to the manager. This will remove the vulnerability detection data for disconnected agents and should free up a noticeable amount of disk space.
Next, delete old indices whose data is no longer required, for example, indices older than one year.
To review the existing indices, use the following API call:
GET _cat/indices
From the output, identify and delete unnecessary or old indices. Please note that deleted indices cannot be recovered unless they were backed up using snapshots or Wazuh alert backups.
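Before deleting anything, it can help to work out which dated indices fall outside your retention window from the names returned by _cat/indices. The following is an illustrative sketch, not part of Wazuh itself; the indices_to_delete helper and the sample names are only examples:

```python
from datetime import date, datetime

def indices_to_delete(index_names, cutoff):
    """Return the dated indices older than `cutoff` (a datetime.date).

    Names are expected to end in YYYY.MM.DD, e.g.
    wazuh-alerts-4.x-2023.05.01; anything else is left untouched.
    """
    old = []
    for name in index_names:
        try:
            day = datetime.strptime(name.rsplit("-", 1)[-1], "%Y.%m.%d").date()
        except ValueError:
            continue  # not a dated index, keep it
        if day < cutoff:
            old.append(name)
    return old

# Example: keep roughly the last year of alerts
names = [
    "wazuh-alerts-4.x-2023.05.01",
    "wazuh-alerts-4.x-2025.01.15",
    "wazuh-states-vulnerabilities-default",
]
print(indices_to_delete(names, date(2024, 6, 1)))
# -> ['wazuh-alerts-4.x-2023.05.01']
```

Once you have the list, each name can be deleted with the DELETE call shown below.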
To delete an index, run:
DELETE <index_name>
Or via the CLI:
curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can also use wildcards (*) to delete multiple indices at once.
You can automate index deletion by configuring Index Lifecycle Management (ILM) policies, as explained here:
https://wazuh.com/blog/wazuh-index-management
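For reference, a minimal Index State Management policy that deletes matching indices after 90 days could look like the sketch below. The 90d age, the policy name, and the index pattern are assumptions you should adapt; the policy would be created with PUT _plugins/_ism/policies/delete_old_alerts, and the ism_template only attaches it to indices created after the policy exists:

```json
{
  "policy": {
    "description": "Delete wazuh-alerts indices after 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*"], "priority": 1 }
    ]
  }
}
```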
You can configure snapshots to automatically back up Wazuh indices to local or cloud storage for future restoration. More details are available here:
https://wazuh.com/blog/index-backup-management
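As a sketch, registering a filesystem snapshot repository on the indexer looks like the following. The repository name wazuh-backup and the path /mnt/wazuh-snapshots are placeholders, and the path must first be whitelisted through the path.repo setting in the indexer's opensearch.yml:

```json
PUT _snapshot/wazuh-backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/wazuh-snapshots"
  }
}
```

A snapshot of the indices can then be taken with PUT _snapshot/wazuh-backup/<snapshot_name>, and restored later from the same repository.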
Change the number of replicas: By default, Wazuh indices are created with primary and replica shards, which is useful in multi-node clusters. In a single-node setup, however, replica shards can never be allocated and provide no benefit, while in multi-node clusters each replica doubles the storage used by an index. To save space, reduce the number of replicas to zero by running the following command on the Wazuh indexer node:
curl -k -u "<INDEXER_USERNAME>:<INDEXER_PASSWORD>" -XPUT "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-*/_settings" -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'
Adjust the index pattern to match the indices you want to update. Note that this only changes existing indices; to apply the setting to future indices, update the corresponding index template as well.
Adding another Wazuh indexer node will increase capacity and improve performance. You can follow the official guide here:
https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/add-wazuh-indexer-nodes.html
I also recommend filtering the data monitored by Wazuh to reduce unnecessary ingestion. In Syscheck, you can use the ignore and registry_ignore options. For example:
<directories>/etc,/usr/bin,/usr/sbin</directories>
<ignore>/etc/mtab</ignore>
You can also use the ignore, restrict, filter, and query options in the localfile configuration so that only the required data is monitored and analyzed by Wazuh: https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/localfile.html#ignore
Finally, if you have an all-in-one deployment, I recommend moving to a distributed environment. A distributed setup can handle higher log volumes more efficiently by spreading the work across multiple nodes with additional resources.