Hi Irene Romero,

The hardware requirement depends on EPS. Since the server is ingesting more events per second (EPS) than it can handle, our suggestions are focused on scaling your architecture. Keep in mind that each Wazuh manager node with 16GB of RAM and 8 CPUs can handle around 5000 EPS; you currently have only 4 CPUs with 32GB of RAM, so you need to increase your CPU resources.
https://documentation.wazuh.com/current/installation-guide/wazuh-dashboard/index.html#hardware-requirements

Wazuh managers scale better horizontally than vertically, meaning it is more effective to have 2 Wazuh manager nodes in a cluster, each with half the resources of a single large node. Additionally, if you are heavily using the Wazuh indexer on the same node as the Wazuh manager master node, this can create resource conflicts. In such cases, we recommend using a distributed (multi-node) architecture for your environment.
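For reference, a worker node joins a manager cluster through the <cluster> block in /var/ossec/etc/ossec.conf. The sketch below is a minimal example; the node name, key, and master address are placeholders you must replace with your own values:

```xml
<cluster>
  <!-- All nodes in the cluster must share the same cluster name and key -->
  <name>wazuh</name>
  <node_name>worker-01</node_name>          <!-- placeholder: unique per node -->
  <node_type>worker</node_type>             <!-- the existing node keeps type "master" -->
  <key>REPLACE_WITH_32_CHARACTER_KEY</key>  <!-- placeholder: generate once, reuse on all nodes -->
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>MASTER_NODE_IP</node>             <!-- placeholder: address of the master node -->
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
```

The adding-server-node guide in the documentation walks through generating the key and verifying the cluster status.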
https://documentation.wazuh.com/current/user-manual/upscaling/adding-server-node.html

Additionally, to solve the storage issue, you can refer to the following solutions:
Solution 1: Delete the indices manually

Old indices can be deleted if they are no longer in use. First, check which indices are stored in the environment; the following API call can help:
GET _cat/indices
Then, delete the indices that are not needed, starting with the oldest. Bear in mind that deleted data cannot be retrieved unless you have backups, either as snapshots or as Wazuh alerts backups.
The API call to delete indices is:
DELETE <index_name>
Or via the CLI:
# curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can use wildcards (*) to delete multiple indices in one request.
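If you prefer to script the cleanup instead of deleting by hand, the dated suffix in the index names makes stale indices easy to select. The sketch below is a hypothetical helper, not Wazuh tooling; the index list is hard-coded for illustration (in practice you would fill it from _cat/indices), and it only prints candidates so you can review them before issuing any DELETE call:

```shell
#!/bin/bash
# Print wazuh-alerts indices whose YYYY.MM.DD suffix is older than the cutoff.
RETENTION_DAYS=7
cutoff=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)   # GNU date (Linux)

# Hard-coded example list; normally obtained from:
#   curl -k -u admin:admin "https://<WAZUH_INDEXER_IP>:9200/_cat/indices?h=index"
for idx in wazuh-alerts-4.x-2023.01.01 "wazuh-alerts-4.x-$(date +%Y.%m.%d)"; do
  suffix=${idx##*-}                 # strip everything up to the last '-'
  if [[ "$suffix" < "$cutoff" ]]; then
    echo "stale: $idx"              # candidate for deletion
  fi
done
```

Because the suffix is zero-padded YYYY.MM.DD, plain string comparison sorts chronologically, so no real date parsing is needed.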
Once you have made enough room on the disk, the indexer can resume normal operation. Note that if the disk previously filled past the flood-stage watermark, the indexer may have marked indices read-only, and that block has to be removed before indexing resumes.
Solution 2: Index management policies

To minimize storage usage, you can enable index management policies according to your needs.
Regarding retention policies, we must understand that both the Wazuh manager and the Wazuh indexer store data; therefore, we need to configure both to auto-maintain the relevant data using retention policies. It's worth mentioning that, by default, neither of them has retention policies applied, so they do not delete old or unnecessary data after deployment.
In the Wazuh indexer, you have to set how many days you want to keep data in the hot state (fast-access data that requires more RAM), the cold state (slower-access data that requires less RAM), and the deletion state. An example would be 30 days before moving hot data to the cold state and 360 days before sending data to the deletion state.
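As a sketch, an OpenSearch Index State Management (ISM) policy implementing the 30-day/360-day example above could look like the following. The description and the read_only action used to model the cold state are illustrative choices; adjust them to your needs:

```json
{
  "policy": {
    "description": "Example retention: hot 30d, cold until 360d, then delete",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "cold",
        "actions": [ { "read_only": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "360d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*", "wazuh-archives-*"], "priority": 100 }
    ]
  }
}
```

The ism_template section makes the policy attach automatically to newly created indices matching those patterns, which is how the daily indices get covered without manual action.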
After creating the retention policy, you must apply it to the existing indices (wazuh-alerts-* and/or wazuh-archives-*) and also attach the wazuh index template to it, so that new indices (which are created every day) are also included in the retention policy. All of this is well explained in our blog.
You can check the retention policies in:
Opensearch Menu >> Stack Management >> Index Management
Opensearch Menu >> Index Management
For that, you can follow:
https://documentation.wazuh.com/current/user-manual/wazuh-indexer/index-life-management.html
Solution 3: Cronjob for log files

To delete alerts and archives older than 7 days, run crontab -e (as root), then paste the following lines:
0 0 * * mon find /var/ossec/logs/alerts/ -type f -mtime +7 -exec rm -f {} \;
0 0 * * mon find /var/ossec/logs/archives/ -type f -mtime +7 -exec rm -f {} \;
This schedules Crontab to run the tasks every Monday at 00:00, deleting files under alerts/ and archives/ that are older than 7 days. Bear in mind that archive files can be really big.
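If you want to see what the find expression will remove before pointing it at /var/ossec/logs, you can dry-run the same logic on a scratch directory (the files below are temporary, created only for the demonstration):

```shell
#!/bin/bash
# Demonstrate the -mtime +7 pruning on disposable files.
tmp=$(mktemp -d)
touch -d "10 days ago" "$tmp/old.json"   # simulated week-old log
touch "$tmp/new.json"                    # simulated current log
find "$tmp" -type f -mtime +7 -exec rm -f {} \;
ls "$tmp"                                # only new.json remains
rm -rf "$tmp"
```

Swapping -exec rm -f {} \; for -print is another safe way to preview exactly which files match before deleting anything.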
For complete information, see the documentation. Additionally, on Linux you can truncate the current alerts file in place with the > redirection operator:
> /var/ossec/logs/alerts/alerts.json
You can also take snapshots of the indices, which automatically back up your Wazuh indices to local or cloud-based storage, and restore them at any given time. The blog posts below explain the procedure in detail.
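In short, a snapshot setup consists of registering a repository and then creating snapshots in it. The sketch below uses a hypothetical repository name, mount path, and snapshot name; note that the path must also be listed under path.repo in opensearch.yml before the repository can be registered:

```
PUT _snapshot/wazuh-backup
{
  "type": "fs",
  "settings": { "location": "/mnt/wazuh-snapshots" }
}

PUT _snapshot/wazuh-backup/snapshot-2024.01.01?wait_for_completion=true
```

Restoring later is a POST to _snapshot/wazuh-backup/<snapshot_name>/_restore against the same repository.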
https://wazuh.com/blog/index-backup-management
https://wazuh.com/blog/wazuh-index-management/

Solution 4: Fine-tune rules

The Wazuh indexer node should have a minimum of 4GB RAM and 2 CPU cores, but it's recommended to have 16GB RAM and 8 CPU cores. The amount of data depends on the alerts generated per second (APS). If usage is higher than expected, we recommend examining the agent logs or syslog to pinpoint the specific events or event types contributing to the high log volume. Analyzing these logs facilitates the detection of anomalies or patterns; use this information to fine-tune Wazuh rules and filters so they focus on the most relevant events and reduce false positives.
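As one concrete example of such fine-tuning: once your log analysis identifies a noisy rule, you can override it in /var/ossec/etc/rules/local_rules.xml with a child rule at level 0, which prevents the matched events from generating alerts at all. The rule ID 100100 below is illustrative; replace the if_sid with whatever rule is flooding your indexer:

```xml
<group name="local,noise-reduction,">
  <!-- Hypothetical tuning example: silence alerts from a noisy parent rule.
       5710 is the sshd "attempt to login using a non-existent user" rule;
       substitute the rule your analysis identified. -->
  <rule id="100100" level="0">
    <if_sid>5710</if_sid>
    <description>Suppressed noisy event (custom tuning example)</description>
  </rule>
</group>
```

Restart the Wazuh manager after editing local_rules.xml so the change takes effect.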
https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/

Hope this helps