Hi,
I would recommend doing a few checks to find out what is taking up so much disk space.
You should check which files are taking up a lot of space. Usually, those files are log files, which are located at /var/ossec/logs. Old files are rotated into folders sorted by date.
Basically, you will find logs of 3 types:
- Wazuh manager daemon logs (ossec.log, api.log, cluster.log, etc.)
- Archived events (the archives directory)
- Alerts (the alerts directory)
You will have to investigate which logs are filling up your disk and act accordingly. For example, here I use the ncdu tool to find out which directory is occupying the most space:
# ncdu /var/ossec/logs
--- /var/ossec/logs ---
2.2 MiB [##########] ossec.log
1.2 MiB [##### ] /alerts
20.0 KiB [ ] /api
12.0 KiB [ ] /firewall
12.0 KiB [ ] /archives
e 4.0 KiB [ ] /wazuh
e 4.0 KiB [ ] /cluster
4.0 KiB [ ] api.log
0.0 B [ ] integrations.log
0.0 B [ ] cluster.log
0.0 B [ ] active-responses.log
Here are some measures you can apply:
If Wazuh logs fill your disk: You probably have some kind of DEBUG enabled, which causes a large number of log entries per second. If so and it is not necessary, disable it in /var/ossec/etc/local_internal_options.conf or /var/ossec/etc/internal_options.conf; see the quick check below.
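As a quick check (a minimal sketch, assuming the default installation paths and that your debug options follow the usual daemon.debug=N naming, e.g. remoted.debug), you could list any debug option currently set to a non-zero value:

# grep -E "\.debug=[1-9]" /var/ossec/etc/local_internal_options.conf /var/ossec/etc/internal_options.conf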
If event logging (archives) fills up your disk: This can be the main cause of your disk filling up quickly. By default, this logging is disabled, but it can be enabled by the user for debugging or specific use cases. It logs all the events received by the wazuh-manager, so if you have many agents, or even a few agents reporting many events per second, it fills the disk quite fast.
You can check if you have it enabled by looking at the logall and logall_json settings in your /var/ossec/etc/ossec.conf file. If it is active and you don't need it, disable it and restart the wazuh-manager (systemctl restart wazuh-manager). A minimal example is shown below.
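You can check it quickly with something like:

# grep -E "<logall(_json)?>" /var/ossec/etc/ossec.conf

and, for reference (a minimal sketch of the relevant fragment; your ossec.conf will contain many other settings), event archiving is disabled when both options are set to no inside the <global> section:

<ossec_config>
  <global>
    <logall>no</logall>
    <logall_json>no</logall_json>
  </global>
</ossec_config>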
If alert logs (alerts) fill your disk: In this case, you can apply a retention policy to delete old logs, or delete the logs you are not interested in. Note that if your alerts are sent and indexed in Elasticsearch, it may not be necessary to retain old alert log files on your wazuh-manager; an example is sketched below.
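As an illustration (an assumption-laden sketch: it presumes rotated alerts are compressed to .gz under /var/ossec/logs/alerts/ and uses a 90-day retention period, both of which you should adapt to your own policy), you could remove rotated alert files older than 90 days:

# find /var/ossec/logs/alerts/ -type f -name "*.gz" -mtime +90 -delete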
I hope this information is helpful.
Regards.
Hi,
In this case, the large files seem to be the alert logs.
When the wazuh-manager generates an alert, it is stored in both alerts.json and alerts.log, and almost immediately Filebeat sends it to Elasticsearch where it is indexed (keep this in mind for the last comment of this message).
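If you want to confirm that Filebeat can actually reach Elasticsearch (so that keeping the local copies becomes optional), a quick connectivity check on the manager host is:

# filebeat test output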
Specifically, it seems that on that day, 23/02/22, you received a very high number of alerts, so when they are stored in the corresponding files, they take up quite a lot of disk space.
First of all, you have to check whether the number of alerts you are generating is normal for your environment. Maybe some agent or agents are constantly sending events that generate noisy alerts (too frequent and not useful enough). Using that alert log file, try to identify whether any alert ID is repeated far more often than the rest, and whether it makes sense that there are so many; see the sketch below. If necessary, you can disable the generation of such alerts by editing the corresponding rule.
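For example (a rough sketch, assuming jq is installed and that alerts.json uses the default one-JSON-object-per-line format; for a rotated day you would zcat the corresponding .json.gz file and pipe it into the same command), you could count the most frequent rule IDs:

# jq -r '.rule.id' /var/ossec/logs/alerts/alerts.json | sort | uniq -c | sort -rn | head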
Regarding disk space, remember that if the alerts are sent and indexed in Elasticsearch, you may not need to store them in the alerts.log and alerts.json files, so you can directly apply a policy of compression, rotation, or deletion to these files (depending on your business requirements), or even move these backups to other storage units if necessary; a sketch follows.
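For example (another illustrative sketch: the destination /mnt/backup, the 2022/Feb subfolder, and the assumption that rotated alerts live under /var/ossec/logs/alerts/YYYY/Mon/ are all things to adapt to your environment), you could archive a whole month of rotated alerts to another storage unit, and delete the originals only once the archive is verified:

# tar -czf /mnt/backup/wazuh-alerts-2022-Feb.tar.gz -C /var/ossec/logs/alerts 2022/Feb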
Hi,
Yes, we already know that what is filling up your disk is the alert logs, so I don't mean that you should find the files that take up the most space, but rather the reason why so many alerts are being generated.
Keep in mind that every alert generated by your wazuh-manager is stored in those log files, so you have to find out why so many alerts are being generated. To do this, you can count the occurrences of each alert ID and find the most repeated ones (parsing the alerts registered in the files that are taking up space), then check whether it makes sense that there are so many; see the sketch below. Remember that you can deactivate the relevant rules to avoid such a high number of alerts if they are not useful or necessary.
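If you prefer to work on the plain-text alert log instead of the JSON one (a sketch that assumes the default alerts.log format, where each alert contains a line starting with "Rule: <id>"), a similar count would be:

# grep -oE "^Rule: [0-9]+" /var/ossec/logs/alerts/alerts.log | sort | uniq -c | sort -rn | head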
Also, remember that you can delete the alert log files from previous days (depending on your retention policy), as long as they are already indexed in Elasticsearch and you do not need them locally.
To summarize: find out why so many alerts are being generated in your environment, reduce them if necessary, or directly apply deletion or rotation policies to the alert log files of previous dates.
Regards.