Wazuh full disk space consumed automatically


siddha...@gmail.com

Feb 23, 2022, 6:15:58 AM
to Wazuh mailing list
Dear Team,

We are using Wazuh 4.2 as an all-in-one deployment on Ubuntu 20.04.
I have noticed that the disk is full.
I have run some commands to check where the space is being consumed and found this:

root@WAZUHAIO:~# du -sh  /var/lib/elasticsearch/nodes/0/indices/
150G    /var/lib/elasticsearch/nodes/0/indices/

root@WAZUHAIO:~# du -sh  /var/log
90G     /var/log

root@WAZUHAIO:~# du -sh  /var/ossec/
215G    /var/ossec/

Please suggest.
Any help would be appreciated.
Thank you.

Jonathan Martín Valera

Feb 23, 2022, 6:38:05 AM
to Wazuh mailing list

Hi,

I would recommend doing a few checks to find out what is taking up so much disk space.

You should check which files are taking up a lot of space. Usually, those are log files, which are located at /var/ossec/logs. Old files are rotated into folders sorted by date.

Basically, you will find three types of logs:

  • Wazuh logs: Logs that record the status of Wazuh and the actions carried out.
  • Event logs (archives): Logs that record all the events received by the wazuh-manager.
  • Alert logs (alerts): Logs that record all alerts generated by the wazuh-manager.

You will have to investigate which logs are filling up your disk and act accordingly. For example, here I use the ncdu tool to see which directory is taking up the most space: ncdu /var/ossec/logs

# ncdu /var/ossec/logs

--- /var/ossec/logs ---
    2.2 MiB [##########] ossec.log                                                                                                                                    
    1.2 MiB [#####     ] /alerts
   20.0 KiB [          ] /api
   12.0 KiB [          ] /firewall
   12.0 KiB [          ] /archives
e   4.0 KiB [          ] /wazuh
e   4.0 KiB [          ] /cluster
    4.0 KiB [          ]  api.log
    0.0   B [          ]  integrations.log
    0.0   B [          ]  cluster.log
    0.0   B [          ]  active-responses.log

Here are some measures you can apply:

  • If Wazuh logs fill your disk: You probably have some kind of DEBUG option enabled, which produces a large number of log entries per second. If so and it is not necessary, disable it in /var/ossec/etc/local_internal_options.conf or /var/ossec/etc/internal_options.conf (a quick check for this is sketched after this list).

  • If event logging (archives) fills up your disk: This can be the main cause of your disk filling up quickly. By default, this logging is disabled, but it can be enabled by the user for debugging or specific use cases. It logs every event received by the wazuh-manager, so if you have many agents, or even a few agents reporting many events per second, it fills the disk quite fast.

    You can check whether it is enabled by looking at logall and logall_json in your /var/ossec/etc/ossec.conf file (also sketched after this list). If it is active and you don't need it, disable it and restart the wazuh-manager (systemctl restart wazuh-manager).

  • If alert logs (alerts) fill your disk: In this case, you can apply a retention policy to delete old logs, or delete the logs you are not interested in. Note that if your alerts are sent to and indexed in Elasticsearch, it may not be necessary to keep old alert log files on your wazuh-manager.
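
For reference, here is a rough sketch of the first two checks (the exact option names can vary between versions, so treat this as a starting point, not the exact procedure):

Check whether any debug options have been raised (look for non-zero values):
# grep -i "debug" /var/ossec/etc/local_internal_options.conf

Check whether event logging (archives) is enabled; both values should read "no" if it is disabled:
# grep -E "<logall(_json)?>" /var/ossec/etc/ossec.conf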

I hope this information is helpful.

Regards.

siddha...@gmail.com

Feb 23, 2022, 8:25:01 AM
to Wazuh mailing list
Hello Jonathan,

Thanks for your support.
I have checked and found this output.

root@WAZUHAIO:~# du -sh /var/ossec/logs/alerts/2022/Feb/ossec-alerts-23.json
117G    /var/ossec/logs/alerts/2022/Feb/ossec-alerts-23.json
root@WAZUHAIO:~# du -sh /var/ossec/logs/alerts/2022/Feb/ossec-alerts-23.log
74G     /var/ossec/logs/alerts/2022/Feb/ossec-alerts-23.log

The ncdu tool is not installed on the machine, and I cannot install it now because there is no space left.
I have not enabled debug mode.
I have also checked the logall and logall_json options; they are not enabled and are set to no in /var/ossec/etc/ossec.conf.
Please suggest.

Jonathan Martín Valera

Feb 23, 2022, 10:33:58 AM
to Wazuh mailing list

Hi,

In this case, the large files seem to be the alert logs.

When the wazuh-manager generates an alert, it is stored in both alerts.json and alerts.log, and almost immediately Filebeat sends it to Elasticsearch where it is indexed (keep this in mind for the last comment of this message).

Specifically, it seems that for that day (23/02/22) you are receiving a very high number of alerts, so when they are stored in the corresponding files they take up quite a lot of disk space.

First of all, you have to check whether the number of alerts you are generating is normal for your environment. There may be one or more agents constantly sending events that generate noisy alerts (too frequent and not useful enough). Using that alert log file, try to identify whether any rule ID repeats far more often than the rest, and whether it makes sense that there are so many. If necessary, you can disable the generation of such alerts by editing the corresponding rule.
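
As a rough example, this is one way to count the most frequent rule IDs from the plain-text alert log (a sketch that assumes the default alerts.log format, where each alert includes a line such as Rule: 5501 (level 3) -> '...'):

# grep -oE "Rule: [0-9]+" /var/ossec/logs/alerts/2022/Feb/ossec-alerts-23.log | sort | uniq -c | sort -rn | head -20

This prints the twenty most repeated rule IDs with their counts, which you can then compare against your ruleset to decide whether they are expected or just noise.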

Regarding disk space, remember that if the alerts are sent to and indexed in Elasticsearch, you may not need to keep them in the alerts.log and alerts.json files, so you can directly apply a compression, rotation, or deletion policy to these files (depending on the needs of your business), or even move these backups to other storage units if necessary.
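
As an illustration, a minimal retention sketch based on find (assuming the old files are already indexed in Elasticsearch and that a 30-day window fits your policy; do not touch the current day's files, which the wazuh-manager is still writing to):

List the alert files older than 30 days first, to review what would be removed:
# find /var/ossec/logs/alerts -type f -name "ossec-alerts-*" -mtime +30 -print

Then delete them once you are sure they are no longer needed:
# find /var/ossec/logs/alerts -type f -name "ossec-alerts-*" -mtime +30 -delete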

siddha...@gmail.com

Feb 24, 2022, 12:56:16 AM
to Wazuh mailing list
Hi Jonathan,
Thanks again for your support.
Today I have seen that some space was released automatically, and it is being consumed again.
I have run this command and got this output:

--- /var/ossec/logs ----------------------------------------------------------------------------------------------------------------------------------------------------
   29.1 GiB [##########] /alerts
  899.1 MiB [          ] /firewall
  109.6 MiB [          ] /archives
    1.1 MiB [          ] /api
   80.0 KiB [          ]  api.log.2022-02-23
   64.0 KiB [          ]  api.log
   48.0 KiB [          ] /wazuh
   32.0 KiB [          ]  ossec.log
e   4.0 KiB [          ] /cluster
    0.0   B [          ]  integrations.log
    0.0   B [          ]  cluster.log
    0.0   B [          ]  active-responses.log

The files ossec-alerts-23.json and ossec-alerts-23.log, which were taking up the most space, are what is showing there today.
I have removed some devices, but the disk space keeps filling up automatically.
Please suggest.

Jonathan Martín Valera

Feb 25, 2022, 6:39:55 AM
to Wazuh mailing list

Hi,

Yes, we already know that what is filling up your disk is the alert logs, so I don't mean that you should find the files that take up the most space, but rather that you should find the reason why so many alerts are being generated.

Keep in mind that every alert generated by your wazuh-manager is stored in those log files, so you have to see why so many alerts are being generated. To do this, you can count the occurrences of each alert's rule ID and find the most repeated ones (by parsing the alerts recorded in the files that are taking up space), and see whether it makes sense that there are so many alerts. Remember that you can deactivate the relevant rules to avoid such a high number of alerts if they are not useful or necessary.

Also, remember that you can delete the alert log files from previous days (depending on your information policy), provided they are already indexed in Elasticsearch and you do not need them in the alert logs.

In summary, you have to see why so many alerts are being generated in your environment, reduce them if necessary, or directly apply deletion or rotation policies to the log files of these alerts for previous dates.

Regards.
