Heavy Disk Space Usage by wazuh-states-vulnerabilities

Karl Napf

Jan 9, 2025, 10:59:11 AM
to Wazuh | Mailing List

Hello everyone,

We’ve encountered significant disk space usage in our Wazuh environment, specifically related to the wazuh-states-vulnerabilities index. Here are the details:

  1. Issue:

    • The wazuh-states-vulnerabilities index is consuming a substantial amount of disk space, growing disproportionately over time.
  2. Setup Details:

    • SIEM Stack: Wazuh with Logstash and OpenSearch.
    • Tenant Data: For a specific tenant, the index sizes are as follows:
      • Worker 1: 150 GB
      • Worker 2: 200 GB
      • Worker 3: 107 GB
      • Worker 4: 211 GB
    • Index Details: According to OpenSearch, the index shows the following (a command to check these figures is included after this list):
      • Total size: 51 MB
      • Total documents: 131,380
      • Deleted documents: 26,569
    • Impact: High disk usage is starting to affect system performance and storage capacity.
  3. Questions:

    • What are the best practices for managing the size of the wazuh-states-vulnerabilities index?
    • Is there a way to configure Wazuh to limit the volume of data stored in this index without affecting functionality?
    • Can specific types of data (e.g., older or resolved vulnerabilities) be safely excluded from indexing?
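
For reference, the per-index figures above can be checked on the OpenSearch side with something like the following (a sketch only; the host and credentials are placeholders for your deployment):

curl -s -k -u <OPENSEARCH_USER>:<OPENSEARCH_PASSWORD> "https://<opensearch-host>:9200/_cat/indices/wazuh-states-vulnerabilities-*?v&h=index,docs.count,docs.deleted,store.size,pri.store.size"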

Any guidance or shared experiences with managing the size of this index would be greatly appreciated.

Thank you in advance,

Karl

Carlos Anguita López

Jan 10, 2025, 10:41:03 AM
to Wazuh | Mailing List

Hello,

Let me ask a few questions in order to get more information and better approach the issue:

  • How many agents do you have on each worker?
  • How many inactive agents that haven't been removed do you have? Such agents leave remnant data behind (see the sketch below for one way to check).
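
For example (a sketch; agent_control ships with the manager under /var/ossec/bin on a default install, and the API port, user, and password below are placeholders you would need to adjust):

# List all registered agents with their connection status
/var/ossec/bin/agent_control -l

# Or query the Wazuh API for disconnected agents only
TOKEN=$(curl -s -k -u <API_USER>:<API_PASSWORD> -X POST "https://localhost:55000/security/user/authenticate?raw=true")
curl -s -k -H "Authorization: Bearer $TOKEN" "https://localhost:55000/agents?status=disconnected&limit=500"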

On the other hand, where is the bulk of this data? That is, we need to know where the data is concentrated. Could you run some du -h commands on the paths where the most space is occupied and share the results (a suggested command is below)? Please look at the /var/ossec/queue directories, among others.
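
For instance (assuming GNU du; adjust the path if the queue directory is mounted elsewhere, as it seems to be in your setup):

# Largest subdirectories under the queue, two levels deep, biggest first
du -h --max-depth=2 /var/ossec/queue | sort -rh | head -n 25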

Please remember to hide sensitive information.

Thank you.

Karl Napf

Jan 10, 2025, 12:07:33 PM
to Wazuh | Mailing List
Hi, 
thank you for answering :)
  • Overall we have around 1,500 agents. They are distributed more or less evenly across the workers, but some of them are syslog/firewall forwarders, so they produce massively more log volume.
  • Hard to tell; around 100 haven't connected for more than 10 days.
(ns: sshd) wazuh-worker-XXXXXX:/mnt/volume/wazuh-queue# du -h
372.0K ./fim/db
376.0K ./fim
8.0K ./logcollector
49.1G ./db
4.0K ./router
60.0K ./keystore
192.0K ./syscollector/db
204.0K ./syscollector
72.0K ./fts
4.0K ./vulnerabilities/dictionaries
8.0K ./vulnerabilities
4.0K ./agentless
4.0K ./sockets
4.0K ./alerts
43.8M ./vd/delayed
52.0K ./vd/state_track
6.1G ./vd/feed
39.2M ./vd/reports
39.3M ./vd/event
90.5M ./vd/inventory
6.3G ./vd
55.8M ./indexer/db/wazuh-states-vulnerabilities-CLUSTERNAME
55.8M ./indexer/db
156.7G ./indexer/wazuh-states-vulnerabilities-CLUSTERNAME
156.7G ./indexer
4.0K ./tasks
4.0K ./cluster/wazuh-worker-XXXXXXXX
8.0K ./cluster
5.9M ./rids
4.0K ./vd_updater/tmp/downloads
4.5M ./vd_updater/tmp/contents
4.6M ./vd_updater/tmp
39.4M ./vd_updater/rocksdb/updater_vulnerability_feed_manager_metadata
39.4M ./vd_updater/rocksdb
43.9M ./vd_updater
212.2G .

This is one example; the other workers look similar, with indexer/wazuh-states-vulnerabilities-CLUSTERNAME taking by far the most disk space (between 120 GB and 210 GB).

Carlos Anguita López

Jan 13, 2025, 10:46:51 AM
to Wazuh | Mailing List

Hello,

That very large directory holds the elements that have not yet been processed, i.e., data that is still waiting to be forwarded to the Wazuh Indexer. Assuming there is no problem with the connection to the Wazuh Indexer, they should be removed little by little.
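
To rule out a connectivity problem, a quick check could look like this (a sketch; the log path assumes a default install, and the indexer host and credentials are placeholders):

# Look for recent indexer/vulnerability related errors or warnings in the manager log
grep -iE "indexer|vulnerability" /var/ossec/logs/ossec.log | grep -iE "error|warning" | tail -n 20

# Confirm the indexer is reachable and healthy from the worker
curl -s -k -u <INDEXER_USER>:<INDEXER_PASSWORD> "https://<indexer-host>:9200/_cluster/health?pretty"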

However, it is known that in some cases this processing gets stuck in a loop and the folder only grows in size. This is fixed in version 4.10.0, which is now available. Here is the upgrade guide to 4.10.0.

If upgrading is not possible, you could instead reset all the module information as explained here. This should solve the problem (a rough sketch of what the reset involves is below).
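
In rough terms, the reset described in the linked guide boils down to something like the following. This is only a sketch under assumed defaults (paths, index name pattern, and credentials are placeholders; please follow the official steps and take a backup first):

# On each manager/worker node: stop the manager and clear the vulnerability detection state
systemctl stop wazuh-manager
rm -rf /var/ossec/queue/vd/* /var/ossec/queue/indexer/*

# On the indexer: delete the stale vulnerability states index so it can be rebuilt
curl -s -k -u <INDEXER_USER>:<INDEXER_PASSWORD> -X DELETE "https://<indexer-host>:9200/wazuh-states-vulnerabilities-*"

# Start the manager again; the module repopulates the index on the next scan
systemctl start wazuh-manager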

We are also working on an enhancement for 4.10.1 related to this issue that will make the processing faster, so the queue will drain sooner. You can read about it here.

Hope it helps.

Karl Napf

Jan 16, 2025, 7:58:32 AM
to Wazuh | Mailing List
Updating to 4.9 is proving difficult, so we are currently bound to 4.8, but the "tabula rasa" approach did help to free up disk space. CVEs are _notoriously_ finicky but seem to work.

Thanks for the assistance :)

Marlon Estrella

Feb 20, 2025, 12:46:17 AM
to Wazuh | Mailing List
Hi all,

Is it possible to manually delete the .sst files under /indexer/wazuh-states-vulnerabilities-wazuh-cluster?

(Screenshot attached: Capture.JPG)
