Hello,
Thank you for posting in our community and using Wazuh.
It’s highly likely that you have run out of shards. There are two really important things regarding the DB when it comes to Elasticsearch. On the one hand, DISK SPACE: data needs space on the disk, nothing new here. On the other hand, we have SHARDS: since Elasticsearch is designed to be used in a cluster environment, it uses shards (fragments) to store data across the nodes, and by default each node has a maximum limit of 1000 shards. If you’ve reached that limit, Filebeat won’t be able to index new data into Elasticsearch, no matter whether it comes from alerts.json (wazuh-manager) or recovery.json (the recovery script). You can check shard usage per node with:
curl -k -u <UserName>:<Password> https://<ElasticIP>:9200/_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
93 50.4mb 10.2gb 9.3gb 19.5gb 52 192.168.11.100 192.168.11.100 inode-1
The first column is the number of shards used per node, one line per node in the cluster. In my case I have only 1 Elasticsearch node, with 93 shards active and in use. Bear in mind that unassigned shards (shown in the last row, if there are any; I have none) also count toward the node’s limit. If this is your case, you can free shards by deleting old indices, for instance:
To delete ALL indices from January of 2021, we can use:
curl -k -u <User>:<Password> -X DELETE https://<ElasticIP>:9200/*2021.01*
To delete ALL indices from 2021, we can use:
curl -k -u <User>:<Password> -X DELETE https://<ElasticIP>:9200/*2021*
If you need to check indices before deleting them, use:
curl -k -u <User>:<Password> -X GET https://<ElasticIP>:9200/_cat/indices/*2021.*?v
You’ll see something like this in return:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open wazuh-alerts-4.x-2021.05.13 819gIjqIQCSdnvvWO8E4DQ 1 0 468 0 616.2kb 616.2kb
green open wazuh-alerts-4.x-2021.05.14 ajB_eIaSTEWhNtaF7GgyRA 1 0 1 0 12kb 12kb
green open wazuh-alerts-4.x-2021.05.17 k2T_PLg0SPmtxBJlIfIU5Q 1 0 87 0 169.9kb 169.9kb
green open wazuh-alerts-4.x-2021.06.29 H7YFXIzBRzSg0vjs_axtNg 1 0 4 0 31.1kb 31.1kb
green open wazuh-alerts-4.x-2021.05.18 YuoQ4WRVRF2ycKrneUWTyA 1 0 1 0 12kb 12kb
green open wazuh-alerts-4.x-2021.08.09 h7qPTOB2Qsy0PL362eEBrg 1 0 9 0 69.7kb 69.7kb
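Regarding the unassigned shards I mentioned earlier: you can list them with the _cat/shards API. The real listing comes from a command like this one (host and credentials are placeholders, same as in the other commands):

curl -k -u <User>:<Password> "https://<ElasticIP>:9200/_cat/shards?h=index,shard,prirep,state"

As an illustrative sketch, here is how you would filter a saved copy of that output for unassigned shards (the sample data below is made up):

```shell
# Save a sample of _cat/shards output (illustrative data, not from a real cluster)
cat > shards.txt <<'EOF'
wazuh-alerts-4.x-2021.05.13 0 p STARTED
wazuh-alerts-4.x-2021.05.14 0 p STARTED
wazuh-alerts-4.x-2021.05.17 0 p UNASSIGNED
EOF
# Count the unassigned shards; they consume the per-node limit just like active ones
grep -c UNASSIGNED shards.txt   # prints 1 for the sample above
```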
Also, you could increase the max limit of shards per node, BUT THIS IS NOT RECOMMENDED as a real solution to the problem: it could lead to unstable performance in the DB and even to data loss, so do not increase this value by more than 20-30 percent.
curl -k -u <User>:<Password> -XPUT https://<ElasticIP>:9200/_cluster/settings -H "Content-Type: application/json" -d '{"persistent":{"cluster.max_shards_per_node":"1200"}}'
After this is done, you should automate the DB cleaning with a retention policy; otherwise, the DB will keep storing data until there’s no space left on the disk or you reach the max shards limit again. We have a blog post that will help you with this process. Make sure you follow the first guide, for the Elastic Stack (called ILM, Index Lifecycle Management), or the second guide, for OpenDistro/Wazuh-Indexer (called ISM, Index State Management). Here’s the link.
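Just to give you an idea of what the retention policy looks like on the Elastic Stack side, below is a minimal sketch of an ILM policy with only a delete phase. The policy name (wazuh-retention) and the 90-day retention period are just examples I made up; adjust them to your own requirements and follow the blog post for the full setup (you also need to attach the policy to your indices):

```shell
# Hypothetical example: an ILM policy that deletes indices 90 days after creation.
# Policy name and retention period are assumptions, not defaults.
curl -k -u <User>:<Password> -X PUT "https://<ElasticIP>:9200/_ilm/policy/wazuh-retention" \
  -H "Content-Type: application/json" -d '
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```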
Finally, if you could share with me how many Elastic nodes you have and send me a sample of one wazuh-alerts-X index, I could probably help you optimize shard usage in your environment.
A sample like this one: green open wazuh-alerts-4.x-2021.05.13 819gIjqIQCSdnvvWO8E4DQ 1 0 468 0 616.2kb 616.2kb
Let me know if this helped!
John.-
Hi Chad,
I see, no problem. Yes, deleting old indices means deleting data. There are three things you can do to increase the available shards in your Elastic cluster: delete indices to free shards, expand resources horizontally (by adding one or more nodes to the Elasticsearch cluster), or increase the max_shards_per_node value (Elasticsearch does not recommend doing so; do it at your own risk).
Maybe I wasn’t clear enough: the amount of data is one thing, and the shards you use to store indices in Elastic are another. One index could use 3 shards and 1 MB of disk space, or 1 shard and 10 GB of disk space. The amount of data you are sending affects the space on the disk, not the number of shards.
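To see how many primary (pri) and replica (rep) shards each of your indices is configured with, you can use the _cat/indices command from my previous message:

curl -k -u <User>:<Password> "https://<ElasticIP>:9200/_cat/indices/wazuh-alerts-*?v"

And if you decide to lower the shard count for future indices, something like the sketch below could work. Note this is a hypothetical example: the template name wazuh-shards is made up, it only affects indices created after the change, and the single-shard/no-replica settings are only sensible for a small single-node cluster; verify your existing wazuh template before adding anything on top of it.

```shell
# Hypothetical sketch: a higher-order legacy template whose settings override
# the base wazuh template for newly created wazuh-alerts-* indices.
# 1 primary shard and 0 replicas assume a small single-node cluster.
curl -k -u <User>:<Password> -X PUT "https://<ElasticIP>:9200/_template/wazuh-shards" \
  -H "Content-Type: application/json" -d '
{
  "order": 1,
  "index_patterns": ["wazuh-alerts-*"],
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}'
```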
You could have an ILM/ISM policy without a deletion phase, but remember: you’d be storing data forever, and the DB has usage limits, so eventually you’d end up back at the point you are at now, out of shards or disk space.
Regards,
John.-

Hey Chad,
Glad to help!
Space is not the issue … for now. It’s a really small partition though, only 40 GB, so keep an eye on it. The problem here is shards: we can see you already increased the maximum to 1500, which is not recommended by the vendor.
To free shards, you can delete old indices or add more nodes to the elastic cluster.
To check your max shards settings run this:
curl -k -u <User>:<Password> https://<ElasticIP>:9200/_cluster/settings
NOTE: If you don’t have shards available in Elastic, you won’t be able to add new data to the DB.
Regards,
John.-