Kibana error "TOO_MANY_REQUESTS/12/disk usage exceeded"


Maria Juárez

Jan 16, 2023, 12:35:08 PM
to Wazuh mailing list
Hello, I hope someone can help me; I've been trying to figure out how to solve this error for a couple of days.
When I try to access the Wazuh graphical interface, it rejects the connection. Checking the server services, I found "Failed to start Kibana", and the Kibana log shows the following error:

FATAL  Error: Unable to complete saved object migrations for the [.kibana_task_manager] index. Please check the health of your Elasticsearch cluster and try again. Unexpected Elasticsearch ResponseError: statusCode: 429, method: PUT, url: /.kibana_task_manager_7.17.6_001/_mapping?timeout=60s error: [cluster_block_exception]: index  blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,

I have consulted around 10 pages trying to solve it, but I don't understand how. I found that I should safely remove indices, but I can't find a way to do that from the Linux CLI, since I don't have access to the graphical interface. Can I use the API from the CLI? Or any other ideas?
Thanks!

Jesus Linares

Jan 17, 2023, 4:43:28 AM
to Wazuh mailing list
Hello,

[cluster_block_exception]: index  blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,

It looks like you don't have enough disk space. When you are low on disk, the indexer can reach the flood-stage disk usage watermark, and if that affects system indices, some features may become unavailable (login, searching, indexing, etc.).

You need to add more space to your disk or free up space.
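You can confirm the disk situation from the CLI first, for example on a Linux host:

df -h

If the partition holding the indexer data (typically /var/lib/elasticsearch or /var/lib/wazuh-indexer, depending on your deployment) is above ~95% usage, you have hit the default flood-stage watermark.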

If you can remove indices, just access your node server and use the indexer API to delete some of them; all of this can be done from the CLI with curl.
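For example, to see each node's disk usage and which indices take the most space (the endpoint and admin credentials below are placeholders; adjust them to your environment):

curl -k -u admin:admin "https://localhost:9200/_cat/allocation?v"
curl -k -u admin:admin "https://localhost:9200/_cat/indices?v&s=store.size:desc"

The first command shows per-node disk usage as seen by the indexer; the second lists indices sorted by on-disk size, so you can pick the largest ones to delete with a DELETE request on the index name.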
What version of Wazuh are you using? Do you have wazuh-indexer, Elasticsearch, Opendistro? Do you have a cluster or only 1 node for the indexer?

I hope it helps.

Stuti Gupta

Jul 20, 2023, 5:36:34 AM
to Wazuh mailing list
Hi Maria,
Hope you are doing well and thanks for using Wazuh!
[cluster_block_exception]: index blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];,
This error indicates that a data node is critically low on disk space and has reached the flood-stage disk usage watermark; when that happens, the affected indices are given a read-only-allow-delete block. The watermark settings are documented at https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html. There are the following solutions, and the right one depends on the context, for example a production environment vs. a development environment.

Solution 1: Free up disk space
Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. Once space is available, remove the read-only block from the indices:
$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
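If you want to verify the block is gone, you can read the index settings back (same endpoint placeholder as above):

$ curl -XGET "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings/index.blocks*?pretty"

Indices that no longer report "read_only_allow_delete": "true" are writable again.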
Solution 2: Delete the indexes

You can delete indexes that are not in use with the following command:
curl -k -u <username>:<password> -XDELETE 'https://localhost:9200/<index name>'
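If you need to remove several old indices at once (for example daily wazuh-alerts indices), a wildcard pattern can work too, provided destructive wildcard actions are allowed on your cluster; the pattern below is only an illustration, so double-check what it matches before running it:

curl -k -u <username>:<password> -XDELETE 'https://localhost:9200/wazuh-alerts-4.x-2022.*'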
Solution 3: Change the flood-stage watermark setting

Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can be set either to a lower percentage or to an absolute value. Here's an example from the docs of how to change the settings:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
'
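You can read the settings back to confirm they were applied:

curl -X GET "localhost:9200/_cluster/settings?pretty"

Keep in mind that transient settings are lost on a full cluster restart; use "persistent" instead of "transient" if the change should survive a restart.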
Best Regards,
Stuti Gupta