no access to elastic


mariano hinjos

Dec 17, 2021, 7:50:27 AM
to Wazuh mailing list
I get this error, does anyone know how to fix it?


"index [.security-7] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"


In the Kibana log:

{"type":"error","@timestamp":"2021-12-17T13:48:43+01:00","tags":[],"pid":14073,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:121:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/kibana/src/core/server/http/router/response_adapter.js:75:19)\n    at HapiResponseAdapter.handle (/usr/share/kibana/src/core/server/http/router/response_adapter.js:70:17)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:164:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)\n    at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n    at module.exports.internals.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:45:28)\n    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:312:32)\n    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:221:9)"},"url":"https://10.120.34.34/internal/security/login","message":"Internal Server Error"}
{"type":"response","@timestamp":"2021-12-17T13:48:43+01:00","tags":[],"pid":14073,"method":"post","statusCode":500,"req":{"url":"/internal/security/login","method":"post","headers":{"host":"10.120.34.34","connection":"keep-alive","content-length":"164","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"96\", \"Google Chrome\";v=\"96\"","content-type":"application/json","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36","kbn-version":"7.11.2","sec-ch-ua-platform":"\"Windows\"","accept":"*/*","origin":"https://10.120.34.34","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://10.120.34.34/login?next=%2F","accept-encoding":"gzip, deflate, br","accept-language":"en,es-ES;q=0.9,es;q=0.8"},"remoteAddress":"10.120.39.80","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36","referer":"https://10.120.34.34/login?next=%2F"},"res":{"statusCode":500,"responseTime":61,"contentLength":9},"message":"POST /internal/security/login 500 61ms - 9.0B"}


any ideas?

Gabriel Fernando Lojano Mayaguari

Dec 17, 2021, 8:46:46 AM
to Wazuh mailing list

Hi mhinjos!
Hope you're having a good day so far!

The issue you are having happens when Elasticsearch detects that the disk is running low on space, so it puts its indices into read-only mode (the read-only-allow-delete block you see in the error).

By default, Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks, this can happen even if you have many gigabytes of free space.

The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.

For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/7.16/modules-cluster.html#disk-based-shard-allocation.
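To see how close each node actually is to the watermarks, you can query Elasticsearch's own view of disk usage. This is a sketch with a placeholder endpoint — substitute your own, and add authentication flags (e.g. -u user:password, or -k for self-signed TLS) as your deployment requires:

```shell
# Per-node disk usage as Elasticsearch sees it. disk.percent is the value
# that the watermark thresholds are compared against.
curl -XGET "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_cat/allocation?v&h=node,disk.used,disk.avail,disk.total,disk.percent"
```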

The right solution depends on the context - for example a production environment vs a development environment.

Solution 1: free up disk space
Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. However, Elasticsearch won't automatically take the indices out of read-only mode once enough disk is free; you'll have to run something like this to unlock them:

$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
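To verify the block has actually been cleared (or to see which indices are still blocked), you can inspect the index block settings — again a sketch, using the same placeholder endpoint:

```shell
# List any index-level block settings still in place; if no index reports
# read_only_allow_delete, the block has been removed.
curl -XGET "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings?filter_path=*.settings.index.blocks&pretty"
```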

Solution 2: change the flood stage watermark setting
Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can either be set to a lower percentage or to an absolute value. Here's an example of how to change the setting from the docs:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.
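If you prefer curl over the Kibana Dev Tools console, the same settings update can be sent like this (a sketch — substitute your own endpoint and add authentication flags if your cluster needs them; the thresholds are the same example values as above):

```shell
# Same cluster settings update as the Dev Tools example. Use "persistent"
# instead of "transient" if the change should survive a full cluster restart.
curl -XPUT -H "Content-Type: application/json" \
  "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_cluster/settings" \
  -d '{
    "transient": {
      "cluster.routing.allocation.disk.watermark.low": "100gb",
      "cluster.routing.allocation.disk.watermark.high": "50gb",
      "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
      "cluster.info.update.interval": "1m"
    }
  }'
```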

Hope this answer can help you!

Regards,
Fernando Lojano

mariano hinjos

Dec 20, 2021, 3:11:34 AM
to Wazuh mailing list
Thanks for the help! How do I check all these watermark settings?

Thanks a lot

Gabriel Fernando Lojano Mayaguari

Dec 20, 2021, 7:57:00 AM
to Wazuh mailing list
Hi mhinjos!

All the settings explained above can be configured in the elasticsearch.yml config file located in /etc/elasticsearch/elasticsearch.yml or updated dynamically on a live cluster with the cluster-update-settings API.
See more info about these parameters in the disk-based shard allocation guide linked above.
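For example, the cluster settings API can also show you the current values, including the built-in defaults if you haven't overridden anything — a sketch with a placeholder endpoint, to be adapted to your deployment:

```shell
# Show the effective disk watermark settings. include_defaults=true makes
# Elasticsearch report built-in defaults alongside persistent/transient overrides.
curl -XGET "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk&pretty"
```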

Glad to help you!
Regards,
Fernando Lojano
