Error in GUI and in Kibana logs


Cristian Radu

Aug 24, 2022, 10:28:09 AM
to Wazuh mailing list
Hello,

I am seeing this error when I log in to Wazuh. Does anybody know what it means?

Also I am seeing this in kibana logs:

-- Logs begin at Wed 2022-08-24 13:55:20 UTC. --
Aug 24 14:15:00 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:15:00Z","tags":["error","elasticsearch","data"],"pid":2409202,"message":"[validation_exception]: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}
Aug 24 14:15:00 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:15:00Z","tags":["error","plugins","wazuh","cron-scheduler"],"pid":2409202,"message":"ResponseError: validation_exception"}
Aug 24 14:15:00 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:15:00Z","tags":["error","elasticsearch","data"],"pid":2409202,"message":"[cluster_block_exception]: index [wazuh-monitoring-2022.34w] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"}
Aug 24 14:15:00 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:15:00Z","tags":["error","plugins","wazuh","monitoring"],"pid":2409202,"message":"cluster_block_exception"}
Aug 24 14:20:02 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:20:02Z","tags":["error","elasticsearch","data"],"pid":2409202,"message":"[validation_exception]: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}
Aug 24 14:20:02 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:20:02Z","tags":["error","plugins","wazuh","cron-scheduler"],"pid":2409202,"message":"ResponseError: validation_exception"}
Aug 24 14:20:02 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:20:02Z","tags":["error","elasticsearch","data"],"pid":2409202,"message":"[validation_exception]: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}
Aug 24 14:20:02 wazuh-manager kibana[2409202]: {"type":"log","@timestamp":"2022-08-24T14:20:02Z","tags":["error","plugins","wazuh","cron-scheduler"],"pid":2409202,"message":"ResponseError: validation_exception"}
Aug 24 14:24:18 wazuh-manager kibana[2409202]: {"type":"error","@timestamp":"2022-08-24T14:24:18Z","tags":["connection","client","error"],"pid":2409202,"level":"error","error":{"message":"140460979709760:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1544:SSL alert number 46\n","name":"Error","stack":"Error: 140460979709760:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1544:SSL alert number 46\n"},"message":"140460979709760:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1544:SSL alert number 46\n"}

Thanks,
Cristian

Cristian Radu

Aug 24, 2022, 11:33:18 AM
to Wazuh mailing list
I manually deleted some indices directly from the filesystem, not through the GUI or CLI; I searched for them under the data path and deleted them. Now I get this error in Kibana and it doesn't start at all.

 systemctl status kibana
● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-08-24 15:28:01 UTC; 2min 43s ago
   Main PID: 2571957 (node)
      Tasks: 11 (limit: 19104)
     Memory: 172.8M
     CGroup: /system.slice/kibana.service
             └─2571957 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist -c /etc/kibana/kibana.yml

Aug 24 15:30:22 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:22Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:24 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:24Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:27 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:27Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:29 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:29Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:32 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:32Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:34 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:34Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:37 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:37Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:39 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:39Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:42 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:42Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:44 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:44Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
Aug 24 15:30:47 wazuh-manager kibana[2571957]: {"type":"log","@timestamp":"2022-08-24T15:30:47Z","tags":["error","elasticsearch","data"],"pid":2571957,"message":"[ResponseError]: Response Error"}
root@wazuh-manager:~#

Thanks,
Cristian

Tomas Benitez Vescio

Aug 24, 2022, 11:43:14 AM
to Wazuh mailing list
Hi,
Thanks for using Wazuh!
It seems that the shard limit has been reached: no new indices can be created until some shards are freed. By default, the shard limit is 1000 per node. As a workaround, you can increase this limit (to 2000, for example). As a permanent solution, you can delete old indices that you no longer need (you will lose all the data contained in those indices) or reindex several indices into a single new index to reduce the shard count (and delete the source indices afterward).
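For reference, raising the limit temporarily could look something like this (a sketch only; replace <user>:<pass> with your own credentials):

curl -k -u <user>:<pass> -X PUT "https://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}'

Keep in mind this only postpones the problem, since the shard count will keep growing.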
You can check how to delete some indices here or you can also do this from the Wazuh Dashboard following these steps:
  1. Log into the Wazuh Dashboard interface
  2. Expand the left menu and click on Dev tools
  3. Enter DELETE /<name-of-the-index> (the console will help you autocomplete the index name)
  4. Click on the forward arrow icon to send the request
Also, you can change the number of shards using this guide, although, as I mentioned earlier, this is not a permanent solution.
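As a rough sketch of the reindexing option (the index names below are only examples, use your own):

curl -k -u <user>:<pass> -X POST "https://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": ["wazuh-alerts-4.x-2022.01", "wazuh-alerts-4.x-2022.02"] },
  "dest": { "index": "wazuh-alerts-4.x-2022-q1" }
}'

Once the new index holds the data, deleting the two source indices frees their shards, which is what counts against the 1000-shard limit.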

However, since you say you deleted some indices manually rather than through the GUI or CLI, it seems that your Elasticsearch data got corrupted. I recommend you refer to these Elasticsearch support threads: this and this. You could also try deleting the indices again, but this time through the Elasticsearch API directly (reference).

Regards.

Cristian Radu

Aug 25, 2022, 10:47:41 AM
to Wazuh mailing list
Hi,

Thanks for your answer! 
I checked the references you mentioned, but they do not help me. After deleting the indices from the filesystem, the GUI now shows "Kibana server is not ready yet", along with the same errors mentioned earlier.

When checking the health of the cluster, I get this answer:

curl -u admin:admin -k -XGET https://localhost:9200/_cluster/health?pretty
Open Distro Security not initialized

So I cannot try anything you suggested.

Thanks,
Cristian

Cristian Radu

Aug 30, 2022, 3:52:55 PM
to Wazuh mailing list
Hello,

I managed to find this:

export JAVA_HOME=/usr/share/elasticsearch/jdk/ && /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -icl --diagnose -migrate -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem -cert /etc/elasticsearch/certs/admin.pem -key /etc/elasticsearch/certs/admin-key.pem -h localhost --accept-red-cluster
Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=admin,OU=Docu,O=Wazuh,L=California,C=US
Elasticsearch Version: 7.10.2
Open Distro Security Version: 1.13.1.0
Diagnostic trace written to: /var/log/elasticsearch/securityadmin_diag_trace_2022-Aug-30_19-51-24.txt
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: RED
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
ERR: .opendistro_security index state is RED.
ERR: Seems cluster is already migrated

How can I solve this? I googled but found nothing on how to fix the RED state.

Thanks,
Cristian

Cristian Radu

Aug 30, 2022, 3:59:11 PM
to Wazuh mailing list
A small update:

export JAVA_HOME=/usr/share/elasticsearch/jdk/ && /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -icl --diagnose -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -nhnv -cacert /etc/elasticsearch/certs/root-ca.pem -cert /etc/elasticsearch/certs/admin.pem -key /etc/elasticsearch/certs/admin-key.pem -h localhost --accept-red-cluster

Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=admin,OU=Docu,O=Wazuh,L=California,C=US
Elasticsearch Version: 7.10.2
Open Distro Security Version: 1.13.1.0
Diagnostic trace written to: /var/log/elasticsearch/securityadmin_diag_trace_2022-Aug-30_19-54-09.txt

Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: RED
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
ERR: .opendistro_security index state is RED.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/
Will update '_doc/config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
   FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][config], source[n/a, actual length: [3.7kb], max length: 2kb]}] and a refresh]]
Will update '_doc/roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
   FAIL: Configuration for 'roles' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][roles], source[n/a, actual length: [4.7kb], max length: 2kb]}] and a refresh]]
Will update '_doc/rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
   FAIL: Configuration for 'rolesmapping' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][rolesmapping], source[{"rolesmapping":"eyJfbWV0YSI6eyJ0eXBlIjoicm9sZXNtYXBwaW5nIiwiY29uZmlnX3ZlcnNpb24iOjJ9LCJhbGxfYWNjZXNzIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJhZG1pbiIsIndhenVoIl0sImRlc2NyaXB0aW9uIjoiTWFwcyBhZG1pbiB0byBhbGxfYWNjZXNzIn0sIm93bl9pbmRleCI6eyJyZXNlcnZlZCI6ZmFsc2UsInVzZXJzIjpbIioiXSwiZGVzY3JpcHRpb24iOiJBbGxvdyBmdWxsIGFjY2VzcyB0byBhbiBpbmRleCBuYW1lZCBsaWtlIHRoZSB1c2VybmFtZSJ9LCJsb2dzdGFzaCI6eyJyZXNlcnZlZCI6ZmFsc2UsImJhY2tlbmRfcm9sZXMiOlsibG9nc3Rhc2giXX0sImtpYmFuYV91c2VyIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJraWJhbmF1c2VyIl0sInVzZXJzIjpbIndhenVoX3VzZXIiLCJ3YXp1aF9hZG1pbiJdLCJkZXNjcmlwdGlvbiI6Ik1hcHMga2liYW5hdXNlciB0byBraWJhbmFfdXNlciJ9LCJyZWFkYWxsIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJyZWFkYWxsIl19LCJtYW5hZ2Vfc25hcHNob3RzIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJzbmFwc2hvdHJlc3RvcmUiXX0sImtpYmFuYV9zZXJ2ZXIiOnsicmVzZXJ2ZWQiOnRydWUsInVzZXJzIjpbImtpYmFuYXNlcnZlciJdfSwid2F6dWhfdWlfYWRtaW4iOnsicmVzZXJ2ZWQiOnRydWUsImhpZGRlbiI6ZmFsc2UsImJhY2tlbmRfcm9sZXMiOltdLCJob3N0cyI6W10sInVzZXJzIjpbIndhenVoX2FkbWluIiwia2liYW5hc2VydmVyIl0sImFuZF9iYWNrZW5kX3JvbGVzIjpbXX0sIndhenVoX3VpX3VzZXIiOnsicmVzZXJ2ZWQiOnRydWUsImhpZGRlbiI6ZmFsc2UsImJhY2tlbmRfcm9sZXMiOltdLCJob3N0cyI6W10sInVzZXJzIjpbIndhenVoX3VzZXIiXSwiYW5kX2JhY2tlbmRfcm9sZXMiOltdfX0="}]}] and a refresh]]
Will update '_doc/internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
   FAIL: Configuration for 'internalusers' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
Will update '_doc/actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
   FAIL: Configuration for 'actiongroups' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
Will update '_doc/tenants' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/tenants.yml
   FAIL: Configuration for 'tenants' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
Will update '_doc/nodesdn' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/nodes_dn.yml
   FAIL: Configuration for 'nodesdn' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
Will update '_doc/whitelist' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/whitelist.yml
   FAIL: Configuration for 'whitelist' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
Will update '_doc/audit' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/audit.yml
   FAIL: Configuration for 'audit' failed because of NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zTodYsLQTh6B4p679nhKsw}{localhost}{127.0.0.1:9300}]]
ERR: cannot upload configuration, see errors above

Please, help me!

Tomas Benitez Vescio

Aug 31, 2022, 8:32:49 AM
to Wazuh mailing list
Hi, sorry for the delay.

Indeed, it seems your problem is deeper than I first thought. I managed to find a discussion of a similar (if not the same) problem with "Clusterstate: RED" after deleting indices from the filesystem; you can check it out here.

In short, you could try the following:
  1. List the indices with red status:
     curl -k -u <user>:<pass> -X GET "https://localhost:9200/_cat/indices?v&health=red"
  2. Delete the indices that appear in the output of the above command:
     curl -k -u <user>:<pass> -X DELETE "https://localhost:9200/<index-name-in-red-status>"
  3. Restart the elasticsearch and kibana services.
Please remember that if you delete an index without a backup, you will lose the data contained within.
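If the deletion itself fails, it may also help to ask the cluster why the shards are unassigned before deleting anything (a standard Elasticsearch diagnostic; again, replace the credentials with your own):

curl -k -u <user>:<pass> -X GET "https://localhost:9200/_cluster/allocation/explain?pretty"

The response normally names the affected index and the reason its primary shard cannot be allocated, which should confirm whether it points at the files you removed.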

Regards.

Cristian Radu

Aug 31, 2022, 2:44:47 PM
to Wazuh mailing list
Hi,

I found that discussion as well but I get this output:

Open Distro Security not initialized.

So how do I initialize Open Distro Security?

BR,
Cristian

Cristian Radu

Sep 6, 2022, 10:11:56 AM
to Wazuh mailing list
Hello Tomas,

Any ideas on how to solve this?

BR,
Cristian

Cristian Radu

Sep 19, 2022, 3:30:16 PM
to Wazuh mailing list
Hello,

I am now getting this error in Kibana. I solved the Open Distro Security issue, but now I have this one. Any ideas?


Screenshot 2022-09-19 222819.png

Also, in the Kibana logs I am seeing this:

Sep 19 22:27:41 wazuh-manager kibana[189775]: {"type":"error","@timestamp":"2022-09-19T19:27:41Z","tags":[],"pid":189775,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:132:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/kibana/src/core/server/http/router/response_adapter.js:86:19)\n    at HapiResponseAdapter.handle (/usr/share/kibana/src/core/server/http/router/response_adapter.js:81:17)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:164:34)\n    at process._tickCallback (internal/process/next_tick.js:68:7)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/check-stored-api","path":"/api/check-stored-api","href":"/api/check-stored-api"},"message":"Internal Server Error"}
Sep 19 22:27:41 wazuh-manager kibana[189775]: {"type":"response","@timestamp":"2022-09-19T19:27:41Z","tags":[],"pid":189775,"method":"post","statusCode":500,"req":{"url":"/api/check-stored-api","method":"post","headers":{"host":"10.1.220.178","connection":"keep-alive","content-length":"16","sec-ch-ua":"\"Google Chrome\";v=\"105\", \"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"105\"","dnt":"1","kbn-xsrf":"kibana","sec-ch-ua-mobile":"?0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36","content-type":"application/json","accept":"application/json, text/plain, */*","sec-ch-ua-platform":"\"Windows\"","origin":"https://10.1.220.178","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://10.1.220.178/app/wazuh","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9,ro-RO;q=0.8,ro;q=0.7","securitytenant":""},"remoteAddress":"10.1.140.237","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36","referer":"https://10.1.220.178/app/wazuh"},"res":{"statusCode":500,"responseTime":30,"contentLength":9},"message":"POST /api/check-stored-api 500 30ms - 9.0B"}

BR,
Cristian