Elasticsearch error - Can't see logs in Kibana


Aravind Krish

May 11, 2021, 8:12:43 AM5/11/21
to Wazuh mailing list
Hello,

Could you please help me fix an issue related to Elasticsearch? The Wazuh 3.13 deployment is in Kubernetes. When I checked the logs of the Elasticsearch pod, I saw the errors below.

"stacktrace": ["org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][_doc][space:default]: routing [null]]",
"at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:224) [elasticsearch-7.7.1.jar:7.7.1]",
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
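The exception points at the .kibana index. A first check (a sketch, assuming the Elasticsearch pod answers on localhost:9200 as in the commands below) is to list the shards of the Kibana indices and see whether any primary is not STARTED:

```shell
# List every shard of the .kibana* indices with its state and the
# unassigned.reason column. A STARTED primary rules out a plain
# shard-allocation problem for these lookups.
curl -s 'http://localhost:9200/_cat/shards/.kibana*?v&h=index,shard,prirep,state,unassigned.reason,node'
```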

So I collected the diagnostic output below.

[root@wazuh-elasticsearch-0 elasticsearch]# curl -XGET 'http://localhost:9200/_cluster/health'?pretty
{
  "cluster_name" : "wazuh",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 15,
  "active_shards" : 15,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

[root@wazuh-elasticsearch-0 elasticsearch]# curl -XGET 'http://localhost:9200/_cat/indices?v'
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   wazuh-monitoring-3.x-2021.05.09 -GWGC6uZR8O087CK95rD_A   2   0         79            0    289.5kb        289.5kb
green  open   .apm-custom-link                gSYlJa7XTau5iNk9STqcUQ   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1          eAG6P1n8S0CkUPxHE_mxXA   1   0          5            1     30.7kb         30.7kb
green  open   wazuh-monitoring-3.x-2021.04.29 yZ1dLjNPSYqjQWFgUWgi7w   2   0          0            0       416b           416b
green  open   .apm-agent-configuration        FZGvhzx1S_WRaAth36M7qQ   1   0          0            0       208b           208b
green  open   .kibana_2                       98pmC9t7RaW9n5jIpDfz5g   1   0         76            3      103kb          103kb
green  open   wazuh-monitoring-3.x-2021.05.11 2jQBhUvWTACj0sH7O7kxfg   2   0         47            0    198.4kb        198.4kb
green  open   wazuh-monitoring-3.x-2021.05.10 aiWdco5XQ2iEhYon0jZm7Q   2   0         97            0    294.8kb        294.8kb
green  open   .kibana_1                       RrF8_nTmTH2OmeBx1sqIXg   1   0         29            2     70.1kb         70.1kb
green  open   wazuh-monitoring-3.x-2021.04.30 rZ64c3gnReC1guj_2EBbwQ   2   0          0            0       416b           416b
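Note that _cat/indices shows two Kibana indices, .kibana_1 and .kibana_2. Kibana reads through the .kibana alias, so another check (a sketch, same localhost:9200 assumption) is which concrete index that alias currently resolves to:

```shell
# Show which concrete index the .kibana alias points to. If the
# alias is missing, or points at .kibana_1 while Kibana migrated
# its saved objects to .kibana_2, lookups such as
# [get [.kibana][_doc][space:default]] can fail.
curl -s 'http://localhost:9200/_cat/aliases/.kibana?v'
```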


[root@wazuh-elasticsearch-0 elasticsearch]# curl http://localhost:9200/_cat/templates
.slm-history                    [.slm-history-2*]                          2147483647 2
.monitoring-kibana              [.monitoring-kibana-7-*]                   0          7000199
.monitoring-es                  [.monitoring-es-7-*]                       0          7000199
.ml-stats                       [.ml-stats-*]                              0          7070199
.transform-internal-005         [.transform-internal-005]                  0          7070199
.monitoring-alerts-7            [.monitoring-alerts-7]                     0          7000199
wazuh                           [wazuh-alerts-3.x-*, wazuh-archives-3.x-*] 0          1
.ml-config                      [.ml-config]                               0          7070199
.watches                        [.watches*]                                2147483647 11
.triggered_watches              [.triggered_watches*]                      2147483647 11
.management-beats               [.management-beats]                        0          70000
ilm-history                     [ilm-history-2*]                           2147483647 2
.transform-notifications-000002 [.transform-notifications-*]               0          7070199
.logstash-management            [.logstash]                                0
.ml-inference-000001            [.ml-inference-000001]                     0          7070199
.ml-state                       [.ml-state*]                               0          7070199
.ml-meta                        [.ml-meta]                                 0          7070199
.ml-notifications-000001        [.ml-notifications-000001]                 0          7070199
.monitoring-beats               [.monitoring-beats-7-*]                    0          7000199
.monitoring-logstash            [.monitoring-logstash-7-*]                 0          7000199
.ml-anomalies-                  [.ml-anomalies-*]                          0          7070199
wazuh-agent                     [wazuh-monitoring-3.x-*]                   0
.watch-history-11               [.watcher-history-11*]                     2147483647 11

From all these logs, I couldn't find where the issue is.

I can see the alerts from the agent in the worker pod's alerts.json file, but they are not shown in Kibana.
The filebeat test output command in the worker pod also shows all green.
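One thing that stands out in the _cat/indices output above is that there are only wazuh-monitoring-3.x-* indices and no wazuh-alerts-3.x-* index at all, so the alerts in alerts.json may never be reaching Elasticsearch. A quick check (a sketch, again against localhost:9200):

```shell
# Look for alert indices. An empty result means Filebeat has not
# indexed any events, even if 'filebeat test output' is green --
# that test only verifies connectivity to Elasticsearch, not that
# events are actually being shipped.
curl -s 'http://localhost:9200/_cat/indices/wazuh-alerts-*?v'
```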

Regards,
Arav

Rafael Antonio Rodriguez Otero

May 11, 2021, 5:50:59 PM5/11/21
to Aravind Krish, Wazuh mailing list
Try deleting the Kibana indices (.kibana_1 and .kibana_2) so that Kibana recreates them.
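For example (a sketch: stop the Kibana pod first, and note that this resets saved objects such as custom dashboards and index patterns):

```shell
# Delete the Kibana saved-objects indices; Kibana rebuilds the
# .kibana index and alias on its next start. Any custom dashboards
# and index patterns stored in them are lost.
curl -XDELETE 'http://localhost:9200/.kibana_1,.kibana_2'
```

Then restart the Kibana pod and check whether the space lookup error is gone.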


To view this discussion on the web visit https://groups.google.com/d/msgid/wazuh/CAKOhOouySrUyRon7fGCsUcbm6qfGk1aS7dxArCdT1nvk7wOGYg%40mail.gmail.com.