Kibana server is not ready yet


cda...@grayloon.com

Nov 20, 2018, 1:59:16 PM
to Wazuh mailing list
Following the instructions in the docs, I upgraded to wazuh-manager 3.7 and ELK 6.5.0 yesterday on my CentOS 7.5 server. Now, Kibana isn't working. I can see that it's started and listening on port 5601, but the web interface won't load. I get the same error message via curl as well.


# ps -ef | grep -i kibana
kibana   13070     1 35 12:48 ?        00:00:14 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml


# netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1345/master
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      13823/nginx: master
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      13814/node
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      13823/nginx: master
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      2587/perl
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1470/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1345/master
tcp6       0      0 :::443                  :::*                    LISTEN      13823/nginx: master
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      14002/java
tcp6       0      0 ::1:9200                :::*                    LISTEN      14002/java
tcp6       0      0 :::80                   :::*                    LISTEN      13823/nginx: master
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      14002/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      14002/java
tcp6       0      0 :::22                   :::*                    LISTEN      1470/sshd



A GitHub issue suggests messing with indices, but I don't know enough about Kibana to feel comfortable doing that. Anyone else experiencing this issue and have a fix/workaround?
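For reference, a minimal way to see what Kibana itself is returning from the command line (assuming the default port 5601 shown above) is to query its status endpoint directly; a 503 response corresponds to the "not ready" page:

# A healthy Kibana returns HTTP 200 here; "Kibana server is not ready yet" shows up as a 503
curl -I http://localhost:5601/status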

miguel....@wazuh.com

Nov 21, 2018, 6:21:55 AM
to Wazuh mailing list
Hello Cdavis,

What error appears in Kibana?

Could you run the following commands and paste the output here?

systemctl status kibana -l | grep -i -E "(error|warning)"


cat /var/log/messages | grep -i -E "(error|warning)"



I will try to reproduce your problem in my lab so that I can offer you the best possible help.

Regards,

Miguel Casares

cda...@grayloon.com

Nov 21, 2018, 7:34:47 AM
to Wazuh mailing list
# systemctl status kibana -l | grep -i -E "(error|warning)"
           └─7137 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml



This is a very small portion of the most recent output.
# cat /var/log/messages | grep -i -E "(error|warning)"
Nov 21 06:28:57 cloud-security1 kibana: FATAL  Error: Request Timeout after 30000ms
Nov 21 06:29:51 cloud-security1 logstash: [2018-11-21T06:29:51,947][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Nov 21 06:29:51 cloud-security1 logstash: [2018-11-21T06:29:51,948][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
Nov 21 06:29:52 cloud-security1 logstash: [2018-11-21T06:29:52,002][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Nov 21 06:29:52 cloud-security1 logstash: [2018-11-21T06:29:52,003][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
Nov 21 06:29:52 cloud-security1 logstash: [2018-11-21T06:29:52,125][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Nov 21 06:29:52 cloud-security1 logstash: [2018-11-21T06:29:52,125][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
Nov 21 06:30:14 cloud-security1 logstash: [2018-11-21T06:30:14,830][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
Nov 21 06:30:14 cloud-security1 logstash: [2018-11-21T06:30:14,831][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
Nov 21 06:30:16 cloud-security1 logstash: [2018-11-21T06:30:16,869][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
Nov 21 06:30:16 cloud-security1 logstash: [2018-11-21T06:30:16,869][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}


This shows multiple kibana indices:
# curl "http://localhost:9200/_cat/indices?v" | grep kibana
 
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 
Dload  Upload   Total   Spent    Left  Speed
100 47625  100 47625    0     0   2093      0  0:00:22  0:00:22 --:--:-- 12795
green  open  
.monitoring-kibana-6-2018.10.17 ZpPaJrWIRD-yX5WWQYqKdQ   1   0       8235            0      1.9mb          1.9mb
green  open  
.monitoring-kibana-6-2018.10.20 MwXQqK-aQG-2P-JWsPghKg   1   0       3972            0      1.1mb          1.1mb
green  open  
.monitoring-kibana-6-2018.10.16 n3VyX6JARWGc6RGolPTqrA   1   0       8531            0      2.1mb          2.1mb
green  open  
.monitoring-kibana-6-2018.10.15 JfEUFolnTJ2m3I50H-PGSQ   1   0       8627            0        2mb            2mb
yellow open  
.kibana_2                       L9NxvZI7RtCjcGPpxgxddA   1   0                                                  
green  open  
.monitoring-kibana-6-2018.10.21 JT8ICcxnSleD7ZLBs88VMQ   1   0       1805            0    524.7kb        524.7kb
green  open  
.monitoring-kibana-6-2018.10.19 ujWiV8RTSS2FkDLGbVSpwQ   1   0       5019            0      1.2mb          1.2mb
red    open  
.kibana                         jAcCnTlAQOiMYyzKCnFq-A   5   1                                                  
green  open  
.monitoring-kibana-6-2018.10.18 yTXHgLR1Qkqu_PT6ZvoIiA   1   0       7279            0      1.8mb          1.8mb

miguel....@wazuh.com

Nov 21, 2018, 9:59:30 AM
to Wazuh mailing list
Hello,

It seems that there is a problem with Elasticsearch, as you can see in this log:

([2018-11-21T06:29:52,003][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/])

Logstash cannot reach Elasticsearch because it appears to be unreachable or down.

Could you try to run the following commands to get further information?

cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"

systemctl status elasticsearch.service -l

curl http://localhost:9200/?pretty


Regarding the Kibana index, after migrations have run, there will be multiple Kibana indices in Elasticsearch (.kibana_1, .kibana_2, etc.). Kibana only uses the index that the .kibana alias points to. The other Kibana indices can be safely deleted, but are left around as a matter of historical record, and to facilitate rolling Kibana back to a previous version. So do not worry about it.
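If you want to check which concrete index the .kibana alias currently points to, the standard Elasticsearch cat API works for that; this is a sketch for the local single-node setup used in this thread:

# Show which index the .kibana alias resolves to; empty output means .kibana is still a plain index rather than an alias
curl "http://localhost:9200/_cat/aliases/.kibana?v"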

Regards,

Miguel Casares

cda...@grayloon.com

Nov 21, 2018, 10:17:48 AM
to Wazuh mailing list
# cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"

[2018-11-21T09:13:15,553][INFO ][o.e.n.Node               ] [GMo0q3h] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.ltluDg52, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm]
[2018-11-21T09:13:46,038][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [GMo0q3h] Failed to clear cache for realms [[]]
[2018-11-21T09:13:47,991][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:13:48,167][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:13:48,181][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:13:48,196][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:13:48,193][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:13:48,225][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:13:57,857][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:13:57,855][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:13:57,868][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:13:57,876][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:13:57,879][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:13:57,886][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:14:07,864][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:14:07,869][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:14:07,870][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:14:07,869][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:14:07,874][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:14:07,875][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:14:17,871][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:14:17,868][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:14:17,875][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:14:17,879][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:14:17,883][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:14:17,898][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:14:27,875][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:14:27,872][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:14:27,876][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:14:27,873][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:14:27,878][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:14:27,881][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:14:37,880][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:14:37,877][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:14:37,881][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}
[2018-11-21T09:14:37,882][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:14:37,882][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:14:37,882][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:14:47,889][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
[2018-11-21T09:14:47,889][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
[2018-11-21T09:14:47,893][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {index=.kibana}
[2018-11-21T09:14:47,893][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
[2018-11-21T09:14:47,893][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/doc/config%3A6.5.0, params: {index=.kibana, id=config:6.5.0, type=doc}
[2018-11-21T09:14:47,906][WARN ][r.suppressed             ] [GMo0q3h] path: /.kibana/_search, params: {size=10000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._source.canvas-workpad}



# systemctl status elasticsearch.service -l
elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/elasticsearch.service.d
           └─elasticsearch.conf
   Active: active (running) since Wed 2018-11-21 09:13:02 CST; 2min 34s ago
     Docs: http://www.elastic.co
 Main PID: 1465 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─1465 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.ltluDg52 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
           └─2356 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller


Nov 21 09:13:02 REDACTED systemd[1]: Started Elasticsearch.
Nov 21 09:13:02 REDACTED systemd[1]: Starting Elasticsearch...



# curl http://localhost:9200/?pretty
{
  "name" : "GMo0q3h",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "gJKX7kK7R-StzzcfmMNNug",
  "version" : {
    "number" : "6.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "816e6f6",
    "build_date" : "2018-11-09T18:58:36.352602Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

miguel....@wazuh.com

Nov 21, 2018, 10:36:42 AM
to Wazuh mailing list
Hello,

Ok, make sure that the Elasticsearch template was properly inserted:
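The exact command is not quoted in the archived message; a typical check, assuming the template name contains "wazuh" as in the Wazuh documentation, would be something along these lines:

# List the installed index templates and confirm the Wazuh template is present
curl "http://localhost:9200/_cat/templates?v" | grep -i wazuh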


And restart Elasticsearch:

systemctl restart elasticsearch


After that, try to restart Kibana services and check if everything is working:

systemctl restart kibana.service

systemctl status kibana.service -l




And check whether Kibana loads in the browser.

Please let me know if that works for you.

Regards,

Miguel Casares

Florian Große

Nov 21, 2018, 11:07:39 AM
to Wazuh mailing list
Hi,

We had the same issue with our Kibana after the update to 6.5.0, and I was able to fix it by deleting the Kibana indices in Elasticsearch.

This message appeared in our log:
Nov 19 08:46:33 gendry kibana[11816]: {"type":"log","@timestamp":"2018-11-19T07:46:33Z","tags":["warning","migrations"],"pid":11816,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}


After these steps our Kibana was working again
(the deleted indices were recreated when Kibana started; the delete itself is sketched after the steps below):

 service kibana stop
service kibana start
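The delete command itself is not quoted in the message; based on the warning above, which names .kibana_2, the step between the stop and the start would presumably look something like this:

# Delete the half-migrated index named in the warning; Kibana recreates it on the next start
curl -XDELETE "http://localhost:9200/.kibana_2"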

I hope it helps.

Florian

miguel....@wazuh.com

Nov 22, 2018, 4:18:58 AM
to Wazuh mailing list
Hello Florian,

Thank you so much for your help.

Let's see if it works for Cdavis.


On the other hand, Florian, if you have any questions or issues, please do not hesitate to contact us. Community contributions are what help Wazuh grow and improve as a security platform. We really appreciate it.

Regards,

Miguel Casares

cda...@grayloon.com

Nov 26, 2018, 9:17:31 AM
to Wazuh mailing list
I ran the following and received the JSON output of the template:

Then, I restarted elasticsearch and kibana, but I'm still getting a 503 error from:

# curl -XGET http://localhost:5601/status -I
HTTP/1.1 503 Service Unavailable
retry-after: 30
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 30
Date: Mon, 26 Nov 2018 14:15:55 GMT
Connection: keep-alive

I also tried Florian's suggestion to delete the Kibana indices, but that didn't work either.

cda...@grayloon.com

Nov 26, 2018, 9:48:23 AM
to Wazuh mailing list
I just noticed several of these errors repeated in the system log this morning. I'm not sure if this is related or not.

Nov 26 08:46:09 cloud-security1 kibana: Unhandled rejection Error: Request Timeout after 30000ms
Nov 26 08:46:09 cloud-security1 kibana: at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15
Nov 26 08:46:09 cloud-security1 kibana: at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)
Nov 26 08:46:09 cloud-security1 kibana: at ontimeout (timers.js:498:11)
Nov 26 08:46:09 cloud-security1 kibana: at tryOnTimeout (timers.js:323:5)
Nov 26 08:46:09 cloud-security1 kibana: at Timer.listOnTimeout (timers.js:290:5)


cda...@grayloon.com

Nov 26, 2018, 11:54:01 AM
to Wazuh mailing list
Here's another error from the elasticsearch log that may help. I run ELK and Wazuh on a single server – not a cluster of any kind.

[2018-11-26T08:51:15,024][WARN ][o.e.x.m.e.l.LocalExporter] [GMo0q3h] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: UnavailableShardsException[[.monitoring-kibana-6-2018.11.26][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-kibana-6-2018.11.26][0]] containing [index {[.monitoring-kibana-6-2018.11.26][doc][EkB_UGcBbFdx1-_DvDm6], source[{"cluster_uuid":"gJKX7kK7R-StzzcfmMNNug","timestamp":"2018-11-26T14:50:05.018Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"GMo0q3h3SCWK6SCuiKHDiQ","host":"127.0.0.1","transport_address":"127.0.0.1:9300","ip":"127.0.0.1","name":"GMo0q3h","timestamp":"2018-11-26T14:50:05.018Z"},"kibana_stats":{"kibana":{"uuid":"010a5dbc-63d4-4817-87b1-7587c0f750fa","name":"cloud-security1.grayloon.com","index":".kibana","host":"localhost","transport_address":"localhost:5601","version":"6.5.0","snapshot":false,"status":"red"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"index":".kibana","dashboard":{"total":0},"visualization":{"total":0},"search":{"total":0},"index_pattern":{"total":0},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":false,"enabled":false},"reporting":{"available":false,"enabled":false,"browser_type":"chromium","_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{},"lastDay":{"_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{}},"last7Days":{"_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}}}}}]}]]]
 at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]
 at java.util.stream.ReferencePipeline$3$1.accept(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.ReferencePipeline$2$1.accept(Unknown Source) ~[?:1.8.0_161]
 at java.util.Spliterators$ArraySpliterator.forEachRemaining(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:1.8.0_161]
 at java.util.stream.ReferencePipeline.forEach(Unknown Source) ~[?:1.8.0_161]
 at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) ~[?:?]
 at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) ~[?:?]
 at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) ~[elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) ~[elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:607) ~[elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:414) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:409) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:901) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:873) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:932) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryIfUnavailable(TransportReplicationAction.java:778) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:731) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:892) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:559) [elasticsearch-6.5.0.jar:6.5.0]
 at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) [elasticsearch-6.5.0.jar:6.5.0]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_161]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_161]
 at java.lang.Thread.run(Unknown Source) [?:1.8.0_161]
Caused by: org.elasticsearch.action.UnavailableShardsException: [.monitoring-kibana-6-2018.11.26][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-kibana-6-2018.11.26][0]] containing [index {[.monitoring-kibana-6-2018.11.26][doc][EkB_UGcBbFdx1-_DvDm6], source[{"cluster_uuid":"gJKX7kK7R-StzzcfmMNNug","timestamp":"2018-11-26T14:50:05.018Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"GMo0q3h3SCWK6SCuiKHDiQ","host":"127.0.0.1","transport_address":"127.0.0.1:9300","ip":"127.0.0.1","name":"GMo0q3h","timestamp":"2018-11-26T14:50:05.018Z"},"kibana_stats":{"kibana":{"uuid":"010a5dbc-63d4-4817-87b1-7587c0f750fa","name":"cloud-security1.grayloon.com","index":".kibana","host":"localhost","transport_address":"localhost:5601","version":"6.5.0","snapshot":false,"status":"red"},"usage":{"kql":{"optInCount":0,"optOutCount":0,"defaultQueryLanguage":"default-lucene"},"index":".kibana","dashboard":{"total":0},"visualization":{"total":0},"search":{"total":0},"index_pattern":{"total":0},"graph_workspace":{"total":0},"timelion_sheet":{"total":0},"xpack":{"spaces":{"available":false,"enabled":false},"reporting":{"available":false,"enabled":false,"browser_type":"chromium","_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{},"lastDay":{"_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{}},"last7Days":{"_all":0,"csv":{"available":false},"printable_pdf":{"available":false},"status":{}}}},"infraops":{"last_24_hours":{"hits":{"infraops_hosts":0,"infraops_docker":0,"infraops_kubernetes":0,"logs":0}}},"rollups":{"index_patterns":{"total":0},"saved_searches":{"total":0},"visualizations":{"total":0,"saved_searches":{"total":0}}}}}}]}]]
 
... 12 more

Javier Castro

Nov 30, 2018, 7:17:10 PM
to Wazuh mailing list
Hello,

When you upgrade Elasticsearch while still indexing data to it, Filebeat and Logstash will queue events and then try to send them all to Elasticsearch once it is back up and running. This can take a bit of time or, depending on the load, trigger bulk rejections and other problems.

Can you check what your cluster health is? If it is not at 100% (or 50% in a single-host environment using 1 replica), the number should increase over time.

curl localhost:9200/_cat/health?v

Also, Kibana 6.5.0 has a known issue affecting permissions on .kibana indices when using X-Pack; maybe it's related to your issue (https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html).

Hope that helps,
regards.

cda...@grayloon.com

Dec 5, 2018, 9:09:06 AM
to Wazuh mailing list
I ran curl localhost:9200/_cat/health?v. It got to this point and basically stopped. I'm not familiar with x-pack.

epoch      timestamp cluster       status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1544018628 14:03:48  elasticsearch red             1         1   1650 1650    0    4     1923             8              10.6m                 46.1%


cda...@grayloon.com

Dec 7, 2018, 2:21:15 PM
to Wazuh mailing list
Javier,

I'm still having issues. I removed the ELK stack and reinstalled; I'm now at version 6.5.1. I was going to try the steps in the link you provided, but I don't know how to do that from the command line. Right now, I have a single .kibana_1 index, but I'm still seeing migration messages in my log:

Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

So, I ran the following:

curl -XDELETE "http://localhost:9200/.kibana_1"
systemctl restart kibana


That didn't help. I get the same error about migration. Should I completely remove and reinstall ELK?

thm...@gmail.com

Dec 10, 2018, 1:52:46 PM
to Wazuh mailing list
hey guys,

I'm afraid I'm facing the same problem.
I recently updated to 6.5.2 and it seems that Kibana is no longer able to load. In the browser I'm seeing the error "Kibana server is not ready",
and in the logs I'm seeing these errors:

Dec 10 19:44:58 sec kibana[1887]: {"type":"error","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"level":"error","error":{"message":"[search_phase_execution_exception] all s
Dec 10 19:44:58 sec kibana[1887]: {"type":"log","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"message":"Unable to fetch data from kibana collector"}
Dec 10 19:44:58 sec kibana[1887]: {"type":"error","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"level":"error","error":{"message":"[no_shard_available_action_exception] No
Dec 10 19:44:58 sec kibana[1887]: {"type":"log","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"message":"Unable to fetch data from kql collector"}
Dec 10 19:44:58 sec kibana[1887]: {"type":"error","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"level":"error","error":{"message":"[no_shard_available_action_exception] No
Dec 10 19:44:58 sec kibana[1887]: {"type":"log","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"message":"Unable to fetch data from kibana_settings collector"}
Dec 10 19:44:58 sec kibana[1887]: {"type":"error","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"level":"error","error":{"message":"[search_phase_execution_exception] all s
Dec 10 19:44:58 sec kibana[1887]: {"type":"log","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"message":"Unable to fetch data from canvas collector"}
Dec 10 19:44:58 sec kibana[1887]: {"type":"error","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"level":"error","error":{"message":"[search_phase_execution_exception] all s
Dec 10 19:44:58 sec kibana[1887]: {"type":"log","@timestamp":"2018-12-10T18:44:58Z","tags":["warning","stats-collection"],"pid":1887,"message":"Unable to fetch data from rollups collector"}

curl -XGET http://localhost:5601/status -I
HTTP/1.1 503 Service Unavailable
retry-after: 30
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 30
Date: Mon, 10 Dec 2018 18:45:38 GMT
Connection: keep-alive
 
I've tried deleting the indices, but that didn't help.
I have even deleted and reinstalled the Kibana app, but that didn't help either.

Has anyone managed to fix this problem yet?

best regards,
theresa

Russell Butturini

Dec 10, 2018, 2:16:03 PM
to thm...@gmail.com, Wazuh mailing list
The last responder is correct.  This is related to an unresolved bug in the latest ELK release and is not specific to Wazuh.  I was able to get things working again by stopping the Kibana services, deleting the .kibana_1 index, starting the services, deleting the .kibana_2 index, stopping the services, and deleting the .kibana_1 index again (see the sketch below).  It was a real pain, and it appears there may be some timing issues in play as well, so you may have to repeat the above steps a few times.
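A rough sketch of that sequence, assuming the default .kibana_1/.kibana_2 index names and Elasticsearch listening locally on port 9200:

# Stop Kibana so it is not holding or re-creating the migration index
systemctl stop kibana
# Delete the stale migration target, then cycle Kibana and repeat for the other index
curl -XDELETE "http://localhost:9200/.kibana_1"
systemctl start kibana
curl -XDELETE "http://localhost:9200/.kibana_2"
systemctl stop kibana
curl -XDELETE "http://localhost:9200/.kibana_1"
systemctl start kibana
# Repeat the cycle if the migration warning comes back, since timing seems to matter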



thm...@gmail.com

Dec 10, 2018, 2:24:32 PM
to Wazuh mailing list
Hi Russell,

thanks for your fast response.
For some reason I don't seem to have a .kibana_1 or .kibana_2 index. They must be named differently in my installation; any idea how to find out?
I've tried deleting the .kibana_1 and .kibana_2 indices, but I always got an error that they didn't exist.
Weird.

Do you have a link to this unresolved bug in Elastic?

cheers,
theresa

Russell Butturini

Dec 10, 2018, 3:55:16 PM
to thm...@gmail.com, Wazuh mailing list
The index name is actually defined in your /etc/kibana/kibana.yml file.  Look for the value called kibana.index.  This is .kibana by default (and commented out), but if it has been changed for some reason you will see the name being used instead.  You can also use curl -X GET "localhost:9200/_cat/indices/" to see your indices by name and check whether you have something different there.  So if it has been changed, you will get <indexname>_1, etc.  One thing I also did, for testing purposes and to see the logs as they are generated, was to stop the Kibana service and manually launch the executable from /usr/share/kibana/bin/kibana.  A quick sketch of those checks is below.
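Roughly, the checks described above (assuming the default config path and a local Elasticsearch on port 9200) look like this:

# Show the configured Kibana index name, if it has been overridden
grep -i "kibana.index" /etc/kibana/kibana.yml

# List all indices so you can spot the actual .kibana* index names
curl -X GET "localhost:9200/_cat/indices/"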

There is a good discussion thread on the issue you are experiencing here:


Let me know if there is more I can do to help!

-Russell



Russell Butturini

Dec 10, 2018, 4:03:06 PM
to thm...@gmail.com, Wazuh mailing list
Also, I should mention I did see some errors about the migration indices being read only.  I used this as a temporary workaround which then allowed them to be properly deleted:

curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'


It's important, after you send the DELETE request to Elasticsearch for the indices, to verify they are actually gone, because I found you can send the request, it gets queued up, and then if there is an issue with permissions or blocking, the operation just fails and you never get any feedback.  So after you send the delete request, use curl -X GET "localhost:9200/_cat/indices/" to make sure they're really gone.
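A slightly narrower check is to filter the cat output by index pattern (index patterns on _cat/indices are a standard Elasticsearch feature, so this is just a convenience over the command above):

# List only the Kibana indices; nothing returned means the deletes really went through
curl -s "localhost:9200/_cat/indices/.kibana*?v"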

cda...@grayloon.com

Dec 11, 2018, 9:14:18 AM
to Wazuh mailing list
Russell,

Thank you! I was finally able to resolve my issue with Kibana after running the following commands:

systemctl stop kibana

curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'

curl -XDELETE "http://localhost:9200/.kibana_1"

systemctl start kibana

Russell Butturini

Dec 11, 2018, 9:41:15 AM
to cda...@grayloon.com, Wazuh mailing list
No problem! It's important to note this should probably be undone to keep your event data from being wiped out by a bad guy :-)
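If you want to revert the workaround later, resetting the setting to its default (null) is the usual way; a sketch for the same local setup:

# Reset the read-only-allow-delete block to its default so Elasticsearch manages it again
curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null }'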


