index .kibana_# problem after docker restart.


joh nte

Jan 14, 2021, 5:36:36 AM
to Wazuh mailing list
Hi, I'm facing a new problem with my Wazuh Docker installation.
After making a backup copy, I tried to update Wazuh 4.0.3 to 4.0.4, only to find that this release is not available yet. So, after putting things back as they were and restoring the previous settings, I restarted the containers, but in the browser I noticed that the system stayed on "Kibana server is not ready" for a long time.

Reading the Kibana logs (docker container logs --tail 100 wazuh-docker_kibana_1), I noticed this:

Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:09:29Z","tags":["info","savedobjects-service"],"pid":121,"message":"Creating index .kibana_5."}
Waiting for Kibana API. Sleeping 5 seconds
Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:09:59Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_5/0TP2VpB5Tc2Q9a379TUoag] already exists, with { index_uuid=\"0TP2VpB5Tc2Q9a379TUoag\" & index=\".kibana_5\" }"}
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_5 and restarting Kibana."}

So I deleted the .kibana_5 index by going into the Elasticsearch container and running this command: curl -XDELETE 'http://localhost:9200/.kibana_5' --header "content-type: application/JSON"

After stopping and restarting the containers, the problem still remained.


So I told myself, "no problem, I have the backup copy", and restoring it actually worked, apart from the agents' registration dates, which were wrong.
I had already faced that issue, so, after stopping the containers and removing the global.db file in the /var/ossec/queue/db/ folder, I restarted them and... here it is again, the previous problem.
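For reference, the global.db removal looks roughly like this from outside the stack (the manager container name is my assumption, based on how the Kibana and Elasticsearch containers are named here):

docker exec -ti wazuh-docker_wazuh_1 rm /var/ossec/queue/db/global.db   # remove the agents database
docker restart wazuh-docker_wazuh_1                                     # the manager recreates global.db on start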

Kibana creates an index:
{"type":"log","@timestamp":"2021-01-14T10:09:29Z","tags":["info","savedobjects-service"],"pid":121,"message":"Creating index .kibana_5."}
and then conflicts with itself?
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_5 and restarting Kibana."}


Has anyone else faced this problem? Can someone guide me towards a solution?

Franco Hielpos

Jan 18, 2021, 11:28:51 AM
to Wazuh mailing list
Hello Joh,

Could you confirm which Kibana and Elasticsearch versions you are using? Elasticsearch's logs could have some useful information too.

According to some reported issues, this problem relates to failed saved objects migrations; the Kibana documentation on saved object migrations has more background.

Let's check which .kibana indices you have.
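Something like this should list them (a sketch; it assumes Elasticsearch answers on localhost:9200, as in the delete command you already used):

curl -X GET "http://localhost:9200/_cat/indices/.kibana*?v"   # ?v prints the column headers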

And also check which index the .kibana alias points to:
curl -X GET "http://localhost:9200/.kibana/_alias/"

Some people found that deleting the .kibana_N indices fixes the problem, but this can cause custom dashboards or visualizations to be deleted.

Other people reported having this issue when migrating without cluster shard allocation enabled, so that's another thing you could try out in the meantime:

Set cluster routing allocation in /etc/elasticsearch/elasticsearch.yml:
cluster.routing.allocation.enable: "all"
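If editing the file inside the container is inconvenient, the same setting can be applied at runtime through the cluster settings API (a sketch, same localhost:9200 assumption):

curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{"persistent": {"cluster.routing.allocation.enable": "all"}}'

A persistent setting survives full cluster restarts, which matters here since the problem shows up on restart.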

I will be waiting for your feedback!

Regards,
Franco Hielpos

joh nte

Jan 19, 2021, 4:54:49 AM
to Wazuh mailing list
Hi Franco,
Thanks for the response. I'm using Elasticsearch 7.9.3 and Kibana 7.9.3.

These are my Kibana indices:
green  open   .kibana-event-log-7.9.3-000001
green  open   .kibana_3                                      
green  open   .kibana_4                                      
green  open   .kibana_5                                      
green  open   .kibana_task_manager_1          
green  open   .kibana_task_manager_2         

And running the alias command, I got this: {".kibana_4":{"aliases":{".kibana":{}}}}

I've already tried to delete the .kibana_5 index through this command:

curl -XDELETE 'http://localhost:9200/.kibana_5'  --header "content-type: application/JSON"
 
But when the containers restart, Kibana recreates the same index and the error reappears.

Yesterday I restored my old Wazuh setup and upgraded all the containers to the latest version, including Wazuh 4.0.4, and everything went fine!
Then today I tried to stop and re-run the containers, so I only ran these two commands:
docker-compose stop
docker-compose up -d
And I noticed the same error in the Kibana logs.

So it seems that every time I stop and restart the containers, Kibana conflicts with an index that it itself created shortly before... what can I do?

joh nte

Jan 19, 2021, 5:09:40 AM
to Wazuh mailing list
Regarding the cluster.routing.allocation.enable: "all" setting in my wazuh-docker configuration: I entered the Elasticsearch container (docker exec -ti wazuh-docker_elasticsearch_1 bash) and searched for the yml file, finding it in the "config" folder, but there is no "cluster.routing.allocation" option in it.
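If the option is simply missing from the file, I suppose a line can be appended from inside the container like this (a sketch; /usr/share/elasticsearch/config is the default config path of the official Elasticsearch image, and the change is lost if the container is recreated rather than just restarted):

docker exec wazuh-docker_elasticsearch_1 bash -c 'echo "cluster.routing.allocation.enable: \"all\"" >> /usr/share/elasticsearch/config/elasticsearch.yml'
docker restart wazuh-docker_elasticsearch_1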

joh nte

Jan 19, 2021, 9:52:28 AM
to Wazuh mailing list
Update:

I deleted .kibana_5, .kibana_4, and .kibana_3, restarted only the Kibana container, and it works (note: there wasn't a .kibana_2 index).
After another docker-compose stop and up, Kibana created .kibana_2 and then gave me the same error for .kibana_2.
I deleted .kibana_2, restarted only the Kibana container, and it worked again.
Now I've restarted the docker-compose stack a few times and there seems to be no error, plus I still have all my dashboards and data.
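For anyone hitting the same thing, the cleanup came down to these two commands (comma-separated index names in a single DELETE are accepted by Elasticsearch):

curl -XDELETE 'http://localhost:9200/.kibana_3,.kibana_4,.kibana_5'   # remove the stale saved-objects indices
docker restart wazuh-docker_kibana_1                                  # restart only Kibana so it redoes the migration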

I don't know why or when it recreates a new .kibana index on restart; I hope that now, having only .kibana_1 working well, it won't try to recreate other indices after a few more reboots.


If anyone knows what causes this error, I'd love to know.

Thanks ;)

Franco Hielpos

Jan 21, 2021, 4:32:19 PM
to joh nte, Wazuh mailing list
Hello Joh, sorry for the late reply

I am glad that you could solve it.

When upgrading, Kibana creates a new .kibana_N index and points the .kibana alias at it; for some reason yours got corrupted while migrating, which is why Kibana didn't start. The Kibana documentation on saved objects migrations has more information.
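A quick way to verify that a migration finished cleanly is to check that the alias points at the newest index (a sketch, same localhost:9200 assumption as before):

curl -s "http://localhost:9200/_cat/aliases/.kibana?v"   # the index column should show the highest .kibana_N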

Don't hesitate to ask if you have any questions!




--
Franco Hielpos