Hi, I'm facing a new problem with my Wazuh Docker installation.
After making a backup copy, I tried to update Wazuh from 4.0.3 to 4.0.4, only to find that this release is not available yet. So, after putting things back as they were and restoring the previous settings, I restarted the Docker stack, but in the browser I noticed that the system stayed on "Kibana server is not ready" for a long time.
Reading the Kibana logs (docker container logs --tail 100 wazuh-docker_kibana_1), I noticed this:
Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:09:29Z","tags":["info","savedobjects-service"],"pid":121,"message":"Creating index .kibana_5."}
Waiting for Kibana API. Sleeping 5 seconds
Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:09:59Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
Waiting for Kibana API. Sleeping 5 seconds
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_5/0TP2VpB5Tc2Q9a379TUoag] already exists, with { index_uuid=\"0TP2VpB5Tc2Q9a379TUoag\" & index=\".kibana_5\" }"}
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_5 and restarting Kibana."}
So I deleted the .kibana_5 index by going into the Elasticsearch container and running this command: curl -XDELETE 'http://localhost:9200/.kibana_5' --header 'content-type: application/json'. After stopping the Docker stack and restarting it, the problem still remained.
So I told myself, "no problem, I have the backup copy", and restoring everything actually worked, apart from the agents' registration dates, which were wrong.
I had already faced that issue, so, after stopping the Docker stack and removing the global.db file in the /var/ossec/queue/db/ folder (roughly the commands shown at the end of this post), I restarted it and...
here it is again, the previous problem:
Kibana creates an index:
{"type":"log","@timestamp":"2021-01-14T10:09:29Z","tags":["info","savedobjects-service"],"pid":121,"message":"Creating index .kibana_5."}
and then it conflicts with itself?
{"type":"log","@timestamp":"2021-01-14T10:10:01Z","tags":["warning","savedobjects-service"],"pid":121,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_5 and restarting Kibana."}
Has anyone else faced this problem? Can someone guide me towards a solution?