I need help, please. I tried the following command to delete the index, but when I start Kibana, it creates the index again and the same error comes up (the error is at the bottom).
curl -XDELETE 'http://localhost:9200/.kibana_task_manager_2' --header "Content-Type: application/json" -u elastic -p
THE ERROR I GET:
Feb 28 14:00:52 srvwaz.com kibana[1058]: {"type":"log","@timestamp":"2020-02-28T14:00:52Z","tags":["info","savedobjects-service"],"pid":1058,"message":"Starting saved objects migrations"}
Feb 28 14:00:52 srvwaz.com kibana[1058]: {"type":"log","@timestamp":"2020-02-28T14:00:52Z","tags":["info","savedobjects-service"],"pid":1058,"message":"Creating index .kibana_task_manager_2."}
Feb 28 14:00:52 srvwaz.com kibana[1058]: {"type":"log","@timestamp":"2020-02-28T14:00:52Z","tags":["warning","savedobjects-service"],"pid":1058,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/g8bjExgTS1286AKdfmIp8Q] already exists, with { index_uuid="g8bjExgTS1286AKdfmIp8Q" & index=".kibana_task_manager_2" }"}
Feb 28 14:00:52 srvwaz.com kibana[1058]: {"type":"log","@timestamp":"2020-02-28T14:00:52Z","tags":["warning","savedobjects-service"],"pid":1058,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 and restarting Kibana."} />
Could you check how many .kibana_N indices you have in Elasticsearch?
Run curl 'http://localhost:9200/_cat/indices?pretty' | grep -i kibana on your Elasticsearch server.
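The output should look something like this (illustrative values only; the task manager UUID is copied from your log, the other values will differ on your cluster):

green open .kibana_task_manager_2 g8bjExgTS1286AKdfmIp8Q 1 0 2  0 31.8kb  31.8kb
green open .kibana_1              xxxxxxxxxxxxxxxxxxxxxx 1 0 48 0 120.4kb 120.4kb

The columns are health, status, index name, UUID, primary/replica shard counts, document counts, and store sizes. What matters here is how many .kibana_N and .kibana_task_manager_N indices exist.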
curl -I http://localhost:5601/status
curl -XDELETE 'http://elastic_ip:9200/.kibana_*' -u elastic
curl -XDELETE 'http://elastic_ip:9200/.kibana_task_manager_*' -u elastic
(quote the URLs so your shell does not try to expand the *, and pass -u elastic since your cluster has security enabled)
systemctl restart kibana
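Since your log lines come from journald, Kibana is presumably running as a systemd service (assuming the unit is simply named kibana); if so, you can watch the migration after the restart with:

journalctl -u kibana -f

and then re-run the _cat/indices call above to confirm which indices Kibana has recreated.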