Wazuh dashboard server is not ready yet - wazuh 4.10.1-1


German DiCasas

Mar 6, 2025, 6:15:59 PM
to Wazuh | Mailing List
Hi team,

After upgrading from 4.8.2 to 4.10.1-1 I get "Wazuh dashboard server is not ready yet". This is an all-in-one deployment (manager, indexer, and dashboard on the same server). During each installation step, whenever I was asked whether to replace a file I selected yes; one of those files was /etc/wazuh-indexer/jvm.options, which I have already changed back, but it is still not working.

At the moment wazuh-indexer, wazuh-manager, filebeat, and wazuh-dashboard are all active. I can connect with curl -k -u kibanaserver:pass https://localhost:9200/_cluster/health, and also using admin as the user.

What could the issue be? Some command output is below; let me know...

/var/ossec/bin/wazuh-control status

wazuh-clusterd not running...
wazuh-modulesd is running...
wazuh-monitord is running...
wazuh-logcollector is running...
wazuh-remoted is running...
wazuh-syscheckd is running...
wazuh-analysisd is running...
wazuh-maild not running...
wazuh-execd is running...
wazuh-db is running...
wazuh-authd is running...
wazuh-agentlessd not running...
wazuh-integratord is running...
wazuh-dbd not running...
wazuh-csyslogd not running...
wazuh-apid is running...

cat /var/ossec/logs/ossec.log | grep -i -E "error|warn|fail"
2025/03/06 18:37:06 indexer-connector: WARNING: IndexerConnector initialization failed for index 'wazuh-states-vulnerabilities-wazuh-1', retrying until the connection is successful.
2025/03/06 19:39:03 indexer-connector: WARNING: IndexerConnector initialization failed for index 'wazuh-states-vulnerabilities-wazuh-1', retrying until the connection is successful.


filebeat test output
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.3
    dial up... OK
  talk to server... OK
  version: 7.10.2


cat /usr/share/wazuh-dashboard/data/wazuh/logs/wazuhapp.log | grep -i -E "error|warn"
{"data":{"config":{"data":"{}","method":"get","params":{},"url":"https://127.0.0.1:55000/manager/stats/analysisd"},"message":"connect ECONNREFUSED 127.0.0.1:55000","stack":"Error: connect ECONNREFUSED 127.0.0.1:55000\n    at Function.AxiosError.from (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/core/AxiosError.js:89:14)\n    at RedirectableRequest.handleRequestError (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/adapters/http.js:606:25)\n    at RedirectableRequest.emit (node:events:513:28)\n    at RedirectableRequest.emit (node:domain:489:12)\n    at ClientRequest.eventHandlers.<computed> (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/follow-redirects/index.js:14:24)\n    at ClientRequest.emit (node:events:513:28)\n    at ClientRequest.emit (node:domain:489:12)\n    at TLSSocket.socketErrorListener (node:_http_client:502:9)\n    at TLSSocket.emit (node:events:513:28)\n    at TLSSocket.emit (node:domain:489:12)\n    at emitErrorNT (node:internal/streams/destroy:151:8)\n    at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n    at processTicksAndRejections (node:internal/process/task_queues:82:21)"},"date":"2025-03-06T20:50:00.506Z","level":"info","location":"Cron-scheduler"}
{"data":{"config":{"data":"{}","method":"get","params":{},"url":"https://127.0.0.1:55000/manager/stats/remoted"},"message":"Request failed with status code 500","stack":"AxiosError: Request failed with status code 500\n    at settle (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/core/settle.js:19:12)\n    at IncomingMessage.handleStreamEnd (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/adapters/http.js:585:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at processTicksAndRejections (node:internal/process/task_queues:82:21)"},"date":"2025-03-06T20:56:11.256Z","level":"info","location":"Cron-scheduler"}
{"data":{"config":{"data":"{}","method":"get","params":{},"url":"https://127.0.0.1:55000/manager/stats/analysisd"},"message":"Request failed with status code 500","stack":"AxiosError: Request failed with status code 500\n    at settle (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/core/settle.js:19:12)\n    at IncomingMessage.handleStreamEnd (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/adapters/http.js:585:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at processTicksAndRejections (node:internal/process/task_queues:82:21)"},"date":"2025-03-06T20:56:11.273Z","level":"info","location":"Cron-scheduler"}
{"data":{"config":{"data":"{}","method":"get","params":{},"url":"https://127.0.0.1:55000/manager/stats/analysisd"},"message":"Request failed with status code 500","stack":"AxiosError: Request failed with status code 500\n    at settle (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/core/settle.js:19:12)\n    at IncomingMessage.handleStreamEnd (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/adapters/http.js:585:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at processTicksAndRejections (node:internal/process/task_queues:82:21)"},"date":"2025-03-06T21:00:14.680Z","level":"info","location":"Cron-scheduler"}
{"data":{"config":{"data":"{}","method":"get","params":{},"url":"https://127.0.0.1:55000/manager/stats/remoted"},"message":"Request failed with status code 500","stack":"AxiosError: Request failed with status code 500\n    at settle (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/core/settle.js:19:12)\n    at IncomingMessage.handleStreamEnd (/usr/share/wazuh-dashboard/plugins/wazuh/node_modules/axios/lib/adapters/http.js:585:11)\n    at IncomingMessage.emit (node:events:525:35)\n    at IncomingMessage.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at processTicksAndRejections (node:internal/process/task_queues:82:21)"},"date":"2025-03-06T21:00:14.985Z","level":"info","location":"Cron-scheduler"}
{"date":"2025-03-06T21:00:15.625Z","level":"error","location":"monitoring:getApiInfo","message":"Request failed with status code 500"}

journalctl -u wazuh-dashboard | grep -iE "err|warn"
Mar 06 19:39:48 hostname opensearch-dashboards[14441]: {"type":"log","@timestamp":"2025-03-06T22:39:48Z","tags":["error","opensearch","data"],"pid":14441,"message":"[search_phase_execution_exception]: all shards failed"}
Mar 06 19:39:50 hostname opensearch-dashboards[14441]: {"type":"log","@timestamp":"2025-03-06T22:39:50Z","tags":["error","opensearch","data"],"pid":14441,"message":"[search_phase_execution_exception]: all shards failed"}
Mar 06 19:39:53 hostname opensearch-dashboards[14441]: {"type":"log","@timestamp":"2025-03-06T22:39:53Z","tags":["error","opensearch","data"],"pid":14441,"message":"[search_phase_execution_exception]: all shards failed"}
Mar 06 19:40:23 hostname opensearch-dashboards[17090]: {"type":"log","@timestamp":"2025-03-06T22:40:23Z","tags":["error","opensearch","data"],"pid":17090,"message":"[resource_already_exists_exception]: index [.kibana_2/31x7bynDQLySiE1AypNv5Q] already exists"}
Mar 06 19:40:23 hostname opensearch-dashboards[17090]: {"type":"log","@timestamp":"2025-03-06T22:40:23Z","tags":["warning","savedobjects-service"],"pid":17090,"message":"Unable to connect to OpenSearch. Error: resource_already_exists_exception: [resource_already_exists_exception] Reason: index [.kibana_2/31x7bynDQLySiE1AypNv5Q] already exists"}
Mar 06 19:40:23 hostname opensearch-dashboards[17090]: {"type":"log","@timestamp":"2025-03-06T22:40:23Z","tags":["warning","savedobjects-service"],"pid":17090,"message":"Another OpenSearch Dashboards instance appears to be migrating the index. Waiting for that migration to complete. If no other OpenSearch Dashboards instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting OpenSearchDashboards."}
Mar 06 19:42:30 hostname opensearch-dashboards[17959]: {"type":"log","@timestamp":"2025-03-06T22:42:30Z","tags":["error","opensearch","data"],"pid":17959,"message":"[resource_already_exists_exception]: index [.kibana_2/31x7bynDQLySiE1AypNv5Q] already exists"}
Mar 06 19:42:30 hostname opensearch-dashboards[17959]: {"type":"log","@timestamp":"2025-03-06T22:42:30Z","tags":["warning","savedobjects-service"],"pid":17959,"message":"Unable to connect to OpenSearch. Error: resource_already_exists_exception: [resource_already_exists_exception] Reason: index [.kibana_2/31x7bynDQLySiE1AypNv5Q] already exists"}
Mar 06 19:42:30 hostname opensearch-dashboards[17959]: {"type":"log","@timestamp":"2025-03-06T22:42:30Z","tags":["warning","savedobjects-service"],"pid":17959,"message":"Another OpenSearch Dashboards instance appears to be migrating the index. Waiting for that migration to complete. If no other OpenSearch Dashboards instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting OpenSearchDashboards."}


cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -iE 'WARN|ERR|FAIL'
[2025-03-06T19:39:50,908][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}
org.opensearch.action.search.SearchPhaseExecutionException: all shards failed
        at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:770) [opensearch-2.16.0.jar:2.16.0]
        at org.opensearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:548) [opensearch-2.16.0.jar:2.16.0]
[2025-03-06T19:39:51,166][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-03-06T19:39:51,927][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-03-06T19:39:52,652][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-03-06T19:39:53,418][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}
org.opensearch.action.search.SearchPhaseExecutionException: all shards failed
        at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:770) [opensearch-2.16.0.jar:2.16.0]
        at org.opensearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:548) [opensearch-2.16.0.jar:2.16.0]
[2025-03-06T19:39:53,819][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-03-06T19:39:53,968][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-03-06T19:39:54,063][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set


Regards,

German


Jeremias Ignacio Posse

Mar 6, 2025, 7:44:59 PM
to Wazuh | Mailing List

Hi German,

It looks like your Wazuh Dashboard is not fully functional after the upgrade. Based on the logs and status checks, here are some possible solutions:

  1. Check Wazuh API status

    • Run: /var/ossec/bin/wazuh-control status (in Wazuh 4.x, wazuh-apid runs under the wazuh-manager service rather than as its own systemd unit)
    • If it's not running, restart the manager: systemctl restart wazuh-manager
  2. Verify Wazuh Manager API connectivity

  3. Restart all services in the correct order

    • Try restarting in this order:

      systemctl restart wazuh-indexer
      systemctl restart wazuh-manager
      systemctl restart filebeat
      systemctl restart wazuh-dashboard
  4. Check OpenSearch/Wazuh-Indexer cluster health

  5. Fix OpenSearch Dashboard migration issue

    • Your logs indicate an index conflict:

      [resource_already_exists_exception]: index [.kibana_2] already exists

    • Delete the leftover migration index and restart the dashboard:

      curl -XDELETE -k -u kibanaserver:pass "https://localhost:9200/.kibana_2"
      systemctl restart wazuh-dashboard
  6. Ensure correct permissions

    • Verify ownership of the dashboard and indexer data directories:

      chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard
      chown -R wazuh-indexer:wazuh-indexer /var/lib/wazuh-indexer
  7. Check JVM settings

    • Ensure the /etc/wazuh-indexer/jvm.options file is correctly configured for memory limits and JVM options.
    • Restart Wazuh Indexer if changes are made.
  8. Look for additional errors in logs

    • /var/ossec/logs/ossec.log
    • /usr/share/wazuh-dashboard/data/wazuh/logs/wazuhapp.log
    • /var/log/wazuh-indexer/wazuh-cluster.log (the file is named after the cluster, wazuh-cluster by default)
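
The service, API-port, and cluster-health checks above can be sketched as one quick pass. The kibanaserver:pass credentials below are just the placeholder from your own curl command, so substitute your real ones:

```shell
#!/bin/sh
# Quick pass over checks 1-4; credentials are placeholders -- replace them.
CRED="kibanaserver:pass"

# Service status (checks 1 and 3)
for svc in wazuh-indexer wazuh-manager filebeat wazuh-dashboard; do
  if systemctl is-active --quiet "$svc" 2>/dev/null; then
    echo "$svc: active"
  else
    echo "$svc: NOT active"
  fi
done

# Wazuh API port (check 2) -- even an HTTP 401 here means the API answers
curl -sk -o /dev/null -w "API :55000 -> HTTP %{http_code}\n" \
  "https://localhost:55000/" || echo "API :55000 not reachable"

# Indexer cluster health (check 4) -- yellow is normal on a single node
curl -sk -u "$CRED" "https://localhost:9200/_cluster/health?pretty" \
  || echo "indexer :9200 not reachable"
```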

Let me know if any of these steps help or if you need further assistance.

German DiCasas

Mar 7, 2025, 8:07:05 AM
to Wazuh | Mailing List
Hi Jeremias,

Point 5 worked for me. Can you explain what curl -XDELETE -k -u kibanaserver:pass "https://localhost:9200/.kibana_2" does, and what error could have occurred in the upgrade from 4.8.2 to 4.10.1?

On the other hand, I do have entries in api.log, but the service status is this:

systemctl status wazuh-apid
Unit wazuh-apid.service could not be found.

Thanks for fixing my Wazuh after the update.

Regards

German

Jeremias Ignacio Posse

Mar 10, 2025, 8:08:33 AM
to Wazuh | Mailing List

Hi German,

Glad point 5 worked for you!

The command:

curl -XDELETE -k -u kibanaserver:pass "https://localhost:9200/.kibana_2"

deletes the .kibana_2 index. During an upgrade, OpenSearch Dashboards migrates its saved objects by creating a new .kibana_N index and re-pointing the .kibana alias to it once the migration completes. If the dashboard is interrupted mid-migration (for example by a restart), the half-built .kibana_2 is left behind, and every subsequent start fails with resource_already_exists_exception — exactly what your journalctl output showed. Deleting the leftover index lets the migration run again from a clean state.
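
If it helps for next time: before deleting, you can confirm that the .kibana alias does not already point at .kibana_2. Same placeholder credentials as before, so substitute your own:

```shell
#!/bin/sh
# Placeholder credentials -- substitute your own.
CRED="kibanaserver:pass"

# Show which concrete index the .kibana alias points to; the stale one is
# whichever .kibana_N index is NOT listed here.
curl -sk -u "$CRED" "https://localhost:9200/_cat/aliases/.kibana?v" \
  || echo "alias lookup failed"

# The actual deletion, kept commented out as a safety net:
# curl -XDELETE -k -u "$CRED" "https://localhost:9200/.kibana_2"
```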

Regarding the upgrade from 4.8.2 to 4.10.1, common issues include:

  • Index migration conflicts
  • Configuration changes
  • Permission issues
  • Missing services

Your system doesn't have a `wazuh-apid.service` unit. Try:

systemctl daemon-reload
systemctl enable --now wazuh-apid
systemctl restart wazuh-apid
systemctl status wazuh-apid

Also, check logs:

cat /var/ossec/logs/api.log
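
Grepping it the same way as the other logs should surface anything relevant (path taken from your earlier commands):

```shell
#!/bin/sh
# Show the last 20 error/warning lines from the API log, if it exists.
LOG=/var/ossec/logs/api.log
if [ -r "$LOG" ]; then
  grep -iE "error|warn|fail" "$LOG" | tail -n 20
else
  echo "no readable $LOG on this host"
fi
```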

Let me know what errors you find.

Best,
Jeremias

German DiCasas

Mar 10, 2025, 9:31:53 AM
to Wazuh | Mailing List
Jeremias,

Wazuh is still working, but the wazuh-apid unit is still not found.


systemctl status wazuh-apid
Unit wazuh-apid.service could not be found.
root@hostname:/var/ossec/etc/rules# systemctl enable --now wazuh-apid
Failed to enable unit: Unit file wazuh-apid.service does not exist.

As for cat /var/ossec/logs/api.log, the API seems to be working, as in my last mail:

...
2025/03/10 10:30:00 INFO: wazuh-wui 127.0.0.1 "GET /cluster/status" with parameters {} and body {} done in 0.054s: 200
2025/03/10 10:30:00 INFO: wazuh-wui 127.0.0.1 "GET /agents" with parameters {"offset": "0", "limit": "1", "q": "id!=000"} and body {} done in 0.020s: 200
2025/03/10 10:30:00 INFO: wazuh-wui 127.0.0.1 "GET /agents" with parameters {"offset": "0", "limit": "500", "q": "id!=000"} and body {} done in 0.027s: 200

Regards,

German

Jeremias Ignacio Posse

Apr 3, 2025, 4:12:05 AM
to Wazuh | Mailing List

Hi German,

Sorry for the delay! Are you still having issues?

A missing `wazuh-apid.service` unit is actually expected: since Wazuh 4.x, the API ships inside the wazuh-manager package and runs as the wazuh-apid daemon under the manager, not as a separate systemd service or package. That also matches what you posted earlier:

  1. Check that the API daemon is running:

    ps aux | grep wazuh-apid
    /var/ossec/bin/wazuh-control status

    Your `wazuh-control status` output already showed `wazuh-apid is running`, and api.log shows requests completing with 200, so the API is healthy.

  2. If you ever need to restart it, restart the manager, which restarts the API with it:

    systemctl restart wazuh-manager
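
A direct way to check that the API itself answers is to authenticate against it on port 55000. The wazuh-wui:wazuh-wui pair below is only a placeholder default, so use your real API credentials:

```shell
#!/bin/sh
# Placeholder credentials -- replace with your real API user.
API_CRED="wazuh-wui:wazuh-wui"

# Request a JWT from the API; getting a token back means wazuh-apid is up,
# whatever systemd thinks about the unit name.
TOKEN=$(curl -sk -u "$API_CRED" -X POST \
  "https://localhost:55000/security/user/authenticate?raw=true") || true
if [ -n "$TOKEN" ]; then
  echo "API answered, token acquired"
else
  echo "no token -- API not reachable or credentials wrong"
fi
```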

Let me know if you’re still having trouble!

Best,
Jeremias
