"Wazuh dashboard server is not ready yet" since (probably) the 4.4 automatic update


Franck Ehret

Mar 31, 2023, 7:16:37 AM
to Wazuh mailing list
Hi there,

I've been getting the message "Wazuh dashboard server is not ready yet" since at least today; I just noticed everything was automatically updated to 4.4 (automatic updates are something I want).

My system has 3 servers, as follows:
- srv01 - Frontend
- srv03 - Wazuh Manager
- srv05 - Elastic Search Open Distro

When I run the following commands from srv01 against srv05, I get responses back:

[xxxx@srv01 ~]# curl -X GET "https://srv05:9200/_cluster/health?pretty" -u admin:mypass  -k
{
  "cluster_name" : "wazuh-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 937,
  "active_shards" : 937,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 29,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 96.99792960662525
}
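Side note on the yellow status: I believe the reason for an unassigned shard can be checked with the allocation explain API; with no request body it reports on one of the currently unassigned shards. Something like:

curl -X GET "https://srv05:9200/_cluster/allocation/explain?pretty" -u admin:mypass -k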


[xxxx@srv01 ~]# curl -X GET "https://srv05:9200" -u admin:mypass -k
{
  "name" : "srv05",
  "cluster_name" : "wazuh-cluster",
  "cluster_uuid" : "myuuid",
  "version" : {
    "number" : "7.10.2",
    "build_type" : "rpm",
    "build_hash" : "f2f809ea280ffba217451da894a5899f1cec02ab",
    "build_date" : "2022-12-12T22:17:42.341124910Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}


I've attached part of the /var/log/wazuh-indexer/wazuh-cluster.log logs. I found some errors around 5:00 AM, which might be related to srv03 updating.

Since then, I have plenty of entries like these in the file:
[2023-03-31T13:12:33,312][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2022.12.03
[2023-03-31T13:12:33,313][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2022.12.03
[2023-03-31T13:12:33,314][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2022.10.11
[2023-03-31T13:12:33,314][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2022.10.11
[2023-03-31T13:12:33,942][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 80725 miliseconds for next execution of job wazuh-alerts-4.x-2023.02.11
[2023-03-31T13:12:34,385][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2023.02.11
[2023-03-31T13:12:34,386][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2023.02.11
[2023-03-31T13:12:34,440][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 162721 miliseconds for next execution of job wazuh-alerts-4.x-2023.02.01
[2023-03-31T13:12:34,446][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 19541 miliseconds for next execution of job wazuh-monitoring-2023.6w
[2023-03-31T13:12:34,474][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 170882 miliseconds for next execution of job wazuh-statistics-2023.52w
[2023-03-31T13:12:34,953][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 154012 miliseconds for next execution of job wazuh-alerts-4.x-2023.01.04
[2023-03-31T13:12:34,991][INFO ][o.o.j.s.JobScheduler     ] [srv05] Will delay 103911 miliseconds for next execution of job wazuh-alerts-4.x-2023.03.14
[2023-03-31T13:12:35,422][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2023.02.01
[2023-03-31T13:12:35,422][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2023.02.01
[2023-03-31T13:12:35,423][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-monitoring-2023.6w
[2023-03-31T13:12:35,424][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-monitoring-2023.6w
[2023-03-31T13:12:35,425][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-statistics-2023.52w
[2023-03-31T13:12:35,425][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-statistics-2023.52w
[2023-03-31T13:12:35,426][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2023.01.04
[2023-03-31T13:12:35,426][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2023.01.04
[2023-03-31T13:12:35,427][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Executing attempt_transition_step for wazuh-alerts-4.x-2023.03.14
[2023-03-31T13:12:35,427][INFO ][o.o.i.i.ManagedIndexRunner] [srv05] Finished executing attempt_transition_step for wazuh-alerts-4.x-2023.03.14


So should I wait, or is there something I have to do to make this work again?
The Wazuh manager seems to work fine, as I'm still getting email alerts.

Thanks in advance and best regards
Franck
Attachment: wazuh-cluster extract.txt

Franck Ehret

Apr 1, 2023, 4:43:40 AM
to Wazuh mailing list
Hi there,

Found the following in the journal of the Wazuh dashboard (I tried to reinstall it).
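For reference, I believe the journal can be pulled with something like this, assuming the default wazuh-dashboard systemd unit name:

journalctl -u wazuh-dashboard --no-pager -n 100

The relevant entries, newest first: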

{"type":"log","@timestamp":"2023-04-01T08:30:02Z","tags":["warning","savedobjects-service"],"pid":998,"message":"Another OpenSearch Dashboards instance appears to be migrating the index. Waiting for that migration to complete. If no other OpenSearch Dashboards instance is attempting migrations, you can get past this message by deleting index .kibana_3 and restarting OpenSearchDashboards."}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:02Z","tags":["warning","savedobjects-service"],"pid":998,"message":"Unable to connect to OpenSearch. Error: resource_already_exists_exception: [resource_already_exists_exception] Reason: index [.kibana_3/oeoJ8yNLRGKwT6-ifkjJyg] already exists"}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:02Z","tags":["error","opensearch","data"],"pid":998,"message":"[resource_already_exists_exception]: index [.kibana_3/oeoJ8yNLRGKwT6-ifkjJyg] already exists"}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:02Z","tags":["info","savedobjects-service"],"pid":998,"message":"Creating index .kibana_3."}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:02Z","tags":["info","savedobjects-service"],"pid":998,"message":"Detected mapping change in \"properties.visualization-visbuilder\""}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:01Z","tags":["info","savedobjects-service"],"pid":998,"message":"Starting saved objects migrations"}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:01Z","tags":["info","savedobjects-service"],"pid":998,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:00Z","tags":["info","plugins-system"],"pid":998,"message":"Setting up [45] plugins: [alertingDashboards,usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,legacyExport,embeddable,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,reportsDashboards,indexManagementDashboards,dashboard,visualizations,visTypeVega,visTypeTable,visTypeTimeline,timeline,visTypeMarkdown,visBuilder,tileMap,regionMap,customImportMapDashboards,inputControlVis,ganttChartDashboards,visualize,notificationsDashboards,bfetch,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement,securityDashboards,wazuh]"}
opensearch-dashboards
10:30
{"type":"log","@timestamp":"2023-04-01T08:30:00Z","tags":["warning","config","deprecation"],"pid":998,"message":"\"opensearch.requestHeadersWhitelist\" is deprecated and has been replaced by \"opensearch.requestHeadersAllowlist\""}
opensearch-dashboards
10:29
{"type":"log","@timestamp":"2023-04-01T08:29:59Z","tags":["info","plugins-service"],"pid":998,"message":"Plugin \"visTypeXy\" is disabled."}
opensearch-dashboards
10:29
{"type":"log","@timestamp":"2023-04-01T08:29:59Z","tags":["info","plugins-service"],"pid":998,"message":"Plugin \"dataSource\" is disabled."}
opensearch-dashboards
10:29
{"type":"log","@timestamp":"2023-04-01T08:29:59Z","tags":["info","plugins-service"],"pid":998,"message":"Plugin \"dataSourceManagement\" has been disabled since the following direct or transitive dependencies are missing or disabled: [dataSource]"}
opensearch-dashboards
10:29
Started wazuh-dashboard.

It seems something is not working as expected on the OpenSearch side. Any help?


Selu López

Apr 3, 2023, 5:05:48 AM
to Wazuh mailing list
Hi Franck,

It seems that similar errors have been reported on other occasions in other threads on this list; they may be useful for fixing the problem.

I would recommend reviewing which .kibana indices you have:
curl -X GET "https://localhost:9200/_cat/indices?pretty" -u <user>:<password> -k

And then delete the versioned .kibana_N indices and restart, as suggested in your error output:
curl -X DELETE "https://localhost:9200/.kibana_3" -u <user>:<password> -k
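Followed by a restart of the dashboard service so that OpenSearch Dashboards recreates the index, for example (assuming the standard systemd unit name):

systemctl restart wazuh-dashboard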

Please note that, as stated in some of those threads, deleting the .kibana_N indices could lead to custom dashboards or visualizations being deleted.
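If you do have custom objects that you want to keep, I believe they can be exported beforehand through the Dashboards saved objects API (a sketch; adjust the dashboard URL, credentials and object types to your setup):

curl -X POST "https://<wazuh_dashboard_ip>/api/saved_objects/_export" -u <user>:<password> -k -H 'osd-xsrf: true' -H 'Content-Type: application/json' -d'
{
  "type" : ["dashboard", "visualization", "index-pattern"]
}
' -o saved_objects_backup.ndjson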

I hope this helps you solve the problem. Let me know otherwise.

Regards,
Selu.


Franck Ehret

Apr 3, 2023, 8:13:03 AM
to Wazuh mailing list
Hello Seul,

Thanks a lot for your feedback; deleting the .kibana indices did the trick to get back in business. :-)
(I don't use custom dashboards, so it was easy for me to decide!)

But before trying this solution, I did notice yesterday that my "cluster" was yellow because several shards were in the UNASSIGNED state.

Here is today's sample (for all the others I could find, I manually removed all the unnecessary replicas):

index                                                   shard prirep state      node       unassigned.reason
.opendistro-ism-managed-index-history-2023.04.01-000254 0     r      UNASSIGNED            INDEX_CREATED
.opendistro-ism-managed-index-history-2023.04.02-000255 0     r      UNASSIGNED            INDEX_CREATED
wazuh-alerts-4.x-2022.04.20                             2     p      STARTED    srv05
wazuh-alerts-4.x-2022.04.20                             1     p      STARTED    srv05
wazuh-alerts-4.x-2022.04.20                             0     p      STARTED    srv05 
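For reference, a listing like the one above can be produced with the _cat/shards API, something like:

curl -X GET "https://srv05:9200/_cat/shards?v&h=index,shard,prirep,state,node,unassigned.reason" -u admin:mypass -k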


Those ".opendistro-ism-managed-index-history-" are in unassigned mode because they try to have a replicate but I only have one node (and it will remain so).
It was not the case before March, the only unassigned shards were during the last weeks. That might coincide with an update (probably Opensearch 2.X ?) and a shard naming convention change.

Can you confirm this behavior and help me create (again) the necessary policy to avoid it? Thanks in advance. :-)

Kind regards
Franck

Franck Ehret

Apr 3, 2023, 8:14:29 AM
to Wazuh mailing list
@Selu, sorry for the misspelling... my French autocorrect inverted the last two letters (seul means solo in French)! :-)

Selu López

Apr 4, 2023, 4:42:53 AM
to Wazuh mailing list
Hi Franck,

No problem about the misspelling. ;-) Setting the number of replicas to 0 for the ISM history indices should work. To do so, you can try running the following:

curl -X PUT "https://<wazuh_indexer_ip>:9200/_cluster/settings" -u <user>:<password> -k -H 'Content-Type: application/json' -d'
{
  "persistent" : {
    "opendistro" : {
      "index_state_management" : {
        "history" : {
          "number_of_replicas" : "0"
        }
      }
    }
  }
}
'
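As far as I know, the setting above applies to the ISM history indices created from now on. For the history indices that already exist and still show unassigned replicas, an index-level update along these lines should clear them (a sketch; check that the pattern matches your indices):

curl -X PUT "https://<wazuh_indexer_ip>:9200/.opendistro-ism-managed-index-history-*/_settings" -u <user>:<password> -k -H 'Content-Type: application/json' -d'
{
  "index" : {
    "number_of_replicas" : 0
  }
}
'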


Kind regards,
Selu.

Franck Ehret

Apr 5, 2023, 4:23:34 AM
to Wazuh mailing list
Thanks a lot Selu, all solved! ;-)