Can't see alerts in dashboard


Gabriel

Jul 31, 2025, 7:28:58 AM
to Wazuh | Mailing List
I have alerts in alerts.json and archives.json, and all services (indexer, manager, dashboard, filebeat) run without issues according to journalctl. filebeat test output reports all OK, and there is no useful information in /var/ossec/ossec.log, /var/log/wazuh-indexer/wazuh-indexer-cluster.log, or /var/log/filebeat/. Also, only 10% of my disk is in use, so it's not about disk space.

I don't know where to look or how to fix the problem. All services seem to be working properly, but I'm just not getting alerts.

Md. Nazmur Sakib

Jul 31, 2025, 7:35:04 AM
to Wazuh | Mailing List
Hi Gabriel,

Regarding the alerts missing from the Dashboard:

Try restarting the Wazuh Indexer and Filebeat services with the following commands:

systemctl restart wazuh-indexer

systemctl restart filebeat


After restarting, check if the issue is resolved. If you still don't see alerts for security events, please share the output of the following commands:

filebeat test output


Share the output of the cluster health check (use 127.0.0.1 if running locally, or the indexer IP otherwise):
curl -k -u admin:<password> -XGET "https://<indexer-ip>:9200/_cluster/health?pretty"


Also, share the logs from the following log files.

tail /var/ossec/logs/alerts/alerts.json
cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
cat /var/log/filebeat/filebeat | grep -i -E "error|warn"
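For reference, the filtering the grep commands above perform can be sketched in Python as well — a minimal sketch (the helper name and sample lines are illustrative, not part of Wazuh):

```python
import re

def find_issues(log_text):
    """Return lines containing 'error' or 'warn' (case-insensitive),
    mirroring: grep -i -E "error|warn"."""
    pattern = re.compile(r"error|warn", re.IGNORECASE)
    return [line for line in log_text.splitlines() if pattern.search(line)]

sample = (
    "2025-07-31T13:22:37 ERROR Failed to connect\n"
    "2025-07-31T13:22:38 INFO connection restored\n"
    "2025-07-31T13:22:39 WARN retrying\n"
)
print(find_issues(sample))
```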




Let me know if this resolves the issue.

Gabriel

Jul 31, 2025, 12:05:05 PM
to Wazuh | Mailing List
I already did all of this. I restarted services and even rebooted server.

As I said, filebeat test output is OK:

elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2

Here is cluster health:

{
  "cluster_name" : "wazuh-indexer-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 424,
  "active_shards" : 424,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 33,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 92.77899343544857
}
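For what it's worth, the yellow status follows directly from the numbers in this snapshot: with no relocating or initializing shards, active_shards_percent is just active_shards / (active_shards + unassigned_shards). A minimal sketch (the helper is illustrative, not a Wazuh API):

```python
def active_shards_percent(health):
    """Percentage of assigned shards, as reported by _cluster/health.
    Assumes no relocating/initializing shards, as in the snapshot above."""
    active = health["active_shards"]
    unassigned = health["unassigned_shards"]
    return 100.0 * active / (active + unassigned)

health = {"active_shards": 424, "unassigned_shards": 33}
print(active_shards_percent(health))  # matches the 92.778...% reported above
```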

And here is the output of the logs (I filtered for unique entries):

Filebeat:

/var/log/filebeat/filebeat.1:2025-07-31T13:22:37.501+0200       ERROR   [publisher_pipeline_output]     pipeline/output.go:154  Failed to connect to backoff(elasticsearch(https://127.0.0.1:9200)): 503 Service Unavailable: OpenSearch Security not initialized.
/var/log/filebeat/filebeat.2:2025-07-31T13:10:27.382+0200       ERROR   instance/beat.go:956    Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response: .
/var/log/filebeat/filebeat.6:2025-07-31T06:45:58.046+0200       ERROR   [elasticsearch] elasticsearch/client.go:224     failed to perform any bulk index operations: Post "https://127.0.0.1:9200/_bulk": dial tcp 127.0.0.1:9200: connect: connection refused

In ossec.log there are no errors or warnings associated with the problem.

Indexer errors:

Failed to get ISM policies with templates: Failed to execute phase [query], all shards failed
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@240eee6a] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@253c1f81] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@2ef31f9b] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@490ece1d] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@4c684fd5] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@523088aa] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@735da889] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
Failure No shard available for [org.opensearch.action.get.MultiGetShardRequest@7d377058] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@560cb258]]
get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@7111313e]]
get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@71753a17]]
get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@c9a01a2]]
[indexer] Config override setting update called with empty string. Ignoring.
[indexer] Default endpoint could not be created, auditlog will not work properly.
[indexer] Error while sweeping shard [.opendistro-ism-config][0], error message: all shards failed
[indexer] Error while sweeping shard [.opendistro-reports-definitions][0], error message: all shards failed
[indexer] Failed to initialize LogType config index and builtin log types
[indexer] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[indexer] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[indexer] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[indexer] Java vector incubator module is not readable. For optimal vector performance, pass '--add-modules jdk.incubator.vector' to enable Vector API.
[indexer] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.security.manager=allow, -Djava.locale.providers=SPI,COMPAT, -Xms20g, -Xmx20g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/var/lib/wazuh-indexer/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.security.manager=allow, -Djava.util.concurrent.ForkJoinPool.common.threadFactory=org.opensearch.secure_sm.SecuredForkJoinWorkerThreadFactory, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -XX:MaxDirectMemorySize=10737418240, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=deb, -Dopensearch.bundled_jdk=true]
[indexer] Master key is a required config for using create and update datasource APIs. Please set plugins.query.datasources.encryption.masterkey config in opensearch.yml in all the cluster nodes. More details can be found here: https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#master-key-config-for-encrypting-credential-information
[indexer] message: index [.opensearch-observability/HL_rBeKMQq2g2ZT7Xg3pxQ] already exists
[indexer] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
[indexer] No default storage available, audit log may not work properly. Please check configuration.
[indexer] Not yet initialized (you may need to run securityadmin)
[indexer] path: /.kibana/_count, params: {index=.kibana}
[indexer] path: /.kibana/_search, params: {rest_total_hits_as_int=true, size=1000, index=.kibana, from=0}
[indexer] unexpected failure while sending request [internal:cluster/shard/started] to [{indexer}{3RhkIGDqQZKTBjsnnUqpgg}{8MmGZDBJTwGkfM6EDVtp6w}{127.0.0.1}{127.0.0.1:9300}{dimr}{shard_indexing_pressure_enabled=true}] for shard entry [StartedShardEntry{shardId [[wazuh-alerts-4.x-2025.03.07][0]], allocationId [szIjbyPeScWAbseB_gbjdQ], primary term [17], message [after existing store recovery; bootstrap_history_uuid=false]}]
[indexer] unexpected failure while sending request [internal:cluster/shard/started] to [{indexer}{3RhkIGDqQZKTBjsnnUqpgg}{8MmGZDBJTwGkfM6EDVtp6w}{127.0.0.1}{127.0.0.1:9300}{dimr}{shard_indexing_pressure_enabled=true}] for shard entry [StartedShardEntry{shardId [[wazuh-alerts-4.x-2025.04.12][1]], allocationId [AFGUx8b5Qq-KddJcwRi0-w], primary term [17], message [after existing store recovery; bootstrap_history_uuid=false]}]
[indexer] WARNING: A restricted method in java.lang.foreign.Linker has been called
[indexer] WARNING: java.lang.foreign.Linker::downcallHandle has been called by the unnamed module
[indexer] WARNING: Use --enable-native-access=ALL-UNNAMED to avoid a warning for this module

 


On Thursday, July 31, 2025 at 1:35:04 PM UTC+2, Md. Nazmur Sakib wrote:

Md. Nazmur Sakib

Aug 4, 2025, 8:44:52 AM
to Wazuh | Mailing List
I am looking into your logs and will send a response by tomorrow.

Md. Nazmur Sakib

Aug 5, 2025, 11:42:57 AM
to Wazuh | Mailing List

The full trace is not present in the cluster.log you've shared. Could you please run the following commands and share the output?

- systemctl restart wazuh-indexer

- systemctl status wazuh-indexer

- journalctl -xeu wazuh-indexer.service


(It would help if you could attach the full /var/log/wazuh-indexer/wazuh-cluster.log as a text file; the entries corresponding to this last restart are enough.)


I can see a number of unassigned shards. The issue may be due to some unassigned system shards.


Check the unassigned shards and the reason they are unassigned:

curl -k -XGET -u admin:<admin_user’s_PASSWORD> "https://127.0.0.1:9200/_cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state" | grep UNASSIGNED


curl -k -XGET -u admin:<admin_user’s_PASSWORD> "https://127.0.0.1:9200//_cluster/allocation/explain?pretty"
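To make sense of the _cat/shards output, one quick way to group the unassigned shards by reason — a minimal sketch assuming the column order requested in the command above (index, shard, prirep, state, node, unassigned.reason; the helper name and sample lines are illustrative):

```python
from collections import Counter

def unassigned_reasons(cat_shards_output):
    """Count unassigned shards per unassigned.reason in _cat/shards text output."""
    reasons = Counter()
    for line in cat_shards_output.splitlines():
        fields = line.split()
        if "UNASSIGNED" in fields:
            # The node column is empty for unassigned shards,
            # so unassigned.reason ends up as the last field.
            reasons[fields[-1]] += 1
    return reasons

sample = """\
.opendistro_security 0 p UNASSIGNED ALLOCATION_FAILED
wazuh-alerts-4.x-2025.03.07 1 r UNASSIGNED CLUSTER_RECOVERED
wazuh-alerts-4.x-2025.03.07 0 p STARTED node-1
"""
print(unassigned_reasons(sample))
```

If any reason is ALLOCATION_FAILED, the allocation/explain output above should say why the shard could not be assigned.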


Looking forward to your update on the issue.