Wazuh Discover dashboard empty (wazuh-alerts-* missing)


john

Oct 9, 2025, 7:33:14 AM
to Wazuh | Mailing List

After restarting the Wazuh manager, indexer, and dashboard, the Discover view in the Wazuh UI is empty. The index pattern wazuh-alerts-* no longer shows any data.

Here’s what I’ve observed:

Environment

  • Wazuh Manager: working, generating alerts

  • Filebeat: active and running

  • Wazuh Indexer: reachable at https://127.0.0.1:9200

  • Wazuh Dashboard: loads fine, but shows no data

Has anyone seen the same issue? I would really appreciate any help.



Stuti Gupta

Oct 9, 2025, 8:16:03 AM
to Wazuh | Mailing List

Hi 

As you said, the Wazuh manager is generating alerts. Can you please also check /var/ossec/logs/alerts/alerts.json to confirm that the wazuh-manager is receiving current logs.
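For example, a quick way to confirm that fresh alerts are being written (assuming the default installation path) is:

tail -f /var/ossec/logs/alerts/alerts.json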

If it is, can you please run the following command:
filebeat test output

The output should look like this:
elasticsearch: https://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
  version: 7.10.2

If you see any errors, the issue can be related to configuration, connectivity, certificates, etc. In that case, you can share the error you are getting and also check the Filebeat logs:

cat /var/log/filebeat/filebeat | grep -i -E "error|warn"
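On systemd-based installations you can also pull the same information from the journal (assuming Filebeat runs as the standard filebeat service):

journalctl -u filebeat --no-pager | grep -i -E "error|warn"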

If this is functioning correctly, it indicates that both the Wazuh manager and Filebeat are operating smoothly, and Filebeat is successfully forwarding logs to the Wazuh indexer. Next, check the status of the Wazuh indexer to ensure it’s active:

systemctl status wazuh-indexer

Check the cluster health with:

curl -XGET -k -u user:pass "https://localhost:9200/_cluster/health"

Or on the web interface, go to Indexer management → Dev Tools and run this command:

GET _cluster/health

Check the number of shards, because if the total number of shards crosses the per-node limit (default 1000 per indexer node), the indexer stops indexing new data. A quick way to compare the current shard count against the limit is sketched after the list of options below. The solutions for this are:

Depending on the number of nodes, change the primary and replica shards and re-index the old indices: https://documentation.wazuh.com/current/user-manual/wazuh-indexer/wazuh-indexer-tuning.html#setting-the-number-of-replicas

Add more indexer nodes: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/add-wazuh-indexer-nodes.html

Delete old indices: use the API or CLI to delete older wazuh-alerts indices:

DELETE <index_name>

Or via cURL:

curl -k -u admin:<Indexer_Password> -XDELETE "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD"

Use ILM: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html
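As a quick check of how close you are to the limit, something like the following should work against the indexer API (replace user:pass with your indexer credentials; cluster.max_shards_per_node is the standard OpenSearch setting that holds the limit):

# total shards currently allocated (one line per shard)
curl -k -u user:pass "https://localhost:9200/_cat/shards" | wc -l

# configured per-node limit (default 1000)
curl -k -u user:pass "https://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node"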

If the issue still persists, share the logs from the indexer log files:

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"

In the cluster logs, you can also find information like disk watermark warnings (disk.watermark low/high), which indicate a low-storage issue. If you see this, you need to increase the storage or delete some old logs to make space for new ones: https://wazuh.com/blog/recover-your-data-using-wazuh-alert-backups/
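To see disk usage from the indexer's point of view, you can use the standard _cat/allocation endpoint, and a plain df on the data path (/var/lib/wazuh-indexer by default), for example:

curl -k -u user:pass "https://localhost:9200/_cat/allocation?v"
df -h /var/lib/wazuh-indexer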

john

Oct 9, 2025, 1:27:05 PM
to Wazuh | Mailing List
Dear Stuti

Thanks for your response

Indeed, alerts.json shows the current logs, and the "filebeat test output" command completes without issues.

The output of cat /var/log/filebeat/filebeat | grep -i -E "error|warn" doesn't contain anything valuable; no errors could be found there, only small off-topic warnings.

wazuh-indexer is running fine.

But the cluster health status is yellow:
{
  "cluster_name": "wazuh-indexer-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "discovered_master": true,
  "discovered_cluster_manager": true,
  "active_primary_shards": 997,
  "active_shards": 997,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 3,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 99.7
}

Here is the output of the logs command:

 cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"

[2024-12-07T03:08:11,877][WARN ][o.o.s.h.HTTPBasicAuthenticator] [node-1] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'

[2024-12-07T03:46:58,379][ERROR][o.o.a.a.AlertIndices ] [node-1] info deleteOldIndices

[2024-12-07T03:46:58,380][ERROR][o.o.a.a.AlertIndices ] [node-1] info deleteOldIndices

[2024-12-07T08:21:05,046][WARN ][o.o.s.a.BackendRegistry ] [node-1] Authentication finally failed for {LINUX USER} from 127.0.0.1:59848

[2024-12-07T08:33:06,929][INFO ][o.o.n.Node ] [node-1] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.security.manager=allow, -Djava.locale.providers=SPI,COMPAT, -Xms1024m, -Xmx1024m, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/var/log/wazuh-indexer/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.security.manager=allow, -Djava.util.concurrent.ForkJoinPool.common.threadFactory=org.opensearch.secure_sm.SecuredForkJoinWorkerThreadFactory, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -XX:MaxDirectMemorySize=536870912, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=deb, -Dopensearch.bundled_jdk=true]

[2024-12-07T08:33:37,846][WARN ][o.o.s.c.Salt ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes

[2024-12-07T08:33:38,034][ERROR][o.o.s.a.s.SinkProvider ] [node-1] Default endpoint could not be created, auditlog will not work properly.

[2024-12-07T08:33:38,035][WARN ][o.o.s.a.r.AuditMessageRouter] [node-1] No default storage available, audit log may not work properly. Please check configuration.

[2024-12-07T08:33:40,210][WARN ][o.o.s.p.SQLPlugin ] [node-1] Master key is a required config for using create and update datasource APIs. Please set plugins.query.datasources.encryption.masterkey config in opensearch.yml in all the cluster nodes. More details can be found here: https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#master-key-config-for-encrypting-credential-information

[2024-12-07T08:33:42,372][WARN ][o.o.g.DanglingIndicesState] [node-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually

[2024-12-07T08:33:44,241][WARN ][o.o.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-1] Config override setting update called with empty string. Ignoring.

[2024-12-07T08:33:44,453][WARN ][o.o.o.i.ObservabilityIndex] [node-1] message: index [.opensearch-observability/lZ6YXhLTSfmQp4ZDYGgFlQ] already exists

[2024-12-07T08:33:44,482][WARN ][o.o.s.SecurityAnalyticsPlugin] [node-1] Failed to initialize LogType config index and builtin log types



FYI, I don't know if this info will be needed, but I still want to provide it to you.

cat /var/ossec/logs/ossec.log | grep -i -E "error|warn" displays the following:
25/10/09 00:08:02 wazuh-remoted: WARNING: Unexpected message (hex): '3c3133343e3120323032352d31302d30385432313a30373a35385a206872732d696e7465726e6174696f6e612d39647773316c767920436865636b506f696e74203239393132202d205b616374696f6e3a22446574656374223b20666c6167733a22333933323136223b2069666469723a22696e626f756e64223b2069666e616d653a22626f6e6431302e333031223b206c6f677569643a227b307836386536643262322c3078302c307833343030343036342c307833626335373464387d223b206f726967696e3a223130302e3130302e33312e3634223b206f726967696e7369636e616d653a22434e3d43502d47572d4d534b2c4f3d4d616e6167656d656e745f536572766963652e2e396b71713837223b2073657175656e63656e756d3a2239223b2074696d653a2231373539393537363738223b2076657273696f6e3a2235223b205f5f706f6c6963795f69645f7461673a2270726f647563743d56504e2d312026204669726557616c6c2d315b64625f7461673d7b43313746364339452d433344342d443334332d423439442d3741304237324246423034387d3b6d676d743d4d616e6167656d656e745f536572766963653b646174653d313735383834323338333b706f6c6963795f6e616d653d43502d47572d4d534b2d546573745c5d223b206473743a223138352e3138372e3131332e323039223b206d6573736167655f696e666f3a22416464726573732073706f6f66696e67223b2070726f647563743a2256504e2d312026204669726557616c6c2d31223b2070726f746f3a2236223b20735f706f72743a223537303330223b20736572766963653a22343433223b207372633a223139322e3136382e3233302e323534225d0a'
2025/10/09 00:08:02 wazuh-remoted: WARNING: Too big message size from socket [207].
2025/10/09 00:08:02 wazuh-remoted: WARNING: Unexpected message (hex): '3c34363e3120323032352d31302d30395430303a2.........EDITED For simplicity..................................73736f6369617465642e225d20536563757264656e0a'
2025/10/09 00:08:02 wazuh-remoted: WARNING: Too big message size from socket [205].
2025/10/09 00:08:03 wazuh-remoted: WARNING: Unexpected message (hex): '3c3133343e312022416363657074223b20666c612..........EDITED For simplicity........................................0225d'
2025/10/09 00:08:03 wazuh-remoted: WARNING: Too big message size from socket [205].
2025/10/09 00:08:04 wazuh-remoted: WARNING: Unexpected message (hex): '3c3133343e31203230321636365707352d31302d3 .........EDITED For simplicity..................................  2e3130302e3330225d0a'
2025/10/09 00:08:04 wazuh-remoted: WARNING: Too big message size from socket [205].
2025/10/09 00:08:04 wazuh-remoted: WARNING: Unexpected message (hex): '31223b207372633a223139322e3136382e3233302e323534225d0a'
2025/10/09 00:08:04 wazuh-remoted: WARNING: Too big message size from socket [205].
2025/10/09 00:08:05 wazuh-remoted: WARNING: Unexpected message (hex): '3c3133343e3120323032352d31302d30385432313a30383a30325a206872732d696e7465726e6174696f6e612d39647773316c767920436865636b506f696e74203239393132202d205b616374696f6e3a22446574656374223b20666c6167733a22333933323136223b2069666469723a22696e626f756e64223b2069666e616d653a2262^C


Again, thanks for responding
Looking forward to hearing from you

Sincerely,
John

Stuti Gupta

4:53 AM (7 hours ago)
to Wazuh | Mailing List
From the cluster health output, it seems the node has reached the shard limit (997 active plus 3 unassigned shards against the default limit of 1000 per node), which causes indexing issues: new indices cannot be created, new logs stop being indexed, and that is why you are unable to see new alerts in the wazuh-dashboard. To resolve this, apply the following solutions:

Manually Delete Indices
You should review the stored indices using the following API call:
GET _cat/indices
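The same listing from the CLI, sorted by size so the largest indices are easy to spot (a sketch; adjust host and credentials to your setup):

curl -k -u admin:<Indexer_Password> "https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-*?v&s=store.size:desc"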
From there, you can delete unnecessary or old indices. Note that deleted indices cannot be retrieved unless backed up through snapshots or Wazuh alert backups. The API call to delete indices is:
DELETE <index_name>
Or via the CLI:
curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can also use wildcards (*) to delete multiple indices in one go.
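For example, to remove all alert indices from December 2024 in a single call (an illustrative pattern; check what it matches with GET _cat/indices first, and note that some clusters reject wildcard deletes when action.destructive_requires_name is enabled):

curl -k -u admin:<Indexer_Password> -XDELETE "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-2024.12.*"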

Index Management Policies
You can automate index deletion by setting up Index Lifecycle Management (ILM) policies, as explained here: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html. Additionally, you can set up snapshots to automatically back up Wazuh indices to local or cloud storage for restoration when needed. More details can be found in this article: https://wazuh.com/blog/index-backup-management
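As an illustration only, a minimal policy for the ISM plugin bundled with the Wazuh indexer that deletes wazuh-alerts indices older than 90 days could look like the sketch below (the policy name, age, and priority are placeholder values; follow the documentation above for the supported procedure). It can be run from Indexer management → Dev Tools:

PUT _plugins/_ism/policies/delete_old_wazuh_alerts
{
  "policy": {
    "description": "Delete wazuh-alerts indices older than 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["wazuh-alerts-*"],
      "priority": 100
    }
  }
}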

Add an Indexer Node
Adding another indexer node will increase the capacity. For more information on how to do this, refer to the official guide: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/add-wazuh-indexer-nodes.html

For the second issue, that is:

Too big message size from socket [207].
2025/10/09 00:08:02 wazuh-remoted: WARNING: Unexpected message (hex): '3c34363e3120323032352d31302d30395430303a2.........EDITED For simplicity..................................73736f6369617465642e225d20536563757264656e0a'

This log line means your Wazuh manager is receiving messages from an agent (or syslog source) that are larger than Wazuh's allowed message size, so it is dropping them. You can either increase Wazuh's message limit or make the sender split the logs, because the content exceeds the maximum size currently allowed by Wazuh:
Agent-manager messages follow this format:
<message> ::= <header> <payload>
<header> ::= <byte> <byte> <byte> <byte>

The <header> is a 32-bit unsigned little-endian integer indicating the payload size in bytes. The value range is [0, OS_MAXSTR] (up to 65536 bytes). Should any message not honour this constraint, Remoted will print this warning.

This behaviour is described in the Wazuh issue "Remoted prints a 'too big message' warning log": https://github.com/wazuh/wazuh/issues/13762#top

You can try using rsyslog, storing the messages in a specific file so that you can then monitor it using localfile:
https://documentation.wazuh.com/current/cloud-service/your-environment/send-syslog-data.html#rsyslog-on-linux

# Storing messages from a remote system into a specific file
if $fromhost-ip startswith 'xxx.xxx.xxx.' then /var/log/<file_name.log>

Then your localfile configuration on the agent can look similar to the below:
<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/<file_name.log></location>
</localfile>
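After adding the localfile block, restart the agent so the change takes effect (assuming a systemd-based install):

systemctl restart wazuh-agent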

Refer to https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/localfile.html