Alerts and Archives logs don't appear on dashboard and indices


m mun

Nov 28, 2025, 9:58:49 AM
to Wazuh | Mailing List
Hi Wazuh Team,

I am facing an issue where the logs don't appear in my dashboard, even though the archives and alerts log files are there. The last logs that can be searched are from 26th Nov (two days ago). I have tried restarting all services, including the indexer and Filebeat, but there is no difference.

I hope someone can give advice on this matter. Thank you.

victor....@wazuh.com

Nov 28, 2025, 11:53:26 AM
to Wazuh | Mailing List

Hello. If you suddenly stop receiving alerts or events in your dashboard, it's possible that an unexpected issue occurred in the connection between your indexer and Filebeat.

First, I recommend checking whether Filebeat is properly configured and running:


filebeat test output


You should see an output similar to:

elasticsearch: https://127.0.0.1:9200...

  parse url... OK

  connection...

    parse host... OK

    dns lookup... OK

    addresses: 127.0.0.1

    dial up... OK

  TLS...

    security: server's certificate chain verification is enabled

    handshake... OK

    TLS version: TLSv1.3

    dial up... OK

  talk to server... OK

  version: 7.10.2


This documentation may also be helpful: https://documentation.wazuh.com/current/user-manual/wazuh-dashboard/troubleshooting.html#no-alerts-on-the-wazuh-dashboard-error


Next, review the Indexer and Filebeat logs for any errors or warnings:


cat /var/log/wazuh-indexer/wazuh-indexer-cluster.log | grep -i -E "error|warn"

cat /var/log/filebeat/filebeat | grep -i -E "error|warn"


Finally, check the disk space on your system, as full storage can prevent new indices from being created.
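
For example, a quick way to check (assuming the default data and log locations; adjust the paths to your deployment):

df -h /var/lib/wazuh-indexer /var/ossec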


Please review and send back any evidence you collect to help determine the root cause of the issue.

m mun

Nov 30, 2025, 1:24:47 AM
to Wazuh | Mailing List
Hi,

Thank you for the advice. I have tried the suggestions and the results are as below:
1. The filebeat test output is all OK.
2. I also tried this command, referring to the guideline you shared:

curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-* -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -k

and there is a list of indices, but it is the same as what I see on the dashboard: the latest index is from 25/11.

3. I have tried to find the wazuh-indexer-cluster.log file, but it is not there;
only these files exist:
-rw-r-----  1 wazuh-indexer wazuh-indexer    99854 Nov 28 12:09 wazuh-cluster_deprecation.json
-rw-r-----  1 wazuh-indexer wazuh-indexer    56855 Nov 28 12:09 wazuh-cluster_deprecation.log
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_indexing_slowlog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_indexing_slowlog.log
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_search_slowlog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_search_slowlog.log
-rw-r-----  1 wazuh-indexer wazuh-indexer   131993 Nov 29 17:09 wazuh-cluster.log
-rw-r-----  1 wazuh-indexer wazuh-indexer   324003 Nov 29 17:09 wazuh-cluster_server.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_task_detailslog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_task_detailslog.log

4. I have run this command:

cat /var/log/filebeat/filebeat | grep -i -E "error|warn"

There are no error messages, just warnings similar to this:
2025-11-29T17:03:43.407+0800    WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, .......... 

5. The disk usage is only at 38%.





victor....@wazuh.com

Dec 1, 2025, 4:46:14 AM
to Wazuh | Mailing List

Perfect. Let’s take a deeper look into your environment.


Verify that the Wazuh manager is generating alerts

First, we need to confirm whether the manager is producing alerts and whether the issue lies in forwarding them to the Wazuh indexer.

Please provide the following:

  • The /var/ossec/logs/ossec.log file, including any errors or warnings you find.
  • The timestamp of the most recent alert generated by the manager.
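
For the second point, one quick way to get that timestamp (assuming the default JSON alerts file) is:

tail -n 1 /var/ossec/logs/alerts/alerts.json | grep -o '"timestamp":"[^"]*"'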


Also, check the manager status and share the output:

/var/ossec/bin/wazuh-control status



Check the Wazuh indexer logs

From the indexer side, please share your wazuh-cluster.log file, including any complete error or warning messages. You can filter them with:

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"


If you see any errors or warnings, paste the full log lines so we can analyze them thoroughly.


Also, share the indexer service status:

systemctl status wazuh-indexer



Share the complete Filebeat warning

Please also provide the full Filebeat warning message, as it may give us valuable clues:


2025-11-29T17:03:43.407+0800 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, ...


In this case, it appears to be a mapping-type conflict; we should take a look at the full event.


Finally, please share the version you are using.



Please share all relevant evidence you can gather to help us troubleshoot the environment.

m mun

Dec 3, 2025, 11:55:30 PM
to Wazuh | Mailing List
Hi,

The outputs of the tests are as below:
1. The only warnings in the /var/ossec/logs/ossec.log file are about
2. The latest Wazuh alert is from
Nov 26, 2025 @ 07:59:58.378


3. The result of the error/warn filter on wazuh-cluster.log is as below:
[2025-12-04T10:00:57,245][WARN ][o.o.w.QueryGroupTask     ] [node-1] QueryGroup _id can't be null, It should be set before accessing it. This is abnormal behaviour

4. The Wazuh indexer is running:
● wazuh-indexer.service - wazuh-indexer
     Loaded: loaded (/lib/systemd/system/wazuh-indexer.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2025-11-28 12:09:15 +08; 5 days ago
 

5. The warning message from Filebeat shows (apologies that I need to mask some information since it is confidential):
2025-12-04T10:08:55.788+0800    WARN    [elasticsearch] elasticsearch/client.go:408    
Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{...},
Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"},
Fields:{"agent":{"ephemeral_id":"[MASKED]","hostname":"[MASKED]","id":"[MASKED]",
"name":"[MASKED]","type":"filebeat","version":"7.10.2"},
"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},
"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},
"host":{"name":"[MASKED]"},"input":{"type":"log"},
"log":{"file":{"path":"[STANDARD_PATH]/archives.json"},"offset":[MASKED]},
"message":"{\"timestamp\":\"2025-12-04T10:08:55.383+0800\",
\"agent\":{\"id\":\"[MASKED]\",\"name\":\"[MASKED]\",\"ip\":\"[MASKED]\"},
\"manager\":{\"name\":\"[MASKED]\"},
\"id\":\"[MASKED]\",
\"full_log\":\"[CONTENT_SHORTENED]\",
\"decoder\":{\"name\":\"syscollector\"},
\"data\":{\"type\":\"dbsync_processes\",\"process\":{\"pid\":\"[MASKED]\",
\"name\":\"[MASKED_PROCESS]\",\"state\":\"[MASKED]\",\"ppid\":\"[MASKED]\"...},
\"operation_type\":\"INSERTED\"},\"location\":\"syscollector\"}",
"service":{"type":"wazuh"}},...}} (status=400):
{"type":"validation_exception","reason":"Validation Failed:
1: this action would add [3] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}


6. The Wazuh version that we are using right now is v4.12.0.

I guess no. 5 is the main issue. Apologies that I could not share the full error logs, since they contain some confidential information. I hope you are still willing to give further advice on this issue.


m mun

Dec 4, 2025, 1:36:54 AM
to Wazuh | Mailing List
Correction:

1. The only warnings in the /var/ossec/logs/ossec.log file are about agent connections.

victor....@wazuh.com

Dec 9, 2025, 6:03:33 AM
to Wazuh | Mailing List

It looks like your cluster has reached its maximum shard capacity. Once this limit is hit, the indexer blocks the creation of any new indices. This includes indices such as:

  • wazuh-archives-*
  • wazuh-alerts-*

Since Filebeat is unable to create the next rollover index, all incoming data fails to be indexed, which is why your dashboards stop showing new alerts or archives information. The behavior you’re observing is fully consistent with a shard-limit condition.
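
You can confirm this by checking the cluster health, which reports the number of active shards (replace the address and credentials with your own):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> "https://<WAZUH_INDEXER_IP>:9200/_cluster/health?pretty"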


How to resolve the issue

You can restore normal operation by taking one of the following actions:

  1. Remove or close old indices. Deleting unnecessary indices is the most common and safest approach. Closing indices (instead of deleting) is also an option if you may need the data later.
  2. Reindex and consolidate data. If you have many small indices, reindexing them into fewer, larger ones can significantly reduce your shard count.
  3. Increase the shard limit (not recommended). Raising the maximum number of shards per node is possible, but it often leads to long-term performance degradation. Use this option only as a temporary measure.
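
For reference, option 3 would be a single cluster setting; treat it only as a temporary workaround (the value and credentials below are illustrative):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X PUT "https://<WAZUH_INDEXER_IP>:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{"persistent": {"cluster.max_shards_per_node": 1500}}'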


Preventing the issue in the future

To avoid hitting the shard cap again, it’s best to apply an index lifecycle policy that automatically manages retention, deleting or closing old indices before they accumulate. You can refer to the Wazuh documentation for guidance on configuring these policies: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html
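
As a minimal sketch of such a policy (the policy name, 90-day retention, and index patterns below are assumptions; adapt them to your retention requirements and verify the exact syntax against the linked documentation):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X PUT "https://<WAZUH_INDEXER_IP>:9200/_plugins/_ism/policies/delete_old_wazuh_indices" -H 'Content-Type: application/json' -d '
{
  "policy": {
    "description": "Delete Wazuh indices older than 90 days",
    "default_state": "hot",
    "states": [
      { "name": "hot", "actions": [], "transitions": [ { "state_name": "delete", "conditions": { "min_index_age": "90d" } } ] },
      { "name": "delete", "actions": [ { "delete": {} } ], "transitions": [] }
    ],
    "ism_template": [ { "index_patterns": ["wazuh-alerts-*", "wazuh-archives-*"], "priority": 1 } ]
  }
}'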


Note for single-node deployments

If you're running a single node, ensure that each index is created with only one primary shard. You can verify this in the Filebeat template located at /etc/filebeat/wazuh-template.json. Check that index.number_of_shards is set to 1, as additional shards unnecessarily consume your limited shard capacity.
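
For example, a quick check of the current value (default path of the Wazuh Filebeat template):

grep '"number_of_shards"' /etc/filebeat/wazuh-template.json

Keep in mind that the template only applies to indices created after the change; existing indices keep their original shard count.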

m mun

Dec 10, 2025, 1:08:25 AM
to Wazuh | Mailing List
Hi, 

Thank you for the advice. I have a few clarifications needed:

1. Could you explain more about the difference between closing an index and deleting an index?
As far as I understand the retention policy right now, when we set it to delete an index, the log files will still be available in /var/ossec/logs/alerts but the data becomes unsearchable on the dashboard. How about closing an index?

2. Currently all indices are created daily. To reindex and consolidate the data, do you have any documentation or guide I could refer to?
3. To increase the shard limit, is there any recommendation or baseline to calculate what shard limit we should set for a single-node environment?

For additional information, yes, we do run on a single node only. I have changed index.number_of_shards to 1.
this is before :
Screenshot 2025-12-10 123836.png

this is after:
Screenshot 2025-12-10 124219.png


Thank you for the advice provided

victor....@wazuh.com

Dec 10, 2025, 5:52:18 AM
to Wazuh | Mailing List

1. Difference between close index and delete index


When an index is deleted, the indexer removes the index from its data directory, but the Wazuh server still keeps the original log files under its logs directory, because those are maintained by the server, not the indexer.

In this case, close and delete index are operations from the Indexer side. Here’s the distinction:


Delete Index
  • Completely removes the index from Indexer.
  • The shards and all stored documents are deleted from Indexer’s data path.
  • The data becomes permanently unsearchable in the dashboard.
  • Raw log files generated by Wazuh remain untouched under /var/ossec/logs/alerts because Indexer does not manage those files.
  • Irreversible; the index cannot be reopened.
Close Index
  • The index stays on disk inside Indexer but becomes inactive.
  • All its shards are closed and freed from memory (so they no longer consume heap).
  • The index cannot be searched while closed, but it can be reopened at any time.
  • Useful when you want to preserve data but temporarily reduce resource usage.
  • Does not free disk space, only memory and CPU load.

Both operations can be performed through the Index Management > Indexes menu.
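
They can also be done through the API; for example (the index name below is illustrative):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X POST "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-2025.06.19/_close"

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X DELETE "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-2025.06.19"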

2. Reindexing and consolidating your daily indices

Since a new alert index is created daily, shard count grows quickly, especially because earlier these indices were created with 3 primary shards each, which is unnecessary on a single-node cluster.

Now that you’ve changed index.number_of_shards to 1, new indices will be much more efficient.


To reduce existing shard usage, you can reindex daily indices into larger consolidated indices, such as weekly or monthly ones. This can be done through Index Management > Indexes: select the desired indices to reindex, and click on Reindex under Actions.

After verifying the new consolidated index, the old daily indices can be deleted to free shards and disk.


You can find more information at https://docs.opensearch.org/latest/im-plugin/reindex-data/
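
As a sketch of the same operation through the API (the index names and the monthly grouping below are only an example):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X POST "https://<WAZUH_INDEXER_IP>:9200/_reindex" -H 'Content-Type: application/json' -d '
{
  "source": { "index": "wazuh-alerts-4.x-2025.11.*" },
  "dest": { "index": "wazuh-alerts-4.x-2025.11" }
}'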


3. Shard limit recommendations for a single-node environment

Because your cluster has only one node, setting index.number_of_shards = 1 is exactly the right move.

Your previous configuration with 3 primary shards per index was causing unnecessary shard overhead.




In general, avoid increasing shard limits; keep index.number_of_shards=1, consolidate or close old indices, delete obsolete ones, and automate retention using Index Lifecycle Management: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html

m mun

Dec 11, 2025, 12:52:51 PM
to Wazuh | Mailing List
Hi victor,

Thank you so much for your advice and help. I have closed the unnecessary indices and the indexing works now; the alerts and archives are displaying on the dashboard.

I just have one last thing to ask advice on. For the indices missed during the shard limitation, is there any method by which I can reindex all the past archives and make them visible on the dashboard?
I appreciate your advice on this. Thank you again.

victor....@wazuh.com

Dec 12, 2025, 4:27:42 AM
to Wazuh | Mailing List
For this scenario, I recommend using the recovery script described in the Restoring old logs section of the documentation.
The script decompresses all archived logs and generates a recovery.json file in the /tmp directory. After running it, add the resulting JSON file to your Wazuh Filebeat module.
Feel free to adjust the script parameters as needed for your case.
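
As a rough sketch of that last step, assuming the default module layout (the exact file and keys may differ between versions, so verify against the Restoring old logs documentation), you would add the generated file to the paths of the Wazuh archives fileset, for example in /usr/share/filebeat/module/wazuh/archives/manifest.yml:

var:
  - name: paths
    default:
      - /var/ossec/logs/archives/archives.json
      - /tmp/recovery.json

Then restart Filebeat (systemctl restart filebeat) so it starts reading the recovered events.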

Let us know if you encounter any issues during the index recovery process.

m mun

Dec 14, 2025, 5:17:18 PM
to Wazuh | Mailing List
Hi, 

I have done the index recovery for the archives. Just to confirm, will the newly indexed old logs have their original timestamp, or will they take the indexing timestamp (which is today)?
Also, I noticed one thing about the index health when I did some checking: there are a few unassigned shards. Do I need to be concerned about this, or is it something that will resolve automatically later?
Screenshot 2025-12-13 004322.png

Again, really appreciate your time on this. thank you.

Pablo Moliz Arias

7:11 AM
to Wazuh | Mailing List

Hi, I am glad to hear that the index recovery worked and the dashboard is fully operational again.

Here is the detailed explanation regarding your final questions:

1. Will the recovered logs have the original timestamp or today's date?

They will retain their original timestamp. You do not need to worry about the ingestion time. When you use the recovery script, it extracts the logs into a JSON format that preserves the original event information.
  • The Wazuh Filebeat module uses a pipeline that reads the specific timestamp field contained within the log entry. It uses this field to determine both the date of the index where the data will be stored and the time displayed in the dashboard.

  • Since the logs preserve their historical date, they will not appear in the "Last 15 minutes" view. You must adjust the Time Picker in the top right corner of the dashboard to cover the specific dates you recovered to see them.

2. Should I be concerned about "Unassigned Shards" and "Yellow" status?

No, this is expected behavior for your environment.

Looking at your screenshot, the cluster status is Yellow with unassigned_shards: 5.

  • The "Yellow" status indicates that all your data is safe (Primary shards are active), but the Replica shards (backup copies) could not be assigned. Elasticsearch/OpenSearch enforces a rule that a replica shard cannot exist on the same node as its primary shard. Since you are running a Single-Node cluster (number_of_nodes: 1), the system has nowhere to place these replicas, so they remain "Unassigned" indefinitely.

  • No immediate action is needed. Your cluster is fully functional for searching and indexing. The "Yellow" warning is simply informing you that you lack high availability (which is natural in a single-node setup).
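
If you would still prefer the cluster to report a green status, a common optional adjustment on single-node setups (shown here only as a sketch; the replicas provide no benefit on one node anyway) is to set the replica count to 0 for the Wazuh indices:

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -X PUT "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-*,wazuh-archives-*/_settings" -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'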

