Alerts and archives logs do not appear on the dashboard or in the indices


m mun

Nov 28, 2025, 9:58:49 AM
to Wazuh | Mailing List
Hi Wazuh Team,

I am facing an issue where the logs do not appear in my dashboard, although the archives and alerts log files are there. The last logs that could be searched were from 26th Nov (two days ago). I have tried restarting all services, including the indexers and Filebeat, but it made no difference.

I hope someone can give advice on this matter. Thank you.

victor....@wazuh.com

Nov 28, 2025, 11:53:26 AM
to Wazuh | Mailing List

Hello, if you suddenly stop receiving alerts or events in your dashboard, it is possible that an unexpected issue occurred in the connection between your indexer and Filebeat.

First, I recommend checking whether Filebeat is properly configured and running:


filebeat test output


You should see an output similar to:

elasticsearch: https://127.0.0.1:9200...

  parse url... OK

  connection...

    parse host... OK

    dns lookup... OK

    addresses: 127.0.0.1

    dial up... OK

  TLS...

    security: server's certificate chain verification is enabled

    handshake... OK

    TLS version: TLSv1.3

    dial up... OK

  talk to server... OK

  version: 7.10.2


This documentation may also be helpful: https://documentation.wazuh.com/current/user-manual/wazuh-dashboard/troubleshooting.html#no-alerts-on-the-wazuh-dashboard-error


Next, review the Indexer and Filebeat logs for any errors or warnings:


cat /var/log/wazuh-indexer/wazuh-indexer-cluster.log | grep -i -E "error|warn"

cat /var/log/filebeat/filebeat | grep -i -E "error|warn"


Finally, check the disk space on your system, as full storage can prevent new indices from being created.
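For reference, a quick way to check (a sketch; the paths below are the defaults, so adjust them if you relocated the data directories):

df -h /var/lib/wazuh-indexer /var/ossec

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> "https://127.0.0.1:9200/_cat/allocation?v"

The second command shows disk usage per node as the indexer itself sees it.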


Please review and send back any evidence you collect to help determine the root cause of the issue.

m mun

Nov 30, 2025, 1:24:47 AM
to Wazuh | Mailing List
Hi,

Thank you for the advice. I have tried the suggestions and the results are as below:
1. The filebeat test output is all OK.
2. I also tried this command, referring to the guideline you shared:

curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-* -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> -k

and there is a list of index files, but it is the same as what I saw on the dashboard, where the latest index files are from 25/11.

3. I have tried to find the wazuh-indexer-cluster.log file, but it is not there; only these files exist:
-rw-r-----  1 wazuh-indexer wazuh-indexer    99854 Nov 28 12:09 wazuh-cluster_deprecation.json
-rw-r-----  1 wazuh-indexer wazuh-indexer    56855 Nov 28 12:09 wazuh-cluster_deprecation.log
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_indexing_slowlog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_indexing_slowlog.log
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_search_slowlog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_index_search_slowlog.log
-rw-r-----  1 wazuh-indexer wazuh-indexer   131993 Nov 29 17:09 wazuh-cluster.log
-rw-r-----  1 wazuh-indexer wazuh-indexer   324003 Nov 29 17:09 wazuh-cluster_server.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_task_detailslog.json
-rw-r-----  1 wazuh-indexer wazuh-indexer        0 Jun 19 23:41 wazuh-cluster_task_detailslog.log

4. I have run this command : 

cat /var/log/filebeat/filebeat | grep -i -E "error|warn"

There are no error messages, just warnings similar to this:
2025-11-29T17:03:43.407+0800    WARN    [elasticsearch] elasticsearch/client.go:408     Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, .......... 

5. The disk usage is only at 38%.





victor....@wazuh.com

Dec 1, 2025, 4:46:14 AM
to Wazuh | Mailing List

Perfect. Let’s take a deeper look into your environment.


Verify that the Wazuh manager is generating alerts

First, we need to confirm whether the manager is producing alerts and whether the issue lies in forwarding them to the Wazuh indexer.

Please provide the following:

  • The /var/ossec/logs/ossec.log file, including any errors or warnings you find.
  • The timestamp of the most recent alert generated by the manager.
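
For example, to pull recent errors or warnings from the manager log and see the last alert written to disk (a sketch using the default paths):

grep -iE "error|warn" /var/ossec/logs/ossec.log | tail -n 20

tail -n 1 /var/ossec/logs/alerts/alerts.json

The last line of alerts.json contains the timestamp of the most recent alert the manager generated.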


Also, check the manager status and share the output:

/var/ossec/bin/wazuh-control status



Check the Wazuh indexer logs

From the indexer side, please share your wazuh-cluster.log file, including any complete error or warning messages. You can filter them with:

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"


If you see any errors or warnings, paste the full log lines so we can analyze them thoroughly.


Also, share the indexer service status:

systemctl status wazuh-indexer



Share the complete Filebeat warning

Please also provide the full Filebeat warning message, as it may give us valuable clues:


2025-11-29T17:03:43.407+0800 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, ...


In this case, it appears to be a mapping-type conflict; we should take a look at the full event.


Finally, please share the version you are using.



Please share all relevant evidence you can gather to help us troubleshoot the environment.

m mun

Dec 3, 2025, 11:55:30 PM
to Wazuh | Mailing List
Hi,

The outputs of the tests are as below:
1. The only warnings in the /var/ossec/logs/ossec.log file are about
2. The latest Wazuh alerts are from
Nov 26, 2025 @ 07:59:58.378


3. The error/warning messages in wazuh-cluster.log are as below:
[2025-12-04T10:00:57,245][WARN ][o.o.w.QueryGroupTask     ] [node-1] QueryGroup _id can't be null, It should be set before accessing it. This is abnormal behaviour

4. The Wazuh indexer is running:
● wazuh-indexer.service - wazuh-indexer
     Loaded: loaded (/lib/systemd/system/wazuh-indexer.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2025-11-28 12:09:15 +08; 5 days ago
 

5. The warning message from Filebeat shows (apologies that I need to mask some information since it is confidential):
2025-12-04T10:08:55.788+0800    WARN    [elasticsearch] elasticsearch/client.go:408    
Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{...},
Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"},
Fields:{"agent":{"ephemeral_id":"[MASKED]","hostname":"[MASKED]","id":"[MASKED]",
"name":"[MASKED]","type":"filebeat","version":"7.10.2"},
"ecs":{"version":"1.6.0"},"event":{"dataset":"wazuh.archives","module":"wazuh"},
"fields":{"index_prefix":"wazuh-archives-4.x-"},"fileset":{"name":"archives"},
"host":{"name":"[MASKED]"},"input":{"type":"log"},
"log":{"file":{"path":"[STANDARD_PATH]/archives.json"},"offset":[MASKED]},
"message":"{\"timestamp\":\"2025-12-04T10:08:55.383+0800\",
\"agent\":{\"id\":\"[MASKED]\",\"name\":\"[MASKED]\",\"ip\":\"[MASKED]\"},
\"manager\":{\"name\":\"[MASKED]\"},
\"id\":\"[MASKED]\",
\"full_log\":\"[CONTENT_SHORTENED]\",
\"decoder\":{\"name\":\"syscollector\"},
\"data\":{\"type\":\"dbsync_processes\",\"process\":{\"pid\":\"[MASKED]\",
\"name\":\"[MASKED_PROCESS]\",\"state\":\"[MASKED]\",\"ppid\":\"[MASKED]\"...},
\"operation_type\":\"INSERTED\"},\"location\":\"syscollector\"}",
"service":{"type":"wazuh"}},...}} (status=400):
{"type":"validation_exception","reason":"Validation Failed:
1: this action would add [3] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}


6. The Wazuh version that we are using right now is:
v4.12.0

I guess no. 5 is the main issue. Apologies that I could not share the full error logs, since they contain some confidential information. I hope you are still willing to advise further on this issue.


m mun

Dec 4, 2025, 1:36:54 AM
to Wazuh | Mailing List
Correction:

1. The only warnings in the /var/ossec/logs/ossec.log file are about agent connection.

victor....@wazuh.com

Dec 9, 2025, 6:03:33 AM
to Wazuh | Mailing List

It looks like your cluster has reached its maximum shard capacity. Once this limit is hit, the indexer blocks the creation of any new indices. This includes indices such as:

  • wazuh-archives-*
  • wazuh-alerts-*

Since Filebeat is unable to create the next rollover index, all incoming data fails to be indexed, which is why your dashboards stop showing new alerts or archives information. The behavior you’re observing is fully consistent with a shard-limit condition.
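
You can confirm how close the cluster is to the limit with something like the following (a sketch; host and credentials are placeholders, and the limit itself is governed by the cluster.max_shards_per_node setting, which defaults to 1000):

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> "https://<WAZUH_INDEXER_IP>:9200/_cluster/health?pretty" | grep -E '"status"|active_shards'

curl -k -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> "https://<WAZUH_INDEXER_IP>:9200/_cat/shards" | wc -l

The first command reports the cluster status and active shard count; the second counts every shard line and should be at or near 1000 in your case.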


How to resolve the issue

You can restore normal operation by taking one of the following actions:

  1. Remove or close old indices. Deleting unnecessary indices is the most common and safest approach. Closing indices (instead of deleting) is also an option if you may need the data later.
  2. Reindex and consolidate data. If you have many small indices, reindexing them into fewer, larger ones can significantly reduce your shard count.
  3. Increase the shard limit (not recommended). Raising the maximum number of shards per node is possible, but it often leads to long-term performance degradation. Use this option only as a temporary measure.


Preventing the issue in the future

To avoid hitting the shard cap again, it’s best to apply an index lifecycle policy that automatically manages retention, deleting or closing old indices before they accumulate. You can refer to the Wazuh documentation for guidance on configuring these policies: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html


Note for single-node deployments

If you're running a single node, ensure that each index is created with only one primary shard. You can verify this in the Filebeat template located at /etc/filebeat/wazuh-template.json. Check that index.number_of_shards is set to 1, as additional shards unnecessarily consume your limited shard capacity.
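
A quick way to check the current value (the template path below is the default one):

grep -n "number_of_shards" /etc/filebeat/wazuh-template.json

Keep in mind that the template only affects indices created after it is reloaded (depending on your filebeat.yml setup.template settings this may mean restarting Filebeat or re-running filebeat setup --index-management); existing indices keep the shard count they were created with.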

m mun

Dec 10, 2025, 1:08:25 AM
to Wazuh | Mailing List
Hi, 

Thank you for the advice. I have a few clarifications:

1. Could you explain in more detail the difference between closing an index and deleting an index?
As far as I understand from the retention policy, when we set it to delete the index, the log files are still available in /var/ossec/logs/alerts but the data is no longer searchable on the dashboard. How about closing an index?

2. Currently all indices are created daily. To reindex and consolidate data, is there any documentation or guide I could refer to?
3. To increase the shard limit, is there any recommendation or baseline for calculating how high the shard limit should be set in a single-node environment?

For additional information: yes, we run on a single node only. I have changed index.number_of_shards to 1.
This is before:
Screenshot 2025-12-10 123836.png

This is after:
Screenshot 2025-12-10 124219.png


Thank you for the advice provided

victor....@wazuh.com

Dec 10, 2025, 5:52:18 AM
to Wazuh | Mailing List

1. Difference between close index and delete index


When an index is deleted, the Indexer removes the index from its data directory, but the Wazuh server still keeps the original log files under its logs directory, because those are maintained by the server, not the indexer.

In this case, close and delete index are operations from the Indexer side. Here’s the distinction:


Delete Index
  • Completely removes the index from Indexer.
  • The shards and all stored documents are deleted from Indexer’s data path.
  • The data becomes permanently unsearchable in the dashboard.
  • Raw log files generated by Wazuh remain untouched under /var/ossec/logs/alerts because Indexer does not manage those files.
  • Irreversible, the index cannot be reopened.
Close Index
  • The index stays on disk inside Indexer but becomes inactive.
  • All its shards are closed and freed from memory (so they no longer consume heap).
  • The index cannot be searched while closed, but it can be reopened at any time.
  • Useful when you want to preserve data but temporarily reduce resource usage.
  • Does not free disk space, only memory and CPU load.

Both operations can be performed through the Index Management > Indexes menu.
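
If you prefer the REST API over the dashboard menu, the equivalent calls look roughly like this (index name, host, and credentials are placeholders):

curl -k -u <USER>:<PASSWORD> -XPOST "https://localhost:9200/wazuh-alerts-4.x-2025.06.01/_close"

curl -k -u <USER>:<PASSWORD> -XPOST "https://localhost:9200/wazuh-alerts-4.x-2025.06.01/_open"

curl -k -u <USER>:<PASSWORD> -XDELETE "https://localhost:9200/wazuh-alerts-4.x-2025.06.01"

The first call closes the index (reversible later with _open); the last one deletes it permanently.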

2. Reindexing and consolidating your daily indices

Since a new alert index is created daily, shard count grows quickly, especially because earlier these indices were created with 3 primary shards each, which is unnecessary on a single-node cluster.

Now that you’ve changed index.number_of_shards to 1, new indices will be much more efficient.


To reduce existing shard usage, you can reindex daily indices into larger consolidated indices, such as weekly or monthly ones. This can be done through Index Management > Indexes: select the desired indices to reindex, and click Reindex under Actions.

After verifying the new consolidated index, the old daily indices can be deleted to free shards and disk.


You can find more information in https://docs.opensearch.org/latest/im-plugin/reindex-data/
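
As a concrete sketch of consolidating daily indices into a monthly one through the API (the index names are illustrative; verify the consolidated index before deleting the daily sources):

curl -k -u <USER>:<PASSWORD> -XPOST "https://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": ["wazuh-alerts-4.x-2025.06.01", "wazuh-alerts-4.x-2025.06.02"] },
  "dest": { "index": "wazuh-alerts-4.x-2025.06" }
}
'

You can list as many daily indices as needed in the source array; the destination index is created automatically and, since it still matches the wazuh-alerts-4.x-* pattern, it picks up the same template settings.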


3. Shard limit recommendations for a single-node environment

Because your cluster has only one node, setting index.number_of_shards = 1 is exactly the right move.

Your previous configuration with 3 primary shards per index was causing unnecessary shard overhead.




In general, avoid increasing shard limits; keep index.number_of_shards=1, consolidate or close old indices, delete obsolete ones, and automate retention using Index Lifecycle Management: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html
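
For reference, since the Wazuh indexer is based on OpenSearch, a minimal Index State Management policy that deletes indices older than 90 days could look like the sketch below (adjust the age, index patterns, and policy name to your retention needs; the ism_template part only attaches the policy to indices created after the policy exists):

curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_plugins/_ism/policies/delete_old_wazuh_indices" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "description": "Delete Wazuh indices older than 90 days",
    "default_state": "hot",
    "states": [
      { "name": "hot", "actions": [], "transitions": [ { "state_name": "delete", "conditions": { "min_index_age": "90d" } } ] },
      { "name": "delete", "actions": [ { "delete": {} } ], "transitions": [] }
    ],
    "ism_template": [ { "index_patterns": ["wazuh-alerts-4.x-*", "wazuh-archives-4.x-*"], "priority": 1 } ]
  }
}
'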

m mun

Dec 11, 2025, 12:52:51 PM
to Wazuh | Mailing List
Hi victor,

Thank you so much for your advice and help. I have closed the unnecessary indices, and the indexing works now; the alerts and archives are displaying on the dashboard.

I just have one last thing to ask advice for. For the indices that are missing because of the shard limitation, is there any method I can use to reindex all the past archives and make them visible on the dashboard?
I appreciate your advice on this. Thank you again.

victor....@wazuh.com

Dec 12, 2025, 4:27:42 AM
to Wazuh | Mailing List
For this scenario, I recommend using the recovery script described in the Restoring old logs section of the documentation.
The script decompresses all archived logs and generates a recovery.json file in the /tmp directory. After running it, add the resulting JSON file to your Wazuh Filebeat module.
Feel free to adjust the script parameters as needed for your case.

Let us know if you encounter any issues during the index recovery process.

m mun

Dec 14, 2025, 5:17:18 PM
to Wazuh | Mailing List
Hi, 

I have done the index recovery for the archives. Just to confirm: will the newly indexed old logs keep their original timestamp, or will they take the indexing timestamp (which is today)?
Also, I noticed something about the index health while doing some checking: there are a few unassigned shards. Do I need to be concerned about this, or is it something that will resolve automatically later?
Screenshot 2025-12-13 004322.png

Again, really appreciate your time on this. thank you.

Pablo Moliz Arias

Dec 22, 2025, 7:11:31 AM
to Wazuh | Mailing List

Hi, I am glad to hear that the index recovery worked and the dashboard is fully operational again.

Here is the detailed explanation regarding your final questions:

1. Will the recovered logs have the original timestamp or today's date?

They will retain their original timestamp. You do not need to worry about the ingestion time. When you use the recovery script, it extracts the logs into a JSON format that preserves the original event information.
  • The Wazuh Filebeat module uses a pipeline that reads the specific timestamp field contained within the log entry. It uses this field to determine both the date of the index where the data will be stored and the time displayed in the dashboard.

  • Since the logs preserve their historical date, they will not appear in the "Last 15 minutes" view. You must adjust the Time Picker in the top right corner of the dashboard to cover the specific dates you recovered to see them.

2. Should I be concerned about "Unassigned Shards" and "Yellow" status?

No, this is expected behavior for your environment.

Looking at your screenshot, the cluster status is Yellow with unassigned_shards: 5.

  • The "Yellow" status indicates that all your data is safe (Primary shards are active), but the Replica shards (backup copies) could not be assigned. Elasticsearch/OpenSearch enforces a rule that a replica shard cannot exist on the same node as its primary shard. Since you are running a Single-Node cluster (number_of_nodes: 1), the system has nowhere to place these replicas, so they remain "Unassigned" indefinitely.

  • No immediate action is needed. Your cluster is fully functional for searching and indexing. The "Yellow" warning is simply informing you that you lack high availability (which is natural in a single-node setup).
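
If you would rather have the cluster report Green on a single node, one option (a sketch; host and credentials are placeholders) is to set the replica count to 0 on existing indices, since those replica copies can never be allocated anyway:

curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{ "index": { "number_of_replicas": 0 } }
'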


Relevant Documentation:



m mun

Dec 29, 2025, 5:19:29 AM
to Wazuh | Mailing List
Hi Team, it seems the indices are not being indexed properly again; however, this time the shard-limit error is not there. Here are the errors for each command:

command : tail -f /var/log/filebeat/filebeat
2025-12-29T01:32:44.760+0800    ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: temporary bulk send failure

command:  cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
[2025-12-29T00:00:00,571][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set
[2025-12-29T00:00:00,685][WARN ][o.o.c.r.a.AllocationService] [node-1] Falling back to single shard assignment since batch mode disable or multiple custom allocators set

I would appreciate it if you could advise on the required action to solve this issue. Thank you.

Pablo Moliz Arias

Dec 29, 2025, 6:17:41 AM
to Wazuh | Mailing List
Hi,

The temporary bulk send failure in Filebeat confirms that the Wazuh Indexer is actively blocking incoming data. Since you confirmed the shard limit is not the issue,
the most probable cause after a large log recovery is that your cluster triggered a Disk Flood Stage or a Memory Circuit Breaker.

During your recovery process, if the disk usage momentarily spiked above 95% (flood stage watermark), the Indexer automatically switches all indices to Read-Only to protect the data.
Even if you deleted files and disk space dropped back to 42%, the cluster does not automatically unlock the indices.

Please follow these steps and share the results:

1. Verify if the indices are locked

Run the following command (localhost refers to the Indexer service running on your local machine):

curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_settings?pretty" | grep "read_only_allow_delete"

 1.1. If the output is empty: The indices are not locked by the disk watermark

 1.2. If you see "index.blocks.read_only_allow_delete": "true": Your indices are locked.


2. Unlock the indices (Only if you saw "true" above)

If you confirmed the lock in the previous step, run this command to force the indices to accept data again:



curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
 "index.blocks.read_only_allow_delete": null
}
'



Filebeat should stop showing errors and resume sending logs immediately after this.

m mun

Dec 30, 2025, 3:30:29 AM
to Wazuh | Mailing List
Hi,

I didn't get any result for this command:


curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_settings?pretty" | grep "read_only_allow_delete"

Is there any other path I should check?

Pablo Moliz Arias

Dec 31, 2025, 8:41:12 AM
to Wazuh | Mailing List

Hi,

Since the command returned nothing, we can rule out the disk lock—that's good news.

However, the Filebeat errors show the Indexer is still overwhelmed. Given the heavy log recovery you just ran, it is highly likely that you have maxed out the JVM memory (Circuit Breaker) or the cluster health has dropped to "Red."

Please run these checks so we can pinpoint exactly where it's stuck:



1. Check Cluster Health
We need to verify if the cluster state is Red (which stops indexing) or Yellow/Green.

curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cluster/health?pretty"


2. Check Memory (JVM Heap) Usage
If the Heap usage is too high (above 95%), the Indexer triggers a "Circuit Breaker" to prevent a crash, causing it to reject new logs.
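
One way to check this (a sketch using the _cat/nodes API; host and credentials are placeholders):

curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu"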


3. Check for specific Memory Errors
Since the previous log check didn't show much, let’s specifically look for memory exceptions that correspond to the bulk failures:

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i "CircuitBreakingException"

m mun

Jan 2, 2026, 12:15:58 AM
to Wazuh | Mailing List
Hi and happy new year,

These are the results for all checklist:

1. {
  "cluster_name" : "wazuh-cluster",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 1108,
  "active_shards" : 1108,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 52,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 95.51724137931035
}


2. 
name   heap.percent ram.percent cpu
node-1           58          99   1

3. No results

Pablo Moliz Arias

Jan 5, 2026, 4:46:37 AM
to Wazuh | Mailing List

Hi, happy new year.

Thanks for the results. This clarifies everything.

The Diagnosis:

  1. Memory is NOT the issue: Your Heap is at 58%, which is healthy. 

  2. The problem is "Status: RED": You have 52 unassigned shards. While the cluster is Red, it cannot index data into those specific indices, causing Filebeat to fail.

  3. Shard Overload: You have 1108 active shards on a single node. This is extremely high. Managing this many shards consumes massive system resources (which explains your System RAM at 99%, even if Heap is low)
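
To see which indices are carrying those shards, and which old or small ones are the best cleanup candidates, something like this helps (host and credentials are placeholders):

curl -k -u <USER>:<PASSWORD> "https://localhost:9200/_cat/indices?v&h=index,health,pri,rep,docs.count,store.size&s=index"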

m mun

Jan 5, 2026, 7:56:41 AM
to Wazuh | Mailing List
Understood. In that case, should I delete the old indices instead of closing them like before? I appreciate your extended advice on this issue. Thank you.

Pablo Moliz Arias

Jan 5, 2026, 11:26:52 AM
to Wazuh | Mailing List

Hi,

In your specific case: Yes, delete them.

Your server is overloaded because it is trying to manage too many individual "pieces" of data.

  • Closing them hides the data, but the system still has to carry the weight of managing the files.

  • Deleting removes the weight completely.

Since your System RAM is at 99%, you need to fully remove the old data to free up the server.

Recommended Action: Delete the months you don't need anymore (for example, early 2025):
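
For example (a sketch; the index pattern is illustrative, so check what it matches with _cat/indices first, and list the indices explicitly if wildcard deletes are disabled by action.destructive_requires_name):

curl -k -u <USER>:<PASSWORD> -XDELETE "https://localhost:9200/wazuh-alerts-4.x-2025.01.*"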


After you delete a few months, the "Red" status should disappear. Check and confirm that it turns Green or Yellow!

Pablo Moliz Arias

Jan 7, 2026, 6:22:40 AM
to Wazuh | Mailing List
Hi M Mun,

Following up on my previous answer, I wanted to share some additional context on why this happened and how to prevent it in the future.

1. The Root Cause (Shard Limit): By default, the recommended limit is 1,000 shards per node. Your cluster currently has 1,108. Even if the limit is manually increased, managing this many shards on a single node causes the high Disk and RAM usage (99%) you are seeing.

2. Long-term Solutions: Once you delete the old data and the cluster returns to Green/Yellow status, consider these steps to keep it healthy:

ISM Policies: Configure "Index State Management" to automatically delete indices older than a specific time (e.g., 90 days) so you don't have to do it manually.

Check Replicas: Since you are running a single node, ensure your indices are set to 0 replicas.

Snapshots: If you need to keep old data for legal reasons, we recommend using Snapshots to back up indices to an external repository before deleting them from the live server.

Add a Node: If you strictly need to keep all 1100+ shards online, you will need to add a second Indexer node to share the load.


I hope this resolves your issue and you don't experience any further problems. If you need anything else, please contact us.

m mun

6:27 AM
to Wazuh | Mailing List
Hi,

Thank you for the suggestions; currently the replicas are already set to 0.

I have deleted the unnecessary indices using a policy, and indexing has resumed and the cluster status has returned to yellow, but the memory usage remains high at 99% and there are still some unassigned shards.

Is this a normal condition, or is it actually a sign that I should add another node, since I have done all the other recommended actions to maintain the cluster health?

Thank you for your time and attention on this matter.