Hello. If you suddenly stop receiving alerts or events in your dashboard, there may be an issue in the connection between Filebeat and your indexer.
First, I recommend checking whether Filebeat is properly configured and running:
filebeat test output
You should see an output similar to:
elasticsearch: https://127.0.0.1:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 127.0.0.1
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.3
dial up... OK
talk to server... OK
version: 7.10.2
This documentation may also be helpful: https://documentation.wazuh.com/current/user-manual/wazuh-dashboard/troubleshooting.html#no-alerts-on-the-wazuh-dashboard-error
Next, review the Indexer and Filebeat logs for any errors or warnings:
cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
cat /var/log/filebeat/filebeat | grep -i -E "error|warn"
Finally, check the disk space on your system, as full storage can prevent new indices from being created.
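For example (the path below is the indexer's default data directory; adjust it if you use a different one):
df -h /var/lib/wazuh-indexer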
Please review and send back any evidence you collect to help determine the root cause of the issue.
4. I have run this command :
cat /var/log/filebeat/filebeat | grep -i -E "error|warn"
there are no error messages, just warnings similar to this:
2025-11-29T17:03:43.407+0800 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, ..........
5. The disk usage is only at 38%.
Perfect. Let’s take a deeper look into your environment.
First, we need to confirm whether the manager is producing alerts and whether the issue lies in forwarding them to the Wazuh indexer.
Please provide the following information. First, check the manager status and share the output:
/var/ossec/bin/wazuh-control status
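One quick way to confirm that alerts are still being generated (assuming the default installation path) is to check whether the alerts file keeps receiving new entries:
tail -n 5 /var/ossec/logs/alerts/alerts.json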
From the indexer side, please share your wazuh-cluster.log file, including any complete error or warning messages. You can filter them with:
cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"
If you see any errors or warnings, paste the full log lines so we can analyze them thoroughly.
Also, share the indexer service status:
systemctl status wazuh-indexer
Please also provide the full Filebeat warning message, as it may give us valuable clues:
2025-11-29T17:03:43.407+0800 WARN [elasticsearch] elasticsearch/client.go:408 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc242cb9ba0e6a6fc, ext:104092023870402, loc:(*time.Location)(0x42417a0)}, Meta:{"pipeline":"filebeat-7.10.2-wazuh-archives-pipeline"}, ...
In this case, it appears to be a mapping-type conflict; we need to see the full event to confirm.
Finally, please share the version you are using.
Please share all relevant evidence you can gather to help us troubleshoot the environment.
It looks like your cluster has reached its maximum shard capacity. Once this limit is hit, the indexer blocks the creation of any new indices, including the daily wazuh-alerts-* and wazuh-archives-* indices that Filebeat writes to.
Since Filebeat is unable to create the next rollover index, all incoming data fails to be indexed, which is why your dashboards stop showing new alerts or archives information. The behavior you’re observing is fully consistent with a shard-limit condition.
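You can confirm this from the indexer API (assuming the default local endpoint on port 9200). The first command shows the cluster health, including the number of active shards; the second shows the configured per-node shard limit, which defaults to 1000:
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cluster/health?pretty"
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep max_shards_per_node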
How to resolve the issue
You can restore normal operation by deleting or closing indices you no longer need to free up shards, or by temporarily raising the cluster shard limit while you clean up.
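If you need immediate breathing room before cleaning up, the per-node shard limit can be raised temporarily; treat this as a stopgap rather than a fix, and note that the value below is only an example:
curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
'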
Preventing the issue in the future
To avoid hitting the shard cap again, it’s best to apply an index lifecycle policy that automatically manages retention, deleting or closing old indices before they accumulate. You can refer to the Wazuh documentation for guidance on configuring these policies: https://documentation.wazuh.com/current/user-manual/wazuh-indexer-cluster/index-lifecycle-management.html
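As a rough sketch of such a policy (the policy name, the 90-day retention, and the index patterns below are examples; adjust them to your own retention requirements), you could create an Index State Management policy that deletes old indices automatically:
curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_plugins/_ism/policies/delete_old_wazuh_indices" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "description": "Delete Wazuh indices older than 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*", "wazuh-archives-*"], "priority": 1 }
    ]
  }
}
'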
Note for single-node deployments
If you're running a single node, ensure that each index is created with only one primary shard. You can verify this in the Filebeat template located at:
/etc/filebeat/wazuh-template.json. Check that index.number_of_shards is set to 1, as additional shards unnecessarily consume your limited shard capacity.
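A quick way to check the current value (keep in mind that templates only affect indices created after the change, so existing indices keep their original shard count):
grep number_of_shards /etc/filebeat/wazuh-template.json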


1. Difference between close index and delete index
When an index is deleted, the indexer removes it from its data directory, but the Wazuh server still keeps the original log files under its logs directory, because those files are maintained by the server, not the indexer.
In this case, close and delete are both operations on the indexer side. The distinction is that closing an index keeps its data on disk but makes it unsearchable, while deleting an index removes its data permanently, freeing both its shards and its disk space.
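For reference, both operations are available through the indexer API as well as the dashboard (the index name below is a placeholder):
# Close: data stays on disk but the index is no longer searchable
curl -k -u <USER>:<PASSWORD> -XPOST "https://localhost:9200/<INDEX_NAME>/_close"
# Delete: data is removed permanently, freeing shards and disk space
curl -k -u <USER>:<PASSWORD> -XDELETE "https://localhost:9200/<INDEX_NAME>"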
Since a new alert index is created daily, shard count grows quickly, especially because earlier these indices were created with 3 primary shards each, which is unnecessary on a single-node cluster.
Now that you’ve changed index.number_of_shards to 1, new indices will be much more efficient.
To reduce existing shard usage, you can reindex daily indices into larger consolidated indices, such as weekly or monthly ones. This can be done through Index Management > Indexes: select the desired indices to reindex, then click Reindex under Actions.
After verifying the new consolidated index, the old daily indices can be deleted to free shards and disk.
You can find more information in https://docs.opensearch.org/latest/im-plugin/reindex-data/
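If you prefer the API over the dashboard, a reindex request along these lines consolidates the daily indices of one month into a single monthly index (the index names are illustrative; adjust the source pattern and destination to your naming scheme):
curl -k -u <USER>:<PASSWORD> -XPOST "https://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "wazuh-alerts-4.x-2025.01.*" },
  "dest": { "index": "wazuh-alerts-4.x-2025.01" }
}
'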
Because your cluster has only one node, setting index.number_of_shards = 1 is exactly the right move.
Your previous configuration with 3 primary shards per index was causing unnecessary shard overhead.

Hi, I am glad to hear that the index recovery worked and the dashboard is fully operational again.
Here is the detailed explanation regarding your final questions:
The Wazuh Filebeat module uses a pipeline that reads the specific timestamp field contained within the log entry. It uses this field to determine both the date of the index where the data will be stored and the time displayed in the dashboard.
Since the logs preserve their historical date, they will not appear in the "Last 15 minutes" view. You must adjust the Time Picker in the top right corner of the dashboard to cover the specific dates you recovered to see them.
No, this is expected behavior for your environment.
Looking at your screenshot, the cluster status is Yellow with unassigned_shards: 5.
The "Yellow" status indicates that all your data is safe (Primary shards are active), but the Replica shards (backup copies) could not be assigned. Elasticsearch/OpenSearch enforces a rule that a replica shard cannot exist on the same node as its primary shard. Since you are running a Single-Node cluster (number_of_nodes: 1), the system has nowhere to place these replicas, so they remain "Unassigned" indefinitely.
No immediate action is needed. Your cluster is fully functional for searching and indexing. The "Yellow" warning is simply informing you that you lack high availability (which is natural in a single-node setup).
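If you would prefer the cluster to report Green, you can optionally tell the indexer not to expect replicas on a single node; this only changes the replica setting and does not affect your stored data (the _all target applies it to all existing indices):
curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "index.number_of_replicas": 0
}
'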
Relevant Documentation:
Cluster Health Status: Red or yellow cluster health status | OpenSearch/Elastic Docs
Wazuh indexer tuning: Wazuh indexer tuning
The temporary bulk send failure in Filebeat confirms that the Wazuh Indexer is actively blocking incoming data. Since you confirmed the shard limit is not the issue, the most probable cause after a large log recovery is that your cluster triggered a Disk Flood Stage or a Memory Circuit Breaker.
Please follow these steps and share the results:
1. Verify if the indices are locked
Run the following command (localhost refers to the indexer service running on your local machine):
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_settings?pretty" | grep "read_only_allow_delete"
1.1. If the output is empty: the indices are not locked by the disk watermark.
1.2. If you see "index.blocks.read_only_allow_delete": "true": Your indices are locked.
2. Unlock the indices (Only if you saw "true" above)
If you confirmed the lock in the previous step, run this command to force the indices to accept data again:
curl -k -u <USER>:<PASSWORD> -XPUT "https://localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
"index.blocks.read_only_allow_delete": null
}
'
Filebeat should stop showing errors and resume sending logs immediately after this.
Hi,
Since the command returned nothing, we can rule out the disk lock—that's good news.
However, the Filebeat errors show the Indexer is still overwhelmed. Given the heavy log recovery you just ran, it is highly likely that you have maxed out the JVM memory (Circuit Breaker) or the cluster health has dropped to "Red."
Please run these checks so we can pinpoint exactly where it's stuck:
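For example, assuming the default local indexer endpoint, the cluster health and the per-node heap and RAM usage:
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cluster/health?pretty"
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu"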
Hi, happy new year.
Thanks for the results. This clarifies everything.
The Diagnosis:
Memory is NOT the issue: Your Heap is at 58%, which is healthy.
The problem is "Status: RED": You have 52 unassigned shards. While the cluster is Red, it cannot index data into those specific indices, causing Filebeat to fail.
Shard Overload: You have 1108 active shards on a single node. This is extremely high. Managing this many shards consumes massive system resources (which explains your System RAM at 99%, even if Heap is low).
Hi,
In your specific case: Yes, delete them.
Your server is overloaded because it is trying to manage too many individual "pieces" of data.
Closing them hides the data, but the system still has to carry the weight of managing the files.
Deleting removes the weight completely.
Since your System RAM is at 99%, you need to fully remove the old data to free up the server.
Recommended Action: Delete the months you don't need anymore (for example, early 2025):
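For example (the pattern below is only illustrative; list your indices first, replace the pattern with the months you actually want to remove, and name the indices explicitly if wildcard deletion is disabled in your cluster, since deletion is irreversible):
curl -k -u <USER>:<PASSWORD> -XGET "https://localhost:9200/_cat/indices/wazuh-alerts-*?v"
curl -k -u <USER>:<PASSWORD> -XDELETE "https://localhost:9200/wazuh-alerts-4.x-2025.01.*"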