No alerts in Dashboard


Phil Schilling

Jul 15, 2024, 9:33:15 PM
to Wazuh | Mailing List
I had a drive space issue on Friday in my opensearch cluster and deleted some indexes. Since that point I have not been getting alerts in the Dashboard. No new indexes are being created, but alerts are generated and emailed from the Manager. Filebeat shows that all three of the cluster machines are accessible. It seems that something I did when deleting the older indexes is preventing new ones from being created.
Can anyone give me an idea of where to look? Thank you in advance.

Phil

Stuti Gupta

Jul 16, 2024, 12:04:45 AM
to Wazuh | Mailing List
Hi Phil Schilling 

Can you please share the steps that you used to delete the indices? 
Next, check the status of the Wazuh indexer to ensure it's active. Additionally, execute the following commands:
curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-alerts-* -u <wazuh_indexer_user>:<wazuh_indexer_password> -k
Check the cluster health with:
curl -XGET -k -u user:pass "https://localhost:9200/_cluster/health"
Can you please check for any errors in the wazuh-indexer log file:
cat /var/log/wazuh-indexer/wazuh-cluster.log
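For reference, on a systemd-based installation you can also confirm that the indexer service itself is active with:
systemctl status wazuh-indexer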

Additionally, to delete the indices you can refer to the following:
Solution 1:
It is necessary to delete old indices if they are no longer needed. The following API call lists the indices stored in the environment:
GET _cat/indices
Bear in mind that deleted data cannot be retrieved unless there are backups, either snapshots or Wazuh alerts backups.
The API call to delete old indices is:
DELETE <index_name>
Or CLI command
 # curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can use wildcards (*) to delete multiple indices in one request.
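For example, a call along these lines would remove all March 2024 alert indices in one go (the date pattern is only an illustration; adjust it to the indices you actually want to remove, and the quotes keep the shell from expanding the asterisk):
 # curl -k -u admin:admin -XDELETE "https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-2024.03.*"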

Solution 2: Index management policies
Once you make enough room on the disk (at least 15% free), you should create retention policies so that old indices are deleted automatically and this issue does not happen again.
In the Wazuh indexer, you set how many days data should stay in the hot state (fast-access data that requires more RAM), how long it should stay in the cold state (slower-access data that requires less RAM), and when it moves to the delete state. An example would be 30 days before moving hot data to the cold state and 360 days before sending data to the delete state. After creating the retention policy, you must apply it to the existing indices (wazuh-alerts-* and/or wazuh-archives-*) and also add the Wazuh template to it so that new indices (which are created every day) are included in the retention policy as well.
For that, you can follow https://documentation.wazuh.com/current/user-manual/wazuh-indexer/index-life-management.html
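As an illustration only (a sketch, not the exact policy from that guide; the policy name, ages, actions, and index patterns are assumptions to adapt), an ISM policy implementing the 30-day/360-day example above could look like this:

PUT _plugins/_ism/policies/wazuh_alerts_retention
{
  "policy": {
    "description": "Example retention policy for Wazuh alert indices",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "cold",
        "actions": [ { "read_only": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "360d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": [ "wazuh-alerts-*", "wazuh-archives-*" ],
      "priority": 50
    }
  }
}

The ism_template section is what attaches the policy to newly created indices; existing indices still need the policy applied to them explicitly.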

You can also take snapshots of the indices, which back up your Wazuh indices to local or cloud-based storage so they can be restored at any given time. To do so, please refer to https://wazuh.com/blog/index-backup-management and https://wazuh.com/blog/wazuh-index-management/
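For instance, a local filesystem snapshot repository can be registered with a call like the following (the repository name wazuh-backup and the path /mnt/wazuh-snapshots are placeholders, and the path must also be allowed via path.repo in opensearch.yml on every indexer node):

PUT _snapshot/wazuh-backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/wazuh-snapshots"
  }
}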

Hope this helps 

Phil Schilling

Jul 16, 2024, 5:58:13 AM
to Wazuh | Mailing List
I used curl -k -X DELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.03.* to delete the March and April alert indices. The wazuh-indexer is running. Listing the wazuh-alerts indexes shows July 12th as the last one created. Here is the response from the cluster health check.
{"cluster_name":"lascluster2","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"discovered_master":true,"discovered_cluster_manager":true,"active_primary_shards":433,"active_shards":710,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
I do not see anything that looks like an error in the wazuh-cluster.log that would be causing this.   I had checked all of these before posting but double checked them again this morning.  I will set up a retention policy as soon as I can get this running again.    Thank you for your assistance.

Phil

Phil Schilling

Jul 17, 2024, 1:24:24 PM
to Wazuh | Mailing List
Does anyone have any ideas? I have gained no ground on this issue. Still no new indexes created, no events on the dashboard. I can't find any errors in anything. Is there a way to clear all the indexes and get it to start over? I would greatly appreciate anyone's assistance on this. Thank you.

Phil

Stuti Gupta

Jul 18, 2024, 3:55:21 AM
to Wazuh | Mailing List
Hi Phil Schilling

Can you please share the wazuh-indexer logs and check if there is any error or warning, using the command:
cat /var/log/wazuh-indexer/wazuh-cluster.log 
Additionally, execute the following command to get more details on the Wazuh indices:
curl https://<WAZUH_INDEXER_IP>:9200/_cat/indices/ -u <wazuh_indexer_user>:<wazuh_indexer_password> -k

Hope to hear from you soon 

Phil Schilling

Jul 18, 2024, 8:05:22 AM
to Wazuh | Mailing List
I have attached the cluster.log and output of the curl command showing indexes.  Thank you for your help.

Phil
lascluster2.log
indices.txt

Stuti Gupta

Jul 19, 2024, 6:55:05 AM
to Wazuh | Mailing List
Can you please restart the wazuh-indexer and then share its full logs? What you have shared is just a part of it.

Phil Schilling

Jul 19, 2024, 7:06:12 AM
to Wazuh | Mailing List
I restarted the wazuh-indexer; attached is the log after the restart. Thank you.

Phil

lascluster2.log

Stuti Gupta

Jul 22, 2024, 6:43:17 AM
to Wazuh | Mailing List
Hi Phil Schilling:

As you mentioned above, "It seems that I did something when deleting the older indexes that is not allowing new ones to be created." Can you please share the steps that you performed? Also, can you please check whether Filebeat is reading alerts.json? You can do so using the following command:
lsof /var/ossec/logs/alerts/alerts.json
Also, use the following command to ensure the manager is generating alerts:
tail -f /var/ossec/logs/alerts/alerts.json 
Also, please share the Filebeat logs located in /var/log/filebeat/filebeat so we can verify what is happening.
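You can also quickly confirm that Filebeat's configuration is valid and that it can reach the indexer cluster with the standard checks:
filebeat test config
filebeat test output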


Hope to hear from you soon 

Phil Schilling

Jul 22, 2024, 7:58:01 AM
to Wazuh | Mailing List
Here is the output from the lsof command:

COMMAND    PID  USER   FD   TYPE DEVICE  SIZE/OFF    NODE NAME
wazuh-ana 1483 wazuh   14w   REG  252,0 321611392 4613891 /var/ossec/logs/alerts/alerts.json

Running tail -f on alerts.json shows that alerts are being generated constantly.

Attached is the filebeat log.

As stated before, to remove the old indexes I used:
curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
Thank you for your assistance.
 
filebeat

Stuti Gupta

Jul 24, 2024, 5:44:06 AM
to Wazuh | Mailing List
Hi Phil

Everything is looking fine so far. Can you please share the following details so we can get more information:
1. Can you please share the kernel/system logs, filtered for Filebeat:
CentOS: cat /var/log/messages | grep -i filebeat
Ubuntu: cat /var/log/syslog | grep -i filebeat
2. Share more Filebeat logs located at /var/log/filebeat/filebeat.
3. Can you please share the Filebeat configuration located at /etc/filebeat/filebeat.yml (hide any confidential details), so we can check whether the logs are being sent somewhere else.
4. Can you please share more information about your OS and Wazuh environment, such as whether it is a multi-node or single-node deployment?


Hope to hear from you soon

Phil Schilling

Jul 24, 2024, 6:06:21 AM
to Wazuh | Mailing List
There are no filebeat messages in /var/log/syslog.  Grep for filebeat returns nothing.   
I have attached the filebeat logs in a tgz file. Please look at filebeat.5, as all of them are the same size except that one; it is from the day when things stopped working.
I have attached filebeat.yml.
The entire setup is on Ubuntu. The three-server indexer cluster is running 20.04 and the wazuh server is a single node running 22.04.

Thank you for your help.

filebeatlogs.tgz
filebeat.yml

Stuti Gupta

Jul 25, 2024, 6:19:41 AM
to Wazuh | Mailing List
Hi Phil

I have found the issue. This is one of the error entries from the Filebeat log:


Timestamp:time.Time{wall:0xc19bf7965ca1b624, ext:129285713737333, loc:(*time.Location)(0x42417a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x466740, Device:0xfc00}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"previous_output\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[123, 34, 119, 105, 110, 34, 58, 123, 34, 115, 121, 115, 116, 101, 109, 34, 58, 123, 34, 112, 114, 111, 118, 105, 100, 101, 114, 78, 97, 109]...', original message: bytes can be at most 32766 in length; got 43793","caused_by":{"type":"max_bytes_length_exceeded_exception","reason":"max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 43793"}}
2024-07-12T00:00:03.636-0500 INFO log/harvester.go:302 Harvester started for file: /var/ossec/logs/alerts/alerts.json

I was able to identify the root cause. The error message you are encountering indicates that a document being indexed contains a term in the "previous_output" field whose UTF-8 encoding exceeds the maximum allowed length of 32766 bytes, and this is causing the indexing operation to fail.
You cannot increase that limit from Filebeat, but you can truncate those large fields to a set number of bytes before they are shipped.
You can edit the filebeat.yml file located at /etc/filebeat and add the following:
processors:
  - truncate_fields:
      fields:
        - previous_output   # the field named in the error above
      max_bytes: 32766
Adding this will truncate the listed field to that size, which will avoid triggering the error you are seeing.
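After editing filebeat.yml, remember to restart Filebeat so the new processor takes effect, and optionally re-check connectivity:
systemctl restart filebeat
filebeat test output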

I am leaving below some documentation regarding this:
Filebeat Truncate Fields: https://www.elastic.co/guide/en/beats/filebeat/current/truncate-fields.html

I hope it helps you.

Phil Schilling

Jul 25, 2024, 7:27:41 AM
to Wazuh | Mailing List
Thank you so much.  I would never have found that issue.  It is now working.  I appreciate all of your time on this.

Phil

Phil Schilling

Aug 13, 2024, 2:57:49 PM
to Wazuh | Mailing List
Your fix to truncate fields worked. But yesterday I ran Ubuntu package updates and rebooted for kernel changes, and alerts stopped showing up again. I have attached the filebeat logs in a .tgz file; the last one with much data in it is from right before the reboots. Everything since the reboot stops at "Attempting to connect to Elasticsearch version 7.10.2". I have also attached the filebeat.yml file, though nothing has changed since I added the truncate statement. filebeat test output works as it should and connects to all three nodes of the cluster. If you need anything else, please let me know. Thank you for your assistance.
filebeatlogs.tgz
filebeat.yml

Phil Schilling

Aug 13, 2024, 3:02:09 PM
to Wazuh | Mailing List
Never mind my noise. I found the issue with filebeat. It was listed as dead but was not showing up as failed. I restarted filebeat and everything is working again. Again, sorry for the noise.