Hey Sebastian Dario Bustos, first of all, thank you for investing some time into this. I really appreciate your help, as I have been stuck on this issue for days and was not able to get any help.
For the Filebeat portion, I checked it again (and earlier as well), and everything looks okay:
root@wazhu-01:/var/ossec/ruleset/rules# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-07-27 17:15:18 UTC; 22h ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 2859647 (filebeat)
Tasks: 12 (limit: 27006)
Memory: 147.9M
CGroup: /system.slice/filebeat.service
└─2859647 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
Jul 27 17:15:18 wazhu-01 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..
root@wazhu-01:/var/ossec/ruleset/rules# filebeat test output
elasticsearch: https://10.250.0.9:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 10.250.0.9
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.2
dial up... OK
talk to server... OK
version: 7.10.2
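If it would help, I can also validate the Filebeat configuration itself and pull its recent logs and share the output here; a quick sketch of what I would run (using the same config path shown in the service status above):

# validate the Filebeat configuration, not just the output connection
filebeat test config -c /etc/filebeat/filebeat.yml
# look for publish/connection errors in the last day of Filebeat activity
journalctl -u filebeat --since "24 hours ago" | grep -iE "error|warn"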
For the disk space, only about 15% is used.
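Just to be thorough on the disk side, I can also check the indexer's own view of disk usage per node, which would rule out the disk watermarks blocking shard allocation; a sketch, with <user>:<password> standing in for the indexer credentials:

curl -k -u <user>:<password> "https://10.250.0.9:9200/_cat/allocation?v"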
For the shards, everything looks good, as it is at 188 active shards right now:
{
  "cluster_name": "wazuh-indexer-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "discovered_master": true,
  "discovered_cluster_manager": true,
  "active_primary_shards": 188,
  "active_shards": 188,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 23,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 89.0995260663507
}
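From what I understand, the 23 unassigned shards are most likely replica shards that cannot be allocated on a single-node cluster, which would also explain the yellow status. If it is useful, I can list them with their reasons; a sketch, again with placeholder credentials:

# list only the unassigned shards and why they are unassigned
curl -k -u <user>:<password> "https://10.250.0.9:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED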
I do have an index management policy set to remove old indices after 60 days.
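If that policy could be a factor, I can also check which state the alert indices are currently in via the ISM explain API (assuming the default wazuh-alerts-* index pattern):

curl -k -u <user>:<password> "https://10.250.0.9:9200/_plugins/_ism/explain/wazuh-alerts-*?pretty"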
Also, something interesting: it is literally just these logs that do not show up on the dashboard. All the other alerts and logs appear properly.
One other thing that confused me: I have Slack alerting set to trigger when the rule level is 10, and if I change this rule's level to 10, I do get the alerts on Slack, but they still do not show up on the dashboard.
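Since Slack does receive these alerts, they are clearly reaching the manager, so my next idea is to query the indexer directly and see whether they were ever indexed at all, as opposed to indexed but hidden by the dashboard's time range or index pattern. A sketch of the query I would run, where 100002 is just a placeholder for my custom rule's ID:

# fetch the most recent indexed alert for the rule in question
curl -k -u <user>:<password> "https://10.250.0.9:9200/wazuh-alerts-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"term": {"rule.id": "100002"}}, "size": 1, "sort": [{"timestamp": "desc"}]}'

If that returns a hit, the problem would be on the dashboard side; if not, it would be somewhere between the manager's alerts.json and the indexer.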