Dashboard not showing logs


Akhmad Fadhil

Jul 10, 2023, 10:44:28 PM
to Wazuh mailing list
Hi there,
My Wazuh dashboard has not been showing data for the past 6 days. I checked the alerts log with the command
tail -n1 /var/ossec/logs/alerts/alerts.json
and it is still generating new alerts.

Can you help me?

Thanks 

dashboard.png

Harshal Paliwal

Jul 10, 2023, 11:07:16 PM
to Wazuh mailing list
Hi Team, thanks for using Wazuh!

I'd be happy to help you troubleshoot this. If Wazuh is generating new alerts but the dashboard has not shown data for the past six days, there are a few potential causes. Here are some steps you can take to investigate and resolve the problem:

1. Examine the Wazuh server logs: Check them for any errors or warnings that might indicate the cause of the issue. The log files are typically located in the `/var/ossec/logs` directory. Could you share the /var/ossec/logs/ossec.log file, along with /var/log/wazuh-indexer/<your_cluster_name>.log and the output of journalctl -xe | grep wazuh-dashboard?
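For example, a quick way to surface problem lines is to grep the manager log for errors. A minimal sketch using a canned two-line excerpt (the "No space left on device" message below is an illustrative sample, not output from your server; on a live manager you would run `sudo grep -iE "error|warn|crit" /var/ossec/logs/ossec.log | tail -n 20`):

```shell
# Canned excerpt standing in for /var/ossec/logs/ossec.log.
log='2023/07/10 10:00:01 wazuh-analysisd: INFO: Started (pid: 1234).
2023/07/10 10:00:05 wazuh-analysisd: ERROR: Unable to write alert: No space left on device'

# Case-insensitive match on the severity keywords Wazuh uses.
printf '%s\n' "$log" | grep -iE "error|warn|crit"
```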

2. Check disk space: Ensure that the disk where Wazuh stores its data has enough free space. If the disk is full, it can prevent Wazuh from indexing and storing new data. You can check the disk usage by running the df -h command and verifying that there is sufficient free space.
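That check can be scripted. A small sketch that reads the usage percentage of the filesystem holding Wazuh's data (the /var/ossec path and the 90% threshold are assumptions for illustration; it falls back to the root filesystem if that path does not exist):

```shell
# df -P guarantees one POSIX-format line per filesystem;
# column 5 is the usage percentage, e.g. "93%".
target=/var/ossec
[ -d "$target" ] || target=/

usage=$(df -P "$target" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
if [ "$usage" -ge 90 ]; then
  echo "WARNING: $target filesystem is ${usage}% full; indexing may stop"
else
  echo "OK: $target filesystem is ${usage}% full"
fi
```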

3. Restart Wazuh services: Try restarting the Wazuh server components to see if that resolves the issue. You can restart them by running the appropriate commands for your system, such as:

   sudo systemctl restart wazuh-indexer
   sudo systemctl restart wazuh-manager
   sudo systemctl restart filebeat
   sudo systemctl restart wazuh-dashboard

   After restarting, you can verify that Filebeat can reach the indexer with:

   filebeat test output

If you have tried these steps and are still unable to resolve the issue, it might be helpful to provide more details about your Wazuh configuration, such as the version you are using, any relevant error messages from the logs, and any changes or updates that have occurred recently.

Regards
Harshal Paliwal

Akhmad Fadhil

Jul 10, 2023, 11:45:42 PM
to Wazuh mailing list
Thank you Harshal,
When I checked ossec.log, I found that I had run out of space.
ossec.png

Akhmad Fadhil

Jul 11, 2023, 12:05:38 AM
to Wazuh mailing list
I'll try to expand the server's storage space.
Thank you

Akhmad Fadhil

Jul 11, 2023, 12:35:08 AM
to Wazuh mailing list
Hi Harshal, I have another question.
When I run "systemctl status filebeat",
it shows:
 filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
     Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-07-11 10:57:55 WIB; 32min ago
       Docs: https://www.elastic.co/beats/filebeat
   Main PID: 4171761 (filebeat)
      Tasks: 9 (limit: 9440)
     Memory: 53.5M
     CGroup: /system.slice/filebeat.service
             └─4171761 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat

Jul 11 11:30:19 wazuh filebeat[4171761]: 2023-07-11T11:30:19.352+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:22 wazuh filebeat[4171761]: 2023-07-11T11:30:22.352+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:22 wazuh filebeat[4171761]: 2023-07-11T11:30:22.352+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:22 wazuh filebeat[4171761]: 2023-07-11T11:30:22.352+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:25 wazuh filebeat[4171761]: 2023-07-11T11:30:25.354+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:25 wazuh filebeat[4171761]: 2023-07-11T11:30:25.354+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:25 wazuh filebeat[4171761]: 2023-07-11T11:30:25.354+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:28 wazuh filebeat[4171761]: 2023-07-11T11:30:28.356+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:28 wazuh filebeat[4171761]: 2023-07-11T11:30:28.356+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>
Jul 11 11:30:28 wazuh filebeat[4171761]: 2023-07-11T11:30:28.357+0700        WARN        [elasticsearch]        elasticsearch/client.go:414        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Date(2023, time.>

and it says that the cluster has reached the maximum number of open shards. Is that what's causing the dashboard not to show the events?
Thanks

Harshal Paliwal

Jul 11, 2023, 2:51:53 AM
to Wazuh mailing list
Hi Team, thanks for the update.
If you are getting the error below ("this cluster currently has [1000]/[1000] maximum shards open"), you can resolve it by following the process described here. If it's a different error, please share it with us so we can suggest a solution accordingly.
type":"validation_exception","reason":"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception"...
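To confirm how close the cluster is to the limit, you can compare the open shard count from _cluster/health against the configured maximum. A minimal sketch using a canned response (the numbers are illustrative; on a live node you would fetch it with curl -s "localhost:9200/_cluster/health"):

```shell
# Canned response; active_shards counts all open primary and replica shards.
health='{"cluster_name":"wazuh","status":"yellow","active_shards":1000}'
limit=1000   # default shard limit for a single-node cluster

# Extract the active_shards value without depending on jq.
shards=$(printf '%s' "$health" | grep -o '"active_shards":[0-9]*' | grep -o '[0-9]*$')
if [ "$shards" -ge "$limit" ]; then
  echo "shard limit reached: $shards/$limit -- new indices will be rejected"
else
  echo "headroom left: $shards/$limit shards open"
fi
```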
There are two possible solutions:
  • Increase the shards limit.
  • Reduce the number of shards.
Increase the shards limit:
This option will quickly resolve the issue, but it is not advisable in the long run, as it can bring more problems in the future. However, here is how to do it in case it is needed.
The setting responsible for this limit is cluster.max_shards_per_node. (Note: cluster.routing.allocation.total_shards_per_node is a different, per-node allocation setting and does not control this error.)
You can change the setting through the Wazuh indexer API, either from the Dev tools option within the management section of the Wazuh dashboard:
PUT _cluster/settings { "persistent" : { "cluster.max_shards_per_node" : 1200 } }
or with curl directly from a terminal:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "persistent" : { "cluster.max_shards_per_node" : 1200 } } '
Reduce the number of shards:
Reaching the shard limit usually means no retention policy is applied to the environment. This can lead to data being stored forever and eventually cause the system to fail.
To reduce the number of shards, you need to delete old indices. First, check which indices are stored in the environment; the following API call can help:
GET _cat/indices
Then delete the indices that are no longer needed, starting with the oldest. Bear in mind that deleted data cannot be recovered unless you have backups, either as snapshots or as Wazuh alert backups.
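As an illustration, here is a sketch of selecting old alert indices for deletion from a _cat/indices listing. The index names and the cutoff month are canned samples; on a live node the list would come from curl -s "localhost:9200/_cat/indices/wazuh-alerts-*?h=index":

```shell
# Canned index names; Wazuh alert indices embed their date as YYYY.MM.DD.
indices='wazuh-alerts-4.x-2023.05.01
wazuh-alerts-4.x-2023.06.15
wazuh-alerts-4.x-2023.07.10'

keep_month='2023.07'   # keep only the current month in this sketch

# Everything not matching the kept month is a deletion candidate.
printf '%s\n' "$indices" | grep -v "$keep_month" | while read -r idx; do
  # On a live cluster: curl -X DELETE "localhost:9200/$idx"
  echo "would delete: $idx"
done
```

The echo is a dry run on purpose: review the candidate list before issuing any DELETE calls.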
The API call to delete an index is:
DELETE <index_name>

We always recommend this second option. Hope this information helps. Please feel free to reach out to us with any questions or issues.

Akhmad Fadhil

Jul 11, 2023, 3:31:32 AM
to Wazuh mailing list
Hi Harshal,
I've reduced the number of shards and the dashboard has started showing events again.
Thanks for the help, it solved my problem.

Harshal Paliwal

Jul 11, 2023, 4:01:02 AM
to Wazuh mailing list
Hi Akhmad,

Thanks for the update. Glad to hear that your issue is resolved now.
Please don't hesitate to contact us if you're having any problems.

Regards,
Harshal Paliwal