No logs shown in the Security events


Fawwas Hamdi

Mar 17, 2023, 3:28:32 AM
to Wazuh mailing list
I hope someone can help me with this issue. My Wazuh installation is based on the CentOS VM that is already provided, and for the past 6 months or so it has been running normally, but suddenly no logs have been received since March 15, even though I've done nothing to the VM or the configuration. I'll attach some errors that I found. wazuh-indexer, filebeat.service, wazuh-manager, and wazuh-dashboard all seem to be working just fine.
wazuh cluster error log.txt
journalctl wazuh dashboard log.txt

Fawwas Hamdi

Mar 17, 2023, 3:31:51 AM
to Wazuh mailing list
alert indices results
Capture.PNG

health results
Capture.PNG
Capture.PNG

Alexander Bohorquez

Mar 17, 2023, 8:39:25 AM
to Wazuh mailing list
Hello Fawwas,

Thank you for using Wazuh! 

It seems your Wazuh indexer cluster ran out of shards. This issue typically shows up as the Wazuh indexer no longer ingesting data, which you can observe as an empty Wazuh dashboard. Although that symptom can have several root causes, this is one of the most common.

To confirm it, check the Wazuh indexer logs as follows:

cat /var/log/wazuh-indexer/<cluster_name>.log | grep -i shards

or

cat /var/log/wazuh-indexer/<cluster_name>.log | grep -i error

The log file is named after the cluster name, so in our case (the default) it is wazuh-cluster.log:

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i shards

or

cat /var/log/wazuh-indexer/wazuh-cluster.log | grep -i error
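If the shard limit is the cause, the grep output usually contains a validation error similar to the following (an illustrative line; the bracketed numbers depend on your cluster and its configured limit):

this action would add [3] total shards, but this cluster currently has [1000]/[1000] maximum shards open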

Please execute those commands and let me know the output.

I look forward to your comments!

Fawwas Hamdi

Mar 17, 2023, 8:47:03 AM
to Wazuh mailing list
Hello Alexander,

Thank you for responding. Below I attach the logs:
wazuh indexer log.txt
wazuh cluster error logs.txt

Fawwas Hamdi

Mar 19, 2023, 7:37:55 PM
to Wazuh mailing list
Is there any update on this?

Alexander Bohorquez

Mar 20, 2023, 4:40:15 PM
to Wazuh mailing list
Hello Fawwas,

I apologize for the delay,

It seems the issue is related to reaching the maximum shards limit. Based on that, I'll recommend the following:

There are two possible solutions:

  • Increase the shards limit.
  • Reduce the number of shards.

Option 1 will quickly solve the problem, but it is not advisable in the long run as it will bring more problems in the future. However, this guide explains how to do it in case it is needed.

The following setting is the one responsible for this limit: cluster.max_shards_per_node (its default value is 1000 shards per data node).

It is possible to change this setting using the Wazuh indexer API. You can either use the Dev Tools option within the Management section of the Wazuh dashboard:

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}


or curl the API directly from a terminal:

curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}
'


Querying the API from the terminal requires using credentials to authenticate, and it is necessary to specify the IP address of the service too.
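For example, a sketch assuming the default admin user and the Wazuh indexer listening on 192.168.0.10 with self-signed certificates (replace the IP, user, and password with your own):

curl -k -u admin:<password> -X PUT "https://192.168.0.10:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}
'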

Bear in mind that this setting imposes a hard limit: any operation that would create shards beyond it is rejected. Use with caution.

However, I'll recommend Option 2:

Reaching the shard limit usually means no retention policy is applied to the environment, which leads to storing data forever and eventually causes the system to fail.

To reduce the number of shards, it is necessary to delete old indices. First, check which indices are stored in the environment; the following API call can help:

GET _cat/indices
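From a terminal, a sketch of the equivalent call (the v parameter adds column headers and s=index sorts by name; the IP and credentials are placeholders as before):

curl -k -u admin:<password> "https://192.168.0.10:9200/_cat/indices?v&s=index"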

Then, delete the indices that are not needed, starting with the oldest ones. Bear in mind that deleted data cannot be retrieved unless there are backups, either as snapshots or Wazuh alerts backups.

The API call to delete indices is:

DELETE <index_name>
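For instance, a hedged example that deletes all Wazuh alert indices from September 2022 in a single call (the index pattern is illustrative, and wildcard deletion is rejected if action.destructive_requires_name is set to true on your cluster):

DELETE wazuh-alerts-4.x-2022.09.*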


By deleting indices, you will free up shards and the cluster will have more space to continue allocating indices.

Prevention
The next step is to avoid this from happening again. For that reason, it is necessary to walk through the complete resolution of the issue.

Normally, this happens in an all-in-one installation because the default Wazuh template is configured to use 3 shards per index. The first step is to clarify the architecture and the retention policy of the environment.

1. Change the number of shards according to the infrastructure. Although the optimal configuration of shards per index will be addressed in a different article, a good rule of thumb is 1 shard per node. 3 shards should be the maximum; beyond that point, it is necessary to analyze the number of shards case by case.

To change the number of shards, edit the index.number_of_shards value in the /etc/filebeat/wazuh-template.json file (for example, "1" for a single-node cluster):

"settings": {

"index.refresh_interval": "5s",

"index.number_of_shards": "3",


Then, it is necessary to reload the template and restart the Filebeat service:

# filebeat setup --index-management

# systemctl restart filebeat
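To verify that the new template was loaded, you can query it from Dev Tools (a sketch; the template installed by Filebeat is named wazuh) and check that its settings show the expected index.number_of_shards value:

GET _template/wazuh

Note that the change only applies to indices created after the template is reloaded; existing indices keep their original shard count.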


2. Reduce the number of shards and replicas of the rest of the indices, if needed. For instance, it is possible to reduce the shards and replicas from the Wazuh dashboard settings. It will be necessary to analyze each case.

3. Set up a retention policy. This can be achieved using index policies as explained here: https://wazuh.com/blog/wazuh-index-management/
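As a starting point, a minimal sketch of an Index State Management policy that deletes alert indices older than 90 days (the policy name, retention period, and index pattern are assumptions; adjust them to your own retention requirements):

PUT _plugins/_ism/policies/wazuh_alerts_retention
{
  "policy": {
    "description": "Example retention: delete wazuh-alerts indices older than 90 days (adjust as needed)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": { "min_index_age": "90d" }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["wazuh-alerts-*"],
      "priority": 100
    }
  }
}

The ism_template block attaches the policy automatically to newly created indices matching the pattern; existing indices need the policy applied manually, as described in the blog post above.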


I hope this information helps. Please let me know how it goes!

