Filebeat filling daemon.log and syslog

Leonardo Ventura

Feb 10, 2022, 2:12:26 PM
to wa...@googlegroups.com
Hello Guys!

How are you doing?

Filebeat is filling daemon.log and syslog with \\\\\ characters. When we stop the service, the logging stops. Here's the screen:

[screenshot attached: image.png]

We got this message too:

[screenshot attached: image.png]

What can we do to stop this?

Thanks!

Alexander Bohorquez

Feb 10, 2022, 2:47:51 PM
to Wazuh mailing list
Hello Leonardo,

Thank you for using Wazuh!

The log entries you've shared appear when Filebeat is unable to index data into Elasticsearch. This can happen for several reasons, such as a disk space issue or the cluster reaching its shard limit, among others.

Based on the screenshots you've shared, the error is related to your cluster having the maximum shards opened.

Your cluster has reached its maximum shard limit. By default, the limit is 1,000 shards per node. To fix this issue, you have multiple options:
  • Delete indices. This frees shards. You could do it with old indices you don't want or need, or even automate it with ILM policies that delete old indices after a period of time, as explained in this post: https://wazuh.com/blog/wazuh-index-management.
  • Add more nodes to your Elasticsearch cluster.
  • Increase the max shards per node (not recommended). If you choose this option, make sure you do not increase it too much, as an excessive value can cause performance and stability issues in your Elasticsearch cluster. To do this:
curl -k -u USERNAME:PASSWORD -XPUT ELASTICSEARCH_HOST_ADDRESS/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "MAX_SHARDS_PER_NODE" } }'

Replace the placeholders, where:

USERNAME : username to do the request
PASSWORD : password for the user
ELASTICSEARCH_HOST_ADDRESS : Elasticsearch host address. Include the HTTPS protocol if needed.
MAX_SHARDS_PER_NODE : maximum shards per node. You could try 1,200 or something similar, depending on your case. I would also recommend reviewing your architecture: if a small environment is reaching 1,000 shards, you probably have more shards than you need.
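To confirm that the cluster really is at the shard limit before (or after) changing the setting, you can compare the active shard count against the per-node limit. Here is a minimal sketch in Python; the response values are illustrative assumptions, and in practice you would fetch the JSON with `curl -k -u USERNAME:PASSWORD ELASTICSEARCH_HOST_ADDRESS/_cluster/health`:

```python
# Sketch: compute shard headroom from an Elasticsearch _cluster/health response.
# The dictionary below stands in for the parsed JSON response; the numbers are
# made up for illustration.
health = {
    "number_of_nodes": 1,    # nodes in the cluster
    "active_shards": 998,    # shards currently allocated
}

MAX_SHARDS_PER_NODE = 1000   # Elasticsearch default for cluster.max_shards_per_node

# The cluster-wide shard capacity is the per-node limit times the node count.
capacity = health["number_of_nodes"] * MAX_SHARDS_PER_NODE
headroom = capacity - health["active_shards"]

print(f"{health['active_shards']}/{capacity} shards used, {headroom} remaining")
```

Because the cluster-wide limit is the per-node setting multiplied by the number of nodes, adding nodes (the second option above) raises total capacity without touching `cluster.max_shards_per_node` at all.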

I hope this information helps. Please let me know how it goes!
