Hello John,
Let's take a look at the memory issue. If you can send me the configuration changes you made, that will help.
Regarding your questions:
- After making the changes from points 1 and 2, I am not observing the expected results after performing the step:
- Could you please clarify how I should configure two rules to comply with the two points below?
- 1) Use no more than 50% of available RAM.
- 2) Use no more than 32 GB.
- The issue I am facing is that after restarting the Elasticsearch services, the "mlockall" value is "false" instead of "true".
If you see that mlockall is false, it means that the mlockall request has failed. You will also see a line with more information in the logs containing the words "Unable to lock JVM Memory".
The most probable reason on Linux/Unix systems is that the user running Elasticsearch doesn't have permission to lock memory. This can be granted as follows:
For .zip and .tar.gz installations: run ulimit -l unlimited as root before starting Elasticsearch. Alternatively, set memlock to unlimited in /etc/security/limits.conf:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Also make sure that memory locking is actually requested, by setting bootstrap.memory_lock: true in the elasticsearch.yml configuration file.
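Note that if Elasticsearch was installed from a DEB/RPM package and runs as a systemd service, limits.conf is ignored for the service, and the memlock limit has to be raised through a systemd override instead (the path below follows the standard systemd drop-in convention):

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

After creating the file, run systemctl daemon-reload and restart Elasticsearch so the new limit takes effect.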
What should I configure in place of -Xms4g and -Xmx4g?
-Xms4g and -Xmx4g set how much RAM, in GB, is allocated to the JVM heap; for example, right now your heap size is 4 GB.
The ideal heap size is somewhere below 32 GB, as heap sizes above 32 GB become less efficient.
What these recommendations mean is that on a 64 GB host, we dedicate 32 GB to the Elasticsearch heap and leave 32 GB to the operating system. For example, on a managed 128 GB cluster, two 64 GB nodes would be created, each with 32 GB reserved for the Elasticsearch heap and 32 GB for the operating system. That sizing does not apply directly here, since you have a single-host environment.
So for your system you could try adding
-Xms16g and -Xmx16g, or -Xms26g and -Xmx26g (make sure the value is not above 32 GB). However, it is important that you grant the right permissions first, as explained above.
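To satisfy both of your two rules at once (no more than 50% of RAM, and below the 32 GB threshold), you can derive the heap size from the host's memory. A minimal sketch for Linux; the variable names are just illustrative, and it assumes the host has at least a few GB of RAM:

```shell
# Total RAM in kB, from /proc/meminfo
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

# 50% of RAM, rounded down to whole GB
half_gb=$(( total_kb / 1024 / 1024 / 2 ))

# Cap at 31 GB to stay safely below the 32 GB threshold
heap_gb=$(( half_gb < 31 ? half_gb : 31 ))

# These are the two values to put in jvm.options
echo "-Xms${heap_gb}g"
echo "-Xmx${heap_gb}g"
```

On a 64 GB host this prints about -Xms31g and -Xmx31g; the two printed lines are what would replace -Xms4g and -Xmx4g in jvm.options.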
Further, I have also gone through the link you provided on deciding how many shards; that article left me unsure how to manage data across Elasticsearch. Could you please suggest 1) how many shards and 2) what shard size should be configured with the deployment model below?
By default, Wazuh comes with a limit of 1000 shards, and it is not recommended to increase this number.
The best way to keep the shard count below 1000 is to add data retention policies to your environment.
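To see how close you are to that limit, you can list the cluster's shards with the _cat/shards API (one line per shard) and count the lines. In practice that would be: curl -s "localhost:9200/_cat/shards?h=index" | wc -l, with credentials/TLS flags added if your deployment uses them. The sketch below counts shards from an inlined sample of that output, so the index names shown are purely hypothetical:

```shell
# Sample _cat/shards output, one line per shard (index names are hypothetical)
sample='wazuh-alerts-2023.01.01 0 p STARTED
wazuh-alerts-2023.01.01 0 r UNASSIGNED
wazuh-alerts-2023.01.02 0 p STARTED'

# Each output line is one shard, so counting lines gives the shard count
shard_count=$(printf '%s\n' "$sample" | wc -l)
echo "$shard_count"   # 3 shards in the sample
```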
It is recommended to set up a crontab script to remove old logs from /var/ossec/logs/alerts as well. To edit the crontab, run: crontab -e. It will open your crontab file, where you will be able to add the commands you need.
Here is an example crontab entry:
45 0 * * * find /var/ossec/logs/alerts/ -name "*.gz" -type f -mtime +365 -exec rm -f {} \;
Every day at 00:45, this entry removes all .gz files older than 365 days from /var/ossec/logs/alerts/. You can modify the number of days based on your needs.
Once you make your changes you just need to save the file.
Hope this helps,
Best regards.
Andre Cortes