Unable to lock JVM memory

Andrehens Chicfici

Jul 25, 2024, 6:19:39 AM
to Wazuh | Mailing List
Hey there,

When checking the wazuh-cluster log, I see errors relating to JVM memory and heap size, in particular around HeapDumpOnOutOfMemoryError. The full error log is at the end of my post.

This problem is also mentioned here https://groups.google.com/g/wazuh/c/7gAVdQeGl_o/m/Cyw9zDE0AgAJ but the workaround doesn't seem to work on my system, and I was advised to open a new thread.

I already edited /etc/wazuh-indexer/jvm.options and set Xms4g (initial heap size) and Xmx4g (maximum heap size) to half of my system memory, but this didn't change the error messages. The error messages also recommend changing /etc/security/limits.conf, but I am not sure how to go about that.
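From what I gather, the entries the warning hints at would look something like this in /etc/security/limits.conf, though I'm only guessing that wazuh-indexer is the right user name for my setup:

```
# /etc/security/limits.conf — guessed entries; the user name may differ per setup
wazuh-indexer soft memlock unlimited
wazuh-indexer hard memlock unlimited
```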





[2024-07-24T14:39:19,692][WARN ][o.o.b.JNANatives         ] [node-1] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory

[2024-07-24T14:39:19,695][WARN ][o.o.b.JNANatives         ] [node-1] This can result in part of the JVM being swapped out.

[2024-07-24T14:39:19,695][WARN ][o.o.b.JNANatives         ] [node-1] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536

[2024-07-24T14:39:19,695][WARN ][o.o.b.JNANatives         ] [node-1] These can be adjusted by modifying /etc/security/limits.conf, for example:

[2024-07-24T14:39:19,696][WARN ][o.o.b.JNANatives         ] [node-1] If you are logged in interactively, you will have to re-login for the new limits to take effect.

[2024-07-24T14:39:19,817][INFO ][o.o.n.Node               ] [node-1] JVM arguments [-Xshare:auto, -Dopensearch.networkaddress.cache.ttl=60, -Dopensearch.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms4g, -Xmx4g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/opensearch-7420340218558769111, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/wazuh-indexer, -XX:ErrorFile=/var/log/wazuh-indexer/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/wazuh-indexer/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///etc/wazuh-indexer/opensearch-performance-analyzer/opensearch_security.policy, --add-opens=jdk.attach/sun.tools.attach=ALL-UNNAMED, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///usr/share/wazuh-indexer/plugins/opendistro-performance-analyzer/pa_config/es_security.policy, -XX:MaxDirectMemorySize=2147483648, -Dopensearch.path.home=/usr/share/wazuh-indexer, -Dopensearch.path.conf=/etc/wazuh-indexer, -Dopensearch.distribution.type=rpm, -Dopensearch.bundled_jdk=true]

[2024-07-24T14:39:21,341][WARN ][o.o.s.OpenSearchSecurityPlugin] [node-1] Directory /etc/wazuh-indexer/opensearch-performance-analyzer/backup has insecure file permissions (should be 0700)

[2024-07-24T14:39:25,529][WARN ][o.o.s.c.Salt             ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes

[2024-07-24T14:39:25,585][ERROR][o.o.s.a.s.SinkProvider   ] [node-1] Default endpoint could not be created, auditlog will not work properly.

[2024-07-24T14:39:25,586][WARN ][o.o.s.a.r.AuditMessageRouter] [node-1] No default storage available, audit log may not work properly. Please check configuration.

[2024-07-24T14:39:26,783][WARN ][o.o.s.p.SQLPlugin        ] [node-1] Master key is a required config for using create and update datasource APIs. Please set plugins.query.datasources.encryption.masterkey config in opensearch.yml in all the cluster nodes. More details can be found here: https://github.com/opensearch-project/sql/blob/main/docs/user/ppl/admin/datasources.rst#master-key-config-for-encrypting-credential-information

[2024-07-24T14:39:27,248][WARN ][o.o.p.c.ThreadPoolMetricsCollector] [node-1] Fail to read queue capacity via reflection

[... the same ThreadPoolMetricsCollector warning repeats 35 more times between 14:39:27,279 and 14:39:27,285 ...]

[2024-07-24T14:39:28,070][WARN ][o.o.g.DanglingIndicesState] [node-1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually

[2024-07-24T14:39:28,945][WARN ][o.o.b.BootstrapChecks    ] [node-1] memory locking requested for opensearch process but memory is not locked

[2024-07-24T14:39:29,265][WARN ][o.o.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-1] Config override setting update called with empty string. Ignoring.

[2024-07-24T14:39:29,481][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-07-24T14:39:29,494][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-07-24T14:39:29,496][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-07-24T14:39:29,498][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-07-24T14:39:29,507][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] get managed-index failed: NoShardAvailableActionException[No shard available for [org.opensearch.action.get.MultiGetShardRequest@3a7e91ee]]

[2024-07-24T14:39:29,532][WARN ][o.o.o.i.ObservabilityIndex] [node-1] message: index [.opensearch-observability/WXCWZgv0Q9CXAAztLQN8Cg] already exists

[2024-07-24T14:39:29,534][WARN ][o.o.s.SecurityAnalyticsPlugin] [node-1] Failed to initialize LogType config index and builtin log types

[2024-07-24T14:39:29,543][ERROR][o.o.i.i.ManagedIndexCoordinator] [node-1] Failed to get ISM policies with templates: Failed to execute phase [query], all shards failed

[2024-07-24T14:39:29,783][ERROR][o.o.s.a.BackendRegistry  ] [node-1] Not yet initialized (you may need to run securityadmin)

[2024-07-24T14:39:32,122][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-07-24T14:39:34,638][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-07-24T14:39:37,145][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

[2024-07-24T14:39:39,652][WARN ][r.suppressed             ] [node-1] path: /.kibana/_count, params: {index=.kibana}

Stuti Gupta

Jul 25, 2024, 6:35:57 AM
to Wazuh | Mailing List
Hi Andrehens Chicfici,

It seems that you are having a memory lock problem in your environment. You need to enable memory locking to give the wazuh-indexer (OpenSearch) process the permission to lock memory, which it currently does not have.

1. Add the below line to the /etc/wazuh-indexer/opensearch.yml configuration file on the Wazuh indexer to enable memory locking:
bootstrap.memory_lock: true

2. Modify the limit of system resources. Configuring system settings depends on the operating system of the Wazuh indexer installation.
Create a new directory for the file that specifies the system limits:
mkdir -p /etc/systemd/system/wazuh-indexer.service.d/
Run the following command to create the wazuh-indexer.conf file in the newly created directory with the new system limit added:
cat > /etc/systemd/system/wazuh-indexer.service.d/wazuh-indexer.conf << EOF
[Service]
LimitMEMLOCK=infinity
EOF
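To confirm the new limit actually reached the running process after the restart, you can read /proc/&lt;pid&gt;/limits. A small sketch; the awk field position assumes the standard column layout of /proc/&lt;pid&gt;/limits, and the pgrep pattern in the usage comment is a guess:

```shell
# max_locked(): print the soft "Max locked memory" limit from a /proc/<pid>/limits
# dump read from stdin (the value sits in the 4th whitespace-separated column)
max_locked() { awk '/Max locked memory/ {print $4}'; }

# Example usage against the running indexer (pgrep pattern is a guess):
#   max_locked < /proc/"$(pgrep -f wazuh-indexer | head -n1)"/limits
# After applying LimitMEMLOCK=infinity it should print "unlimited" instead of 65536.
```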

3. Edit the /etc/wazuh-indexer/jvm.options file and change the JVM flags. Set a Wazuh indexer heap size value to limit memory usage; JVM heap limits prevent OutOfMemory exceptions. The JVM heap size of a data node is set to half the size of physical memory (RAM), up to 32 GB. For example, if a node has 128 GB of RAM, the heap size is still 32 GB (the maximum heap size); otherwise, the heap size is half the physical memory. You can change the total heap space via the /etc/wazuh-indexer/jvm.options file. Here is an example of an increase from ~2 GB to 4 GB:
# -Xms1931m
# -Xmx1931m
-Xms4g
-Xmx4g
Warning: To prevent performance degradation due to JVM heap resizing at runtime, the minimum (Xms) and maximum (Xmx) size values must be the same.
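The half-of-RAM-capped-at-32-GB rule can be sketched as a small shell helper (heap_for is just an illustrative name, not part of any Wazuh tooling):

```shell
# heap_for: recommended heap size given RAM in GB — half of RAM, capped at 32 GB
heap_for() {
  ram_gb=$1
  heap=$(( ram_gb / 2 ))
  [ "$heap" -gt 32 ] && heap=32
  echo "${heap}g"
}

heap_for 8     # → 4g  (half of 8 GB)
heap_for 128   # → 32g (capped at the 32 GB maximum)
```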

4. Restart the Wazuh indexer service:
systemctl daemon-reload
systemctl restart wazuh-indexer


5. Verify that the setting was changed successfully, by running the following command to check that mlockall value is set to true:

curl -k -u <INDEXER_USERNAME>:<INDEXER_PASSWORD> "https://<INDEXER_IP_ADDRESS>:9200/_nodes?filter_path=**.mlockall&pretty"
Output
{
  "nodes" : {
    "sRuGbIQRRfC54wzwIHjJWQ" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}

If the output is false, the request has failed, and the following line appears in the /var/log/wazuh-indexer/wazuh-indexer.log file:

Unable to lock JVM Memory
Refer to: https://documentation.wazuh.com/current/user-manual/wazuh-indexer/wazuh-indexer-tuning.html
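If you'd rather not read the JSON by eye, a quick grep over the _nodes response works too. This is only a sketch, not a full JSON parse:

```shell
# check_mlockall: read the _nodes response on stdin and report whether any node
# reports "mlockall" : true (simple grep sketch, not a real JSON parser)
check_mlockall() { grep -q '"mlockall" *: *true' && echo "mlockall enabled" || echo "mlockall DISABLED"; }

# Example usage (credentials/host as in the curl command above):
#   curl -sk -u <INDEXER_USERNAME>:<INDEXER_PASSWORD> \
#     "https://<INDEXER_IP_ADDRESS>:9200/_nodes?filter_path=**.mlockall&pretty" | check_mlockall
```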

If the issue is still persistent, then please check the disk space and memory with the following commands:
free -h
df -h

Please check your cluster health as well:
curl -k -u admin:<Password> https://localhost:9200/_cluster/health?pretty=true
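To pull just the status field (green/yellow/red) out of that health response, a small stdin filter can help. Again a grep/cut sketch rather than a real JSON parser:

```shell
# health_status: extract the "status" value from a _cluster/health response on stdin
health_status() { grep -o '"status" *: *"[a-z]*"' | head -n1 | cut -d'"' -f4; }

# Example usage:
#   curl -sk -u admin:<Password> "https://localhost:9200/_cluster/health?pretty=true" | health_status
```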

Hope to hear from you soon 

Andrehens Chicfici

Aug 8, 2024, 6:03:26 AM
to Wazuh | Mailing List
Hey,

the changes seemed to work!

I just get

[2024-08-08T03:44:45,836][ERROR][o.o.a.a.AlertIndices     ] [node-1] info deleteOldIndices
[2024-08-08T03:44:45,842][ERROR][o.o.a.a.AlertIndices     ] [node-1] info deleteOldIndices

when I check the wazuh-cluster logs with 'tail -n 2000000 /var/log/wazuh-indexer/wazuh-cluster.log | grep -i -E "error|warn"' now. I guess those are just informational messages, although they're labeled as ERROR.
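In case anyone else lands here: to hide those benign lines in future scans, I now filter them out (just my own tweak, wrapped in a little function):

```shell
# scan_log: filter a log stream (stdin) for errors/warnings, dropping the benign
# AlertIndices "info deleteOldIndices" lines
scan_log() { grep -iE "error|warn" | grep -v "info deleteOldIndices"; }

# Example usage:
#   tail -n 2000000 /var/log/wazuh-indexer/wazuh-cluster.log | scan_log
```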

cheers

chic