High memory consumption in jBPM on WildFly during process execution, memory not getting released

Ravi Ghadiya

May 19, 2025, 1:34:08 AM
to jBPM Setup
Hi,

I’m running jBPM on WildFly with the PerProcessInstance runtime strategy inside a Docker container (6 GB memory, 2 CPUs), and I have configured the JVM with:

JAVA_OPTS: "-Xms64m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=512m -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics -XX:+PrintGCDetails -Xloggc:/tmp/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/jbpm/dumps -XX:MaxDirectMemorySize=256m
-Xss256k -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true"
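
For what it's worth, the effective limits can be cross-checked from inside the container with a small sketch like the one below (plain java.lang.management, nothing jBPM-specific; the class name is just mine):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class JvmLimitsCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // -Xmx should show up as the heap "max"; -XX:MaxMetaspaceSize shows up under the Metaspace pool below
        System.out.printf("Heap     committed=%,d max=%,d%n",
                mem.getHeapMemoryUsage().getCommitted(), mem.getHeapMemoryUsage().getMax());
        System.out.printf("Non-heap committed=%,d max=%,d%n",
                mem.getNonHeapMemoryUsage().getCommitted(), mem.getNonHeapMemoryUsage().getMax());

        // Per-pool view: Metaspace, Compressed Class Space, CodeHeap segments, etc.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s committed=%,d max=%,d%n",
                    pool.getName(), pool.getUsage().getCommitted(), pool.getUsage().getMax());
        }
    }
}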

I have set the PerProcessInstance runtime strategy in the project settings, and from my backend app I'm sending around 10 concurrent requests at a time to kie-server to initiate process instances using the signal API. I have to process hundreds of thousands of records, with one process-instance initiation request per record.
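
The code below is only an illustrative sketch of what my backend does, not the actual implementation; the host, container id, signal name and the exact kie-server REST path/verb are placeholders, so check them against your kie-server Swagger docs:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SignalLoad {
    // NOTE: host, container id, signal name and REST path below are placeholders,
    // not my real values; the endpoint and HTTP verb may differ per kie-server version.
    private static final String KIE_SERVER = "http://kie-server:8080/kie-server/services/rest/server";
    private static final String CONTAINER  = "my-container";
    private static final String SIGNAL     = "start-record-processing";

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();

        // ~10 concurrent requests, each one signalling the container to start a process instance for one record
        for (int i = 0; i < 10; i++) {
            String recordPayload = "{\"recordId\": " + i + "}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(KIE_SERVER + "/containers/" + CONTAINER
                            + "/processes/instances/signal/" + SIGNAL))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic ...") // credentials elided
                    .POST(HttpRequest.BodyPublishers.ofString(recordPayload))
                    .build();
            inFlight.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
        }

        // wait for the whole batch before the next batch of records is submitted
        inFlight.forEach(f -> System.out.println(f.join().statusCode()));
    }
}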

Despite the above JVM limits, jBPM is able to serve only around 2k–3k requests before Docker reports memory usage climbing to ~6 GiB and never being released, at which point the container is OOM-killed.
Inside the JVM, Native Memory Tracking shows only the following (the capture command is sketched after the list):
  • Heap: ~2 GiB reserved / ~1.03 GiB committed
  • Metaspace + Class: ~1.38 GiB reserved / ~0.38 GiB committed
  • Code cache: ~0.25 GiB reserved / ~0.15 GiB committed
  • GC, thread stacks, etc.: ~0.2 GiB

— total ≈ 2 GiB.
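
The NMT summary above is the output of jcmd's VM.native_memory summary against the running WildFly process; here is a minimal sketch of one way to capture it, in case the method matters (I normally just run jcmd from a shell inside the container):

import java.io.IOException;

public class NmtSnapshot {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Runs "jcmd <pid> VM.native_memory summary" against the PID given as the first
        // argument, or against this JVM itself if no argument is passed.
        String pid = args.length > 0 ? args[0] : String.valueOf(ProcessHandle.current().pid());
        new ProcessBuilder("jcmd", pid, "VM.native_memory", "summary")
                .inheritIO()   // print the NMT summary straight to stdout
                .start()
                .waitFor();
    }
}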

However, the host cgroup stats report ~5.9 GiB of anonymous RSS, and pmap reports the JVM RSS as ~9.7 GiB.
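
To put a number on the gap, a Linux-only sketch like the one below (run inside the WildFly JVM itself, e.g. from a small test deployment; the class name is mine) compares the kernel's VmRSS for the process with what the JVM reports as committed:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class RssVsHeap {
    public static void main(String[] args) throws IOException {
        // Kernel's view of this JVM process: resident set size from /proc (Linux only)
        long rssKb = Files.readAllLines(Path.of("/proc/self/status")).stream()
                .filter(line -> line.startsWith("VmRSS:"))
                .map(line -> line.replaceAll("\\D+", ""))
                .mapToLong(Long::parseLong)
                .findFirst()
                .orElse(-1);

        // JVM's own view: committed heap + non-heap (Metaspace, class space, code cache);
        // NMT adds thread stacks, GC structures, etc. on top of this.
        long committed = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getCommitted()
                + ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage().getCommitted();

        System.out.printf("VmRSS           : %,d KiB%n", rssKb);
        System.out.printf("JVM committed   : %,d KiB%n", committed / 1024);
        System.out.printf("Unaccounted gap : %,d KiB%n", rssKb - committed / 1024);
    }
}

The difference is the memory that the JVM's own counters don't see.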

Please find the full memory stats in the attached file (jbpm_memory_stats).

I can't figure out what is consuming this extra memory as anonymous RSS outside the JVM's own accounting.

Any pointers, config snippets, or forum threads would be greatly appreciated!


Thanks in advance for your help,
Ravi G.

Attachment: jbpm_memory_stats