MaxHeapSize  | 385,875,968 bytes
UsedHeapSize |  86,126,752 bytes
expression1 && expression2
    True if both expression1 and expression2 are true.

The && and || operators do not evaluate expression2 if the value of
expression1 is sufficient to determine the return value of the entire
conditional expression.
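In shell terms, that short-circuit rule means the right-hand command only runs when its result can still change the outcome. A minimal sketch:

```shell
# Short-circuit evaluation: the right side runs only when it matters.
false && echo "not printed"   # && skips the right side after a failure
true  || echo "not printed"   # || skips the right side after a success
true  && echo "A"             # left side succeeded, so the right side runs
false || echo "B"             # left side failed, so the right side runs
```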
Don't forget that your Java process uses more memory than just the heap. There's also the PermGen, thread stacks, native code and some other things. All of these must add up to be less than your limit of 512M.
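As a rough, hypothetical budget for a 512M container (the heap and PermGen sizes match the -Xmx384M and -XX:MaxPermSize=52428K flags in the start command quoted in this thread; the thread count is purely an assumption), the pieces add up quickly:

```shell
# Back-of-the-envelope memory budget for a 512M container.
# Heap and PermGen are taken from the app's start command; the
# thread count is a guess, since it depends on the container.
HEAP_MB=384
PERMGEN_MB=52
THREADS=50          # assumption; a JOnAS container may create more
STACK_MB=1          # -Xss1M per thread
TOTAL_MB=$((HEAP_MB + PERMGEN_MB + THREADS * STACK_MB))
echo "committed: ${TOTAL_MB}M of 512M"
# Native code, JIT caches, and NIO buffers still have to fit in the rest.
```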
If you want to clean up your shell script, create a template shell script that you ship with your build pack. Merge the template with data during the compile script, and output the path to the generated script from your build pack's release script.
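A hypothetical sketch of that flow — the file names and the @JAVA_OPTS@ placeholder are illustrative, not part of any build pack API:

```shell
# Template shipped with the build pack (start.sh.tmpl is a made-up name):
cat > start.sh.tmpl <<'EOF'
#!/bin/sh
exec java @JAVA_OPTS@ -jar app.jar
EOF

# "compile" step: merge computed data into the template.
mkdir -p build
JAVA_OPTS='-Xmx384M -Xss256k'
sed "s|@JAVA_OPTS@|$JAVA_OPTS|" start.sh.tmpl > build/start.sh
chmod +x build/start.sh

# "release" step: point the platform at the generated script.
echo "web: ./build/start.sh"
```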
> I indeed see a trace "Killed" in stderr but no more details [2].
What else are you expecting to see here?
>
> [1]:
>
> $ cf files myapp staging_info.yml
> Getting file contents... OK
>
> ---
> detected_buildpack:
> start_command: JAVA_HOME=.java JAVA_OPTS="-Dhttp.port=$PORT -XX:MaxPermSize=52428K
> -XX:OnOutOfMemoryError=.buildpack-diagnostics/killjava -Xmx384M -Xss1M"
Your -Xss value is huge. Each thread created by your application is going to consume 1M of thread stack, which counts against your total memory limit. In most Java apps you can get away with a quarter of that or less. Try something like 192k or 256k.
Again, I'm assuming the java_buildpack heuristics are correct, but my app (running a JOnAS container) might instantiate more threads than Tomcat, so that could be an explanation; I'll try reducing that. It's odd, though, that "cf stats" does not show larger memory usage in the few cases when the app starts properly.
The key word here is heuristic. The build pack does its best to calculate these numbers correctly, but it's just making an educated guess. Sometimes you need to adjust further by hand.
All this tells us is what the memory usage was when you ran "cf stats". Because "cf events" is telling us that your app was killed for exceeding its memory limit, we know that at some point after you ran "cf stats" the app increased its memory usage and eventually exceeded 512M.
> Besides, this would not explain why cf stats cmd [4] reports a low memory usage.
[14] https://www.kernel.org/doc/Documentation/cgroups/memory.txt (section "5.2 stat file")
[15] https://groups.google.com/forum/#!topic/slurm-devel/E48fd0gpoys
I'm probably missing something, but reading the code against the documentation suggests a mismatch: the stat_collector multiplies the rss cgroup metric by 1024, whereas the cgroups documentation describes that value as already being in bytes.
https://github.com/cloudfoundry/dea_ng/blob/master/lib/dea/stat_collector.rb#L33
@used_memory_in_bytes = info.memory_stat.rss * 1024
https://www.kernel.org/doc/Documentation/cgroups/memory.txt
rss - # of bytes of anonymous and swap cache memory (includes transparent hugepages).
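If the documentation is right, that multiplication would inflate rss by a factor of 1024. A small sketch of the suspected double-scaling, reusing the UsedHeapSize figure from earlier in the thread as a stand-in rss value:

```shell
# Suspected double-scaling: memory.stat's "rss" field is already in
# bytes, so multiplying it by 1024 over-reports usage by 1024x.
RSS_BYTES=86126752                 # sample value; real input comes from memory.stat
REPORTED=$((RSS_BYTES * 1024))     # what stat_collector.rb#L33 computes
echo "reported ${REPORTED} bytes for an actual ${RSS_BYTES} bytes"
```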