--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-symp...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
You can already run JFR as a direct-attach agent, and you can use both configuration and external controls to decide how and what it will capture. The JFR recording is a binary file that currently only JMC can read, though it is possible for other tools to read it as well. While I love these features in JFR, I find the corresponding views in JMC noisy, resulting in confusing views full of jargon that doesn't really map very well to the language developers use. For example, the allocations view shows TLAB allocations, yet it is completely missing an allocation-frequency view. In a recent survey of attendees of my performance tuning talk, only a couple of people actually knew what a TLAB was. Now, one might argue that it is the role of an execution profiler to pick up on frequency events. However, execution-profiler views on allocations are somewhat less than ideal. In my experience, developers are less likely to pick up on an allocation issue with an execution profiler than they are when looking at the problem through the lens of an allocation profiler.
Furthermore, in my testing of JMC/JFR, it failed to identify the top bottlenecks in an application I use as part of my performance tuning workshop. In each successive round of tuning, the top bottleneck was either buried in the noise floor, buried in a view that was difficult to understand or difficult to find, or missed completely.
On May 4, 2016, at 10:59 AM, Richard Warburton <richard....@gmail.com> wrote:
Hi,
> While I love these features in JFR, I find the corresponding views in JMC noisy, resulting in confused views that are full of jargon that doesn't really map very well with the language developers use. For example, the allocations view shows TLAB allocations however it is completely missing an allocation frequency view.
Worth noting that the allocations view in JMC does have a total allocated size and an average size, so you can work out the number of allocations! I agree it's less than ideal.
I'm not sure what your complaint is around allocation vs. execution, though.
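To make the "work out the number of allocations" step concrete, here is a minimal sketch; the figures are made up for illustration, not taken from any real JMC recording:

```java
// Derive an approximate allocation count from the two figures JMC's
// allocation view does report: total allocated size and average
// allocation size. All numbers here are hypothetical.
public class AllocationCount {
    static long allocationCount(long totalAllocatedBytes, long averageSizeBytes) {
        return totalAllocatedBytes / averageSizeBytes;
    }

    public static void main(String[] args) {
        long total = 48_000_000L; // hypothetical "total allocated" figure
        long avg = 24L;           // hypothetical "average size" figure
        System.out.println(allocationCount(total, avg)); // prints 2000000
    }
}
```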
JMC does have a memory allocation profiler, and you can capture stack traces with it to see the root cause of allocation problems.
Not only that but many memory profilers bytecode weave code that causes more allocations, as Nitsan pointed out on his blog.
> Furthermore, in my testing of JMC/JFR, it failed to identify the top bottlenecks in an application I use as part of my performance tuning workshop. In each successive round of tuning the top bottleneck was either buried in the noise floor, buried in a view that was either difficult to understand or difficult to find, or completely missed.
I appreciate you may not want to talk about the problems in your workshop publicly, but can you describe the failure to find the bottlenecks in more detail?
> I'm not sure what your complaint is around allocation vs execution though.
Not really sure how best to describe it. It's more of an observation based on years of coming into projects where people have not been able to find the root cause because they were using the wrong lens (a profiler without the right view).
> Not only that but many memory profilers bytecode weave code that causes more allocations, as Nitsan pointed out on his blog.
True, bytecode weaving will result in higher allocation rates, and it disturbs escape analysis, which affects how the allocators decide how to allocate, and so on. However, it's still been a very effective way of finding allocation hotspots.
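A rough sketch of the escape-analysis disturbance being described; the weaving hook `recordAllocation` is hypothetical, standing in for whatever bytecode a real profiler would inject:

```java
// Illustration (hypothetical code, not from any real profiler): a
// Point allocated per call and never published can be scalar-replaced
// by the JIT's escape analysis, so no heap allocation happens at all.
// A weaving profiler that passes the new object to a recording hook
// makes it escape, forcing a real heap allocation and changing the
// very behavior being measured.
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Original code: Point never escapes, so it is a candidate for
    // scalar replacement.
    static int lengthSquared(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    // After (hypothetical) weaving: the hook call makes p escape,
    // so the allocation must actually happen on the heap.
    static int lengthSquaredWoven(int x, int y) {
        Point p = new Point(x, y);
        recordAllocation(p); // inserted by the profiler
        return p.x * p.x + p.y * p.y;
    }

    static void recordAllocation(Object o) { /* profiler bookkeeping */ }

    public static void main(String[] args) {
        System.out.println(lengthSquared(3, 4));      // prints 25
        System.out.println(lengthSquaredWoven(3, 4)); // prints 25
    }
}
```

Both methods compute the same result; the difference is only visible to the JIT's escape analysis, which is the point being made about measurement distortion.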
> I appreciate you may not want to talk about the problems in your workshop publicly, but can you describe the failure to find the bottlenecks in more detail?
It missed on hot locks…
On May 4, 2016, at 1:03 PM, Richard Warburton <richard....@gmail.com> wrote:
Hi,
> Not really sure how best to describe it. It's more of an observation based on years of coming into projects where people have not been able to find the root cause because they were using the wrong lens.
You think people should be using memory profiling more and execution profiling less, right? Isn't that an educational problem rather than a tooling problem? I.e., as long as JMC ships with a memory profiler it's doing its job.
> It missed on hot locks…
Were the locks JVM intrinsic monitors or Java 5 locks? I ask because a weakness of SIGPROF-based profilers is their inability to sample sleeping code, which is what you get with Java 5 locks. If it was intrinsic monitors then I don't know what's going on. In any case, I don't think a SIGPROF-based profiler is as good a tool for finding lock-based problems as it is for finding execution problems.
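For readers following along, a minimal sketch of the two lock flavors being contrasted here; this is illustrative code, not taken from the workshop application in question:

```java
// "JVM intrinsic monitor" means a synchronized block; "Java 5 lock"
// means a java.util.concurrent lock such as ReentrantLock. A thread
// blocked on a contended ReentrantLock parks in LockSupport.park(),
// i.e. it is off-CPU, so a SIGPROF-driven sampler that only fires on
// running threads will rarely attribute time to that contention.
import java.util.concurrent.locks.ReentrantLock;

public class LockFlavors {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private long counter;

    long incrementWithMonitor() {
        synchronized (monitor) {   // JVM intrinsic monitor
            return ++counter;
        }
    }

    long incrementWithLock() {
        lock.lock();               // "Java 5" lock: parks the thread when contended
        try {
            return ++counter;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        LockFlavors f = new LockFlavors();
        System.out.println(f.incrementWithMonitor()); // prints 1
        System.out.println(f.incrementWithLock());    // prints 2
    }
}
```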