|Memory consumption of a Timer?||Scott Mitchell||8/28/12 7:19 AM|
Hi guys, I'm just getting started with the package, and I'm loving it so far. I'm a little concerned I might go too wild with it, though. Does anyone know how much memory is used by a Timer object? I guess what I'm unclear on is how many items are maintained in the values collection for the histogram calculations.
|Re: Memory consumption of a Timer?||Ryan Tenney||8/28/12 1:30 PM|
The amount of memory required varies based on the number of times the timer has been updated and (apparently) the distribution of values it has received. A Timer that has been updated at least 2056 times with random longs occupies at most 88KB (roughly).
I'd share the source code for the program I used to find this answer but it's too embarrassingly bad to make it public. Instead, I'd recommend checking out classmexer (http://www.javamex.com/classmexer/) which provides a method to determine the deep memory usage of an object.
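If you just want a rough idea without setting up an instrumentation agent, here's a toy reflective deep-size estimator. To be clear, this is not classmexer (which uses the Instrumentation API and is far more accurate); the `DeepSize` class and its assumed sizes (16-byte header, 8-byte references) are my own invention for illustration:

```java
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Map;

// Very rough deep-size estimator: walks the object graph reflectively and
// sums per-object estimates (assumed 16-byte header, 8 bytes per reference
// or long, 4 per int/float, etc.). Real layouts depend on the JVM, so treat
// the result as an order-of-magnitude figure, not an exact byte count.
public final class DeepSize {
    public static long of(Object root) {
        if (root == null) return 0;
        Map<Object, Boolean> seen = new IdentityHashMap<>();
        Deque<Object> stack = new ArrayDeque<>();
        stack.push(root);
        long total = 0;
        while (!stack.isEmpty()) {
            Object o = stack.pop();
            if (seen.put(o, Boolean.TRUE) != null) continue; // already counted
            total += sizeOf(o, stack);
        }
        return total;
    }

    private static long sizeOf(Object o, Deque<Object> stack) {
        Class<?> c = o.getClass();
        long size = 16; // assumed object header + alignment
        if (c.isArray()) {
            int len = Array.getLength(o);
            Class<?> comp = c.getComponentType();
            size += (long) len * fieldSize(comp);
            if (!comp.isPrimitive()) {
                for (int i = 0; i < len; i++) {
                    Object v = Array.get(o, i);
                    if (v != null) stack.push(v);
                }
            }
            return size;
        }
        for (Class<?> k = c; k != null; k = k.getSuperclass()) {
            for (Field f : k.getDeclaredFields()) {
                if (Modifier.isStatic(f.getModifiers())) continue;
                size += fieldSize(f.getType());
                if (!f.getType().isPrimitive()) {
                    try {
                        f.setAccessible(true);
                        Object v = f.get(o);
                        if (v != null) stack.push(v);
                    } catch (ReflectiveOperationException | RuntimeException e) {
                        // inaccessible field (e.g. module restrictions): skip it
                    }
                }
            }
        }
        return size;
    }

    private static int fieldSize(Class<?> t) {
        if (t == long.class || t == double.class) return 8;
        if (t == int.class || t == float.class) return 4;
        if (t == short.class || t == char.class) return 2;
        if (t == byte.class || t == boolean.class) return 1;
        return 8; // reference (assumes uncompressed oops)
    }
}
```

With these assumptions, `DeepSize.of(new long[1024])` comes out to 16 + 1024*8 = 8208 bytes, which is in the right ballpark for a real JVM.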
|Re: Memory consumption of a Timer?||Scott Mitchell||8/28/12 4:04 PM|
Thanks Ryan, that's heftier than I would have guessed so I'm glad I asked. I'll check out the link you referenced as well.
|Re: Memory consumption of a Timer?||Ryan Rupp||7/1/13 1:13 AM|
This is an old post, but yes, Timers are fairly expensive. Internally they store 1024 entries (hardcoded; I believe this is just to get a statistically accurate sample) in a ConcurrentSkipListMap<Long, Long> (a map is used for the time-based weighting algorithm that decides removals), which is a relatively expensive data structure. In Metrics 3.0, though, you can define the Reservoir implementation that is used, which in turn lets you define how many entries you store, and there are new types such as the sliding window/fixed size reservoirs that may alleviate this (I haven't looked in too much detail yet).

My team did some memory testing of the different metric types in the past (pre-3.0 changes). The process was pretty primitive: create 10,000 of some probe type, force a few GCs (which moves all referenced objects to old gen), and measure old gen heap usage against a baseline without probes. This should include the footprint of the probe itself plus registering via JMX (metadata overhead?). Here's what we measured per single probe, by type:
Full Timer = ~88KB - assumes you've taken at least 1024 timings, at which point the reservoir is full and the timer starts removing older entries via the removal algorithm
Empty Timer = ~1.8KB - assumes no timings have been made yet. This is mainly interesting if you have a timer that only fires for exceptions or something similarly infrequent, so the reservoir would typically hold very few entries (although you probably don't want to count on that)
Full Biased Histogram = ~87KB - pretty much the same as the timer, since a timer = biased histogram + meter
Full Uniform Histogram = ~9.3KB - the interesting thing here is that the uniform histogram also stores 1024 entries to derive percentiles, but the underlying data structure is an AtomicLongArray, so the lower footprint is probably attributable to using long primitives (vs. Long) and a plain array instead of a ConcurrentSkipListMap
Meter = ~1.8KB - this should just be a few AtomicLong fields, low footprint
Counter = ~0.8KB - should just be a single AtomicLong I believe + JMX metadata
Gauge = ~0.7KB - this was with the gauge all referencing the same constant string value so should just be the footprint of registering via JMX really
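The measurement approach described above can be sketched in plain JDK code. The `ProbeFootprint` class and its `long[128]` stand-in "probes" are my own placeholders; substitute real Timers/Histograms from your metrics library to reproduce the numbers in the list, and note that `System.gc()` is only a hint, so the result is approximate:

```java
import java.util.ArrayList;
import java.util.List;

// Crude footprint measurement: snapshot used heap, allocate N probe-sized
// objects, force GCs, snapshot again, divide the delta by N.
public final class ProbeFootprint {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static void forceGc() throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.gc(); // only a hint to the JVM
            Thread.sleep(50);
        }
    }

    /** Returns an approximate per-object footprint in bytes. */
    static long measure(int n) throws InterruptedException {
        forceGc();
        long baseline = usedHeap();

        // Stand-in "probes": ~1KB each. Swap in real metric objects here.
        List<long[]> probes = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            probes.add(new long[128]);
        }

        forceGc();
        long perProbe = (usedHeap() - baseline) / n;
        if (probes.size() != n) throw new AssertionError(); // keep probes reachable
        return perProbe;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("~" + measure(10_000) + " bytes per probe");
    }
}
```

With 10,000 live ~1KB objects the delta is dominated by the probes themselves, so the per-probe figure should land near 1KB despite GC noise.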
So I guess the main takeaway we had was not to go too crazy with Timers. We were thinking of adding a new "SimpleTimer" that just tracks simple aggregate values (min/max/avg/count) for timings where we don't necessarily need percentiles, though the changes in 3.0 will probably let us do something like that directly. The value in the percentiles, though, is being able to track fluctuating timings/distributions for trending purposes: trending the average of a timer in a JVM that's been alive for a month probably won't tell you much if something goes wrong, since by that point the average will hardly move even with outliers.
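For what it's worth, the "SimpleTimer" idea above could look something like this. The class name and API are invented for this sketch (not part of the Metrics library); the point is that the footprint is a handful of AtomicLongs instead of a ~88KB reservoir, at the cost of losing percentiles:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical aggregate-only timer: tracks min/max/mean/count with no
// reservoir, so its footprint is a few AtomicLong fields.
public final class SimpleTimer {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong min = new AtomicLong(Long.MAX_VALUE);
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    public void update(long duration, TimeUnit unit) {
        long nanos = unit.toNanos(duration);
        count.incrementAndGet();
        totalNanos.addAndGet(nanos);
        min.accumulateAndGet(nanos, Math::min);
        max.accumulateAndGet(nanos, Math::max);
    }

    public long count() { return count.get(); }
    public long minNanos() { return min.get(); }
    public long maxNanos() { return max.get(); }

    public double meanNanos() {
        long c = count.get();
        return c == 0 ? 0.0 : (double) totalNanos.get() / c;
    }
}
```

Usage: call `update(elapsed, TimeUnit.NANOSECONDS)` wherever you'd have called `Timer.update`; the trade-off is exactly the one discussed above (no distribution, just aggregates).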