Hello
I am looking to learn what common patterns others have used when adding Codahale metrics to many processes (say, thousands of JVMs).
Is the common approach to use Metrics directly in each JVM and leave the complexity of handling that many connections to the aggregation/storage tier? E.g., I see Graphite has caches, relays, and aggregators, and I've seen references to people putting load balancers in front of that tier. Or do some setups funnel raw stats to intermediary services that in turn run the Reporters? I'm not sure whether Codahale supports a "relay" of sorts.
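For context, the direct-reporting pattern I have in mind is each JVM running its own GraphiteReporter pointed at a relay, roughly like the sketch below. This is just a wiring fragment using the Codahale/Dropwizard Metrics Graphite module; the host, port, and prefix are placeholders for whatever relay/LB endpoint and naming scheme a deployment actually uses:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class MetricsWiring {
    public static GraphiteReporter startReporting(MetricRegistry registry, String hostname) {
        // Placeholder endpoint: in the fan-in setup this would be a
        // carbon-relay (or a load balancer in front of several relays),
        // not the storage node itself.
        Graphite graphite = new Graphite(
                new InetSocketAddress("graphite-relay.example.com", 2003));

        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                // Prefix per host so thousands of JVMs don't collide on metric names.
                .prefixedWith("myapp." + hostname)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);

        // Each JVM opens its own connection and pushes on an interval;
        // the collection tier has to absorb one connection per process.
        reporter.start(1, TimeUnit.MINUTES);
        return reporter;
    }
}
```

The question is essentially whether people run this per-JVM reporter everywhere and scale the relay tier, or interpose their own aggregation service between the JVMs and Graphite.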
Any pointers to projects you know of where many processes produce stats like this would be welcome.
Thanks,
Alessandro