Well, you could just calculate the mean from a timer's snapshot.
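If this is Dropwizard/Codahale Metrics (a guess on my part from the "snapshot" wording), a minimal sketch looks something like this; the registry setup and metric name are made up:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;

import java.util.concurrent.TimeUnit;

public class TimerMean {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        Timer timer = registry.timer("requests"); // hypothetical metric name

        // Record a couple of durations so the reservoir isn't empty.
        timer.update(1, TimeUnit.MILLISECONDS);
        timer.update(100, TimeUnit.MILLISECONDS);

        Snapshot snapshot = timer.getSnapshot();
        // Timer snapshots report durations in nanoseconds.
        System.out.printf("mean = %.2f ms%n", snapshot.getMean() / 1_000_000.0);
        // The same Snapshot also exposes quantiles, e.g. getValue(0.99)
        // or get99thPercentile() -- more on why you'd want those below.
    }
}
```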
In general, though, I'd recommend against using the arithmetic mean of a latency distribution to make any decisions. It's not a helpful metric; if you want a saner picture of what your latency is actually doing, use quantiles instead.
For example, a set of 99 measurements of 1ms and a single 100ms measurement has an arithmetic mean of ~2ms. But because the data isn't normally distributed, that tells you fuck-all about what's actually going on: the standard deviation of that set is ~10ms, five times the mean.
In contrast, looking at the p95 (1ms), the p99 (2ms), and the p999 (100ms) tells you that this isn't a process which usually takes 2ms; it's a process which almost always takes 1ms but occasionally takes 100ms.
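Here's that arithmetic in plain Java, if you want to see it. The quantile helper below uses simple linear interpolation between closest ranks; libraries interpolate slightly differently, so exact tail values may shift a little, but the shape of the story doesn't:

```java
import java.util.Arrays;

public class SkewedLatency {
    // Quantile by linear interpolation between closest ranks (one common
    // convention; metrics libraries differ slightly in how they interpolate).
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos);
        int hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    public static void main(String[] args) {
        double[] ms = new double[100];
        Arrays.fill(ms, 0, 99, 1.0); // 99 measurements of 1ms
        ms[99] = 100.0;              // one 100ms outlier
        Arrays.sort(ms);

        double mean = Arrays.stream(ms).average().orElse(0);
        double var = Arrays.stream(ms).map(v -> (v - mean) * (v - mean)).average().orElse(0);

        System.out.printf("mean   = %6.2f ms%n", mean);                // ~1.99
        System.out.printf("stddev = %6.2f ms%n", Math.sqrt(var));      // ~9.85
        System.out.printf("p95    = %6.2f ms%n", quantile(ms, 0.95));  // 1.00
        System.out.printf("p99    = %6.2f ms%n", quantile(ms, 0.99));  // ~1.99
        System.out.printf("max    = %6.2f ms%n", ms[ms.length - 1]);   // 100.00 -- with only 100 samples, the p999 is effectively the max
    }
}
```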
From there, you know to focus on rare events (e.g., garbage collection, dropped packets, I/O schedulers) instead of common events (e.g., suboptimal algorithms, unoptimized code). It's the difference between solving the problem and wasting time.