A time series is different to a metric.
A metric has a name and an optional set of labels.
A time series is one specific combination of metric name and labels.
So, for example, a metric could be called "requests_count", but two time
series could be "requests_count{response_code='200'}" or
"requests_count{system='frontend',authenticated='false'}".
As a result, in terms of the number of time series there is no
difference between 100 metrics with no labels and a single metric with a
label with 100 values.
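To illustrate, both of the following layouts produce exactly 100 time series (the names here are made up):

```promql
# 100 separate unlabelled metrics -> 100 series:
#   queue_depth_orders
#   queue_depth_payments
#   ... (98 more metric names)

# One metric with a label taking 100 values -> also 100 series:
#   queue_depth{queue="orders"}
#   queue_depth{queue="payments"}
#   ... (98 more label values)
```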
How the difference affects performance will depend on how the data is
being used. There is likely to be little difference in performance
during scraping, but query usage could make a bigger difference. A
metric with labels is expected to be aggregatable, so it makes sense
to arrange the data that way when aggregation is meaningful. If you
were to sum together all the different label combinations of a
particular metric, would the result make sense? For example, a metric
which counts requests and has a label for the response code would
still make sense if you summed everything together (rather than
requests per code you would have the total number of requests).
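As a sketch, assuming a counter called requests_total with a response_code label (the names are illustrative, not from your setup):

```promql
# Request rate per response code (one value per label combination):
sum by (response_code) (rate(requests_total[5m]))

# Summing across all label values still makes sense:
# the overall request rate, regardless of code.
sum(rate(requests_total[5m]))
```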
Would it make sense in your case to use labels within a single metric?
If the different systems are completely unrelated that might not be the
case - a sum wouldn't mean anything and an average would be equally
useless, as the different systems do totally different kinds of work.
However, if you are looking at latencies end-to-end across multiple
systems in a flow, or have multiple instances of a system, then it does
sound like labels would make more sense - a sum would give you the
overall end-to-end latency, or you could produce averages for a
particular system across instances.
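For instance, with a hypothetical gauge latency_seconds labelled by system and instance (again, names assumed for illustration):

```promql
# Overall end-to-end latency across all systems in the flow:
sum(latency_seconds)

# Average latency for one system across its instances:
avg(latency_seconds{system="frontend"})
```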
--
Stuart Clark