You certainly could split things into two endpoints and scrape them at
different intervals, however it is unlikely to make much, if any,
difference. On the Prometheus side, extra data points within an
existing time series are very cheap. So you might scrape your
aggregate endpoint every 30 seconds and the full data every 2 minutes
(about the slowest practical interval, given Prometheus's 5-minute
staleness window), meaning 4x fewer data points for the full metrics -
but that saves very little memory.
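For reference, per-job scrape intervals would be configured roughly
like this (job names, ports and paths here are placeholders, not from
your setup):

```yaml
scrape_configs:
  - job_name: "app-aggregate"     # small summary endpoint
    scrape_interval: 30s
    metrics_path: /metrics/aggregate
    static_configs:
      - targets: ["app:9100"]
  - job_name: "app-full"          # full metrics endpoint
    scrape_interval: 2m
    metrics_path: /metrics/full
    static_configs:
      - targets: ["app:9100"]
```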
You mention that there is high cardinality - that is the thing you
need to fix, as the number of distinct time series is what drives
memory usage, not how often each series is sampled. You say there is a
problematic label applied to most of the metrics. Can it be removed?
What makes it problematic?
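If the label can go, one option is to strip it at scrape time with
metric_relabel_configs, so the high-cardinality series never reach
storage. A sketch, with placeholder job and label names - note this is
only safe if the remaining labels still uniquely identify each series,
otherwise the scrape will fail with duplicate-series errors:

```yaml
scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["app:9100"]
    metric_relabel_configs:
      # Drop the problematic label before ingestion.
      - action: labeldrop
        regex: "problem_label"
```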
--
Stuart Clark