> With the recording rule, we created a new static label called highcardinality="true", but this creates new time series. When doing remote write to our long-term storage we are dropping the time series that have highcardinality="true", but the original metric doesn't have this label, so it's still getting into our remote-write system.
Why don't you
configure your remote_write so that it only sends metrics with highcardinality="true"? Use write_relabel_configs with a "keep" or "drop" action.
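For example, something like this in prometheus.yml (the storage URL is a placeholder; swap the action to "drop" if you want the opposite behaviour):

    remote_write:
      - url: https://long-term-storage.example.com/api/v1/write   # placeholder endpoint
        write_relabel_configs:
          # Only forward series carrying highcardinality="true"; series
          # without the label have an empty value, don't match, and are dropped.
          - source_labels: [highcardinality]
            regex: "true"
            action: keep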
> We are thinking of adding a new label in the metric_relabeling section with highcardinality="false" and updating the label to true using recording rules.
Again, not sure exactly why you'd want to do that. Changing a label from one value to another also creates a new timeseries, because the bundle of labels is what defines a timeseries, so it's not really any different. But your recording rules *are* generating a new timeseries anyway.
I'm also not sure why you are saying that the recorded metrics have a "high" cardinality when compared to the original. Otherwise, you seem to have more or less the right ideas:
1. If you want to add a label like highcardinality="X" to your original source metrics, you can do this at scrape time, either using target relabelling (if it applies to all metrics from a given target) or metric relabelling (if it only applies to specific metrics) - see the first sketch after this list.
2. You can set or override a label like highcardinality="Y" in your recording rules. You don't need label_replace() to do that; the recording rule itself has a "labels" block - see the second sketch below.
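For point 1, here's a rough sketch (the job, target and metric name are made up; normally you'd use either the target or the metric relabelling, not both - they're shown together just for illustration):

    scrape_configs:
      - job_name: example-app
        static_configs:
          - targets: ['app.example.com:9100']
        # Target relabelling: stamps the label on every series scraped from this job.
        relabel_configs:
          - target_label: highcardinality
            replacement: "true"
        # Metric relabelling: stamps the label only on series whose name matches.
        metric_relabel_configs:
          - source_labels: [__name__]
            regex: 'http_request_duration_seconds.*'
            target_label: highcardinality
            replacement: "true"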
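For point 2, the rule file would look something like this (the group, rule name and expression are made up):

    groups:
      - name: example-rules
        rules:
          - record: job:http_requests:rate5m
            expr: sum by (job) (rate(http_requests_total[5m]))
            labels:
              # Set or override the label on the recorded series directly;
              # no label_replace() needed.
              highcardinality: "false"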
BTW, it's standard practice for recording rules to generate metrics with a different name. If you did that, you could match on the name pattern when remote writing. This is a case where label_replace may do the job; I'm not sure if it's allowed to change __name__ with that, but it's worth a try. There are some hints on how to name metrics in recording rules at
https://prometheus.io/docs/practices/rules/#recording-rules
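If your recorded metrics follow that level:metric:operation naming convention, the colons in the name are enough to tell them apart from scraped metrics (which by convention shouldn't use colons), so you could filter on __name__ at remote-write time. Roughly, again with a placeholder URL:

    remote_write:
      - url: https://long-term-storage.example.com/api/v1/write   # placeholder endpoint
        write_relabel_configs:
          # Keep only series whose metric name contains a colon, i.e. the
          # outputs of recording rules that follow the naming convention.
          - source_labels: [__name__]
            regex: '.+:.+'
            action: keep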
OTOH, I can see why you don't want to change the metric names here: you're not really rolling metrics up into a summary, you're just dropping a subset of metrics that are not of interest.