Do you mean every new instance of any timeseries, or a completely new metric name?
In principle you could alert on something like these:
# Timeseries which have appeared
expr: '{__name__=~".+"} unless {__name__=~".+"} offset 24h'
# Metrics which have appeared
expr: group by (__name__) ({__name__=~".+"}) unless group by (__name__) ({__name__=~".+"} offset 24h)
But those are expensive queries which touch every timeseries in the database, and are resource-heavy even on my small home test instance (v2.45.0, 31K series). You *might* get away with running them occasionally, e.g. once per day, if you have enough RAM.
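If you did run them, they would just go in a normal rules file; a minimal sketch, using the once-per-day suggestion above as the group interval (the group and alert names here are placeholders I've made up):

groups:
  - name: new-series-placeholder     # made-up group name
    interval: 24h                    # evaluate rarely; these queries are expensive
    rules:
      - alert: NewTimeseriesAppeared
        expr: '{__name__=~".+"} unless {__name__=~".+"} offset 24h'
      - alert: NewMetricNameAppeared
        expr: 'group by (__name__) ({__name__=~".+"}) unless group by (__name__) ({__name__=~".+"} offset 24h)'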
Similarly, you could query the series endpoint and diff the result against the previous result:
/api/v1/series?match[]={__name__=~".%2B"}
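To do the diffing between runs, a rough Python sketch (the Prometheus address and the snapshot filename below are assumptions, not anything standard):

# Rough sketch: fetch the current series list from the Prometheus HTTP API
# and print any series that were not present in the previous run's snapshot.
import json
import urllib.parse
import urllib.request
from pathlib import Path

PROM_URL = "http://localhost:9090"        # assumed Prometheus address
SNAPSHOT = Path("series_snapshot.json")   # assumed local state file

params = urllib.parse.urlencode({"match[]": '{__name__=~".+"}'})
with urllib.request.urlopen(f"{PROM_URL}/api/v1/series?{params}") as resp:
    data = json.load(resp)["data"]

# Each series is a dict of label name -> value; make it hashable for set maths.
current = {tuple(sorted(s.items())) for s in data}

previous = set()
if SNAPSHOT.exists():
    previous = {tuple(map(tuple, s)) for s in json.loads(SNAPSHOT.read_text())}

for series in sorted(current - previous):
    print("new series:", dict(series))

# Save the current state for the next run
SNAPSHOT.write_text(json.dumps([[list(kv) for kv in s] for s in sorted(current)]))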
However, I tried it and that seems way more expensive: it nearly killed the same instance (maybe it's building the entire JSON response in RAM before returning it).
If all you're concerned about is a sudden increase in load or cardinality of incoming metrics, then you might be better off monitoring Prometheus's TSDB stats, also available under Status > TSDB Stats in the web interface, and/or the metrics at its own /metrics endpoint, e.g.
# HELP prometheus_tsdb_head_series Total number of series in the head block.
# TYPE prometheus_tsdb_head_series gauge
prometheus_tsdb_head_series 31131
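A crude alert on that gauge is far cheaper than the queries above; something in the same vein, where the 20% threshold and 1h window are just for illustration:

# Head series count has grown by more than 20% in the last hour
expr: 'prometheus_tsdb_head_series > 1.2 * (prometheus_tsdb_head_series offset 1h)'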