That's simply how alerting works in Prometheus. An alert is defined with an expression. Often this will be a filter expression, e.g.
expr: bytesFree < 1000
This is *not* a boolean expression, i.e. it does not evaluate to true or false: it's a PromQL filter. "bytesFree" returns all of the timeseries which have that metric name, and "< 1000" filters that set down to only those timeseries whose value is less than 1000. The result is an instant vector containing zero or more timeseries.
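To make the filtering concrete, suppose (hypothetically) that two instances expose the metric:

    bytesFree{instance="a"}  500
    bytesFree{instance="b"}  4000

Then the expression "bytesFree < 1000" returns an instant vector containing only the first series:

    bytesFree{instance="a"}  500

so an alert would be generated for instance "a" only.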
If this filter expression returns an empty result set, then there is no alert. If it returns one or more timeseries, then one or more alerts are generated. And if later on it no longer returns any timeseries, then those alerts are resolved.
In short: the absence of the metric which caused the alert in the first place is what causes the alert to be resolved - and that's the only way that alerts *can* be resolved.
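Putting this together, a complete alerting rule around this expression might look like the following sketch (the group name, alert name, and labels are made up for illustration):

    groups:
      - name: disk
        rules:
          - alert: LowFreeSpace          # hypothetical alert name
            expr: bytesFree < 1000       # filter expression: one alert per matching timeseries
            for: 5m                      # optional: series must keep matching for 5m before the alert fires
            labels:
              severity: warning

Note that "for" only delays firing; resolution still works the same way, by the expression returning an empty result.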
This also explains some other behaviour which people sometimes complain about. Suppose you add an annotation to an alert, such as
description: Free space warning, bytesFree={{ $value }}
When you have an instance of the bytesFree timeseries with value 500 you'll get an alert like
description: Free space warning, bytesFree=500
but when the value of that metric increases to, say, 2000, the alert resolved message will also say
description: Free space warning, bytesFree=500
It can't say "bytesFree=2000" here, because the expression "bytesFree < 1000" no longer returns any timeseries. The only value which is known is the value which caused the alert to fire. It's not the new value 2000 which causes the alert to be resolved; it's the absence of any value from the expression "bytesFree < 1000".
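The whole lifecycle can be illustrated with hypothetical scrape values:

    evaluation 1: bytesFree = 500
      bytesFree < 1000  =>  bytesFree 500    (alert fires; $value captured as 500)

    evaluation 2: bytesFree = 2000
      bytesFree < 1000  =>  (empty result)   (alert resolves; the rule never sees 2000)

The value 2000 is present in the raw metric, but the alerting rule only ever evaluates the filtered expression, which by then returns nothing.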