It sounds to me like this is a staleness issue. That is: the container_last_seen{...} timeseries which triggered the alert is no longer present in scrapes. When evaluating a rule, PromQL only looks back up to 5 minutes to find the most recent data point for each timeseries; anything older than that is not found.
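For reference, that 5-minute window is Prometheus's "lookback delta". Assuming you're on a Prometheus 2.x server, it's tunable with a startup flag, although raising it only postpones the staleness rather than fixing anything:

prometheus --query.lookback-delta=5m   # 5m is the default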
When you have a PromQL expression like this:
expr: foo > 5
it's really a chained filter:
(1) "foo" filters down to just metrics with __name__="foo"
(2) "> 5" further filters down to just metrics where the current value is > 5
The alert then fires if the filter returns one or more timeseries. If a particular timeseries triggered an alert but subsequently vanishes from the results, the alert is considered resolved.
If a particular timeseries hasn't been seen in a scrape for more than 5 minutes, then it won't be returned in step (1).
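To put that expression back in context, here's a minimal sketch of the rule file it would live in (the alert name, "for" duration, and annotation are made up for illustration):

groups:
  - name: example
    rules:
      - alert: FooTooHigh
        # fires while the expression returns at least one timeseries;
        # resolves once that timeseries drops out of the results
        expr: foo > 5
        for: 1m
        annotations:
          summary: "foo is above 5 on {{ $labels.instance }}"

Note that a "for" clause interacts with staleness too: the timeseries must keep matching for the whole duration, so a series going stale partway through resets the pending alert.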
That's my best guess at what's going on. To prove or disprove this, go into the expression browser in the Prometheus web interface and enter:
container_last_seen{id=~"/docker/.*"}[10m]
This will show you the raw data points (values and timestamps) over the last 10 minutes for each matching timeseries. If a given timeseries has stopped being scraped, you'll see no new data points being added. The last value scraped can still trigger the alert, but only for 5 minutes, until it becomes stale.
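You can also ask directly how old the newest sample is. This instant query (same selector as above) returns the age in seconds of the most recent data point for each timeseries, for as long as that point is still within the 5-minute window:

time() - timestamp(container_last_seen{id=~"/docker/.*"})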