This is Prometheus staleness handling at work. In order to make graph queries work, Prometheus evaluates a range query as a series of independent instant query evaluations over time, one per step. Each evaluation is independent of the next.
Because samples have millisecond-accurate timestamps and almost never line up exactly with an evaluation timestamp, Prometheus looks back up to 5 minutes (the default lookback delta) from each evaluation timestamp to find the most recent sample. If your scrape interval is longer than that, most evaluations find nothing and the graph shows gaps.
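For concreteness, here is a sketch of what a dashboard panel is effectively asking Prometheus for under the hood (the host, metric name, and times are placeholders): a single range query that the server evaluates independently at every step, applying the lookback at each one. The 5-minute window is the default and can be changed server-wide with the `--query.lookback-delta` flag.

```sh
# Sketch of a range query against a local Prometheus (placeholder host/metric).
# Prometheus evaluates the query independently at every 60s step between
# start and end; at each step it returns the most recent sample that is
# no older than the lookback delta (5m by default).
curl 'http://localhost:9090/api/v1/query_range' \
  --data-urlencode 'query=my_metric' \
  --data-urlencode 'start=2024-01-01T10:00:00Z' \
  --data-urlencode 'end=2024-01-01T11:00:00Z' \
  --data-urlencode 'step=60s'
```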
There are a couple of ways around this.
* Scrape faster. It's perfectly common and normal to scrape every 15s-60s in Prometheus, and it takes barely any extra storage space due to the way Prometheus compresses samples. In fact, scraping slower can actually waste space, because chunks get cut before they're full and the fixed per-chunk overhead is spread over fewer samples. A minimal scrape config sketch follows after this list.
* Use a function like `avg_over_time(metric[$__rate_interval])` to effectively increase the lookback window. This needs to be combined with setting the "min step" in the Grafana query options to 30min to match your scrape interval, so that `$__rate_interval` resolves to a window wide enough to always cover at least one sample. A plain-PromQL sketch of this also follows below.
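For the first option, a minimal `prometheus.yml` sketch (the job name and target are placeholders) with a 15s scrape interval, so every sample falls well inside the default 5m lookback:

```yaml
global:
  scrape_interval: 15s   # scrape often enough that the 5m lookback always finds a sample

scrape_configs:
  - job_name: my_job                      # placeholder job name
    static_configs:
      - targets: ['my-exporter:9100']     # placeholder target
```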
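For the second option, outside of Grafana you can hard-code the window yourself. The metric name and the exact window below are illustrative; the point is just that the window must be longer than your scrape interval:

```promql
# With a 30min scrape interval, a 1h window guarantees every evaluation
# step sees at least one sample, effectively replacing the 5m lookback.
avg_over_time(my_metric[1h])
```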