This is an issue with graphite-exporter, not Prometheus or staleness.
The problem is this: if your application simply stops sending data to graphite-exporter, the exporter has no way of knowing whether the time series has finished, so it keeps exporting it for a while.
"To avoid using unbounded memory, metrics will be garbage collected five minutes after they are last pushed to. This is configurable with the --graphite.sample-expiry flag."
Once graphite-exporter stops exporting the metric, Prometheus will see on the next scrape that the time series has gone and will immediately mark it as stale (i.e. it has no more values), and everything is fine.
Therefore, reducing --graphite.sample-expiry may help, although you need to know how often your application sends Graphite data; if you set it too short, you'll get gaps in your graphs.
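For example, if you were starting the exporter directly from the command line, that would look something like the following (the binary path and the 1m value are just placeholders for illustration):

    ./graphite_exporter --graphite.sample-expiry=1m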
Another option you could try is to get your application to send a NaN value at the end of its run. Technically this is a real NaN value, not a staleness marker (staleness markers are internally represented as a special kind of NaN, but that's an implementation detail you can't rely on). Still, a NaN may be enough to stop Grafana from showing any values from that point onwards.
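As a rough sketch of that approach: the Graphite plaintext protocol is just "<path> <value> <timestamp>" lines over TCP, and graphite-exporter parses the value with Go's float parser, which accepts "nan" - though I haven't checked whether it filters NaN samples out, so treat this as something to test rather than a guarantee. The metric name below is hypothetical and the host/port assume the exporter's default plaintext listen address (:9109); adjust both to your setup.

    import socket
    import time

    METRIC = "my.app.metric"        # hypothetical: use the path your application already sends
    EXPORTER = ("localhost", 9109)  # graphite-exporter's default Graphite plaintext listener

    # Graphite plaintext protocol: "<path> <value> <timestamp>\n"
    line = "%s nan %d\n" % (METRIC, int(time.time()))

    with socket.create_connection(EXPORTER) as sock:
        sock.sendall(line.encode("ascii"))

You would call this (or the equivalent in your application's own language) once, as the last thing the job does before exiting.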