rate() calculates the average rate between the *first* and *last* data points in the given time window.
irate() calculates the average rate between the *last two* data points in the given time window.
Both use the timestamps of the actual stored data points to calculate the rate, i.e. (v2-v1)/(t2-t1) (**)
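As an illustration, here is a minimal Python sketch of both calculations, assuming `samples` is a time-ordered list of (timestamp, value) pairs that fall inside the window. This is not Prometheus's actual code; in particular it ignores counter resets and extrapolation (see (**) below).

```python
def irate(samples):
    """Rate between the *last two* samples in the window."""
    if len(samples) < 2:
        return None                      # fewer than two points: no answer
    (t1, v1), (t2, v2) = samples[-2], samples[-1]
    return (v2 - v1) / (t2 - t1)

def rate(samples):
    """Simplified rate between the *first* and *last* samples in the window."""
    if len(samples) < 2:
        return None
    (t1, v1), (t2, v2) = samples[0], samples[-1]
    return (v2 - v1) / (t2 - t1)
```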
However, you need at least two data points to get an answer. If your data is scraped at 1-minute intervals, then a 1-minute window will only ever contain one data point. A 90-second window will sometimes contain two data points (in which case a rate is available) and sometimes only one (in which case there is no answer). If you graph this, the line will have gaps: the point drawn at time T is the rate for the window between T-90s and T, and that rate sometimes exists and sometimes doesn't.
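To make the gaps concrete, here is a hypothetical sketch with samples scraped every 60 seconds and a 90-second window ending at two different times (the window-end times are made up for illustration):

```python
samples = [(0, 10), (60, 20), (120, 30)]   # one sample per minute

def in_window(samples, end, width=90):
    """Samples whose timestamp falls inside the window (end-width, end]."""
    return [(t, v) for (t, v) in samples if end - width < t <= end]

print(in_window(samples, end=125))  # [(60, 20), (120, 30)] -> two points, rate exists
print(in_window(samples, end=115))  # [(60, 20)]            -> one point, gap in graph
```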
This is maybe surprising at first, but it is consistent: for example, count_over_time(foo[1m]) will tell you the number of data points *within the window*.
When you do an instant query, the value of a metric at query time T is the nearest *previous* value of the metric. So you might have expected rate(foo[1m]) to take the value of foo at the end of the window and the value of foo at the start of the window, and calculate the rate between those. But that's not how it works, for several reasons. One is that it would have to look backwards *before* the start of the window to find the previous value (an instant query, by default, looks back up to 5 minutes). Another is that the rate would bounce up and down as points enter and leave the window, whereas Prometheus calculates an accurate rate between two timestamped values.
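A sketch of that instant-query lookup, assuming the default 5-minute lookback delta and time-ordered samples (again just an illustration, not the real implementation):

```python
LOOKBACK = 300  # seconds; Prometheus's default lookback delta is 5 minutes

def instant_value(samples, t):
    """Value at instant t: the nearest *previous* sample, provided it is
    no older than the lookback delta; otherwise the metric has no value."""
    eligible = [v for (ts, v) in samples if t - LOOKBACK <= ts <= t]
    return eligible[-1] if eligible else None
```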
(**) That is a simplified description, because there is additional work to handle counter resets. Basically, only periods of time within the window where the counter is not decreasing are considered, and an average rate is calculated from these.
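A rough sketch of that reset handling: whenever a sample is lower than its predecessor, the counter is assumed to have restarted from zero, so the whole new value counts as an increase. (The real implementation also extrapolates the result towards the window boundaries, which this ignores.)

```python
def increase(samples):
    """Total increase across the window, treating any decrease as a counter reset."""
    total = 0.0
    for (_, prev), (_, cur) in zip(samples, samples[1:]):
        # After a reset the counter restarts from 0, so cur itself is the delta.
        total += cur - prev if cur >= prev else cur
    return total

def rate_with_resets(samples):
    if len(samples) < 2:
        return None
    return increase(samples) / (samples[-1][0] - samples[0][0])
```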
For a slightly longer description, see: