If the attempt to fetch robots.txt fails, then we don't know whether any URLs should (or shouldn't) be fetched.
That means it doesn't make sense to blindly allow or disallow them, since a crawler typically treats an explicit "disallow" as "set the status to blocked, and don't check again for a very long time" — too drastic a response to what may be a transient failure.
So if isDeferVisits() returns true, a crawler would typically want to (as sketched in the code after this list):
(a) set the next fetch time of the URLs currently being checked for that domain (not every URL known for the domain) to the current time plus some recheck interval, and
(b) ensure that the robots file is refetched within that recheck interval.
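Here's a minimal sketch of that logic, assuming isDeferVisits() comes from a rules object such as crawler-commons' BaseRobotRules; the CrawlState interface, its deferUrl() and scheduleRobotsRefetch() methods, and the four-hour recheck interval are all hypothetical placeholders for whatever bookkeeping and policy your crawler actually uses:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

import crawlercommons.robots.BaseRobotRules;

public class DeferVisitsExample {

    // Assumed recheck interval; pick whatever policy fits your crawler.
    private static final Duration RECHECK_INTERVAL = Duration.ofHours(4);

    public static void handleRobotRules(BaseRobotRules rules,
                                        List<String> urlsBeingChecked,
                                        CrawlState state) {
        if (rules.isDeferVisits()) {
            Instant retryAt = Instant.now().plus(RECHECK_INTERVAL);

            // (a) Push back only the URLs we were about to fetch,
            //     not every URL known for the domain.
            for (String url : urlsBeingChecked) {
                state.deferUrl(url, retryAt);
            }

            // (b) Make sure robots.txt itself is refetched by the time
            //     those URLs come up for fetching again.
            state.scheduleRobotsRefetch(retryAt);
        }
    }

    /** Hypothetical crawl-state interface, included only so the sketch compiles. */
    public interface CrawlState {
        void deferUrl(String url, Instant nextFetchTime);
        void scheduleRobotsRefetch(Instant latestRefetchTime);
    }
}
```

The key design point is that both steps use the same retryAt time, so the refreshed robots.txt is guaranteed to be available (or to have failed again) before any of the deferred URLs are reconsidered.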