Hello,
During peak hours, 100 seconds would be ideal. However, at 2am (EST), when the load is much lower, the wait can be as little as 10 seconds. Waiting an extra 90 seconds per request is excessive when 10 is sufficient during off-hours. I'm currently processing a batch of 40,000 requests, which at 100 seconds per request would take about 47 days. Since opening this ticket, I've become a little familiar with the Google API's peak hours, and it seems that 20 seconds off-peak and 40 seconds on-peak (although this has crept up to 50 now) is enough to run the requests uninterrupted.
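To illustrate, here is a minimal sketch of the time-of-day scheduling I'm describing. The peak window and the 20s/50s delays are just my guesses from observation, not documented limits:

```python
from datetime import datetime, timezone, timedelta

# Assumed values from my own observation -- not official limits.
EST = timezone(timedelta(hours=-5))
PEAK_START, PEAK_END = 8, 22  # guessed peak window, hours in EST

def request_delay(now=None):
    """Pick a per-request delay (seconds) based on time of day in EST."""
    now = now or datetime.now(EST)
    if PEAK_START <= now.hour < PEAK_END:
        return 50  # on-peak: longer wait to stay under the limit
    return 20      # off-peak: shorter wait is enough
```

Sleeping `request_delay()` seconds between requests would get the 40,000-request batch done in roughly half the time of a flat 100-second wait.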
The only issue I have with an exponential backoff policy is that it doesn't account for peak and off-peak hours. In my example, it would have increased wait times to 100 seconds during on-peak hours but would never bring them back down during off-peak hours. It would run, but the runtime would be 47 days, whereas with my guesstimated wait times I can get the same results in less than half that. Ideally, receiving a suggested next wait time with each response would be the most efficient way to cut runtime without making too many requests.
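What I mean could be sketched as a backoff that also decays on success, so the delay drifts back down once off-peak hours start. The constants here are illustrative assumptions, not tuned values:

```python
class AdaptiveDelay:
    """Delay that grows on rate-limit errors but shrinks on success,
    so it recovers during off-peak hours. All constants are
    illustrative guesses, not tuned or documented values."""

    def __init__(self, initial=10.0, floor=10.0, ceiling=100.0, step=5.0):
        self.delay = initial
        self.floor = floor
        self.ceiling = ceiling
        self.step = step

    def on_rate_limited(self):
        # Multiplicative increase, capped -- same as plain exponential backoff.
        self.delay = min(self.delay * 2, self.ceiling)

    def on_success(self):
        # Additive decrease, floored -- the part plain exponential
        # backoff lacks, letting delays come back down off-peak.
        self.delay = max(self.delay - self.step, self.floor)
```

With something like this, on-peak errors push the delay up toward 100 seconds, but a run of off-peak successes walks it back toward 10.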
I'm not sure whether the above is an acceptable use case, but it does seem like a logical way to cut the runtime of large batches.
Thank you.