--
You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/53A93983.10803%40opera.com.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CAFbEG_pS1QLuyLmEX0MkP0xxYygWOL5Z-Mo_5zBMO-h69f%3DdsQ%40mail.gmail.com.
This sounds like a nice feature. We've thought about something similar for downloads (and in fact are planning to do it, it just keeps getting superseded by higher priorities). We also auto-retry when we present an error page to the user, but retrying at the URLRequest layer whenever a request was broken off partway through is more general.
My concerns for doing this generally are:
* I believe there are times when servers break the spec and GETs are state changing. I'm not sure how to measure it, but it would be nice to have a handle on how often this happens and what the consequences of auto-retrying such requests are. (As a side note that you're probably aware of, you can't do this with POSTs, because they are state changing.)
* I'm not certain whether this behavior goes against Chrome's simplicity design principle. From the user's perspective, it's certainly simple behavior (modulo the point above), but the on-the-wire behavior of the browser isn't very simple. I'm inclined to think it should be implemented above the chrome/content line, with appropriate hooks in content, and I'm not sure if we have the appropriate hooks in place at the moment. How did you implement your prototype?
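For the first point, the most conservative gate is the method itself. A minimal sketch (in Python, purely illustrative; the names here are made up, not Chromium's) of restricting auto-retry to nominally idempotent methods might be:

```python
# Illustrative sketch: only auto-retry methods that RFC 7231 defines as
# safe/idempotent. As noted above, real servers sometimes make GETs
# state-changing anyway, so this is a necessary but not sufficient check.

IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def is_retry_safe(method: str) -> bool:
    """Return True only for methods that are nominally safe to replay."""
    return method.upper() in IDEMPOTENT_METHODS
```

POST (and anything else with side effects) would simply never be retried automatically.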
-- Randy
On Tue, Jun 24, 2014 at 1:40 AM, Jonny Rein Eriksen <jon...@opera.com> wrote:
If you are connected to the Internet through a connection with a high error rate, you can often see TCP connections being reset. At times, completing a page load can be a challenge, and you have to reload the page multiple times to get it to load completely.
Opera Presto always did well in the reliability category of Tom's Hardware Web Browser Grand Prix: http://www.tomshardware.com/reviews/windows-7-chrome-20-firefox-13-opera-12,3228-13.html
I always assumed this was due to the way we retried requests that were reset, so I wanted to implement the same behavior in Chromium.
At the moment I have this working. Chromium will either issue a Range request if the server supports it, or issue a normal GET if not and skip data up to where the connection was aborted, before feeding data to the consumer as if nothing had happened. Currently I have it retry 5 times, just like we used to do in Presto.
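For what it's worth, the resume-or-skip decision could be sketched roughly like this (Python pseudocode, not the actual Chromium code; the names RetryPlan and build_retry_request are mine):

```python
# Sketch of the retry logic described above, under the assumption that we
# know how many body bytes were delivered before the connection was reset.
from dataclasses import dataclass
from typing import Optional

MAX_RETRIES = 5  # the Presto-style retry count mentioned above

@dataclass
class RetryPlan:
    headers: dict
    bytes_to_skip: int  # bytes to discard before feeding the consumer

def build_retry_request(bytes_received: int,
                        server_supports_ranges: bool,
                        etag: Optional[str]) -> RetryPlan:
    if server_supports_ranges:
        headers = {"Range": f"bytes={bytes_received}-"}
        if etag:
            # If-Range makes the server fall back to a full 200 response
            # when the representation changed, avoiding a bad merge.
            headers["If-Range"] = etag
        return RetryPlan(headers=headers, bytes_to_skip=0)
    # No Range support: re-issue a full GET and discard what we already have.
    return RetryPlan(headers={}, bytes_to_skip=bytes_received)
```

The If-Range header is the standard way to make a Range request safe against the resource changing between attempts.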
My plan is to do this only for smaller(?) resources, only if the etag/last-modified matches, and maybe not for the main document. I guess it should be possible to do it for images/js/css even without etag/last-modified. I want to be careful here, but I believe we will get most of the benefit anyway.
What I want to avoid is merging together resources that were modified between retries and triggering bugs that cannot be reproduced. Hence I am thinking of matching the last/next X bytes before merging two responses together when etag/last-modified is missing.
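The overlap check could look something like this (again a sketch, with a placeholder value for X):

```python
# Sketch of the merge guard described above: without an etag/last-modified
# validator, only splice a retried full response onto a partial one if the
# two bodies agree on the trailing bytes we already delivered.

OVERLAP = 128  # the "X" above; the actual value would need tuning

def safe_to_merge(partial: bytes, retried: bytes,
                  overlap: int = OVERLAP) -> bool:
    """True if the retried response matches the partial one on the last
    `overlap` bytes of the data already received."""
    if len(retried) < len(partial):
        return False  # new body is shorter than what we already have
    n = min(overlap, len(partial))
    return retried[len(partial) - n:len(partial)] == partial[len(partial) - n:]
```

A mismatch would mean the resource changed between attempts, in which case the safest options are a full re-fetch or surfacing the original error.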
Cheers,
Jonny Rein Eriksen
Opera Software
I did investigate retrying when a GET request fails before receiving a full set of headers (that includes after a successful redirect), which is a point at which we can retry without worrying about partial GETs, or receiving a different response than before. I only retried on "fast" failures, so no request timeouts; I think I only did it if the failure occurred in less than 10 seconds. In my experiment, under those very restrictive conditions, we had about a 3% recovery rate, mostly in the ERR_NAME_NOT_RESOLVED and ERR_CONNECTION_RESET cases, if I recall correctly.
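Those conditions amount to a fairly small predicate, something like (sketch only; the error names mirror Chromium's net error codes, and the 10-second cutoff is the one from my experiment):

```python
# Sketch of the restrictive retry gate described above: no headers yet,
# a "fast" failure, and an error code that the experiment saw recover.

RETRYABLE_ERRORS = {"ERR_NAME_NOT_RESOLVED", "ERR_CONNECTION_RESET"}
FAST_FAILURE_SECONDS = 10

def should_retry(error: str, headers_received: bool,
                 elapsed_seconds: float) -> bool:
    return (not headers_received
            and elapsed_seconds < FAST_FAILURE_SECONDS
            and error in RETRYABLE_ERRORS)
```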
I'd support trying to retry under more general conditions, but am definitely concerned about side effects and document mismatches. If we decide to adopt the feature, I'd suggest initially calculating a simple hash of each response body as it's received; then, on retry, always re-request the entire document and compare hashes, logging when there's a mismatch and returning the original error. We can try to be smarter about the second request if the hash failure rate looks good (and get rid of the hash checks, of course).
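One way to read that validation scheme, sketched in Python (the exact comparison, partial body versus the matching prefix of the re-requested document, is my interpretation, and the helper names are made up):

```python
# Sketch of the hash-based validation proposed above: hash the bytes
# received before the failure as they stream in, then check them against
# the same-length prefix of the fully re-requested document. On mismatch,
# log it and surface the original error instead of merged data.
import hashlib

def stream_hash(chunks):
    """Incrementally hash body chunks as received; returns (digest, length)."""
    h = hashlib.sha256()
    total = 0
    for chunk in chunks:
        h.update(chunk)
        total += len(chunk)
    return h.hexdigest(), total

def retried_matches(partial_chunks, full_body: bytes) -> bool:
    """True if the re-requested document starts with exactly the bytes
    the original (failed) response delivered."""
    digest, length = stream_hash(partial_chunks)
    return hashlib.sha256(full_body[:length]).hexdigest() == digest
```

The incremental hashing matters because the first body is gone by the time the retry completes; only its digest and length need to be kept.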