--
Chromium Developers mailing list: chromi...@chromium.org
View archives, change email options, or unsubscribe:
http://groups.google.com/a/chromium.org/group/chromium-dev
---
You received this message because you are subscribed to the Google Groups "Chromium-dev" group.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/chromium-dev/CAFv%2B6UpnUwfPQv8ADXAWO1k2jvDz%3Dz2e5ErqHg69kJCaY0yp%2Bg%40mail.gmail.com.
If the flakiness is not limited to a handful of individual tests, then I'd rather avoid disabling a whole swath of tests without first understanding the root cause of the flakiness.
I also don't think that disabling site-per-process (or the site_per_process_webkit_layout_tests test step) is a good option, since, as of M67, 99% of Chrome users run with site-per-process.
That said, it does indeed seem that site-per-process makes tests more flaky and we should try to get to the bottom of this (not sure exactly how yet...). I've opened https://crbug.com/874695 to track this problem.
FWIW, I don't think OOB-CORS vs. OOPIF bears any significant amount of blame for the difference - OOB-CORS accounts for only 167 out of 3958 flaky tests.

One thing I did notice is that quite a few tests that are flaky with site-per-process had quite a slow "slowest run" (around 1100 tests had a "slowest run" of 3s or more). I don't know if "slowest run" only counts passes (and so ignores timeouts), but if so, it would probably mean that site-per-process makes things slightly slower and pushes quite a few tests over a timeout cliff. I was not able to figure out how to limit the flakiness dashboard to only show timeouts (and working with the dashboard was fairly painful in general - it crashes quite often due to OOMs).
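To illustrate the "timeout cliff" triage described above, here is a minimal sketch. The data format, test names, and the 6-second timeout are all invented for illustration; this is not the real flakiness dashboard's schema or Chromium's actual per-test timeout, which varies by suite and build configuration.

```python
# Hypothetical triage sketch: the input format and timeout below are
# invented for illustration and do NOT match the real flakiness
# dashboard's schema or Chromium's actual timeouts.
TIMEOUT_SECONDS = 6.0  # assumed per-test timeout

flaky_tests = [
    # (test name, slowest observed passing run in seconds)
    ("fast/frames/sandboxed-iframe.html", 3.4),
    ("http/tests/security/cors-post.html", 1.2),
    ("external/wpt/fetch/api/basic.html", 5.8),
]

# Tests whose slowest passing run is at least half the timeout are
# candidates for flakiness caused by a small site-per-process slowdown
# pushing them over the timeout cliff.
near_cliff = [
    (name, t) for name, t in flaky_tests if t >= TIMEOUT_SECONDS / 2
]

for name, t in sorted(near_cliff, key=lambda x: -x[1]):
    print(f"{name}: slowest run {t:.1f}s ({t / TIMEOUT_SECONDS:.0%} of timeout)")
```

The point of the heuristic is that a test which already runs close to the timeout only needs a small extra per-frame cost (extra processes, IPC) to start timing out intermittently.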
On Wed, Aug 15, 2018 at 5:00 PM, Łukasz Anforowicz <luk...@chromium.org> wrote:

> If the flakiness is not limited to a handful of individual tests, then I'd rather avoid disabling a whole swath of tests without first understanding the root cause of the flakiness.

That is understandable, but also not going to happen unless someone is actively working on this and we have some hope of an ETA on a fix. AFAIK, we don't have either of these things, though maybe you've started looking at it now?
> I also don't think that disabling site-per-process (or the site_per_process_webkit_layout_tests test step) is a good option, since, as of M67, 99% of Chrome users run with site-per-process.

I never said it was a good option :). But it is our policy to not run flaky tests on the waterfall, period. The burden of proof is on devs to show that their tests are stable before they get to run in the CQ and affect everyone.
We're not enforcing this perfectly, but that doesn't mean that things get free passes. Just because you've shipped something doesn't mean that, either.