Subject: Proposal: Disable Android web_tests and fix Android browser_tests
Dear Chromium-dev and blink-dev,
tl;dr: webkit_layout_tests are largely disabled on Android, so we should remove web tests from the Android bots. Android browser_tests are not currently compiled and cannot run, but Team Bubblesort is actively hacking to fix that.
There's been some background chatter over the past while about apparent holes in our testing infrastructure on Android relating to web_tests and browser_tests.
For webkit_layout_tests, the WebTest harness needs work before it will run on Android M+, leaving the bots stuck on K. Even in this setup, most tests are disabled, with a steady stream of additional tests being disabled over time. This means we're mostly just testing the WebTest harness itself. Much of the actual coverage is provided by other platforms, with Android-specific behaviours being covered via pixel tests on the high-dpi MacOS bots and by the virtual/android tests on Linux. As such, we'd like to recommend explicitly deciding to disable the Android bots and handle coverage for web_tests on other platforms. See "Current State of Android Web Tests" for more details.
For browser_tests, the target does not get built on Android, meaning (a) lost coverage for some features, (b) some teams end up putting Android functionality into the Linux build just to test it, and (c) some teams have written test code for browser_tests under OS_ANDROID expecting coverage, only to have it never even compiled. creis@ started a doc to track the impact. There are over 11,000 browser_tests (taken from examining Linux build logs). While it's unclear exactly how many existing tests are applicable to Android, we estimate that thousands could eventually be enabled. More critically, there are tests we know we want on Android but currently cannot write. Because of this, we're now actively building on top of jbudorick's work on Android browser_tests, with hopes of getting a usable harness on the waterfall this quarter.
If there are any questions or concerns -- especially about the proposal to explicitly not support web_tests on Android -- please chime in on this thread! It's a controversial choice, so we're taking a stance to start a discussion.
Yours truly,
Team Bubblesort (danakj, dcheng, ajwong)
Current State of Android Web Tests
-- General stats --
There is currently a single Android KitKat bot running web tests. It only runs:
969 tests, which is 1% of the 88689 tests run on Linux.
161 marked failures in TestExpectations
121 expected failures on the bot.
This means only ~1% of the web test set is run. As such, the bot is mostly testing that the Android web test harness works.
-- What is being tested --
For the tests that are run, the types included are very limited. There is no pixel test coverage in Android web tests. Android pixel web tests stopped working when WebTests were switched to use Viz; however, even before this there were zero pixel tests passing, as evidenced when the legacy compositing mode was ripped out.
-- Where is equivalent coverage --
There is a virtual/android/ test suite which runs on Linux, using command line parameters to make Blink behave the way it does on Android. These do include pixel tests and seem sufficient to cover the functionality gap. It currently contains 100 tests that specifically target per-platform differences on Android.
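For reference, virtual suites are declared in web_tests/VirtualTestSuites, and a virtual/android entry has roughly the following shape. The base directory and flags shown here are illustrative only, not the exact contents of the current entry:

{ "prefix": "android", "base": "rootscroller", "args": ["--enable-viewport", "--enable-prefer-compositing-to-lcd-text"] },

Each such entry makes the tests under its base directory also run as virtual/android/<base>/... with the extra command line arguments applied.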
Speaking with the rendering team, the primary differences to be tested on Android are platform differences such as A) high-dpi and B) scrolling behaviour. Examining these two cases:
High-dpi is tested by the MacOS bots as well, providing good pixel test coverage for this feature in Blink painting. There is no Android-specific behaviour that differs from what we also test on MacOS.
Scrolling behaviour differences are tested via the virtual/android/ test suite on Linux already, and are not run on Android.
Thus the virtual/android tests plus the high-dpi tests on MacOS provide sufficient coverage for the rendering team.
-- What else is there? --
The set of tests that run on the Android bot is listed in web_tests/SmokeTests. These are either text-only tests or marked as failures in TestExpectations, and they do not test useful Android-specific behaviours in Blink.
Examples:
Tests under compositing/ are text-only tests, with expectations that differ in the size of the content layer because root layer scrolling is enabled on Android. However the feature is actually tested by the virtual/android/rootscroller tests which run on Linux.
The single plugins/ test is not marked as Failure in TestExpectations, but it just verifies that the test *fails* regardless.
The fast/beacon/beacon-basic.html test does have a difference on Android, but it appears to be testing differences in the WebTest harness: on Linux the test is loaded from file:// (so it doesn't accept http:), but on Android (and Fuchsia) it does accept it, seemingly because the file is loaded from http:// instead.
The crypto/random-values.html test has a difference on Android which is just that SharedArrayBuffer is not available so that result is missing from the expectations. This is repeated in other tests such as beacon-basic.html. This is not a useful thing to be testing.
After going through the Android expectations directory, I (danakj) am unable to find a useful platform-specific expectation.
--
You received this message because you are subscribed to the Google Groups "blink-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to blink-dev+...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CAHtyhaS0rrHiOT2KxOnqRVuh4RY39N5%2BBuRSpoGxPKXGTjPfFA%40mail.gmail.com.
Thanks Dirk for following up on this!

Like I replied previously: why can't we fix the existing tests and add browser_tests, instead of turning this off?
I understand we need to use our resources effectively; still, if webkit_layout_tests has value not duplicated by other tests, we'd better keep that coverage.
Also, can we invest more in proper unit tests that are platform-independent, so they don't need to run on all the platforms?
As noted in the other reply I just sent, I feel like you're raising two issues that are best dealt with separately, so I'm splitting my replies and sending the one about the web_tests just to blink-dev (and bcc'ing chromium-dev ...).

Also, +Yang Zhang, +Erik Staab and +Yihong Gu to try and make sure some of the EngProd folks are aware of this discussion.

As someone who spent a large chunk of time trying to get the web_tests running on Android on the bots reasonably well years ago, I have mixed feelings about this proposal.
I guess I'm reluctantly okay with you turning this off, though I am worried that doing so will make it even harder to bring them back and run on newer versions.
I have additional thoughts that might be interesting to some ...

First, I'm biased, but I'd be inclined to argue that the web_tests are the most important test suite we have in Chrome, and Android is actually our most-used platform. So, having them not run there seems like a bad thing.
But, pragmatically, we also simply don't have the hardware to run all of the tests with any sort of frequency (much like the point I just made about potentially running browser_tests), and that's not likely to change soon.
The point of the SmokeTests was to try and get some sort of broad but shallow coverage of the test suite. I don't actually know if I ever achieved that, but I also don't know that I didn't; nor do I know whether it was true at one point but is no longer. I'm not sure what the state of code coverage on Android is, but it sure would be interesting to try and pull numbers to check. It was my hope that people would add the tests that are platform-specific to it over time, but that hasn't much happened.
However, given that we don't know, to say that the tests are only testing the harness feels a bit unnecessarily harsh to me :). That said, I've also never heard of us hitting an actual Android-specific bug when running that suite, so I also couldn't argue that it is a valuable suite to run at all.
And, certainly given that no one seems to be willing to invest in making it work on newer versions of Android, that also suggests that the test suite isn't that valuable.
It also suggests that much of the test suite is generic, and there's simply no need to run most of the tests on multiple platforms. The fact that we took web_tests out of the CQ on Mac and the world hasn't fallen over (indeed, I'm not sure if we see much in the way of an increased number of Mac failures in the suite on the waterfall at all) also suggests that.
As I noted in my browser_tests reply, we're increasingly living in a world of constrained budgets, and we need to think more and more about only running tests that find issues not found elsewhere. That means we're probably likely to repeat this exercise with more test suites on more platforms in the future, to free up scarce hardware resources for testing things that actually need to be tested on them.
In my ideal world, I'd have someone sign up to make the test suite actually work on M+, and we'd have someone focusing on running just the tests that are testing Android-specific stuff and maybe a sanity check of some subset of the rest a la SmokeTests.
Side note: you wrote:

> The single plugins/ test is not marked as Failure in TestExpectations, but it just verifies that the test *fails* regardless.

That's actually the *correct* way to handle a test that is expected to fail. web_tests are change-detector tests as much as they are compliance tests; marking a test as `Failure` means we lose the change-detection property. The obvious downside to this "correct" approach is that you then can't tell which things are actually failing. I wanted to fix that at some point (e.g., with an -expected-failure.txt or something) but could never come up with an approach that seemed better enough to be worth it. An alternative would be to WontFix that directory, since we don't expect plugins to run at all on Android, but that loses coverage. But, this is something of a pedantic point these days, as clearly we don't (and probably never will) do this consistently one way or another.
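To make the tradeoff concrete, the two options look roughly like this (the bug number and test path below are made up for illustration; the line syntax follows the TestExpectations format):

# Option 1: suppress via TestExpectations -- the bot stays green,
# but any change in the failure output goes unnoticed:
crbug.com/000000 [ Android ] plugins/plugin-loads.html [ Failure ]

# Option 2: check in the failing output as plugins/plugin-loads-expected.txt
# with no TestExpectations line -- any behaviour change shows up as a diff.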
I don't actually understand your comments about why the platform-specific crypto results showing that SAB is missing isn't a useful thing to be testing, or what's wrong with the fact that the compositing/ tests have platform-specific expectations because they actually have platform-specific behavior, but I'm not sure either of these topics is all that important, either.
Thanks, Dana. It sounds like we're in agreement: it seems like running the tests on Android *should* provide value, but it's hard to see that they actually do. So, it makes sense to turn them off to reduce some of the complexity in the system, at least until we get to a point where they're better maintained and more clearly targeted at high-value tests.

I'm still not following one part of your response, though, which is that I'm not understanding your comments about not seeing tests for OS-specific differences. For example, isn't the plugins test needing to fail only on Android an OS-specific difference?
Also, I'm pretty sure the webaudio tests are testing OS-specific implementations (or at least they used to), and I thought some of the media tests were doing so as well.
From: Dirk Pranke <dpr...@chromium.org>
Date: Tue, May 14, 2019 at 12:00 PM
To: Dana Jansens
Cc: blink-dev, Albert J. Wong (王重傑), Daniel Cheng, Nasko Oskov, Yang Zhang, Erik Staab, Yihong Gu

> For example, isn't the plugins test needing to fail only on android an OS-specific difference?

It's an ENABLE_PLUGINS difference, I'd say, though the test is failing in a way that could be caused by any number of problems. It is not testing "there are no plugins on Android", which is a property of content, not of Blink, anyhow, right? A content unittest would be better suited for that, I believe. It seems we mostly disable plugin tests on Android. However, I did find this test that seems to cover this need?

> Also, I'm pretty sure the webaudio tests are testing OS-specific implementations (or at least they used to), and I thought some of the media tests were doing so as well.

There are 2 platform-specific expectations in webaudio. One has text differences around ArrayBufferView being shared and throwing an exception; I get the same result on Linux when I run it in Chrome, but the Linux web test runner seems to avoid it. This may be an Android-OS difference, but it looks more like the test environment to me. Can you confirm?

The only other platform difference is a different binary result for one resampled codec test. I didn't think this is part of Blink, and would be covered by media unit tests and such elsewhere, but I may be mistaken?
From: <dan...@chromium.org>
Date: Tue, May 14, 2019 at 9:28 AM
To: Dirk Pranke
Cc: blink-dev, Albert J. Wong (王重傑), Daniel Cheng, Nasko Oskov, Yang Zhang, Erik Staab, Yihong Gu

> A content unittest would be better suited for that I believe. It seems we mostly disable plugin tests on Android. However I did find this test that seems to cover this need?

Yeah, I think this is one of those things where you try and decide whether having a unit test is sufficient or whether you really also want a functional (or integration) test.

> The only other platform difference is a different binary result for one resampled codec test. I didn't think this is part of Blink, and would be covered by media unit tests and such elsewhere, but I may be mistaken?

I think this is my confusion. You seem to be talking about different results, if I'm understanding you correctly, but I'm talking about different implementations (different code paths) that may produce the same result. If that's the case, you still should test both, right?
FWIW, I just checked all external/wpt tests enabled on Android, and many results are wildly wrong -- WPT should never dump the layout tree as the output. We hadn't been running web_tests for a long time until John Budorick fixed https://crbug.com/824539 . The tests (or rather, the harness) have apparently all bitrotted. +foolip
The harness had bitrotten after a few months of failing green. Some of the tests bitrotted and may have stayed that way.
You are probably already aware of this, but please note that the main reason for not_site_per_process_webkit_layout_tests step on the linux-rel CQ/waterfall bot is to have test coverage of Android-specific Site Isolation behavior. Maybe this test step should be mutated into android_simulation_webkit_layout_tests? I am not sure how virtual/android test suite fits here - it seems to cover only a subset of tests + it seems to inject Android-specific cmdline flags into webkit_layout_tests step on *every* bot/platform?
{ "prefix": "not-site-per-process", "base": ".", "args": ["--disable-site-isolation-trials"] },
On Wed, May 15, 2019 at 12:55 PM Łukasz Anforowicz <luk...@chromium.org> wrote:
> You are probably already aware of this, but please note that the main reason for the not_site_per_process_webkit_layout_tests step on the linux-rel CQ/waterfall bot is to have test coverage of Android-specific Site Isolation behavior.

I did not realize this is meant to simulate Android specifically; that's interesting. There is also a not-site-per-process virtual test suite, which is defined to include a large number of different tests. It uses the flag --disable-site-isolation-trials, which looks the same as not_site_per_process_webkit_layout_tests. I'm not sure if --disable-blink-features=LayoutNG is also part of the "emulate Android" goal there, but it is not part of the virtual test suite. So it seems like we have some redundant coverage?

The not_site_per_process_webkit_layout_tests step would not have a way to disable tests without also disabling them for the regular test run as well, I think?
On Wed, May 15, 2019 at 2:43 PM Marijn Kruisselbrink <m...@chromium.org> wrote:
> > The not_site_per_process_webkit_layout_tests step would not have a way to disable tests without also disabling them for the regular test run as well, I think?
>
> Isn't that what third_party/blink/web_tests/flag-specific/ and third_party/blink/web_tests/FlagExpectations/ are for?

Thanks, looks like yes; I had never seen that before. So it seems functionally similar to the virtual test suite then, and perhaps they are just redundant. Thanks for pointing it out :)
On Wed, 15 May 2019 at 15:17, Łukasz Anforowicz <luk...@chromium.org> wrote:
> If we had a virtual/android (or virtual/not-site-per-process) suite that covers *all* layout tests, then every test failure/flake that affects both modes (the default mode + the android / not-site-per-process mode) would have to be duplicated. In other words, the not_site_per_process_webkit_layout_tests step inherits the test expectations of the default mode, but a virtual test suite does not. This is desirable in some cases and not in others (which is why we have both not_site_per_process_webkit_layout_tests [main coverage] + virtual/not-site-per-process [to cover a handful of tests with diverging test expectations]).

Ah, good point. I continually forget that virtual test suites don't inherit the non-virtual expectations.
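To make the inheritance difference concrete (the bug number and test paths below are made up for illustration): a flag-specific run layers its expectations on top of the default TestExpectations, while a virtual suite needs its own lines and does not pick up the non-virtual ones for the same test.

# In third_party/blink/web_tests/FlagExpectations/disable-site-isolation-trials
# -- applied *in addition to* the default TestExpectations, only for the
# flag-specific step:
crbug.com/000000 external/wpt/some/test.html [ Failure ]

# A virtual suite instead needs its own entry in the main TestExpectations,
# which does not inherit the expectation for some/test.html itself:
crbug.com/000000 virtual/not-site-per-process/some/test.html [ Failure ]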
+libe...@chromium.org, +wole...@chromium.org

> The point of the SmokeTests was to try and get some sort of broad but shallow coverage of the test suite.

This is how the media/ team has used it.

> Looking through the SmokeTests I was unable to find a single test that was actually testing something Android-specific.

These media tests do something very Android-specific: they use a real GPU on the bot to test hardware-accelerated decode paths. Unit testing this platform- and hardware-specific code has historically been infeasible. I believe these layout tests are the only test coverage we have of these code paths.

> virtual/android/ test suite running on Linux.

I'm not familiar with this. Can someone provide some pointers? I expect (but should confirm) that this doesn't have enough virtualization to emulate Android GPU video decode.
Chris

On Wed, May 15, 2019 at 8:24 AM Robert Ma <robe...@chromium.org> wrote:
> On Tue, May 14, 2019 at 6:44 PM John Budorick <jbud...@chromium.org> wrote:
> > The harness had bitrotten after a few months of failing green. Some of the tests bitrotted and may have stayed that way.
>
> Yes, that's what I meant, and I've been aware of the issue for a while but haven't had time to look into it. Sorry if my phrasing wasn't clear.
>
> Also, it just occurred to me that Fuchsia also runs the web tests listed in SmokeTests, although I don't think they're using any Android-specific harness (only the filtering & plumbing in blinkpy). +sergeyu just in case.
> I believe these layout tests are the only test coverage we have of these code paths.
As far as I know, that's still essentially correct.
There is some limited testing with mocked-out hardware bits these days, but it's not really a substitute for running it on the bots with real hardware end to end. Losing these tests would be unfortunate.
Does testing HW-accelerated video decode need to be done via web_tests, or would it work just as well as content_browsertests instead?