Is there a best-practices document for testharness?
I tried to use testharness in a new CL, and compared to what I would have done with the existing js-test, the webaudio tests are much less informative, especially on failure. For example, https://cs.chromium.org/chromium/src/third_party/WebKit/LayoutTests/webaudio/panner-automation-position-expected.txt?sq=package:chromium produces lots of useful information on pass, and even more useful information on failure: the actual and expected values, their absolute and relative differences, and where the max absolute and relative diffs occurred along with the corresponding values. This is really useful for adjusting the threshold for all the different platforms, and I can use the trybots to help with that instead of manually building on every single platform myself.
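As a rough illustration of the kind of diagnostics described above, here is a minimal sketch of a helper that reports the max absolute and relative differences and where they occur. The name `diffReport` and its shape are hypothetical, not the actual webaudio test utility:

```javascript
// Hypothetical helper illustrating the diagnostics the js-test based
// webaudio tests print: max absolute/relative difference and the index
// at which each occurs. Not the real test API.
function diffReport(actual, expected) {
  let maxAbs = 0, maxAbsIndex = -1;
  let maxRel = 0, maxRelIndex = -1;
  for (let i = 0; i < expected.length; ++i) {
    const abs = Math.abs(actual[i] - expected[i]);
    if (abs > maxAbs) { maxAbs = abs; maxAbsIndex = i; }
    // Fall back to the absolute diff when the expected value is zero.
    const rel = expected[i] !== 0 ? abs / Math.abs(expected[i]) : abs;
    if (rel > maxRel) { maxRel = rel; maxRelIndex = i; }
  }
  return { maxAbs, maxAbsIndex, maxRel, maxRelIndex };
}

// Example: log enough detail to tune a per-platform threshold.
const r = diffReport([1.0, 2.001, 3.0], [1.0, 2.0, 3.0]);
console.log(`max abs diff ${r.maxAbs} at index ${r.maxAbsIndex}, ` +
            `max rel diff ${r.maxRel} at index ${r.maxRelIndex}`);
```

With output like this, a failure on one platform tells you directly how far off it was and by how much the threshold would need to move.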
Having the expected values printed out also caught a recent V8 change to Math.exp, which caused the test to fail because the printed values changed. (The test itself still passed because the required threshold was still met.)
If I rewrote this test with testharness, I think the V8 change wouldn't have been caught. And if the test did fail, the failure message would be fairly useless in telling me why.
Can you write those as separate tests instead with shared setup code?
Yes, we can add a bunch of globals (ick) so that the tests can be run individually. Well, maybe. Most of the tests use OfflineAudioContext.startRendering(), which returns a promise, and the checks live in the promise resolver. There's usually a fair amount of state shared between the setup and the resolver to figure out whether the test passes or not.
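One way to share that state across separate tests without loose globals is to memoize the setup promise so every test awaits the same rendering. A rough sketch in plain JS, where `setupRendering` is a hypothetical stand-in for building the graph and calling OfflineAudioContext.startRendering() (in a real testharness.js file each check would be its own promise_test):

```javascript
// Hypothetical sketch: share one rendering pass across several tests.
// setupRendering stands in for OfflineAudioContext.startRendering().
let renderingPromise = null;

function setupRendering() {
  // Memoize so multiple tests reuse one rendering instead of
  // re-running the (expensive) offline render for each test.
  if (!renderingPromise) {
    renderingPromise = Promise.resolve({
      length: 128,
      channel: new Float32Array(128),
    });
  }
  return renderingPromise;
}

// Each "test" awaits the shared setup, then makes its own assertions.
async function testLength() {
  const buffer = await setupRendering();
  console.assert(buffer.length === 128, 'rendered length');
}

async function testChannelData() {
  const buffer = await setupRendering();
  console.assert(buffer.channel.every(v => v === 0), 'expected silence');
}

Promise.all([testLength(), testChannelData()]);
```

The memoized promise keeps the setup/resolver state in one place, but it is still effectively a module-level global, so it doesn't fully escape the "ick" mentioned above.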