Having assert_throws() call output(e.message) will force us to update every expectations file at once, and will force us to test the exact spelling of the error message, which is probably too precise in many situations. Having a separate function to print the message will remove the conciseness of assert_promise_rejects() calls, since we'll have to extract the error in a separate line. Even giving assert_throws() a parameter to optionally print the message will still force users to depend on exact error message spellings, which is often a recipe for fragile tests. That's why my patch allowed regular expressions, not just exact strings.

That said, maybe the opt-in route that checks exact messages is the best alternative. Are expectations files not part of the upstream tests?
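[For illustration, a minimal standalone sketch of the regex idea above. The function name `assertThrowsMatching` and its option shape are hypothetical, not the actual testharness.js patch; the point is that `message` may be a RegExp rather than an exact string.]

```javascript
// Hypothetical sketch: check the thrown error's name, and match its
// message against either an exact string or a regular expression.
function assertThrowsMatching(func, expected) {
  try {
    func();
  } catch (e) {
    if (expected.name !== undefined && e.name !== expected.name) {
      throw new Error(`expected ${expected.name}, got ${e.name}`);
    }
    if (expected.message instanceof RegExp) {
      // A regex keeps the test robust to minor rewording of the message.
      if (!expected.message.test(e.message)) {
        throw new Error(
            `message "${e.message}" does not match ${expected.message}`);
      }
    } else if (typeof expected.message === 'string' &&
               e.message !== expected.message) {
      throw new Error(`message "${e.message}" !== "${expected.message}"`);
    }
    return;
  }
  throw new Error('function did not throw');
}

// Only the stable part of the message is pinned down:
assertThrowsMatching(
    () => { throw new TypeError('index 3 is out of range'); },
    { name: 'TypeError', message: /out of range/ });
```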
On Wed, Jul 22, 2015 at 4:43 PM, 'Jeffrey Yasskin' via blink-dev <blin...@chromium.org> wrote:
> That said, maybe the opt-in route that checks exact messages is the best alternative. Are expectations files not part of the upstream tests?

Correct, upstream tests do not have expectations per se. The tests are either manual (== pixel tests), ref tests, or assertion-based, in which case they only care about which assertions might fail. We do a bit more and generate expectations when we want to track a test that only fails partially, but we do not require expectations for most tests.
On Wed, Jul 22, 2015 at 5:28 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> Correct, upstream tests do not have expectations per se. Either the tests are manual (== pixel tests), ref tests, or assertion-based, in which case they only care about what assertions might fail.

So is this the direction we want to go? Have assert_throws(func, {name: 'FooError', printMessage: true}) call printErrorMessage(e.message), which is provided by testharnessreport.js?
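[A standalone sketch of the opt-in design being proposed. All names beyond assert_throws and printErrorMessage are hypothetical; here a stub stands in for the embedder-provided printErrorMessage from testharnessreport.js so the flow is visible.]

```javascript
// Stub for the hook that the embedder's testharnessreport.js would
// supply; in Blink it would emit the message into the test's output,
// to be compared against the -expected.txt baseline.
const printed = [];
function printErrorMessage(msg) { printed.push(msg); }

// Sketch of assert_throws with the opt-in printMessage flag.
function assert_throws_sketch(func, expected) {
  try {
    func();
  } catch (e) {
    if (e.name !== expected.name) {
      throw new Error(`expected ${expected.name}, got ${e.name}`);
    }
    // Opt-in: only tests that set printMessage depend on the exact text.
    if (expected.printMessage) printErrorMessage(e.message);
    return;
  }
  throw new Error('function did not throw');
}

assert_throws_sketch(
    () => {
      throw Object.assign(new Error('The object is in an invalid state.'),
                          { name: 'InvalidStateError' });
    },
    { name: 'InvalidStateError', printMessage: true });
// `printed` now holds the message for the expectations comparison.
```

Tests that omit the flag keep today's behavior, so only the files that opt in would need new baselines.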
On Wed, Jul 22, 2015 at 5:36 PM, 'Jeffrey Yasskin' via blink-dev <blin...@chromium.org> wrote:
> So is this the direction we want to go? Have assert_throws(func, {name: 'FooError', printMessage: true}) call printErrorMessage(e.message), which is provided by testharnessreport.js?

IMHO, that sounds like a plausible direction to run past the testharness maintainers, to see if they're amenable or have a better suggestion; my thinking was leading in the same direction. We'd probably need to change our PRESUBMIT (etc.) that currently rejects an -expected.txt containing only PASS and no FAIL lines, so that it allows e.g. PASS+MESSAGE. It assumes test maintainers can be convinced to sprinkle the printMessage: flag into the appropriate tests.

This probably deserves a thread on public-test-infra.

(On a tangent, I should note that it's explicitly a non-goal of the testharness maintainers to have stable error messages from failing assertions. For example, they consider it acceptable for an assertion message to contain a timestamp, a random number, or platform-specific data, which would not work with our -expected.txt mechanism. That *should* be orthogonal to your proposal above, since the called function would be implemented by us, and we'd explicitly want its input to be stable and the test to fail if the message changes.)