
[RFC] We deserve better than runtime warnings


David Rajchenbach-Teller

Nov 20, 2014, 10:39:49 AM
to dev-platform
Consider the following scenario:

1. Module A prints warnings when it’s used incorrectly;
2. Module B uses module A correctly;
3. Some future refactoring of module B starts using module A
incorrectly, hence displaying the warnings;
4. Nobody realizes for months, because we have too many warnings;
5. Eventually, something breaks.

How often has this happened to every one of us?

This scenario has many variants (e.g. module A changed and nobody
realized that module B is now in a situation where it misuses
module A), but
they all boil down to the same thing: runtime warnings are designed to
be lost, not fixed. To make things worse, many of our warnings are not
actionable, simply because we have no way of knowing where they come
from – I’m looking at you, Cu.reportError.

I have put together an API that could replace runtime warnings with
something much more actionable, and much less noisy. The key aspects
are that:
- when the code is executed as part of the test suite, it causes test
failures;
- individual tests can whitelist individual failures;
- when the code is executed on a user's computer, it does nothing costly.
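
To give a flavour of the idea, here is a rough sketch in plain JS. This
is *not* the actual API from the blog post below, and every name in it
is made up for illustration:

  // Shared infrastructure (sketch only).
  let RUNNING_IN_TEST_SUITE = false; // assumed to be set by the test harness
  const expectedMisuses = new Set();

  // Module A calls this wherever it currently prints a warning.
  function reportMisuse(message) {
    if (!RUNNING_IN_TEST_SUITE) {
      return; // on a user's computer, do nothing costly
    }
    if (expectedMisuses.has(message)) {
      return; // this specific failure was whitelisted by the current test
    }
    // Anywhere else, the misuse becomes a test failure, not a lost warning.
    throw new Error("API misuse: " + message);
  }

  // An individual test calls this to whitelist an individual failure.
  function expectMisuse(message) {
    expectedMisuses.add(message);
  }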

To avoid spamming everyone, I have put all the details on my blog:
http://yoric.rajtel.name/2014/11/20/rfc-we-deserve-better-than-runtime-warnings/

Feedback would be appreciated.

Cheers,
David

--
David Rajchenbach-Teller, PhD
Performance Team, Mozilla


Boris Zbarsky

Nov 20, 2014, 11:24:35 AM
On 11/20/14, 10:38 AM, David Rajchenbach-Teller wrote:
> I have put together an API that could replace runtime warnings with
> something much more actionable, and much less noisy. The key aspects
> are that:
> - when the code is executed as part of the test suite, it causes test
> failures;
> - individual tests can whitelist individual failures;
> - when the code is executed on a user's computer, it does nothing costly.

This sounds lovely.

Note that in C++ for some of our test suites we already have this with
NS_ASSERTION and for all test suites we have it with MOZ_ASSERT.
Assuming, of course, that the test suite is run in debug builds...

-Boris

David Rajchenbach-Teller

Nov 20, 2014, 11:53:02 AM
to Boris Zbarsky, dev-pl...@lists.mozilla.org
I wasn't aware that we could whitelist an individual NS_ASSERTION. How
do we do that?

On 20/11/14 17:24, Boris Zbarsky wrote:
> This sounds lovely.
>
> Note that in C++ for some of our test suites we already have this with
> NS_ASSERTION and for all test suites we have it with MOZ_ASSERT.
> Assuming, of course, that the test suite is run in debug builds...
>
> -Boris
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

Boris Zbarsky

Nov 20, 2014, 11:56:17 AM
On 11/20/14, 11:51 AM, David Rajchenbach-Teller wrote:
> I wasn't aware that we could whitelist an individual NS_ASSERTION. How
> do we do that?

Ah, we can't. We can whitelist the number of assertions in a mochitest
(or a number range if the number is not quite stable), but not the text
of the assertion.
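
For reference, that count-based whitelisting looks roughly like this in
a mochitest (sketch from memory; see the SimpleTest harness for the
exact signature):

  // Annotate the test with the number of assertions it is known to hit:
  SimpleTest.expectAssertions(2);
  // or, when the count is not quite stable, a range:
  SimpleTest.expectAssertions(2, 5);

Nothing in there says which assertions are expected, only how many.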

-Boris

David Rajchenbach-Teller

Nov 20, 2014, 12:06:28 PM
to Boris Zbarsky, dev-pl...@lists.mozilla.org
I believe that we can provide something less fragile than that.

On 20/11/14 17:56, Boris Zbarsky wrote:
> Ah, we can't. We can whitelist the number of assertions in a mochitest
> (or a number range if the number is not quite stable), but not the text
> of the assertion.
>
> -Boris

L. David Baron

Nov 20, 2014, 12:15:40 PM
to David Rajchenbach-Teller, Boris Zbarsky, dev-pl...@lists.mozilla.org
> On 20/11/14 17:56, Boris Zbarsky wrote:
> > Ah, we can't. We can whitelist the number of assertions in a mochitest
> > (or a number range if the number is not quite stable), but not the text
> > of the assertion.

On Thursday 2014-11-20 18:05 +0100, David Rajchenbach-Teller wrote:
> I believe that we can provide something less fragile than that.

It's not all that fragile, since most tests don't have known
assertions, so in the vast majority of tests, we have test coverage
of regressions of assertion counts.

This covers reftests, crashtests, and all mochitests except for
mochitest-browser-chrome (bug 847275 covers doing it there).

(We should probably make an attempt to go through the tests that
have assertion count ranges including 0, since nothing catches when
those assertions are fixed, and we're missing the test coverage
there.)

-David

--
𝄞 L. David Baron http://dbaron.org/ 𝄂
𝄢 Mozilla https://www.mozilla.org/ 𝄂
Before I built a wall I'd ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

Karl Tomlinson

Nov 20, 2014, 3:19:18 PM
L. David Baron writes:

>> On 20/11/14 17:56, Boris Zbarsky wrote:
>> > Ah, we can't. We can whitelist the number of assertions in a mochitest
>> > (or a number range if the number is not quite stable), but not the text
>> > of the assertion.
>
> On Thursday 2014-11-20 18:05 +0100, David Rajchenbach-Teller wrote:
>> I believe that we can provide something less fragile than that.
>
> It's not all that fragile, since most tests don't have known
> assertions, so in the vast majority of tests, we have test coverage
> of regressions of assertion counts.
>
> This covers reftests, crashtests, and all mochitests except for
> mochitest-browser-chrome (bug 847275 covers doing it there).

runcppunittests.py, runxpcshelltests.py, and rungtests.py use
XPCOM_DEBUG_BREAK=stack-and-abort so we should have coverage there
too.

When web-platform-tests run with debug builds, we can do the same
there because known crashes can be annotated in those tests.

Anthony Jones

Nov 20, 2014, 5:21:32 PM
There is a priority list, from best to worst, something like this:

1. Types
2. Compile time assertions
3. Unit tests
4. Fatal run time assertions
5. Non-fatal runtime assertions
6. Documentation

This is the order in which you are most likely to find a problem
quickly. Obviously 1 and 2 don't apply to JavaScript, although static
analysis could be the equivalent.
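
As a rough chrome-JS illustration of the difference between 4 and 5
(hypothetical code; Cu is Components.utils):

  // The function and its fields are made up for illustration.
  function closeConnection(conn) {
    // 5. Non-fatal runtime assertion: a warning that is easy to lose.
    if (conn.closed) {
      Cu.reportError("closeConnection() called on an already-closed connection");
      return;
    }
    // 4. Fatal runtime assertion: hard to miss, at least when a test hits it.
    if (conn.pendingRequests > 0) {
      throw new Error("closeConnection() called with requests still pending");
    }
    conn.close();
  }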

Anthony

David Rajchenbach-Teller

Nov 21, 2014, 6:52:40 AM
to L. David Baron, Boris Zbarsky, dev-pl...@lists.mozilla.org
Well, for one thing, it's not self-documenting. For another, unless
I'm missing something, we won't notice if an assertion is fixed and
replaced with another one.

And yes, catching when an assertion is fixed would clearly be useful, too.

Cheers,
David

On 20/11/14 18:14, L. David Baron wrote:
> It's not all that fragile, since most tests don't have known
> assertions, so in the vast majority of tests, we have test coverage
> of regressions of assertion counts.
>
> This covers reftests, crashtests, and all mochitests except for
> mochitest-browser-chrome (bug 847275 covers doing it there).
>
> (We should probably make an attempt to go through the tests that
> have assertion count ranges including 0, since nothing catches when
> those assertions are fixed, and we're missing the test coverage
> there.)
>
> -David
>



Gijs Kruitbosch

Nov 21, 2014, 11:32:30 AM
On 20/11/2014 17:14, L. David Baron wrote:
>> On 20/11/14 17:56, Boris Zbarsky wrote:
>>> Ah, we can't. We can whitelist the number of assertions in a mochitest
>>> (or a number range if the number is not quite stable), but not the text
>>> of the assertion.
>
> On Thursday 2014-11-20 18:05 +0100, David Rajchenbach-Teller wrote:
>> I believe that we can provide something less fragile than that.
>
> It's not all that fragile, since most tests don't have known
> assertions, so in the vast majority of tests, we have test coverage
> of regressions of assertion counts.

Sure, although there are also a small number of tests which define
assertion counts of "between 10 and 18" or something like this... and
mostly when things change there (especially if in the positive
direction) we update those counts and move on with life. It would be
better if we had a more detailed understanding of which assertions
fire and why.

~ Gijs

L. David Baron

Nov 21, 2014, 11:42:35 AM
to Gijs Kruitbosch, dev-pl...@lists.mozilla.org
In general, we should have bugs filed on the assertions, and
comments next to the assertion counts pointing to those bugs, just
like we do for known failures.

It wasn't practical to do that when switching on the assertion
checking initially, though, just given the difficulty of getting the
checking landed at all while the state of the tree was changing.
(I believe I still have the data used for that, though, so I could
probably go back and file the ones that are still present.)

L. David Baron

Nov 21, 2014, 11:51:01 AM
to David Rajchenbach-Teller, Boris Zbarsky, dev-pl...@lists.mozilla.org
On Friday 2014-11-21 12:51 +0100, David Rajchenbach-Teller wrote:
> Well, for one thing, it's not self-documenting.

We should comment them better (i.e., have a bug on each one, and
point to the bug in a comment on the expectAssertions line). I
wasn't able to do that when initially landing the assertion checking
because, at the time, there were too many to keep up with the tree.
At this point I could probably go back through the data I used for
that to annotate the remaining ones.
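
Something along these lines, say (the bug number and description are
placeholders):

  // Bug NNNNNN - known intermittent assertion when this test resizes
  // the window.
  SimpleTest.expectAssertions(1, 3);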

> For another, unless
> I'm missing something, we won't notice if an assertion is fixed and
> replaced with another one.

Fixed and replaced with another one should be pretty rare. (We're
also aiming for 0 assertions; we're just not quite there yet.)

(Less than one percent of mochitest-plain files are annotated as
expecting known assertions. So we have full assertion checking
test coverage on the other 99.3%.)

> And yes, catching when an assertion is fixed would clearly be useful, too.

We do that in most cases (we report an unexpected pass, which turns
the tree orange). We just don't when an assertion is marked as
intermittent, i.e., when a test has a range of assertions rather
than a fixed number.

Chris Peterson

Nov 21, 2014, 2:25:37 PM
On 11/21/14 8:49 AM, L. David Baron wrote:
> On Friday 2014-11-21 12:51 +0100, David Rajchenbach-Teller wrote:
>> > Well, for one thing, it's not self-documenting.
> We should comment them better (i.e., have a bug on each one, and
> point to the bug in a comment on the expectAssertions line). I
> wasn't able to do that when initially landing the assertion checking
> because, at the time, there were too many to keep up with the tree.
> At this point I could probably go back through the data I used for
> that to annotate the remaining ones.

A self-documenting expectAssertions API might take an array of bug
numbers (for expected assertions) as a function argument instead of an
expected assertion count.

For extra credit, expectAssertions could use the Bugzilla API to
validate that the bug numbers refer to relevant assertion bugs and are
not RESOLVED FIXED. :)
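
In other words, something like this (entirely hypothetical; the bug
numbers are placeholders):

  // Each expected assertion is documented by the bug filed on it.
  SimpleTest.expectAssertions([1111111, 2222222]);
  // In automation, the harness could then query Bugzilla and complain
  // if one of these bugs is RESOLVED FIXED or is not an assertion bug.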


chris