I think we have a fundamental problem with our unit tests: most of our test runs don't have 100% pass rates. This is a problem because you can introduce errors without realizing it: the NUnit tree doesn't look significantly different afterward (lots of green and red before vs. lots of green and red after).
For example, when I wrote the unit test page (http://groups.google.com/group/dblinq/web/unit-tests), I noted that SQL Server had 425 tests run with 70 failures. As I write this, 432 tests run (yay) with 190 failures (boo!) -- more than double the failure count compared to March 23.
I don't know what caused the increased failures. Currently, I don't care.
What I do currently care about is preventing such regressions in the future, and the way to do that is by getting our unit tests 100% green (so that regressions and errors are actually visible, not hidden in a sea of existing errors).
I can think of two[0] solutions to this, and I welcome any additional suggestions.
1. Use #if's in the test code to remove failing tests.
2. Use [Category] attributes on the tests to declare which tests shouldn't be executed. The Categories tab within NUnit allows you to specify which categories are executed.
The problem with (1) is a gigantic increase in code complexity, as each vendor will have a different set of failing unit tests, so we'd potentially need checks for every vendor on each method. This is incredibly bad.
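For illustration, here's a minimal sketch of what (1) could look like on a single test. The vendor symbols (MSSQL, MYSQL, ORACLE) and the test name are made up for the example, not our actual defines:

    using NUnit.Framework;

    [TestFixture]
    public class ProductTests
    {
    // Hypothetical per-vendor build symbols; every vendor on which
    // this test fails needs its own symbol in the check.
    #if !MSSQL && !MYSQL && !ORACLE
        [Test]
        public void SelectProductsLazy()
        {
            // ... actual test body ...
        }
    #endif
    }

And that's just one method; multiply by hundreds of tests and several vendors.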
(2) would at least avoid the "line noise" implied by #if, though it could also get very "busy" (with upwards of 7 [Category] attributes on a given method).
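For comparison, a minimal sketch of (2), assuming a made-up naming convention like "Failing-<Vendor>" for the categories:

    using NUnit.Framework;

    [TestFixture]
    public class ProductTests
    {
        [Test]
        // One [Category] per vendor this test is known to fail on;
        // the names below are just an example convention.
        [Category("Failing-MsSql")]
        [Category("Failing-MySql")]
        [Category("Failing-Oracle")]
        public void SelectProductsLazy()
        {
            // ... actual test body ...
        }
    }

When running against a given vendor, you'd select that vendor's "Failing-" category on NUnit's Categories tab and exclude it, so everything that actually runs should be green.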
Thoughts? Alternatives?
Thanks,
- Jon
[0] OK, a third solution would be to actually fix all the errors so that everything is green without using (1) or (2) above, but I don't think this is practical in the short term.