Custom Outcome types?

John Arrowwood

Apr 7, 2016, 6:26:52 PM
to scalatest-users
I noticed that Outcome is a sealed abstract class, so it is not possible to define a custom outcome type.

I ask because I just finished writing a trait that lets me mark tests as "broken". Marking a test as broken implements the equivalent of pendingUntilFixed. But the problem there is that it is not possible to distinguish between a test that is not implemented and a test that is implemented but where the application does not match the spec. It would be kind of nice to have an additional report of the number of test cases "blocked by open defects" (or equivalent). Similarly, if a "broken" test passes, it is currently marked as Failed, which is indistinguishable from a behavior that used to work and has suddenly started failing. In the same vein, if you mark a test as "fixed" and it fails again, that is a possible regression. But the reports don't let you distinguish any of these cases.
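For concreteness, here is roughly how the trait works, as a minimal sketch. The trait name, the defectId parameter, and the failure message are my own inventions, not ScalaTest API; the pending/fail mechanics deliberately mirror what pendingUntilFixed does:

  import org.scalatest.Assertions
  import org.scalatest.exceptions.TestPendingException
  import scala.util.control.NonFatal

  // Illustrative sketch, not ScalaTest API. A body wrapped in broken(...)
  // reports as Pending while it still fails, and fails loudly once it
  // starts passing, so the stale marker gets noticed and removed.
  trait BrokenTests { this: Assertions =>
    def broken(defectId: String)(body: => Unit): Unit = {
      try body
      catch {
        // Still failing, as expected while the defect is open: report pending.
        case NonFatal(_) => throw new TestPendingException
      }
      // The body passed, so the defect may be fixed: flag the stale marker.
      fail(s"Test marked broken ($defectId) now passes; remove the marker")
    }
  }

Mixed into a suite, a test body becomes broken("DEF-123") { ... } and shows up as pending until the defect is actually fixed.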

Admittedly, Failed means that someone needs to look at it and do whatever it takes to make the test pass again, even if that means filing a defect and then marking the test as broken.

But I'm wondering if there wouldn't be value in surfacing more fine-grained outcomes for some of those other conditions, especially when doing integration testing. As the number of tests that are blocked by defects grows, that count serves a dual function: hey, look, our testing is finding bugs; and also, the bugs are piling up, so maybe we need to work on fixing them.

That said, how big a headache would it be to allow individuals to define their own custom sub-types?

Bill Venners

Apr 10, 2016, 12:34:56 PM
to scalate...@googlegroups.com
Hi John,

Sorry for the delay in responding. I think having a sealed hierarchy is important so that exhaustiveness checking can be done when pattern matching on Outcome. We could potentially model PendingUntilFixed as its own Outcome. We could also have a different kind of failed status that means a pendingUntilFixed test started working again. I don't think the second one is worth it, because, as you say, a failure means someone needs to go in and fix it, and in the case of a pendingUntilFixed clause, just removing the clause fixes the test.
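For example, a handler like this gets exhaustiveness checking for free. This is illustrative code, not anything in the library, and it assumes the four concrete outcomes ScalaTest currently defines:

  import org.scalatest.{ Canceled, Failed, Outcome, Pending, Succeeded }

  // Because Outcome is sealed, the compiler can warn when a match misses
  // a case. Illustrative only.
  def describeOutcome(outcome: Outcome): String = outcome match {
    case Succeeded    => "passed"
    case Failed(ex)   => s"failed: ${ex.getMessage}"
    case Canceled(ex) => s"canceled: ${ex.getMessage}"
    case Pending      => "pending"
  }

If we added a new Outcome subtype, every such match would get a compiler warning, which is exactly the guarantee sealedness buys.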

I do see value in being able to count the number of pendingUntilFixed tests. I'd prefer not to add another outcome, like Broken, to Outcome, but that would be the way to do it, I think. Then broken tests would show up yellow and not add to the number of failures. Instead of marking with pendingUntilFixed { ... }, you could mark with broken { ... }, which is more concise.

I'll think about that, but the upshot is that I think Outcome should remain sealed. We can enhance it over time. We could also think about ways to count the things you are interested in counting that don't involve Outcome.
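For instance, one Outcome-free possibility (just a sketch, and the tag names are made up) would be tags, since a suite's tags method already exposes a Map[String, Set[String]] from test name to tag names that tooling could subtotal:

  import org.scalatest.Tag

  // Sketch only; the tag names are invented for illustration. Tooling can
  // count tests per category via Suite.tags without any change to Outcome.
  object Broken extends Tag("com.example.tags.Broken")
  object QualityDebt extends Tag("com.example.tags.QualityDebt")

  // In a FlatSpec-style suite:
  //   it should "round-trip unicode keys" taggedAs (Broken) in { ... }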

Bill

--
Bill Venners
Artima, Inc.
http://www.artima.com

John Arrowwood

Apr 11, 2016, 12:26:44 PM
to scalate...@googlegroups.com
My philosophy (which I'm sure we share) is that every line item in the test results should convey some useful bit of information to the reader. Each line item should make it relatively clear what is included in that number and what action to take to influence it. For example, if a test is broken, the way to influence that number is to go fix defects. If the line item is "tests defined but not implemented," the action item is to go implement those tests.

When you start putting different things into the same bucket, you send mixed signals and make it difficult to take decisive action based on test results. If both pending and broken go into the same bucket, you can no longer clearly understand what that line item of the report means.

I know we are in agreement on that, so I won't belabor the point.  

I do feel that having a single bucket for "broken" tests creates a similar situation, though. Allow me to give an example:

Some tests are broken, and we as an organization do not expect to ever get around to fixing them, unless by some chance we happen to be re-writing that module from scratch.  As the test suite grows, I expect that bucket to grow.  I'd like to think that as that number grows, the business would become increasingly willing to invest in raising the overall quality of the product by addressing some of those defects, but I know it may or may not ever come to pass.

On the other hand, other broken tests represent behavior that worked in the last build and was broken by something that was done since. NOBODY should be ignoring these tests.

But if I put them all in the same bucket, the meaning of the "broken" bucket becomes murky, and the ability to know at a glance what action should be taken is lost. If instead I can divide the broken tests into two categories ("quality-debt" vs. "regression"), then the meaning of the numbers is clear, and the action that needs to be taken is also clear. Thus I will have accomplished my intent of clearly communicating to the reader.

So, if you add a "broken test" outcome, can you give me the ability to mark tests broken in different ways, and to sub-total each of those ways? Please? :)
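Something like this, say. It is again just a sketch, building on the hypothetical BrokenTests trait from my first message; none of these names are real API:

  import org.scalatest.Assertions

  // Sketch only, on top of the hypothetical BrokenTests trait above:
  // one marker per category, so a report could sub-total each bucket.
  trait CategorizedBroken extends BrokenTests { this: Assertions =>
    // Blocked by a known defect that nobody plans to fix soon.
    def qualityDebt(defectId: String)(body: => Unit): Unit =
      broken(s"quality-debt/$defectId")(body)

    // Worked in the last build; a recent change broke it.
    def regression(defectId: String)(body: => Unit): Unit =
      broken(s"regression/$defectId")(body)
  }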

--
John Arrowwood
503.863.4823
