Support for lazy test development?


Nicholas Sterling

Jun 23, 2014, 3:56:33 PM
to specs2...@googlegroups.com
Hi folks.

Does specs2 support lazy test development?  I probably need to explain the term...

We have another test framework -- not for unit tests -- that allows us to do the moral equivalent of

   f(x) mustMatchApproved

where the approved value for the test is to be determined later.  When we run the test the first time, we are shown what f(x) produced, and asked if that is correct.  If so, the result is saved as the approved answer.  That is, we get to interactively approve test results rather than having to manually encode them.

This makes it much simpler to write tests where the correct results are not so easily described.  It is often *far* easier to visually inspect a result and say "yes, that's correct" than it is to write out the result ourselves.  Picture, for example, a system that manipulates an attributed tree or DAG, perhaps in some phase of a compiler -- you want to test that given an input tree and some operation, a correct output tree is produced.

We are in a similar situation, and our tree dumps are sometimes hundreds of lines long.  We would like to be able to write unit tests for this functionality, but manually encoding those results would be very difficult, in addition to bloating the test files tremendously.

Also, a lazy test facility can offer assistance in dealing with test failures. It can, for example, show you the new output and ask you whether it should replace the old approved output -- perhaps you made a deliberate change that affects the result.  It could do some kind of a diff between the old and the new, so that you can see just what changed in all that output.  You could also imagine being able to drop into a REPL with expected and actual results as vals.
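The diff step described here can be sketched without any test framework at all. Below is a naive line-by-line comparison (a real tool would use a proper diff algorithm such as Myers'); the function name is just illustrative:

```scala
// Naive line-by-line comparison between the approved output and the new
// output. Returns only the lines that changed, prefixed "-" (old) and
// "+" (new). A real implementation would use a proper diff algorithm.
def simpleDiff(approved: String, actual: String): Seq[String] = {
  val oldLines = approved.split("\n", -1).toVector
  val newLines = actual.split("\n", -1).toVector
  (0 until math.max(oldLines.length, newLines.length)).flatMap { i =>
    (oldLines.lift(i), newLines.lift(i)) match {
      case (Some(o), Some(n)) if o == n => Nil  // unchanged line
      case (o, n) => o.map("- " + _).toList ++ n.map("+ " + _).toList
    }
  }
}
```

For a hundreds-of-lines tree dump this would show only the handful of changed lines, which is exactly what you want when deciding whether to re-approve.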

Is there any facility which even remotely offers this approval-based approach to writing tests, or offers a good starting point for writing one?

etorreborre

Jun 23, 2014, 9:39:33 PM
to specs2...@googlegroups.com
Hi Nicholas,

There is nothing like that at the moment, but this is not the first time that this feature has been requested.

I agree that this would be useful and would like to see something like this integrated into specs2. However, I don't have much time at the moment (my focus is on finishing the next major version).
One thing you can do right now is to prototype this functionality by creating an ad-hoc matcher that:

 - on the first run, shows the result, calls readLine for approval, then writes the result to a file
 - on subsequent runs, shows a comparison of the stored result with the new result and asks for approval (or doesn't, when running under continuous integration)
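The two steps above can be prototyped independently of specs2 before wrapping them in a matcher. A minimal sketch, where the `ask` callback abstracts the readLine prompt so that CI can pass `_ => false` and never block (names and file layout are assumptions, not specs2 API):

```scala
import java.nio.file.{Files, Path}
import java.nio.charset.StandardCharsets
import scala.io.StdIn

// Core of an approval check: `file` holds the approved output, `actual`
// is what the code under test produced, and `ask` decides whether a new
// or changed result should be accepted. Returns true if the test passes.
def approve(file: Path, actual: String,
            ask: String => Boolean =
              msg => { println(msg); StdIn.readLine() == "y" }): Boolean = {
  val bytes = actual.getBytes(StandardCharsets.UTF_8)
  if (!Files.exists(file)) {
    // First run: show the result and ask whether it is correct.
    if (ask(s"No approved result yet. Accept this output?\n$actual\n[y/N] ")) {
      Files.write(file, bytes); true
    } else false
  } else {
    val stored = new String(Files.readAllBytes(file), StandardCharsets.UTF_8)
    if (stored == actual) true
    else if (ask(s"Stored:\n$stored\nNew:\n$actual\nReplace? [y/N] ")) {
      // Deliberate change: overwrite the approved result.
      Files.write(file, bytes); true
    } else false
  }
}
```

A specs2 matcher would then just call this and turn the Boolean into a match result; under CI you'd pass an `ask` that always refuses, so an unapproved change fails the build.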

Some things to think about:

 1. how to identify the results, which depend on the function f?
 2. how to pass the directory that is going to hold the files?
 3. how to change the behaviour for asking a confirmation or not?
 4. is there a way to do this without unrestricted side effects?

1. I don't know. There is currently a mechanism to save examples results (at the example level, not the function level). In the future I might extend it to store other kinds of results?
2. this can be passed through the command line arguments by mixing in the CommandLineArguments trait
3. same thing, use an argument
4. this might be done elegantly with the new structure (coming up in the next release) but I'm not sure
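For questions 2 and 3, a dependency-free stand-in for the CommandLineArguments approach is to read system properties, which sbt can forward from the command line. The property names here (`approvals.dir`, `approvals.interactive`) are invented for the sketch:

```scala
// Configuration for the approval matcher, read from system properties
// (e.g. sbt "test -Dapprovals.dir=src/test/approved").
// Property names are assumptions, not an existing convention.
case class ApprovalConfig(dir: java.nio.file.Path, interactive: Boolean)

def configFromProps(props: java.util.Properties): ApprovalConfig =
  ApprovalConfig(
    java.nio.file.Paths.get(
      Option(props.getProperty("approvals.dir")).getOrElse("src/test/approved")),
    // Interactive by default; set approvals.interactive=false under CI.
    Option(props.getProperty("approvals.interactive")).forall(_ == "true")
  )
```

Mixing in specs2's CommandLineArguments trait, as Eric suggests, would be the more integrated route; this just shows the shape of the configuration either way.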

I'm sorry I can't offer much more for now....

Nicholas Sterling

Jun 23, 2014, 11:22:42 PM
to specs2...@googlegroups.com
Honestly, Eric, this is better than I expected. At least you know what I am talking about and can see the value in it.

"I'm sorry I can't offer much more for now...."

Hey, listen -- thanks a ton for creating specs2 and sharing it with the world.  You're awesome.

The next step for me, to be honest, is to find out whether the answer is any different for scalatest. I suspect it isn't, in which case my team will have to think about how to make incremental progress on it ourselves, feeding it back to the community. Thanks for the thoughts on how to get started. We probably wouldn't have the free cycles to work on this in the near future, but I hope the conversation doesn't completely die here.

If any students out there are looking for an interesting summer project, this has potential....

Nicholas Sterling

Jun 24, 2014, 5:41:00 AM
to specs2...@googlegroups.com
Eric, check this out.  Bill Venners remembered someone talking about this, so I googled for the term he used and found a package, ApprovalTests, that may be just what I was looking for: the unit-test equivalent of our own lazy test facility.

etorreborre

Jun 24, 2014, 8:00:33 PM
to specs2...@googlegroups.com
That should probably work out of the box with specs2. The only issue I see is that it seems to throw Errors instead of AssertionErrors, which is the regular JUnit failure type. So it is likely to be interpreted as an error and not a failure in specs2.

Anyway if I integrate such a facility in the future I'll have a look at what they did.

Eric.

Nicholas Sterling

Jun 25, 2014, 3:10:53 AM
to specs2...@googlegroups.com
Well, there might be another problem.  A guy on my team tried to use ApprovalTests with specs2 today, and reported this:

"I tried using it in specs2 and I get an error:
Could not find Junit TestCase you are running
java.lang.RuntimeException: Could not find Junit TestCase you are running

If you look at the code it actually walks the stack trace trying to find the Junit test you're running from.  So, it might only work with Junit."
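Without the ApprovalTests source in front of us this is only a guess, but the error message suggests stack walking of roughly this shape, which explains why it fails under specs2 (no frame looks like a JUnit test):

```scala
// Rough illustration (not ApprovalTests' actual code) of finding the
// calling JUnit test by scanning the current stack for a frame whose
// class or method name follows JUnit naming conventions. Under specs2,
// examples are strings rather than testXxx methods, so nothing matches.
def findJUnitCaller(): Option[StackTraceElement] =
  Thread.currentThread.getStackTrace.find { frame =>
    frame.getClassName.endsWith("Test") ||
      frame.getMethodName.startsWith("test")
  }
```

If that is what it does, bridging it to specs2 would mean replacing the stack scan with an explicitly supplied test name, which is the kind of internal interface question raised below.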

Given that it already works with Java and .NET, I wonder whether ApprovalTests has some internal interface that could be bridged to specs2.

etorreborre

Jun 25, 2014, 3:37:21 AM
to specs2...@googlegroups.com
Yes, I've had a quick look at the code and it might be a bit too tied to JUnit unfortunately.