On Wednesday, June 12, 2013 at 06:47:27 UTC+2, Christian Askeland wrote:
> Thanks for replying, Phil.
>
> I'm looking forward to it, and good luck with the 1.0 release!
I was just about to request the same feature; same setup here: we moved from CppUnit to Catch, but now I need to identify slow-running tests. An option for sorting the test output by actual running time would also be helpful.
> The idea behind the adaptive iterations is that Catch would first execute the test once, within a timer. If the recorded duration is less than some value (which would be some multiple of the timer resolution, giving some confidence in the timing) then it executes the test 10 times and checks again - then 100 times, then 1000, etc. until it takes longer than the threshold value (increasing 10x seems to do the trick, but other multiples could work too).
> So eventually it has executed enough repetitions that the timed value is reasonably reliable (assuming background tasks are not significant).
>
> This can be useful if you want to keep an eye on the actual performance of something (as a complement to proper profiling using a dedicated tool, of course).
OK, got it. I need a bit of time to think about where that could be applied in our team/project...
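Just to check that I understand the scheme, here is a rough sketch of that adaptive loop (hypothetical code, nothing from Catch itself; runOnce() stands in for whatever executes the test body, std::clock() is used only because we are limited to C++98, and overflow of the iteration counter is ignored):

    #include <ctime>

    // Repeat 10x more each round until the total run time comfortably
    // exceeds the timer resolution, then report the per-iteration
    // average, which should then be reasonably reliable.
    double timeAdaptively( void (*runOnce)(), double thresholdMs ) {
        for( unsigned long iterations = 1; ; iterations *= 10 ) {
            std::clock_t start = std::clock();
            for( unsigned long i = 0; i != iterations; ++i )
                runOnce();
            double elapsedMs =
                1000.0 * ( std::clock() - start ) / CLOCKS_PER_SEC;
            if( elapsedMs > thresholdMs )
                return elapsedMs / iterations; // average per run
        }
    }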
> If you just want to be alerted to long running tests then that's obviously not what you need. You just want to time a single execution. Clearly we need the latter before we can get the former anyway :-)
>
>
> As for sorting results in duration order, that's an interesting possibility. The problem there is that results are reported progressively, as the tests are running. In order to sort them we'd need to store the results as we run, then sort them before reporting. That's not out of the question (the JUnit reporter does have to store results before writing them out, due to the way it reports failure counts as attributes) - but my preference is to keep everything progressively reported where possible.
I could live with a custom reporter for this feature; I haven't looked into the (new) reporter interface yet, but maybe a custom reporter could also do the trick for the timeout thresholds (see below). Conceptually, I have something like the following in mind.
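The method names here are made up (again, I don't know yet what the actual reporter callbacks look like); the point is just to store one (duration, name) pair per test case and sort once at the end:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical duration log - the two hooks would be wired up to
    // whatever per-test-case callbacks the reporter interface offers.
    struct DurationLog {
        std::vector< std::pair<double, std::string> > entries;

        void testCaseEnded( const std::string& name, double ms ) {
            entries.push_back( std::make_pair( ms, name ) );
        }
        void testRunEnded() {
            // sort descending by duration, i.e. slowest tests first
            std::sort( entries.rbegin(), entries.rend() );
            for( std::size_t i = 0; i != entries.size(); ++i )
                std::cout << entries[i].first << " ms  "
                          << entries[i].second << "\n";
        }
    };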
> The other alternative you mentioned - setting a threshold timeout - is also interesting. I suspect it's a lot more complex than it sounds, though, as I imagine it would require running each test on a separate thread. The inherent complexity there is multiplied by doing it portably (in C++98) with no external dependencies! You may achieve a similar effect by running the whole test executable from a script that is capable of killing the process after a timeout (most CI systems have such a feature built in already).
Threads would not do the trick here, since you cannot cancel/kill a thread without risk; you would need a separate process, along the lines of the watchdog sketched below.
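Just for completeness, a minimal watchdog of that kind could look like this (a POSIX-only sketch, nothing to do with Catch; as you say, most CI systems already ship this functionality, and GNU coreutils provides a timeout command that does the same job):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <signal.h>
    #include <cerrno>
    #include <cstdlib>
    #include <iostream>

    static void onAlarm( int ) {} // exists only to interrupt waitpid()

    // usage: watchdog <seconds> <command> [args...]
    int main( int argc, char* argv[] ) {
        if( argc < 3 ) {
            std::cerr << "usage: " << argv[0]
                      << " <seconds> <command> [args...]\n";
            return 2;
        }
        pid_t child = fork();
        if( child == 0 ) {
            execvp( argv[2], &argv[2] ); // run the test executable
            _exit( 127 );                // exec failed
        }
        struct sigaction sa;
        sa.sa_handler = onAlarm;
        sigemptyset( &sa.sa_mask );
        sa.sa_flags = 0;  // no SA_RESTART: let waitpid() fail with EINTR
        sigaction( SIGALRM, &sa, 0 );
        alarm( static_cast<unsigned>( std::atoi( argv[1] ) ) );

        int status = 0;
        if( waitpid( child, &status, 0 ) < 0 && errno == EINTR ) {
            kill( child, SIGKILL );      // timed out: kill the whole run
            waitpid( child, &status, 0 );
            std::cerr << "watchdog: test run timed out\n";
            return 1;
        }
        return WIFEXITED( status ) ? WEXITSTATUS( status ) : 1;
    }

That said, I was thinking more in the direction you describe next: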
> But if you just want to be alerted that a test case is starting to take a bit too long then I suspect the simplest way to support that in Catch would be to make the duration a success/failure criterion, i.e. if the test takes longer than so many milliseconds to run then the test fails (but still allow it to run to completion). It does introduce non-determinism into your testing, which I don't like, but could be useful. It could be made a "soft error", so it gets reported but does not fail the overall test run.
This is the behavior I am striving for. My idea is to set a timeout threshold of, say, 10ms for the developers, flagging overly slow tests during the development cycle (modify-compile-link-test); a false failure here (e.g., due to a temporarily heavy background load) could easily be verified by rerunning the test. The threshold for the CI machines would be set to a much less restrictive value (e.g. 50ms) to avoid reporting false failures; or we could omit the timeout threshold altogether on the CI machines.
I could also imagine using the tagging system to classify the tests into fast, medium, slow, and ultra-slow categories, and using such thresholds to ensure each test carries the right tag.
If I needed that for only a subset of tests, I would use a RAII-style timeout helper class with a CATCH_REQUIRE() check in the dtor, instantiated at the top of the test implementation - see the sketch below. But doing this manually for 100s or 1000s of tests... Maybe a custom reporter could help with that?
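Roughly what I have in mind (a sketch under C++98 constraints; note that I would actually use CHECK rather than REQUIRE in the destructor, because REQUIRE throws on failure and throwing from a destructor is asking for trouble):

    #include <ctime>
    #include "catch.hpp"

    // RAII time budget: fails the enclosing test case (softly, via
    // CHECK) if the scope it guards runs longer than maxMilliseconds.
    // std::clock() measures CPU time on most platforms, which is good
    // enough as a coarse "this test got slow" alarm.
    class ScopedTimeBudget {
    public:
        explicit ScopedTimeBudget( double maxMilliseconds )
        :   m_maxMs( maxMilliseconds ),
            m_start( std::clock() )
        {}
        ~ScopedTimeBudget() {
            double elapsedMs =
                1000.0 * ( std::clock() - m_start ) / CLOCKS_PER_SEC;
            CHECK( elapsedMs <= m_maxMs );
        }
    private:
        double m_maxMs;
        std::clock_t m_start;
    };

    TEST_CASE( "something that should stay fast", "[fast]" ) {
        ScopedTimeBudget budget( 10.0 ); // 10ms developer threshold
        // ... actual test code ...
    }

The budget could then be chosen per tag category, e.g. 10.0 for [fast] and something much larger for [slow].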
Or maybe we could have hooks to register for test-begin/test-end events?