Bug in test framework?

SPC

Jan 14, 2016, 2:44:29 PM
to cpputest
While running my passing tests in a bash while loop to make sure they always pass, I hit what appears to be a hiccup in the test framework.

849
..................................................
..................................................
................................................!!
........!.........................................
.....................
OK (221 tests, 218 ran, 1340 checks, 3 ignored, 0 filtered out, 40 ms)

850
..................................................
..................................................
.............................
unknown file:0: error: Failure in TEST(
NOTE: Assertion happened without being in a test run (perhaps in main?), 
      Something is very wrong. Check this assertion and fix)

      Something is very wrong. Check this assertion and fix:0: error:
Deallocating non-allocated memory
   allocated at file: <unknown> line: 0 size: 0 type: unknown
   deallocated at file: <unknown> line: 0 type: delete


.Segmentation fault (core dumped)

The number before the test dots is the number of successfully completed iterations of the tests. Opening the core file in gdb and doing a backtrace yields:

(gdb) info thread
* 1 Thread 0x7ff9c47b1720 (LWP 8857)  0x000000000045616c in MockSupport::installComparators (this=0x6bbd00, repository=...) at ../src/CppUTestExt/MockSupport.cpp:95
(gdb) back
#0  0x000000000045616c in MockSupport::installComparators (this=0x6bbd00, repository=...) at ../src/CppUTestExt/MockSupport.cpp:95
#1  0x0000000000456a68 in MockSupportPlugin::preTestAction (this=0x7fffa5790a20) at ../src/CppUTestExt/MockSupportPlugin.cpp:65
#2  0x000000000044cc28 in TestPlugin::runAllPreTestAction (this=0x7fffa5790a20, test=..., result=...) at ../src/CppUTest/TestPlugin.cpp:53
#3  0x000000000044e937 in UtestShell::runOneTestInCurrentProcess (this=0x6b92c0, plugin=0x7fffa57908c0, result=...) at ../src/CppUTest/Utest.cpp:197
#4  0x000000000044ffa0 in PlatformSpecificSetJmpImplementation (function=0x44dd50 <helperDoRunOneTestInCurrentProcess(void*)>, data=0x7fffa57907d0) at ../src/Platforms/Gcc/UtestPlatform.cpp:144
#5  0x000000000044dead in UtestShell::runOneTest (this=<value optimized out>, plugin=<value optimized out>, result=<value optimized out>) at ../src/CppUTest/Utest.cpp:182
#6  0x000000000044d729 in TestRegistry::runAllTests (this=0x6bb3a0, result=...) at ../src/CppUTest/TestRegistry.cpp:62
#7  0x00000000004451a3 in CommandLineTestRunner::runAllTests (this=0x7fffa5790970) at ../src/CppUTest/CommandLineTestRunner.cpp:119
#8  0x0000000000445286 in CommandLineTestRunner::runAllTestsMain (this=0x7fffa5790970) at ../src/CppUTest/CommandLineTestRunner.cpp:80
#9  0x000000000044548c in CommandLineTestRunner::RunAllTests (ac=2, av=0x7fffa5790b98) at ../src/CppUTest/CommandLineTestRunner.cpp:50
#10 0x00000000004039a7 in main (argc=2, argv=0x7fffa5790b98) at ../tests/AllTests.cpp:47
(gdb) list
95        if (getMockSupport(p)) getMockSupport(p)->installComparators(repository);
96 }
97
98 void MockSupport::removeAllComparators()
99 {
100    comparatorRepository_.clear();
101    for (MockNamedValueListNode* p = data_.begin(); p; p = p->next())
102        if (getMockSupport(p)) getMockSupport(p)->removeAllComparators();
103 }
104
(gdb) 

Since none of my functions appear in the backtrace, it would appear there is some issue with the mock support. Comments? Suggestions? I'll wait a little while before making any further code changes, in case anybody thinks of anything else worth looking at in this core file. Since the initial complaint relates to a delete, I suppose there could be some sort of stack/heap corruption, but determining where something like that might be is going to be hairy.
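To illustrate the kind of thing I mean, here is a contrived sketch (hypothetical class, not my actual code) of how an overrun on one heap object can corrupt the allocator's or leak detector's bookkeeping for a neighboring allocation, so the failure only surfaces later in an unrelated delete:

#include <cstring>

// Contrived illustration only -- not the real code under test.
struct Packet
{
    char header[8];
    char payload[16];
};

void fillPayload(Packet* p, const char* src)
{
    // Writes past the end of payload[] whenever src is longer than 15
    // characters, trampling whatever the heap placed next to this object.
    std::strcpy(p->payload, src);
}

int main()
{
    Packet* a = new Packet;
    char* b = new char[32];

    fillPayload(a, "this string is quite a bit longer than sixteen characters");

    delete[] b;   // may now be reported as "Deallocating non-allocated memory"
    delete a;
    return 0;
}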

 

James Grenning

Jan 15, 2016, 11:30:57 AM
to cpputest
I'm not saying it's not a bug, but usually this odd behavior is because
of something in the code under test. I would suspect a bad pointer, or
a buffer/struct overflow or underflow, leading to some undefined C
behavior.

See if you can get some more information by doing these things, adding
one at a time:
1 Turn on verbose output with -v. This should tell you which test case
is failing, unless of course this changes the undefined behavior. If
the failure still happens, go to step 2. If there is no failure, try step 4.

2 Now that you know the test case, run only that one test case using the
-n testNameSubstring option. If you still get the failure, try to
whittle down the code involved and see if you can establish cause and
effect.

3 If step 1 fails and step 2 does not, try the -g groupNameSubstring
option instead of the -n option to run the whole test group. If there is
still no failure, be less selective with the group name to run more groups.

4 Start splitting your list of test files in half and running the tests;
find a subset of test cases that shows the problem, then whittle down.

Let us know how it goes. You might need to get a debugger out, but if
it is the result of a bad pointer and undefined behavior it can be
difficult to track down.

James

--------------------------------------------------------------------------
James Grenning - Author of TDD for Embedded C - wingman-sw.com/tddec
wingman-sw.com
wingman-sw.com/blog
twitter.com/jwgrenning
facebook.com/wingman.sw

James Grenning

Jan 15, 2016, 11:54:25 AM
to cpputest
I just re-read your problem. I thought you were using the -r switch to
run your tests in a loop. You might want to run your code with -r
instead of repeatedly in bash.

James

Steven Collins

Jan 15, 2016, 12:49:25 PM
to cppu...@googlegroups.com
@$%@#$%@$!!!!

Tried your suggestion with the -r switch. I haven't reproduced the crash, but I have a new problem I'll have to run to ground. A test that has never failed since I got it passing now hangs after a few iterations. Off the top of my head I can't think of any reason for that to happen, so I have to suspect it is due to something outside that test.  :-(

Running that particular test in isolation via the -n switch doesn't result in a hang, but it does start failing after numerous iterations with the -r switch due to resource exhaustion. That particular test uses a pseudo-terminal to emulate a character device file. It looks like the O/S gets lazy about releasing the terminal once it is closed. An lsof for the process shows lots of these:

PlatformD 20157 user  491w   CHR 136,245      0t0     248 /dev/pts/245 (deleted)
PlatformD 20157 user  492r   CHR 136,246      0t0     249 /dev/pts/246 (deleted)

Any suggestions for dealing with this without IGNORE-ing that test and any other test that uses a pseudo-terminal?

Steven Collins

Jan 15, 2016, 1:04:15 PM
to cppu...@googlegroups.com
Strike that last message. Looks like I missed closing the slave side of the pseudo-terminal in the destructor for the object under test. Ouch! Possible area for enhancement of the test framework: check the number of open descriptors before and after test execution? Under Linux that would be fairly straightforward. Not sure about other environments.
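For anyone who trips over the same thing, the shape of the fix is roughly this (a hypothetical wrapper, not my actual class, with error handling omitted): the destructor has to close both the master descriptor from posix_openpt() and the slave descriptor opened via ptsname(), or every test run leaks a /dev/pts entry.

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

// Hypothetical sketch of a pseudo-terminal wrapper whose destructor closes
// BOTH sides; forgetting the slave fd is what leaks /dev/pts entries.
class PseudoTerminal
{
public:
    PseudoTerminal() : masterFd_(-1), slaveFd_(-1)
    {
        masterFd_ = posix_openpt(O_RDWR | O_NOCTTY);
        grantpt(masterFd_);
        unlockpt(masterFd_);
        slaveFd_ = open(ptsname(masterFd_), O_RDWR | O_NOCTTY);
    }

    ~PseudoTerminal()
    {
        if (slaveFd_ >= 0) close(slaveFd_);    // the close that was missing
        if (masterFd_ >= 0) close(masterFd_);
    }

    int masterFd() const { return masterFd_; }
    int slaveFd() const { return slaveFd_; }

private:
    int masterFd_;
    int slaveFd_;
};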

Steven Collins

Jan 15, 2016, 3:58:46 PM
to cppu...@googlegroups.com
Looks like file handle leak detection is a good candidate for a TestPlugin. I now have such a plugin for Linux that works like a champ. It spotted the leaks I'd already fixed when I temporarily reverted my code, and then it found another I was unaware of. I'll share it back here if I can get approval from TPTB, since it was developed while working for my employer.
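In the meantime, the general shape is roughly this (a simplified Linux-only sketch with made-up names, not the actual plugin: it counts the entries in /proc/self/fd around each test, and it assumes the stock TestPlugin pre/postTestAction hooks and the TestFailure(UtestShell*, SimpleString) constructor):

#include <dirent.h>

#include "CppUTest/TestPlugin.h"
#include "CppUTest/TestResult.h"
#include "CppUTest/TestFailure.h"
#include "CppUTest/Utest.h"
#include "CppUTest/SimpleString.h"

// Simplified sketch of a Linux-only file descriptor leak plugin: count the
// entries in /proc/self/fd before and after each test and flag any growth.
class FdLeakPlugin : public TestPlugin
{
public:
    FdLeakPlugin() : TestPlugin("FdLeakPlugin"), fdCountBefore_(0) {}

    virtual void preTestAction(UtestShell&, TestResult&)
    {
        fdCountBefore_ = countOpenFds();
    }

    virtual void postTestAction(UtestShell& test, TestResult& result)
    {
        int after = countOpenFds();
        if (after > fdCountBefore_)
            result.addFailure(TestFailure(&test, StringFromFormat(
                "File descriptor leak: %d open before test, %d after",
                fdCountBefore_, after)));
    }

private:
    static int countOpenFds()
    {
        // The count includes ".", ".." and the directory's own descriptor,
        // but they appear in both counts, so only the delta matters.
        int count = 0;
        DIR* dir = opendir("/proc/self/fd");
        if (!dir) return 0;
        while (readdir(dir) != 0) count++;
        closedir(dir);
        return count;
    }

    int fdCountBefore_;
};

It gets installed from main() with TestRegistry::getCurrentRegistry()->installPlugin(&fdLeakPlugin) before the tests run, the same way a MockSupportPlugin is installed.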

SPC

Jan 21, 2016, 3:05:31 PM
to cpputest
One problem with the "-r" switch is that it doesn't stop if an error is detected. There's a flash of red, assuming "-c" is in use, and then the error has scrolled up the screen. That's not helpful when trying to catch an error that only occurs once in several thousand iterations. Could we have an addition to the UI, either a "stop on error" switch or a "repeat until error" switch? I would tend to favor the second option, so I don't have to input some arbitrarily high number to get a long-running test session.