accurate memory leak checking?


Kevin Watters

Apr 2, 2009, 10:39:21 AM
to Google C++ Testing Framework
At http://gist.github.com/89218 I've got a testing::Test subclass that uses the Win32 API's _CrtMemCheckpoint and _CrtMemDifference functions in a test fixture's SetUp and TearDown methods to check for memory leaks during a test.

When #if defined(WIN32) && defined(_DEBUG) is in effect, any heap allocations that are not freed by the end of a test are reported on stderr and cause a test failure. This works great, except for two issues:

1) _CrtMemDifference includes lots of heap-allocated memory from the test framework itself, so there ends up being lots of noise around the actual leak.

To solve this, I need a way to call _CrtMemCheckpoint *immediately* before and after the test case is run, without any Google Test allocations happening in between. Am I missing an obvious way to do this?

2) Inside TearDown I check Test::HasFatalFailure() to make sure we don't report leaks when there is already a failure. But when an EXPECT_XXX has failed, it's not a fatal failure, and the leak finder reports unfreed heap allocations that are not actually test leaks--they're more Google Test internals.

This is partially caused by #1, so if that issue is solved, I don't think it will matter.
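
For reference, the fixture is roughly the following (simplified from the gist; the class name here is illustrative, not necessarily what the gist uses):

#if defined(WIN32) && defined(_DEBUG)

#include <crtdbg.h>
#include <gtest/gtest.h>

class MemLeakTest : public testing::Test {
 protected:
  virtual void SetUp() {
    _CrtMemCheckpoint(&start_);  // snapshot the CRT debug heap before the test body runs
  }

  virtual void TearDown() {
    if (HasFatalFailure()) return;  // gtest already allocated memory to record the failure
    _CrtMemState end, diff;
    _CrtMemCheckpoint(&end);
    if (_CrtMemDifference(&diff, &start_, &end)) {  // nonzero when the two states differ
      _CrtMemDumpStatistics(&diff);  // report the leaked blocks via the CRT debug report
      ADD_FAILURE() << "Memory leak detected during test.";
    }
  }

 private:
  _CrtMemState start_;
};

#endif  // defined(WIN32) && defined(_DEBUG)

Tests then derive from MemLeakTest instead of testing::Test.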

Zhanyong Wan (λx.x x)

Apr 2, 2009, 10:08:22 PM
to Kevin Watters, Google C++ Testing Framework, Vlad Losev
Hi, Kevin,

Vlad is investigating this. I suspect that #1 doesn't have a simple
solution. However, it's very easy to implement HasNonFatalFailure()
and HasFailure(), so we can solve #2 first.


--
Zhanyong

Vlad Losev

Apr 3, 2009, 3:13:59 PM
to Kevin Watters, Google C++ Testing Framework
Hi Kevin -

Google Test doesn't allocate any memory between SetUp and TearDown unless you have failures. If there are some, it needs memory to record them, and it allocates that memory dynamically, of course. This, I am afraid, is not going to change any time soon.

From your message I understand that you are only interested in detecting leaks if your tests succeed (please correct me if I am wrong). Unfortunately, HasFatalFailure() does not report the presence of non-fatal failures and doesn't quite fit your bill. As Zhanyong has said, there is no quick solution to that problem. We are planning to soon open Google Test's event listener interface (see issue 58), which will make it very easy to find out when failures happen, but that change is not there yet. :-(

Nevertheless, I have a workaround you might want to consider. You don't actually have to detect failures in each test. Collect memory diffs for all of your tests, just as you do now, along with the test names. Then, after the RUN_ALL_TESTS macro finishes, check its return value. If all tests passed, examine your collected diffs and print out the non-zero ones.
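
In code, the idea is roughly this (just a sketch; LeakRecord and leak_records are names I made up, not part of Google Test):

#include <cstdio>
#include <string>
#include <vector>
#include <gtest/gtest.h>

struct LeakRecord {
  std::string test_name;
  size_t leaked_bytes;
};
static std::vector<LeakRecord> leak_records;

// In TearDown, record the diff instead of failing:
//
//   if (_CrtMemDifference(&diff, &start_, &end)) {
//     const testing::TestInfo* info =
//         testing::UnitTest::GetInstance()->current_test_info();
//     LeakRecord record;
//     record.test_name =
//         std::string(info->test_case_name()) + "." + info->name();
//     record.leaked_bytes = diff.lSizes[_NORMAL_BLOCK];  // leaked normal-block bytes
//     leak_records.push_back(record);
//   }

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  const int result = RUN_ALL_TESTS();
  if (result == 0) {  // only when every test passed are the diffs trustworthy
    for (size_t i = 0; i < leak_records.size(); ++i) {
      if (leak_records[i].leaked_bytes > 0) {
        fprintf(stderr, "Possible leak in %s: %lu bytes\n",
                leak_records[i].test_name.c_str(),
                (unsigned long)leak_records[i].leaked_bytes);
      }
    }
  }
  return result;
}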

This will give you a list of leaking tests when all the tests pass. Unfortunately, it doesn't give you as good reporting granularity as your intended solution. If you have some tests failing, you may want to re-run the test binary with the --gtest_filter flag to filter out the failing ones. It still requires some manual intervention :-( so stay tuned for that event listener interface!

Regards,
Vlad.


Zhanyong Wan (λx.x x)

Apr 3, 2009, 4:03:25 PM
to Vlad Losev, Kevin Watters, Google C++ Testing Framework
On Fri, Apr 3, 2009 at 12:13 PM, Vlad Losev <vl...@google.com> wrote:
> We are planning to soon open Google Test's event listener interface
> (see issue 58), which will make it very easy to find out when failures
> happen, but that change is not there yet. :-(

Checking whether there are failures in a test using the event listener
API is more involved than calling HasFailure(). You need to implement
the listener interface and register it. I suggest implementing
HasFailure(), which is trivial to do and more convenient for Kevin's
needs.
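
With HasFailure() in place, the leak check could be guarded like this (a sketch; HasFailure() doesn't exist yet as of this writing):

// Assumes a HasFailure() that, unlike HasFatalFailure(), also
// reports non-fatal EXPECT_XXX failures; start_ is the _CrtMemState
// captured in SetUp.
virtual void TearDown() {
  if (HasFailure()) return;  // gtest allocated memory to record the failure
  _CrtMemState end, diff;
  _CrtMemCheckpoint(&end);
  if (_CrtMemDifference(&diff, &start_, &end))
    ADD_FAILURE() << "Memory leak detected during test.";
}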

--
Zhanyong

Zhanyong Wan (λx.x x)

Apr 5, 2009, 2:33:30 PM
to Vlad Losev, Kevin Watters, Google C++ Testing Framework
2009/4/3 Zhanyong Wan (λx.x x) <w...@google.com>:

> I suggest implementing HasFailure(), which is trivial to do and more
> convenient for Kevin's needs.

FYI, I have a draft implementation of HasFailure() and
HasNonFatalFailure(). I'll probably clean it up and have it reviewed
on Monday.

These functions have been requested before, and we decided to wait
until there was more demand. Now seems like a good time to finally do it.
--
Zhanyong

Kevin Watters

Apr 5, 2009, 6:42:50 PM
to Google C++ Testing Framework
> These functions have been requested before, and we decided to wait
> until there was more demand. Now seems like a good time to finally do it.

Thanks for following up on this!

My only thought about keeping gtest allocations out of the leak
reports is to use a pool (w/ placement new, if necessary), but I
haven't had a chance to look into the code to see how feasible this
is, or whether it would be worth the effort.

- Kevin

Zhanyong Wan (λx.x x)

Apr 5, 2009, 8:07:15 PM
to Kevin Watters, Google C++ Testing Framework

We've considered this. The problem is that each test function can
generate an unbounded number of failures (note that most other
frameworks will allow only one failure in each test function), and
each failure can contain a custom message that's arbitrarily long.
Therefore we may run out of space with a pre-allocated pool approach.

Plus, soon we'll open the event listener interface to allow people to
write plug-ins to respond to various gtest events. Such plug-ins may
allocate memory any time they want, any way they want.

Therefore I think, realistically, we won't attempt to catch leaks
when there is already a failure. Let's first make it possible to
catch leaks when there is otherwise no failure.


--
Zhanyong
