force each test in a separate process


Joey

Aug 3, 2010, 7:22:24 PM
to Google C++ Testing Framework
There was a similar question posted earlier this year, but I have a
specific use case. I'm testing a DLL on Windows and I'd like to run
each test in a separate process. My typical flow is:

TEST(MyTest, CheckSomeReturnCode)
{
    // 1. Call LoadLibrary and GetProcAddress on the DLL under test
    // 2. Call several functions inside this DLL
    // 3. ASSERT_* or EXPECT_* something about the return value
    // 4. When completed, call FreeLibrary
}

The problem is that calling FreeLibrary isn't enough, because many
leaked resources (in my case, device driver handles) won't be released
automatically by the operating system until process termination.
Windows DLLs often cannot clean up on unexpected unload because of
restrictions in DllMain. It seems like the best way to eliminate the
cross-test dependency is to force each test to run in a separate
process.

Death Tests use a separate process. It might be possible to create a
wrapper for each test that uses EXPECT_NO_FATAL_FAILURE around the
actual test, calls HasFatalFailure, and calls exit with success or
failure. I haven't gone very far with this idea because every test
would need another function. It seems like a real mess.

Is there a setting or a simpler way to force each test to run in a
separate process with this framework?

Vlad Losev

Aug 4, 2010, 1:24:09 PM
to Joey, Google C++ Testing Framework

You may try using Google Test's support for test sharding (http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide#Distributing_Test_Functions_to_Multiple_Machines).

Regards,
Vlad

Andrew Melo

Aug 4, 2010, 1:38:50 PM
to Vlad Losev, Joey, Google C++ Testing Framework
CMake, IIRC, uses this to map gtest to its testing harness. It parses out the test names and then uses test sharding so that each invocation of the gtest executable runs only one test.
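As a sketch of what Andrew describes, a runner can set the two sharding environment variables Google Test reads, GTEST_TOTAL_SHARDS and GTEST_SHARD_INDEX, so that each child process executes only its slice of the tests. (Python is used purely for illustration; the binary name `./apitest`, the shard count, and the helper names are assumptions.)

```python
import os
import subprocess

def shard_commands(binary, total_shards):
    """Build one (env, argv) pair per shard. With total_shards equal to
    the number of tests, each process runs exactly one test."""
    runs = []
    for index in range(total_shards):
        env = dict(os.environ,
                   GTEST_TOTAL_SHARDS=str(total_shards),
                   GTEST_SHARD_INDEX=str(index))
        runs.append((env, [binary]))
    return runs

def run_sharded(binary, total_shards):
    # Each subprocess is a fresh process, so leaked driver handles are
    # reclaimed by the OS when the shard's process exits.
    return [subprocess.run(argv, env=env).returncode
            for env, argv in shard_commands(binary, total_shards)]
```

Sharding was designed to spread tests across machines, but nothing stops a single machine from running each shard sequentially, as above.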

best,
Andrew
--
Andrew Melo

Joey

Aug 5, 2010, 3:33:14 PM
to Google C++ Testing Framework
I'll check out sharding, but it looks like that's intended for multiple
machines, and 1.5.0 contains no samples. To restate, my software under
test is untrusted. I anticipate that 1) in the best case, the test
fails based on an API return code; 2) the code crashes from a null
dereference; or 3) worst of all, the test causes some corruption, like
a buffer overrun, that does not crash the process immediately but will
carry over to other tests.

I'm probably describing a new MAYBE_DEATH(). Here's an example of what
I tried with the existing Death Tests:

void check_PassNull()
{
    // TEST1: Make an API call and pass NULL where a valid pointer is required
    // Success is returning ERR_NULL_PARAMETER
    // Failure is any other return value or a crash
    long status = MakeAnApiCall(NULL);
    ASSERT_EQ(status, ERR_NULL_PARAMETER);
}

void run_PassNull()
{
    check_PassNull();

    // Check gtest status; exit with code 0 = success or 1 = failure
    if (::testing::Test::HasFatalFailure())
        exit(1);
    else
        exit(0);
}

TEST(MyDeathTest, PassNull)
{
    // Call run_PassNull in a new process
    EXPECT_EXIT(run_PassNull(), ::testing::ExitedWithCode(0), "");
}

This gets me very close, but unfortunately I lose all of the testing
details:

[ RUN ] MyDeathTest.PassNull
Running main() from gtest_main.cc
apitest.cpp(47): error: Death test: run_PassNull()
Result: died but not with expected exit code:
Exited with exit status 1

[ FAILED ] MyDeathTest.PassNull (64 ms)

Can you comment on this approach? Any ideas on how I can enjoy the
benefits of an *_EXIT (separate process) but still pass back the
failure details from each EXPECT_*?


Joey

Aug 9, 2010, 7:13:27 PM
to Google C++ Testing Framework
So I did some research on how this works today... A normal assertion
macro will: evaluate a predicate, create an AssertionResult of
success or failure, and send the info up using AddTestPartResult; in
the case of ASSERT_*, the macro also includes a return statement to
exit the function.

In contrast, Death Tests use a special macro that forks and requires
the statement to exit, crash, or otherwise terminate the child
process. But the pipe sends only a single status character about
termination to the parent, and nothing about individual assertions
like AddTestPartResult.

My first problem was test logic; I wanted to run the statement in a
separate process and succeed if it completed with no fatal failure.
It's fairly easy to change the macro, Abort(), and Passed() to
implement that. My syntax looks like:

void check_PassNull()
{
    // TEST1: Make an API call and pass NULL where a valid pointer is required
    // Success is returning ERR_NULL_PARAMETER
    // Failure is any other return value or a crash
    long status = MakeAnApiCall(NULL);
    ASSERT_EQ(status, ERR_NULL_PARAMETER);
}

TEST(MyDeathResistantTest, PassNull)
{
    ASSERT_NO_FATAL_FAILURE_OR_DEATH(check_PassNull(), "");
}

So I think these are my next steps (any comments?):

1. My pipe still communicates only one status character. It seems
preferable to hook ReportTestPartResult in the child, send each
AssertionResult for the TestPart across the pipe, and report them from
the parent. This way the parent gets the details of individual
assertions.

2. It isn't easy to debug forked code. The technique needs some way to
disable forking and execute the statement in the current process like
a normal test, maybe controlled by a command-line argument.

3. The macro adds a level and makes for ugly code. It should be
possible to do the forking higher, at the TEST() level.
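Step 1 above can be prototyped outside of gtest. The sketch below is Python used purely for illustration and relies on a POSIX-only fork (gtest's own fork-based death tests have the same restriction); `record` and the other names are invented stand-ins. It shows a parent receiving a list of per-assertion results over the pipe instead of a single status character:

```python
import multiprocessing

# Fork start method: the child inherits the test function directly,
# just as a forked death test inherits the test statement.
ctx = multiprocessing.get_context("fork")

def child_main(conn, test_fn):
    # Child side: run the test and send every assertion result back,
    # not just one pass/fail status character.
    results = []
    def record(passed, message):  # stand-in for a ReportTestPartResult hook
        results.append((passed, message))
    test_fn(record)
    conn.send(results)
    conn.close()

def run_in_child(test_fn, timeout=10):
    # Parent side: fork, then read the serialized results off the pipe.
    # If the child crashes without reporting, the poll times out.
    parent_end, child_end = ctx.Pipe()
    proc = ctx.Process(target=child_main, args=(child_end, test_fn))
    proc.start()
    if parent_end.poll(timeout):
        results = parent_end.recv()
    else:
        results = [(False, "child terminated without reporting")]
    proc.join()
    return results
```

The parent can then report each entry through its own result mechanism, preserving the details of individual assertions across the process boundary.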

Zhanyong Wan (λx.x x)

Aug 9, 2010, 7:38:05 PM
to Joey, Google C++ Testing Framework
Hi Joey,

Instead of making the death test mechanism more complex (it's already
quite complex as is), I'd suggest you write a simple test runner that:

1. runs the test executable with --gtest_list_tests to get a list of all TESTs.
2. runs the test executable with --gtest_filter=FooTest.Bar to invoke
only one TEST at a time.

You only need to write such a runner script once, and it shouldn't be
hard. Your test logic can stay simple.
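A minimal sketch of such a runner (Python for illustration; the binary name `./apitest` is an assumption, and the handling of `#` comments in the listing is a guess at the `--gtest_list_tests` output format for parameterized tests):

```python
import subprocess

def parse_test_list(listing):
    """Parse `--gtest_list_tests` output into full Suite.TestName ids."""
    tests, suite = [], None
    for line in listing.splitlines():
        name = line.split("#")[0].rstrip()   # drop trailing comments
        if not name.strip():
            continue
        if not name.startswith(" "):
            # A suite header such as "MyTest."; ignore banner lines
            # like "Running main() from gtest_main.cc".
            suite = name.strip() if name.strip().endswith(".") else None
        elif suite:
            tests.append(suite + name.strip())
    return tests

def run_each_in_own_process(binary):
    """Run every TEST in `binary` in its own process; return failures."""
    listing = subprocess.run([binary, "--gtest_list_tests"],
                             capture_output=True, text=True).stdout
    failures = []
    for test in parse_test_list(listing):
        # A fresh process per test: a crash or leaked driver handle in
        # one test cannot carry over to the next.
        code = subprocess.run([binary, "--gtest_filter=" + test]).returncode
        if code != 0:
            failures.append(test)
    return failures
```

Here `run_each_in_own_process("./apitest")` would return the names of the TESTs whose processes exited non-zero, whether from a failed assertion or a crash.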

Or, you might be able to use CMake's support for this, as Andrew Melo
suggested. I haven't tried that myself though.

--
Zhanyong

Joey

Aug 9, 2010, 9:02:33 PM
to Google C++ Testing Framework
I appreciate your feedback and I'll consider the runner approach as I
move forward. I agree with the pros -- it could be easy to write and
nothing extra is needed for debugging. I see a few cons though:

- The runner may replicate a lot of gtest logic, such as:
  - Compiling individual test results into a main report
  - Randomizing the order of tests
  - Repeating tests
  - ...etc.

- If tests are invoked individually, then:
  - The global environment gets set up and torn down each time
  - The test fixture gets set up and torn down each time
  I think that's already a concern with Windows CreateProcess in my
  proposal. The difference is that the runner would have to act like
  the fixture and pass expensive information by command-line argument.

- There will be an impact on event listeners. This only matters if
they're used.

- The separate process logic lives inside the runner, so the test
binary becomes incompatible with other runners (like gtest-gbar GUI)
unless they're built with the same change. This is kind of minor since
very few runners exist.

I'm still on the fence because some of those cons are important to me.
I'll keep exploring and let you know what I end up with.

Zhanyong Wan (λx.x x)

Aug 10, 2010, 1:31:15 AM
to Joey, Google C++ Testing Framework
On Mon, Aug 9, 2010 at 6:02 PM, Joey <joeyo...@gmail.com> wrote:
> I appreciate your feedback and I'll consider the runner approach as I
> move forward. I agree with the pros -- it could be easy to write and
> nothing extra is needed for debugging. I see a few cons though:
>
> - The runner may replicate a lot of gtest logic like
>  - Compiling individual test results into a main report

Do you really need that? You already have multiple test reports
anyway (one for each test executable), unless you only have one test
executable, which is unlikely.

>  - Randomizing the order of tests

This doesn't add any value when you run different TESTs in different
processes and sequentially.

>  - Repeating tests

You already said that your TESTs cannot clean up after themselves
properly, so repeating them doesn't make much sense. And if you
really want to, you can still use --gtest_repeat with --gtest_filter.

>  - ...etc
>
> - If tests are invoked individually then:
>  - The global environment gets set-up and torn-down each time
>  - The test fixture gets set-up and torn-down each time

True. But the same is true if you use death tests. On Windows, when
EXPECT_DEATH(blah, ...) runs, it cannot magically jump to where blah
is in the child process. Instead, the child process has to run from
the beginning, including setting up the global environment and the
test fixture.

--
Zhanyong
