// Please also note that generator expressions are evaluated in
// RUN_ALL_TESTS(), after main() has started. This allows evaluation of
// parameter list based on command line parameters.
This is exactly what I need to do, but how can I get there?
I am (ab)using googletest to write some tests that, strictly speaking,
are integration tests running through a collection of input files. In
this way, the number of input files and their nature decide how many
tests will be run, and the outcome of the code exercised in each test
is held up against an "expected result for this file" that is embedded
in the files themselves.
My current code follows this general outline:
// includes
#include <string>
#include <vector>
#include <gtest/gtest.h>

namespace
{

std::vector<std::string> fileList;

class SuperDuperTest :
    public ::testing::TestWithParam<std::string>
{
    // Test fixture stuff; SetUp, TearDown, etc.
};

TEST_P(SuperDuperTest, ...)
{
    // ...
}
// etc.

INSTANTIATE_TEST_CASE_P(InstantiationName, SuperDuperTest,
                        ::testing::ValuesIn(fileList));

} // anonymous namespace

int main(int argc, char* argv[])
{
    ::testing::InitGoogleTest(&argc, argv);
    // grab remaining command line arguments
    // and use them to build up fileList
    // through liberal use of push_back()
    return RUN_ALL_TESTS();
}
Now, this obviously does not do the trick, because ValuesIn copies the
contents of the container at the time INSTANTIATE_TEST_CASE_P is
called. At that time, fileList is still empty.
How, then, can I use a parameter list based on command line parameters
(ref. the comment)?
Thanks in advance for any helpful response. I have done a bit of
searching before posting, but have been unable to find a good
answer.
--
Einar
Vlad,
Thanks a lot for the quick and comprehensive response!
I understand that I need to look into some of the details here. If I
come up with something brilliant, I will let you know.
I gave the code you attached a spin. After adding a few things:
using std::ifstream;
using std::ios;
, fixing a couple of typos:
...
const string* FileContentIterator::Current() const {
  if (current_string_ == NULL) {
    ifstream is(current_->c_str());
    if (!is.good()) { // <- s/ifs/is/
...
INSTANTIATE_TEST_CASE_P(TestInstantiation,
                        FileParamTest,
                        FileContents(g_file_names)); // <- s/g_file_name/g_file_names/
, and adding some parameters in main:
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Here, copy the file name parameters to g_file_names from argc/argv.
  g_file_names.push_back("foo.txt");
  g_file_names.push_back("bar.txt");
  return RUN_ALL_TESTS();
}
, I still run into the same problem as before. "0 tests from 0 test
cases ran."
As before, g_file_names is empty upon FileContents construction.
Maybe something even more devious is needed?
Regards,
Einar
First of all, I tinkered a bit more with the code you sent, and
finally got it to work as intended; SetUp duly printed the file
contents for each file I supplied. To make it do this, I had to build
up the g_file_names vector BEFORE calling InitGoogleTest, at the very
start of main. So, the final tweak I did to your code was basically
to change:
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Here, copy the file name parameters to g_file_names from argc/argv.
  return RUN_ALL_TESTS();
}
to:
int main(int argc, char** argv) {
  // First of all, copy the file name parameters to g_file_names from
  // the non-googletest-related entries in argv.
  testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
My tests only need the file names, not the contents - the code under
test opens and processes the file it is handed, all by itself. This was
probably not completely clear from my initial post (or, if it was, it
probably seemed too unbelievable that this could be a problem).
I therefore realized - with a groan - that all I had to do in my own
test code was to handle my own arguments (and build up the value
(file) list) first, then pass argc and argv on to InitGoogleTest and
use good old ValuesIn(g_file_names) in INSTANTIATE_TEST_CASE_P.
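Spelled out, my test code now boils down to something like this (a rough
sketch only - the test and instantiation names are placeholders, and the
argument handling is naive; real code would have to leave any --gtest_*
flags alone):
#include <string>
#include <vector>
#include <gtest/gtest.h>

namespace
{

std::vector<std::string> g_file_names;

class SuperDuperTest :
    public ::testing::TestWithParam<std::string>
{
    // SetUp, TearDown, etc. as before.
};

TEST_P(SuperDuperTest, MatchesExpectedResultInFile)
{
    // GetParam() is one file name; the code under test opens the file
    // itself, and the outcome is checked against the expected result
    // embedded in the file.
}

// The generator is evaluated in InitGoogleTest(), not here, so it is
// enough that g_file_names is filled in before that call.
INSTANTIATE_TEST_CASE_P(InstantiationName, SuperDuperTest,
                        ::testing::ValuesIn(g_file_names));

} // anonymous namespace

int main(int argc, char* argv[])
{
    // Handle our own arguments first; here every argument is naively
    // treated as an input file name.
    for (int i = 1; i < argc; ++i)
    {
        g_file_names.push_back(argv[i]);
    }
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}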
If I had needed the contents of each file passed to the tests, your
solution would fit the bill perfectly (with my minor adjustment).
Thanks for your input. I need to go smack my forehead some more now.
Einar
Hi Einar,
Please don't bang your head! It's our fault to have misleading
documentation. Sorry for your trouble!
We should fix the documentation and our own tests. I've created
http://code.google.com/p/googletest/issues/detail?id=262 to track it:
<quote>
Einar Rune Haugnes reports that in gtest-param-test.h we say that a
parameter generator is evaluated in RUN_ALL_TESTS(). In fact it is
evaluated in InitGoogleTest().
I checked the code and think it's doing the intended thing: by generating
all parameterized tests in InitGoogleTest(), we give the user a chance to
iterate through all the tests using the test reflection API before calling
(or skipping) RUN_ALL_TESTS(). For example, someone may use that
information to build a list of tests and show them in a GUI and let the
user decide which tests to run, or whether to run the tests at all.
We should fix the comment and wiki documentation to say that the parameter
generator is evaluated in InitGoogleTest() instead of RUN_ALL_TESTS().
We should also update/add tests to verify this is indeed the case. Our
current tests only verify that the generator is evaluated in main(), which
isn't accurate enough.
</quote>
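To make the reflection point from the issue concrete: between
InitGoogleTest() and RUN_ALL_TESTS() the generated tests already exist
and can be enumerated, roughly like this (just a quick sketch):
#include <cstdio>
#include <gtest/gtest.h>

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);

  // The parameterized tests have been generated by now, so we can list
  // them (or show them in a GUI, etc.) before deciding whether to call
  // RUN_ALL_TESTS() at all.
  const testing::UnitTest& unit_test = *testing::UnitTest::GetInstance();
  for (int i = 0; i < unit_test.total_test_case_count(); ++i) {
    const testing::TestCase* test_case = unit_test.GetTestCase(i);
    for (int j = 0; j < test_case->total_test_count(); ++j) {
      printf("%s.%s\n", test_case->name(), test_case->GetTestInfo(j)->name());
    }
  }

  return RUN_ALL_TESTS();
}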
Thanks for the detailed report and sharing your solution!
And to clarify this to other users: you don't need to implement your
own parameter generator in order to create a parameter list at run
time. What Einar described in his previous post is the right (and
easier) approach. For example, even if you want the file contents as the
parameters, something along these lines would do (a rough sketch only;
ReadWholeFile is a made-up helper, not part of Google Test):
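#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <gtest/gtest.h>

// Made-up helper that slurps one file into a string.
std::string ReadWholeFile(const std::string& path) {
  std::ifstream is(path.c_str());
  std::ostringstream os;
  os << is.rdbuf();
  return os.str();
}

std::vector<std::string> g_file_contents;

class FileContentTest : public testing::TestWithParam<std::string> {
};

TEST_P(FileContentTest, HandlesContent) {
  // GetParam() is the content of one input file.
}

INSTANTIATE_TEST_CASE_P(CommandLineFiles, FileContentTest,
                        testing::ValuesIn(g_file_contents));

int main(int argc, char** argv) {
  // Read the input files *before* InitGoogleTest(), which is where the
  // generator above gets evaluated.
  for (int i = 1; i < argc; ++i) {
    g_file_contents.push_back(ReadWholeFile(argv[i]));
  }
  testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}

Thanks,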
--
Zhanyong
Regards,
Einar
On 17 Mar, 08:30, Vlad Losev <vl...@google.com> wrote:
> D-oh! I was under a very strong impression that we evaluate generator parameters at
> the point of macro invocation. It was strong enough that I didn't re-check
> the source. It was also very wrong. Einar - my apologies for the confusion.
> We should definitely fix both documentation and the tests.
>
> 2010/3/16 Zhanyong Wan (λx.x x) <w...@google.com>
>
> > Hi Einar,
>
> > Please don't bang your head! It's our fault to have misleading
> > documentation. Sorry for your trouble!
>
> > We should fix the documentation and our own tests. I've created
> > http://code.google.com/p/googletest/issues/detail?id=262 to track it: