#include <benchmark/benchmark.h>
#include <cassert>
#include <iostream>
#include <map>
#include <memory>
#include <random>
#include <set>
#include <string>

void MyTable::insert(const std::string& key, int val) {
    // The client must not provide a duplicate key: look the key up first
    // (m_map is the underlying std::map) and raise an assertion if it is
    // already present.
    assert(m_map.find(key) == m_map.end() && "duplicate key found");
    // Assertion passed; do the actual insertion.
    m_map.emplace(key, val);
}
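For context, the real createTable() and generateRandomString() are not shown here; stand-in definitions along these lines (my simplifications; in the real file the class definition precedes the member function above) make the listing self-contained:

class MyTable {
public:
    void insert(const std::string& key, int val);  // defined above
private:
    std::map<std::string, int> m_map;  // the underlying storage; the real class has more bells and whistles
};

// Stand-in: the real createTable() does more setup.
std::shared_ptr<MyTable> createTable() {
    return std::make_shared<MyTable>();
}

// Stand-in: a random lowercase string of the given length.
std::string generateRandomString(size_t len) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> dist('a', 'z');
    std::string s;
    for (size_t i = 0; i < len; ++i)
        s.push_back(static_cast<char>(dist(gen)));
    return s;
}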
// This is the fixture I am using.
// I expect O(log n) performance per insertion, since this is just a
// std::map insertion underneath.
class MyFixture : public ::benchmark::Fixture
{
public:
    MyFixture() { std::cout << "ran ctor\n"; }
    void SetUp(const ::benchmark::State& state)
    {
        std::cout << "ran setup\n";
        // Creates an empty table, basically an empty std::map
        // with some bells and whistles.
        m_myTableInstance = createTable();
    }
    void TearDown(const ::benchmark::State& state)
    {
        std::cout << "ran teardown\n";
    }
    ~MyFixture() {}
public:
    std::shared_ptr<MyTable> m_myTableInstance;
};
BENCHMARK_DEFINE_F(MyFixture, TableInsertionWithFixture)(benchmark::State& st) {
    // Build a set of unique random strings (outside the timed loop).
    std::set<std::string> stringSet;
    size_t numOfStrings(st.range_x());
    size_t eachStringSize(7);
    while (stringSet.size() != numOfStrings) {
        auto theStr = generateRandomString(eachStringSize);
        stringSet.insert(theStr);
    }
    std::cout << "got here once" << std::endl;
    // Time the insertion.
    while (st.KeepRunning()) {
        std::cout << "ran again\n";
        for (auto const& id : stringSet) {
            m_myTableInstance->insert(id, 1);
        }
    }
}
BENCHMARK_REGISTER_F(MyFixture, TableInsertionWithFixture)->Arg(1000);
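For completeness, the binary is driven by the stock entry point (assuming no custom main elsewhere):

BENCHMARK_MAIN();  // expands to a main() that runs all registered benchmarks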
Now, this triggers the assertion. Because the timed body is cheap, the framework runs the test point multiple times to get a better estimate, and the second pass inserts the same keys into the table that the first pass already populated.
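As a sanity check, a minimal standalone fixture that only counts loop iterations makes this behaviour visible (CountingFixture is a hypothetical example of mine, using the same KeepRunning-era API):

#include <benchmark/benchmark.h>
#include <iostream>

class CountingFixture : public ::benchmark::Fixture {
public:
    void SetUp(const ::benchmark::State&) { std::cout << "counting fixture SetUp\n"; }
};

BENCHMARK_DEFINE_F(CountingFixture, CountIterations)(benchmark::State& st) {
    size_t iterations = 0;
    while (st.KeepRunning())
        ++iterations;  // the framework decides how many times this runs
    std::cout << iterations << " iterations after a single SetUp\n";
}
BENCHMARK_REGISTER_F(CountingFixture, CountIterations);
BENCHMARK_MAIN();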
With my benchmark above, I get the following run sequence:
ran setup
got here once
ran again
ran again // here the assertion will be raised.
whereas I would have expected the following sequence:
ran ctor
ran setup
got here once
ran setup
got here once
ran setup
got here once
// finally, after some number of iterations, we get a correct baseline.
1. Why is the fixture's SetUp code not run again, given that the framework itself has decided the test point needs to run multiple times to get a better estimate?
For example, here is another framework that seems to use a similar run sequence: https://github.com/DigitalInBlue/Celero#general-program-flow (there, EachSample/EachExperiment corresponds to each rerun of the experiment for a better estimate, and the setup code is fired off again every time).
2. In the spirit of googletest and other xUnit frameworks, my thought was that a fresh fixture would be constructed for every run, so "ran ctor" would get printed here as well.
3. If the run sequence is not the one I expected/wanted, is there a suitable way to achieve it? For instance, is something along the lines of the sketch below the intended approach?
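(A sketch of mine, replacing the timing loop in my benchmark above: PauseTiming/ResumeTiming are real Google Benchmark calls, and recreating the table inside the loop would play the role of a per-iteration setup.)

while (st.KeepRunning()) {
    st.PauseTiming();
    m_myTableInstance = createTable();  // per-iteration "setup": a fresh, empty table
    st.ResumeTiming();
    for (auto const& id : stringSet)    // only the insertions are timed
        m_myTableInstance->insert(id, 1);
}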
Thanks for answering my questions and helping clear up my doubts.
Edit: here is a variant that builds a fresh random string set on every iteration, with the timer paused so that only the insertions are measured:

BENCHMARK_DEFINE_F(MyFixture, TableInsertionWithFixture)(benchmark::State& st) {
    std::cout << "outside the while loop\n";
    while (st.KeepRunning()) {
        std::cout << "inside the while loop\n";
        // Regenerate the string set for this iteration; pause the timer so
        // the generation is not measured (note that each Pause/Resume pair
        // itself has some overhead).
        st.PauseTiming();
        std::set<std::string> stringSet;
        size_t numOfStrings(st.range_x());
        size_t eachStringSize(7);
        while (stringSet.size() != numOfStrings) {
            auto theStr = generateRandomString(eachStringSize);
            stringSet.insert(theStr);
        }
        st.ResumeTiming();
        // Only the insertions below are timed.
        for (auto const& id : stringSet) {
            m_myTableInstance->insert(id, 1);
        }
    }
}
This produces the following sequence:
ran setup
outside the while loop
inside the while loop
ran teardown
ran setup
outside the while loop
inside the while loop
inside the while loop
inside the while loop
ran teardown
The sequence now kind of makes sense, but it would be nice to have a flow diagram of the call sequence.
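In lieu of a diagram, here is my reconstruction of the flow from the output above (pseudocode in comments, not the library's actual source):

// For each registered benchmark (and each Arg()):
//     fixture.SetUp(state)            // "ran setup"
//     while (state.KeepRunning())     // the framework picks the count
//         ...timed benchmark body...  // "inside the while loop"
//     fixture.TearDown(state)         // "ran teardown"
//
// The framework repeats whole SetUp..TearDown runs with growing iteration
// counts until the estimate stabilises; it never calls SetUp or TearDown
// per KeepRunning iteration.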