timing tests?


stevej

Sep 29, 2010, 5:02:29 PM
to Google C++ Testing Framework
What are your thoughts on using gtest to detect performance problems?
Basically, run a test N times and expect that it takes less than T
time. Clearly I can write my own test to do this, but I wonder if it
would be useful to have the generic ability to apply a timing test to
any existing test case? This would presumably be done via command line
argument(s), probably referencing a config file with the details of
what to run and the time limits.

I'd be interested to hear what others think about building this into
gtest.

Joey Oravec

Sep 29, 2010, 6:12:45 PM
to stevej, Google C++ Testing Framework
Steve -
 
I make enough timing measurements that I have functions handy to record t_start and t_end and calculate t_elapsed using the high-resolution Windows timers. For the things I test, I'm able to calculate an estimated time and EXPECT_* that t_elapsed is within some percent of it.
 
It seems like having gtest take the measurement automatically is too coarse. For example, should the interval include SetUp and TearDown for the test? For the fixture? For the environment? What if instead we had:
 
- Function to take a high resolution timestamp
- Function to calculate an elapsed interval
- Assertion that a float is between min and max limits
- Assertion that a float is within a percentage in each direction
 
That way you can measure any interval and check for failure with only 3 or 4 lines of code. And helpers would be nice because I reuse that code constantly.
 
One thing I would like to see at the TEST() level is a timeout that can stop a test whose software under test has deadlocked and still let the subsequent tests run. However, that probably requires a subprocess, and is discussed in more detail in other postings.
 
-joey

stevej

Sep 30, 2010, 9:54:30 AM
to Google C++ Testing Framework
Yes, I have my own timer and have done the same. And I agree that for
high precision the issues of setup/teardown, etc. would make the
results only approximate. But what I'm thinking is that if you got the
timing tests "for free" -- i.e. without writing any additional code --
they could be used to detect significant slowdowns that might be
caused by, for example, using a new algorithm, changing convergence
criteria, etc. I'm clearly not talking about realtime systems here,
rather calculations that normally take, say, 100 msec but degrade to
200 msec after some code change.