Test Case Daily Execution Rate

Provost, Shelley
Jan 4
to global-...@googlegroups.com

This metric measures the efficiency of test execution and validation and is used to improve the test case execution rate on projects. The rate is calculated at the end of the project and only includes time when Test Engineers are performing System Testing, Regression Testing, Solution Testing, Results Analysis, and re-testing.

How the Rate is calculated (a worked sketch follows the steps):

1. GTA hours logged to codes 10.01, 10.02, 10.05, and 10.89, divided by 6 (hours per FTE day) = FTE days
2. FTE days * number of QA resources working on the project = Total Testing Days
3. Total Testing Days / total number of test cases executed on the project = the Rate
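
For illustration, here is a minimal Python sketch of the arithmetic above. The time-reporting codes and the 6-hours-per-FTE-day divisor come straight from the steps; the function name, variable names, and sample figures are hypothetical, and step 3 is implemented exactly as written (note that this order of division yields days per test case; invert the result for test cases per day):

    def daily_execution_rate(gta_hours, qa_resources, test_cases_executed):
        # Step 1: GTA hours logged to codes 10.01, 10.02, 10.05, and 10.89,
        # divided by 6 hours per FTE day.
        fte_days = gta_hours / 6.0
        # Step 2: scale by the number of QA resources on the project.
        total_testing_days = fte_days * qa_resources
        # Step 3: Total Testing Days divided by total test cases executed.
        return total_testing_days / test_cases_executed

    # Hypothetical project: 240 GTA hours, 4 testers, 800 test cases executed.
    # 240 / 6 = 40 FTE days; 40 * 4 = 160 testing days; 160 / 800 = 0.2
    print(daily_execution_rate(240, 4, 800))  # 0.2 days per test case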

Walen, Pete
Feb 12, in response to Provost, Shelley
to global-...@googlegroups.com

I'm curious about this metric in general. Broadly speaking, it appears to be looking at some form of productivity measure. I wonder what information is hoped to be gleaned from it.
 
The nature of projects varies by environment, by restrictions on capability (some may require more manual effort because of the hardware involved), and by the maturity of the software products involved. Project teams will also be affected by the experience of the test team generally and by the participants' experience with the systems they are working on.
 
The complexity of setting up or changing environmental configurations will certainly have an impact, as will other activities that are not equal, equivalent, or congruent between systems.
 
I wonder, therefore, if someone can address the intent of this metric beyond conjecture. 
 
Thanks so much -
 
Pete 

Provost, Shelley
Feb 14, in response to Walen, Pete
to global-...@googlegroups.com

Hi, Pete!
 
To your first point about "what information is hoped to be gleaned from this": we will start to see productivity trends for each product, against which future projects can be compared.

To your second point regarding project variables: these factors are understood, which is why this metric is tracked by project and not by individual.

To your third point regarding environments: environment setup/configuration is excluded from the Validation portion of the metric.
 
If you'd like further clarification beyond what has already been stated, we can discuss this in our QA meeting with Abe since I'm on your team!

With these metrics in their infancy, it is difficult to predict where the discrepancies will appear. As long as we measure consistently, we can fine-tune them further down the road. We are already in the process of refining the calculation; once it is approved by upper management, it will be communicated to the QA Managers.

Walen, Pete
Feb 15, in response to Provost, Shelley
to global-...@googlegroups.com

 I know we're on the same team!  :)
 
I will still, annoyingly, ask a couple of questions which I believe are fundamental to the matter.
 
What is the productivity that the organization is interested in?  How is this being defined?  (The definition has a direct bearing on the measurement function applied, which is why I am asking the question.)
 
What defines a "test case"?  I realize this is a fundamental question, yet which instances in a Test Specification are defined as a "case"?
 
Finally, as a point of clarification, is the Configure Test Environment time reporting code available or not quite yet?  If it is not yet available, will that not skew the potential results?

Diluzio, Gerry
Feb 16, in response to Provost, Shelley
to global-...@googlegroups.com

Pete, thanks for your feedback. To answer your question on "what defines a test case?": each product has defined that for itself. We are all already reporting test case metrics as part of the test status report and the Master Test Report. Those definitions differ per product, so the goal number of test cases for each product will be different.
 
We will likely find that we need to tweak those goals as actual numbers flow in, but we cannot wait for perfect information to begin measuring ourselves. We need to kick off the process and adjust as needed. Let me know if this answers your question or if it requires more discussion. Thanks.

Walen, Pete
Feb 28, in response to Diluzio, Gerry
to global-...@googlegroups.com

Thanks for responding, Gerry.

So, if each product group/team has its own definition of what makes a "test case," and these entities are being measured by project, presumably the intent of the existing metrics is to compare the size (number of test cases) of projects within the product group.
 
Is the new ratio (test cases daily execution rate) intended to compare the efficacy of test execution within the same product group between projects within that group? 
 
It strikes me that the result of each of these measurement functions may not be the result that is actually intended.
 
By making each failure point within a test case (a logical collection of steps intended to answer a specific question about a software application under test) a test case in itself, the number of test cases will increase, making the projects "bigger." Additionally, the focus of the new metric may discourage software testers from examining anomalous behavior, even when the literal "expected results" are met. If the testers know that "it matters" how many test cases are executed each day, will human behavior not lead to a skewing of the outcome of these activities?
 
Of course, there are ways to offset these effects; however, the message delivered may well vary from the message received.

Diluzio, Gerry
Feb 28, in response to Walen, Pete
to global-...@googlegroups.com

Pete, I think I understand your point. We may very well find that people are manipulating the data to drive better numbers. I believe we need to rely on our governance to root that kind of thing out. If for some reason Product A's team is banging out 50 test cases a day in 2012, their 2013 goal will probably be 55. So I would expect this to be self-correcting over time.
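
To make the "self-correcting" ratchet concrete, here is a toy Python sketch of that goal-setting step; the 10% uplift is only what the 50-to-55 example implies, not a stated policy, and the function name is hypothetical:

    def next_year_goal(observed_daily_rate, uplift=0.10):
        # Assumed ~10% uplift, per the 50 -> 55 example; round to a
        # whole number of test cases per day.
        return round(observed_daily_rate * (1 + uplift))

    print(next_year_goal(50))  # -> 55, the 2013 goal in the example above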
 
Also, as we all migrate to Silk Central, we should have a more standardized definition of what a test case means, which I think will partially address this issue as well. Thanks.