QA Effort Metric


Provost, Shelley
Jan 4
The Validation effort should be no greater than 40%* of the Application Development effort on a project from Stage Gate 2 to Stage Gate 3. This will be measured using hours logged to the PDM Development and Validation stages of a project. The purpose of the metric is to ensure QA efficiency and to help bring focus to areas of QA involvement during a project that may need adjustment.

*This metric may be adjusted based on product.

How the Rate is calculated:

Validation / (Development + Validation)

Where 

Development = all hours logged to the 08.x and 09.x Oracle task codes for a project
Validation    = all hours logged to the 10.x Oracle task codes for a project
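
For illustration, here is a minimal sketch of the calculation in Python, assuming a hypothetical set of per-task-code hour totals for one project (the task codes and numbers below are made up for the example, not pulled from PDM or Oracle):

# Hypothetical hour totals keyed by Oracle task code for one project.
hours_by_task_code = {
    "08.1": 320.0,  # development
    "09.2": 180.0,  # development
    "10.1": 150.0,  # validation
    "10.3": 90.0,   # validation
}

development = sum(h for code, h in hours_by_task_code.items()
                  if code.startswith(("08.", "09.")))
validation = sum(h for code, h in hours_by_task_code.items()
                 if code.startswith("10."))

rate = validation / (development + validation)
print(f"Validation rate: {rate:.1%}")  # 32.4% here, under the 40% target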

Walen, Pete

Feb 12 in response to Provost, Shelley
I'm curious about this metric and what the intent is. Presumably there is an expectation that testing (validation) effort should not exceed the stated percentage of the development effort.
 
In stating this, it appears there are some presumptions being made around the efficacy of certain aspects of the development process.  It also appears that some assumptions are being made about the quality of the product delivered for validation. 
 
It is not uncommon for anomalous behavior to take a fairly significant amount of time to identify, track, recreate, and then explain/advocate to the rest of the group. A not-unnatural reaction on the part of software development is to write the behavior off as "user" (or tester) error. It then takes greater time and effort on the part of testing to gather evidence to explain the business impact and the impact to users/customers. If the code change (fix) is small, it is entirely possible to expend a massive amount of time recreating the scenario to exercise the fix.
 
In short, when you state "Validation effort should be no greater than 40%* of the Application Development effort" - What happens if it is? 
 
Pete 

Provost, Shelley

Feb 14 in response to Walen, Pete
 Understood. These are great discussion points, Pete.
 
There are exceptions to the metric.  We sometimes have projects that require little to no Development effort but do require a large QA effort. 
While there is no formal process yet to address projects that exceed the 40% target, we are working to create one.

Walen, Pete

Feb 15 in response to Provost, Shelley
 Let me approach this from a different angle.
 
What is the question that this measurement function is attempting to (at least partially) illuminate? What is it that we are trying to discover?

Diluzio, Gerry

Feb 16 in response to Provost, Shelley
 Pete, I think what we are trying to do here is drive "Smart testing" and push quality upstream as an organization.  This will foster our move from "bug hunting" to "customer validation". 
 
To meet our 40% goal we will need to dig deep to understand how to properly define testing scope. The easy answer is always to retest everything, but what we want our QA engineers to do is make smart decisions around risk and, in partnership with the Software Engineering group, understand the true impact of our scoping decisions. One way to make that 40% number more achievable is to insist on a level of unit and integration testing during the development phase as part of our DAR. If we know engineering has done a certain level of testing, we can be more confident in refining our test scope to closely match the software changes made in the release. Another is to hold strong on scope creep when working with Product Management. There are many other examples of ways we can build quality upstream without increasing QA effort.
Having said all that, there are projects where the right decision is for QA to be more than 40% of the total effort between SG2 and SG3. When all stakeholders are on board that that is the right decision for our customers, we will make that decision. But it should be a decision informed by factual risk analysis.
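
To make that concrete, here is a rough sketch in Python of how such an exception path might be flagged, assuming hypothetical per-project hour totals and a simple sign-off flag (none of these project names or numbers come from PDM or Oracle):

TARGET = 0.40  # validation share of combined Development + Validation effort

projects = [
    # (name, development hours, validation hours, documented risk sign-off?)
    ("Project A", 500.0, 240.0, False),
    ("Project B", 120.0, 300.0, True),   # QA-heavy by stakeholder agreement
    ("Project C", 200.0, 220.0, False),
]

for name, dev, val, signed_off in projects:
    rate = val / (dev + val)
    if rate <= TARGET or signed_off:
        status = "within target or signed off"
    else:
        status = "needs stakeholder review / risk analysis"
    print(f"{name}: {rate:.0%} validation -> {status}")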

Walen, Pete

Feb 28 in response to Diluzio, Gerry
Thanks for responding, Gerry.
 
If I understand your statement correctly, the intent of the 40% value is to drive decisions around testing toward greater/improved risk analysis, with a leaning toward greater collaboration with development/engineering, yes? If the expectation is that some level of unit testing has been done, perhaps manually by the development engineer and/or in concert with some rudimentary CI process validation, then it would seem reasonable that the result should be a more focused testing effort. Does that, therefore, result in a smaller effort?
 
I am curious, then, as to where the 40% figure originated. I've seen and heard several individuals cite various "best practices" that arrive at different target percentages for the same basic measure described in Shelley's original post.
 
Presumably the unstated goal is to reduce the cost of testing, yes? While it is a recent book, I wonder how many people in QA have read, or heard of, "How to Reduce the Cost of Software Testing" (http://www.amazon.com/How-Reduce-Cost-Software-Testing/dp/1439861552)?