Test Lead Responsibilities


Global QA Team

Nov 2, 2012, 4:19:34 AM
to global-...@googlegroups.com

Keil, David


Nov 15 2011
 I have posted a Test Lead Expectations and Responsibilities document to the files section of this community.  Please use this thread to provide any comments or questions you have about that document.

Global QA Team

Nov 2, 2012, 4:20:04 AM
to global-...@googlegroups.com, global-...@googlegroups.com
Benea, Iulian

Nov 16 2011 in response to Keil, David
Hi,
 
I really feel that it is a good idea to have such a document in place to clarify expectations and responsibilities.
 
I have only had a short read of the document so far, but I would like a better understanding of what the following items represent:
 
a) The Testing Lead is ultimately responsible for the scorecard results achieved by a project (which includes budget, schedule and quality implications).
 
b) Ensure test harnesses are available and in place (simulators / batch programs / UI / config scripts, etc.) -> Ensuring that sims such as Asset are available is sometimes problematic; requests made for sims take too long on some projects (e.g. Permata). In these cases, would escalation and/or mitigation of the risk be enough?
 
c) Works with QA management to ensure the project gets scheduled with Test Engineers having appropriate task assignments (based on skill level, with minimum gaps between tasks) -> Test Engineers with the right skill level in Base24eps are not always available. Is escalation and/or mitigation an acceptable approach?
 
d) How are the exemplified metrics defined: Test Status Report (incl. metrics: tests per day per tester, defect discovery rate, transaction rerun total)?
 
e) I have seen some references to the defect management process. To my knowledge it should be followed as per PDM and the case lifecycle; is that correct?
 
Thanks again for creating the document and bringing it up for open discussion.

Global QA Team

Nov 2, 2012, 4:20:33 AM
to global-...@googlegroups.com, global-...@googlegroups.com
Keil, David

Nov 23 2011 in response to Benea, Iulian
Thanks for breaking the ice on the discussion forum. I will try to respond to the list of areas that lack detail.
 
a) The Testing Lead is ultimately responsible for the scorecard results achieved by a project (which includes budget, schedule and quality implications).
 
 All AD projects are measured against budget, schedule and quality factors using the scorecard metrics (refer to my presentation at the Q3 QA All Hands meeting for all the criteria around scorecard metrics). The leads on the project should be aware of the scorecard metrics and use them in decisions about the project. The Testing Lead has the most influence of anyone in the QA organization on the scorecard results of the project. The expectation is that the Testing Lead takes responsibility for the scorecard results and will escalate to the project team and management when issues that will impact scorecard performance need to be addressed.
 
 
b) Ensure test harnesses are available and in place (simulators / batch programs / UI / config scripts, etc.) -> Ensuring that sims such as Asset are available is sometimes problematic; requests made for sims take too long on some projects (e.g. Permata). In these cases, would escalation and/or mitigation of the risk be enough?
 
 On any project, establishing the testing strategy and the requirements for testing tools is a decision that must be made early in the project. This particular line item focuses on a message simulator like Asset, but it can really apply to any testing tool that our test plans will become dependent on. Test Leads are responsible for making sure that the set of test tools (referred to as the "test harness") will be available for the timelines of the project. Occasionally, message simulators or testing tools will need to be built or purchased as part of the effort. We need to address that early in the project; if that need emerges late in the project, it can have a huge impact on project budget and schedule.
 
c) Works with QA management to ensure the project gets scheduled with Test Engineers having appropriate task assignments (based on skill level, with minimum gaps between tasks) -> Test Engineers with the right skill level in Base24eps are not always available. Is escalation and/or mitigation an acceptable approach?
 
At some point, the project will get staffed based upon the tasks defined in the estimates and project plans. The Test Lead needs to work with the managers who are assigning resources so there is appropriate coverage of the tasks. Some individuals may have training or startup costs that need to be factored into the plan, and some tasks will benefit from the resource having certain skill sets. The Testing Lead needs to be actively involved with the project manager and QA management in getting the tasks assigned to individuals, and needs to help make decisions on the various trade-offs that are involved.
 
d) How are the exemplified metrics defined: Test Status Report (incl. metrics: tests per day per tester, defect discovery rate, transaction rerun total)?
 
The QA artifacts (TVD, Test Status Report, Master Test Report) contain metrics information. AD management will continually refine the metrics gathering that we do in order to research or address operational efficiencies. Testing Leads will be asked to use the most recent templates and guidelines for the metrics to be included in these QA artifacts.
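The three metrics named in the Test Status Report line item can be sketched roughly as below. This is a minimal illustration only: the field names and the exact formulas are assumptions for discussion, not the official AD definitions from the templates.

```python
# Hypothetical sketch of the Test Status Report metrics: tests per day per
# tester, defect discovery rate, transaction rerun total. Field names and
# formulas are illustrative assumptions, not official definitions.
from dataclasses import dataclass

@dataclass
class DailyTestLog:
    testers: int             # testers active that day
    tests_executed: int      # test cases executed that day
    defects_found: int       # new product-issue cases opened that day
    transactions_rerun: int  # transactions re-driven after fixes

def summarize(logs):
    total_tests = sum(d.tests_executed for d in logs)
    total_tester_days = sum(d.testers for d in logs)
    return {
        # average tests executed per tester per day
        "tests_per_day_per_tester": total_tests / total_tester_days,
        # defects found per test executed over the period
        "defect_discovery_rate": sum(d.defects_found for d in logs) / total_tests,
        # cumulative rerun count across the reporting period
        "transaction_rerun_total": sum(d.transactions_rerun for d in logs),
    }

logs = [DailyTestLog(4, 60, 3, 10), DailyTestLog(4, 52, 5, 18)]
print(summarize(logs))
```

For example, with the two sample days above, 112 tests across 8 tester-days gives 14 tests per day per tester.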
 
 
e) I have seen some references to the defect management process. To my knowledge it should be followed as per PDM and the case lifecycle; is that correct?
 
The defect management process is the "case lifecycle" you mention: basically, the workflow and state changes involved as a Salesforce case reporting a product issue is created, worked on by engineering, re-tested and ultimately closed. A big part of the Testing Lead role during the System Testing phase of the project is decision making about outstanding cases, including: which cases are the highest priority to fix, the relationship between open Salesforce cases and blocked test cases, and recognizing when cases cluster around a certain feature or product module in a way that reflects poor quality.
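The case lifecycle described above is essentially a small state machine: a case moves through a fixed set of states, and only certain transitions are valid. The sketch below illustrates the idea; the state names and allowed transitions are assumptions for illustration, not the actual Salesforce workflow configuration.

```python
# Hypothetical sketch of the "case lifecycle": the state changes a case goes
# through from creation to closure. State names and transitions are
# illustrative assumptions, not the real Salesforce workflow.
ALLOWED = {
    "New":            {"Assigned"},
    "Assigned":       {"In Engineering"},
    "In Engineering": {"Fix Delivered"},
    "Fix Delivered":  {"Retest"},
    "Retest":         {"Closed", "In Engineering"},  # a failed retest reopens
    "Closed":         set(),
}

class Case:
    def __init__(self, case_id):
        self.case_id = case_id
        self.state = "New"

    def transition(self, new_state):
        # Reject any state change the workflow does not permit
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

case = Case("00123")
for s in ("Assigned", "In Engineering", "Fix Delivered", "Retest", "Closed"):
    case.transition(s)
print(case.state)  # prints "Closed"
```

Modeling it this way makes the Testing Lead's decision points visible: every case sitting in "Retest" is potentially blocking test cases, and a case bouncing repeatedly between "Retest" and "In Engineering" is a quality signal in itself.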
 