Thank you for pulling this together, Lucian.
The definition of test stages is an excellent start, and an exercise that was needed on the last 4 projects I've worked with. Can we please focus on this area and expand it to the point where the policy clearly identifies the goal and scope of each testing stage?
The policy might also consider the options for combining or skipping test stages on smaller projects.
I ask this because, even though the discussion happens, the consensus among different people never seems to reach a clear end point. We get "close enough" for one project and move on, but each project ends up choosing the scope and goals differently.
At each level there is a question of when to stop and how much of the overall customer solution is involved. Between Component, System and Solution in particular, there is potential for large overlaps, and then the temptation to see that overlap and try to skip one or more of the levels.
I've seen Component testing done in a customer-tailored environment with all interfaces and acquirers present, and also Solution testing done as a repeat of most of the component tests but using customer-site equipment instead of simulators. If we skip Component and combine System/Solution it may be more efficient, but the scale and control needed may become unmanageable.
Unit testing and user acceptance testing are much clearer, since the responsible parties are very close to the coding or the production deployment respectively, and have that to guide them.
---- One other policy suggestion:
Can we define the metrics to be collected in this process and how they feed back into the cycle of producing new estimates?
We have a long tradition of project overruns on testing, mostly due to optimistic assumptions or contingencies not included in the estimates. Both would be visible at the end of a project when the estimate-versus-actual comparisons are done.
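To make the feedback loop concrete, here is a minimal sketch of the kind of metric I mean. The stage names, numbers, and the simple contingency formula are my own illustrative assumptions, not anything from the draft policy; the point is only that per-stage estimate-versus-actual data from finished projects can produce a multiplier for the next estimate.

# Illustrative sketch only: stage names, figures, and the contingency
# formula are assumptions for discussion, not part of the draft policy.
from dataclasses import dataclass

@dataclass
class StageEffort:
    stage: str
    estimated_days: float
    actual_days: float

    @property
    def overrun_ratio(self) -> float:
        # Greater than 1.0 means the stage overran its estimate.
        return self.actual_days / self.estimated_days

def suggested_contingency(history: list[StageEffort]) -> float:
    """Average overrun across past stages, usable as a contingency
    multiplier on the next project's testing estimate."""
    return sum(s.overrun_ratio for s in history) / len(history)

if __name__ == "__main__":
    last_project = [
        StageEffort("Unit", 10, 11),
        StageEffort("Component", 15, 22),
        StageEffort("System", 20, 31),
        StageEffort("Solution", 12, 20),
        StageEffort("UAT", 8, 9),
    ]
    for s in last_project:
        print(f"{s.stage:<10} estimate {s.estimated_days:>4.0f}d "
              f"actual {s.actual_days:>4.0f}d overrun x{s.overrun_ratio:.2f}")
    print(f"Suggested contingency multiplier for next estimate: "
          f"x{suggested_contingency(last_project):.2f}")

Even if the eventual policy just defines the table of metrics rather than any tooling, having the per-stage overrun ratios recorded consistently would let us stop rediscovering the same optimism on every project.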