Hi Freddy,
thanks for your reply. I agree that our business case is a bit of a weird one, so let me describe our codebase:
* A large proportion of our code is old but stable, and no changes will need to be made to it any time soon. This code is only marginally covered (0-5%).
* The more recent, "working" set of our code is reviewed and tested more thoroughly for code quality, so it generally has higher coverage (heading towards 30% and upwards, fingers crossed).
This means that, as you say, the Quality Gate solution is an OK indicator once we fulfill the target for Coverage on New Code. However, since it takes a while to spread the knowledge of what makes a good code review, a good test, good coverage, and so on, it would be naive to assume that we can start fulfilling this Quality Gate threshold within days or weeks. During this transitional period we would see a great many "false negatives": failed Quality Gates that we are fully aware of and expect. We could of course regulate this with red/orange gating, but that information is too coarsely grained to be easily interpreted: our only choices are to analyze the percentage of Quality Gate failures, or to analyze every error cause behind the Quality Gate, which would simply be overkill for us.
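To make the transitional idea concrete, here is a minimal sketch of the kind of finer-grained interpretation I have in mind: instead of a binary gate pass/fail, classify Coverage on New Code into three bands so that an expected, transitional shortfall reads as "orange" rather than as a failed gate. The thresholds and the function name are purely illustrative, not our real targets; in practice the input number would come from somewhere like SonarQube's measures web API (e.g. the `new_coverage` metric key).

```python
# Illustrative thresholds only -- not our actual targets.
RED_BELOW = 15.0  # below this, coverage on new code is genuinely off-track
TARGET = 30.0     # the Coverage on New Code target we are heading towards


def classify_new_coverage(new_coverage_pct: float) -> str:
    """Map Coverage on New Code (%) to a coarse traffic-light status.

    "green"  -> the Quality Gate target is met
    "orange" -> an expected, transitional shortfall (not a real alarm)
    "red"    -> genuinely problematic, worth investigating
    """
    if new_coverage_pct >= TARGET:
        return "green"
    if new_coverage_pct >= RED_BELOW:
        return "orange"
    return "red"
```

With something like this, the transitional period produces mostly "orange" results that we can track as a trend, while "red" still flags the cases we actually need to look at.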
So yes, I am proposing a change that would only be useful until it makes itself obsolete and we can switch to a Gate-based solution, but I hope you can understand the need for a transitional solution.
For the moment, we have switched to analyzing total Coverage, but as you say, this value has not moved by even 0.1% in 3 months due to the small ratio of new code to old.
Kind regards,
Rudolf