Thanks for responding, Gerry.
So, if each product (group/team) has its own definition of what constitutes a "test case," and these entities are being measured by project, presumably the intent of the existing metrics is to compare the size of projects (number of test cases) within the product group.
Is the new ratio (test cases executed per day) intended to compare the efficacy of test execution between projects within the same product group?
It strikes me that what each of these measurements actually produces may not be what was intended.
By making each failure point within a test case (a logical collection of steps intended to answer a specific question about a software application under test) a test case in its own right, the number of test cases will increase, making the projects look "bigger." Additionally, the focus of the new metric may discourage software testers from examining anomalous behavior, even when the literal "expected results" are met. If testers know that "it matters" how many test cases they execute each day, will human behavior not skew the outcome of these activities?
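To make the inflation effect concrete, here is a minimal sketch of the arithmetic. All of the numbers (case counts, failure points per case, days) are invented purely for illustration; the point is only that redefining what counts as a "test case" moves the ratio without any change in actual testing effort:

```python
def daily_rate(test_case_count: int, days: int) -> float:
    """Test cases executed per day: the proposed ratio metric."""
    return test_case_count / days

# Hypothetical project: 50 test cases, each with an average of
# 4 potential failure points, executed over 10 working days.
original_cases = 50
failure_points_per_case = 4
days = 10

# After the redefinition, each failure point is its own test case.
split_cases = original_cases * failure_points_per_case

print(f"Original rate:   {daily_rate(original_cases, days):.1f} cases/day")  # 5.0
print(f"Post-split rate: {daily_rate(split_cases, days):.1f} cases/day")     # 20.0
```

The same work, measured two ways, yields a fourfold difference in the metric, which is exactly why comparisons across groups with different definitions are suspect.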
Of course, there are ways to offset these effects; however, the message delivered may well differ from the message received.