Bringing this old question back, since answers did not satisfy my
curiosity...
There was a thread on June 15th, 1999
(Message-ID: <37661F04...@cfmu.eurocontrol.be>)
My question is really more precise though.
If "regression testing" is to ensure that no new bugs are introduced
in newer versions... that seems to be the accepted and commonly used
definition.
Can we consider "non-regression testing" to mean ensuring that
reported bugs are corrected? I mean, we run the sequence that
triggered the bug in an earlier version against the new one, and
the test should fail (or succeed, depending on the point of view).
I would very much appreciate the views of those who know. I can't
find any definitive definition on the Web. I find plenty of
definitions for "regression testing" around, but no such luck for
"non-regression testing".
The only one that kind of bugged me:
http://www.systest.com/services/ar.html
Regression testing verifies that previously identified
problems have been corrected and that these "corrections"
have not caused problems elsewhere. (40%-60% of all bugs are
created when "fixes" are made to correct earlier problems.)
This one supposes that regression testing also includes corrected
problems, but it also mentions "problems elsewhere". Most of the
others only specify modified code (which to me does not mean
corrected code).
Thanks for any replies and pointers.
An e-mail copy of the follow-up would be much appreciated.
Huu Da Tran.
hu...@syclik.com
Not precisely... when you are testing a NEW version of your
software with OLD test cases (that were run on a previous
version), then you are doing regression testing. Basically, you
are trying to see whether your code has "regressed" by sliding
back into incorrect behaviour, on test cases that it behaved
correctly on before.
If you are testing new features in a new version with new
test cases that test those new features, then you are doing
"progressive" testing.
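To make the distinction concrete, here is a minimal Python sketch;
the function, its versions, and the test cases are all made up for
illustration, not taken from the thread:

```python
# Hypothetical v2 of a string-padding function. v1 handled only
# widths larger than the string; v2 adds a new behaviour for
# smaller widths. Regression tests re-run the old v1 cases on the
# new code; progressive tests cover the newly added behaviour.

def pad_left(s, width, fill=" "):
    """v2: returns s unchanged when width <= len(s)."""
    if width <= len(s):
        return s  # new behaviour added in v2
    return fill * (width - len(s)) + s

# Regression tests: cases that passed on v1 must still pass on v2,
# i.e. the code must not have "regressed" on them.
assert pad_left("ab", 5) == "   ab"
assert pad_left("ab", 4, "0") == "00ab"

# Progressive tests: new cases exercising the v2-only behaviour.
assert pad_left("abc", 2) == "abc"
```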
>Can we consider "non-regression testing" to mean ensuring that
>reported bugs are corrected? I mean, we run the sequence that
>triggered the bug in an earlier version against the new one, and
>the test should fail (or succeed, depending on the point of view).
I don't know what I would call this... this is the minimum
that a bug fix should accomplish, so if the software used to
perform incorrectly on that test case and now performs
correctly, then it's just a normal bug fix. Then we would have
to do regression testing of this new version (eventually) with
all the other test cases. If it used to perform incorrectly and
still performs incorrectly, then that version shouldn't even get
past the developer's machine. Hope this helps.
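As a sketch of that minimum bar, the failing sequence from a bug
report becomes a permanent test case; the bug number and function
below are hypothetical:

```python
def mean(values):
    """Fixed version: the old one raised ZeroDivisionError on []."""
    if not values:
        return 0.0  # the fix: define the mean of an empty list as 0.0
    return sum(values) / len(values)

# Test for (hypothetical) bug #123: the exact input that used to fail.
# On the old version this raised ZeroDivisionError; now it must pass,
# and it stays in the suite so later versions can't regress on it.
assert mean([]) == 0.0

# An ordinary case that passed before the fix must keep passing too.
assert mean([1, 2, 3]) == 2.0
```

Keeping the bug-reproducing case in the suite is what turns a
one-off fix verification into a regression test for every future
version.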
--Jamie. (nel mezzo del cammin di nostra vita)
andrews .uwo } Merge these two lines to obtain my e-mail address.
@csd .ca } (Unsolicited "bulk" e-mail costs everyone.)