
Metrics for System test.


Kanikkannan

Feb 4, 1999
to
Does anybody know the 'metrics' that I can use to evaluate System Test?


mallick

Feb 18, 1999
to Kanikkannan
The metrics to evaluate "System Test" are as follows. I understand that this
reply is late.

Make a list of the system's functionality and, against each item, list the
"test cases" that exercise it. Any blank implies more work: write a test to
cover that functionality. Any place with too many tests implies that some
tests can be eliminated, which saves testing time.

This list is initially made from knowledge of the tests and the functions.
A more objective way to determine it is with test coverage tools, which mark
the areas of the program that are covered while the tests execute. Any area
of the program that the coverage tool does not mark needs a test to be
written.
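A minimal sketch of that functionality-to-test-case list, assuming a plain
dictionary of feature names to test names (all names and thresholds here are
made up for illustration):

# Hypothetical traceability check: each functionality item mapped to the
# test cases that exercise it; blanks and over-tested areas are flagged.
requirements_to_tests = {
    "login":           ["test_login_ok", "test_login_bad_password"],
    "transfer_funds":  ["test_transfer", "test_transfer_limit",
                        "test_transfer_rollback", "test_transfer_audit"],
    "print_statement": [],   # a blank: no test covers this yet
}

for feature, tests in requirements_to_tests.items():
    if not tests:
        print(f"GAP: '{feature}' has no test -- write one")
    elif len(tests) > 3:      # arbitrary threshold for "too many"
        print(f"REVIEW: '{feature}' has {len(tests)} tests -- some may be redundant")
    else:
        print(f"OK: '{feature}' covered by {len(tests)} test(s)")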


Kanikkannan wrote:

> Does anybody know the 'metrics' that I can use to evaluate System Test?

--
Subir Mallick
ATM System Test
Voice: (978)960-3425
Fax: (978)960-3573

PRODUCE QUALITY WITH SPEED

boris beizer

Feb 22, 1999
to
Ken Foskey wrote in message <36D1BBAB...@zip.com.au>...
>Kanikkannan wrote:
>
>> Does anybody know the 'metrics' that I can use to evaluate System Test?
>
>The obvious metric is seeding where intentional bugs are seeded into the
>code. The test is then evaluated by its ability to locate these bugs.
>If you find 50% of the seeded bugs then it is assumed that you have
>missed 50% of the 'real' bugs.


That may be obvious, but it doesn't really work. Bebugging, as it is known,
is too bias-prone. This idea has been proposed on average twice a year since
Harlan Mills first suggested it more than 25 years ago. Sometimes it pays to
read the literature in order to avoid making the same old mistake over and
over again. Several papers have been published over the years on why this
doesn't do what you hope it will. It is very difficult to seed a program with
bugs whose behavior matches the bugs found in real system testing.

1. Most seeded bugs will cause crashes, unlike the real bugs that remain after
proper unit testing, component testing, and integration testing have been done
at every stage of the build.

2. If you do manage to inject bugs that pass through the (should be mandatory)
testing prior to system testing, they are very unlikely to be statistically
like the real bugs. This is self-evident. If you know (i.e., have a good
idea of) what kinds of bugs remain and how they are distributed, then why are
they there?

3. You can't use prior bug history as a guide because if your programmers are
effective, then such bugs will have been removed. See my pesticide paradox
(in all of my books).

4. Bugs are known to cluster, but we don't know where the clusters are. If
you knew where the clusters were, why would the bugs still be there?

5. 35% or more of system-level bugs today are requirement bugs. Seeding
won't help and you couldn't possibly seed missing requirements.

6. The next biggest category today is integration bugs, which occur at the
interfaces between components. Both components A and B are correct, but the
combination isn't. The bug is distributed. How do you seed a distributed
bug?

The closest we have to the idea of bebugging is the whole family of mutation
methods -- look up mutation analysis and mutation testing in Marciniak's
Encyclopedia of Software Engineering. These methods are effective at
evaluating the effectiveness of test techniques, but typically only for
low-level unit testing.
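To make the contrast concrete, here is a hand-rolled, toy sketch of mutation
analysis at the unit level; the function, the mutants, and the tests are all
invented for illustration, and real work would use a proper mutation tool:

# Toy mutation analysis: apply small syntactic changes ("mutants") to a
# function and see whether the test suite notices. A mutant that passes
# all tests "survives" and points at a weakness in the tests.
def original(a, b):
    return a + b

mutants = {
    "plus_to_minus": lambda a, b: a - b,
    "plus_to_times": lambda a, b: a * b,
    "drop_second":   lambda a, b: a,
}

tests = [((0, 0), 0), ((2, 2), 4)]   # (inputs, expected output)

killed = 0
for name, mutant in mutants.items():
    if any(mutant(*args) != expected for args, expected in tests):
        killed += 1
    else:
        print(f"mutant '{name}' survived -- the tests cannot detect it")

print(f"mutation score: {killed}/{len(mutants)}")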

Ken Foskey

Feb 23, 1999
to
Kanikkannan wrote:

> Does anybody know the 'metrics' that I can use to evaluate System Test?

The obvious metric is seeding where intentional bugs are seeded into the
code. The test is then evaluated by its ability to locate these bugs.
If you find 50% of the seeded bugs then it is assumed that you have
missed 50% of the 'real' bugs.
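As a rough sketch of the arithmetic behind that estimate (the counts below
are made up; this is the Mills-style bebugging that Boris Beizer criticizes
elsewhere in this thread):

# Hypothetical seeding arithmetic: assume the test effort finds the same
# fraction of real bugs as it finds of the deliberately seeded ones.
seeded_total = 20   # bugs deliberately planted before system test
seeded_found = 10   # planted bugs the system test actually caught
real_found   = 37   # genuine bugs caught by the same test effort

detection_rate       = seeded_found / seeded_total        # 0.50
estimated_real_total = real_found / detection_rate        # 74
estimated_remaining  = estimated_real_total - real_found  # 37

print(f"detection rate:         {detection_rate:.0%}")
print(f"estimated real bugs:    {estimated_real_total:.0f}")
print(f"estimated still latent: {estimated_remaining:.0f}")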

Ken

Paul E. Black

Feb 23, 1999
to
Kanikkannan <lkan...@baan.com> writes:

> Does anybody know the 'metrics' that I can use to evaluate System
> test?

We are developing a specification-based test metric. Given tests and
specifications, it applies a specification mutation analysis to
produce a coverage metric. We have a tool running, but it is a lab
prototype. If you are interested, please get in touch, and I'd be
happy to get you a copy.
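The NIST prototype itself isn't described here, but the general idea of a
specification-mutation coverage metric can be sketched roughly like this
(the specification, the mutants, and the test inputs are all invented; this
is not their tool):

# Rough sketch: mutate a boolean specification (here a post-condition)
# and count how many mutants the existing test inputs can tell apart
# from the original spec. Mutants no test distinguishes indicate gaps.
def spec(x, result):
    # original specification: result is the absolute value of x
    return result >= 0 and (result == x or result == -x)

spec_mutants = [
    lambda x, result: result > 0 and (result == x or result == -x),
    lambda x, result: result >= 0 and result == x,
    lambda x, result: result >= 0 or (result == x or result == -x),
]

def implementation(x):
    return abs(x)

tests = [0, 5, -7]   # the test inputs whose adequacy we are measuring

distinguished = sum(
    any(spec(x, implementation(x)) != mutant(x, implementation(x)) for x in tests)
    for mutant in spec_mutants
)

print(f"specification-mutation coverage: {distinguished}/{len(spec_mutants)}")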

-paul-
--
Paul E. Black (p.b...@acm.org) 100 Bureau Drive, Stop 8970
paul....@nist.gov Gaithersburg, Maryland 20899-8970
voice: +1 301 975-4794 fax: +1 301 926-3696
Web: http://hissa.ncsl.nist.gov/~black/black.html KC7PKT

Ken Foskey

Feb 23, 1999
to
Boris,

Must read your book soon... It is high on my gunna list, everyone
recommends it (in this group and outside).

I must admit I knew that the bite would come if I suggested seeding. I
think that the requirements comment is extremely valid; 35% is a large
number. If only I could write perfect specs, then everything else would be
easy :->.

I think that saying the earlier test phases should remove the seeded bugs
misses the point. The bugs are deliberately put in after the 'validation by
XXX' phase to verify the thoroughness of the YYY testing.

I remember someone saying that they introduced three key bugs into their
beta test. Beta testers who did not respond were automatically dropped, and
surprisingly only one of all the beta copies reported all three bugs back
(or was it none?). This shows that beta testing alone is not the answer.

Ken
Quality is built from the ground up, do everything to the best of your
ability.


Jim Kandler

Feb 24, 1999
to Ken Foskey
You should look at "software reliability engineering". By keeping a record of
the time spent testing and the defect arrival rate, you can characterize your
system and make some educated guesses about how mature it is. Search on the
phrase in quotes above, and also look up John Musa and Yui. There are many
references. I have used this with some success.
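A hedged sketch of what that looks like in practice: fit a simple
reliability-growth model (here the Goel-Okumoto form mu(t) = a * (1 - exp(-b*t)))
to cumulative defect counts versus test time and extrapolate. The data points
are invented and scipy is assumed to be available; real software reliability
engineering takes more care over model selection:

# Illustrative only: fit a Goel-Okumoto reliability-growth curve to
# made-up defect-arrival data and estimate how many defects remain.
import numpy as np
from scipy.optimize import curve_fit

hours   = np.array([10, 20, 40, 60, 80, 100, 120])   # cumulative test hours
defects = np.array([12, 21, 33, 40, 44, 47, 48])     # cumulative defects found

def goel_okumoto(t, a, b):
    # a: expected total defects, b: per-hour detection rate
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(goel_okumoto, hours, defects, p0=(60.0, 0.02))

found_so_far = defects[-1]
print(f"estimated total defects: {a:.1f}")
print(f"estimated remaining:     {a - found_so_far:.1f}")
print(f"current find rate:       {a * b * np.exp(-b * hours[-1]):.3f} per hour")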

Jim Kandler
