I'm comfortable with the title QA Engineer. It's a title I've worn for
many years, and it still seems to be the most common title for software
testers. However, I've seen several convincing arguments for calling what
I do Testing rather than QA. One argument is in Ch. 15 of Testing Computer
Software by Kaner, et al. Another is the fact that in some companies
(albeit a minority), QA refers to a different job than testing. A third is
a recent legal concern raised by Kaner suggesting that using the term
"Quality Engineer" could expose me to malpractice liability (at least
in California, where my company is headquartered).
Thus, there are good reasons to call myself a Test Engineer. However, this
appears to be an uncommon practice in the software industry. Microsoft is
the only company I'm aware of that uses the term -- actually I think they
use "developer in test". I'm also familiar with a few companies that call
their testers Verification Engineers.
Thoughts? Advice?
Bret Pettichord
Certified Software Quality Engineer - ASQC
Unison Software
br...@unison.com
Does it matter? A rose by any other name......
[snippety snip]
__________________________________________________
Chris Lawton Texas Instruments Software
Sunbury on Thames, England. Standard disclaimer.
Tel: +44 (0) 1784 212831 mailto:chr...@ti.com
It matters to those who think testing is not synonymous with quality
assurance. On a manufacturing line, inspectors are certainly not
quality engineers.
--
Danny Faught -- HP-Convex -- Software Test Alchemist
"Everything is deeply intertwingled." (Ted Nelson, _Computer Lib_)
I've always liked "Apprentice Visionary" or if you're feeling particularly
cocky, just "Visionary" ;-)
: Bret Pettichord
St. Louis Mo.
On Line Quality Reference
Copyright Ronald Hartge 1992
Can we derive that the:
QUALITY ASSURANCE Agent performs the:
"planned and systematic actions necessary to provide adequate
confidence that a product will conform to established requirements."
QUALITY AUDITor performs:
"A systematic review to determine if sufficient management systems
are present to guarantee a quality product or service."
QUALITY CIRCLEr employs the "employee involvement technique", where:
"A circle is a team who meet regularly to identify and solve problems.
The circle is a means of allowing and encouraging people from all
levels and functional areas to participate in rendering decisions
that will improve quality and/or reduce costs."
QUALITY CONTROLer performs:
"A management function whereby control of quality of products or
services are exercised for the purpose of preventing defects."
QUALITY COSTer identifies the:
COST OF POOR QUALITY.
QUALITY ENGINEER analyzes:
"a system to maximize the quality of services and products
produced by the system."
QUALITY LEVELer maintains the ACCEPTABLE:
"Maximum percentage of defective or non-conforming units",
"which may be considered satisfactory as a process average."
QUALITY LOSS FUNCTIONal identifies:
"The costs that occur when a quality characteristic deviates from"
its "target value."
QUALITY Technician checks that the product is:
1. Fit "for use".
2. Free "of defects".
3. Satisfies "customer expectations".
QUALITY TRILOGY Analyst provides:
"A method of managing for quality which emphasizes a three-pronged
approach.
1. Quality Planning
2. Quality Control
3. Quality Improvement"
But then, you ask:
"What does this lowly Tester do?" The Tester Tests:
to identify, record, and document
anomalies, defects, issues, problems, burps,
and yes, "BBBUGGGS!!!"
A noble profession.
Personally I would stick with the QA Engineer title, and then work to
change the role from testing to QA.
Gary
If you want to remain as a Test Engineer (which appears to be what I'd call
your job at the moment), take the TE option; if you want to expand your
role to include all the other aspects of Quality Assurance as well, leaving
yourself as Test Engineer could be restrictive in the future. (For a start,
you'd have to get new business cards printed sometime B-)= )
Another NZ$0.02
Andrew Merton
Hold on there. A software tester should not be muzzled at a review!
The tester should participate fully, and should take on special roles
such as looking for testability issues.
When someone starts helping to define how the reviews are conducted
and auditing the results, then they're in the realm of Quality
Assurance.
Verification: "The act of reviewing, inspecting, testing, checking, auditing or
otherwise establishing and documenting whether or not products, processes,
services or documents conform to specified requirements"
Answers question: "Am I building the product correctly?"
Validation: "The evaluation of software throughout the software development
process to ensure compliance with user requirements."
Answers question: "Am I building the correct product?"
For example, verification is doing reviews; validation (an SQA function) is
making sure the reviews are done and the identified actions are planned,
tracked, and completed.
>In article <55tpdb$13...@node2.frontiernet.net>,
>Enrique Duran <my...@pop3.frontiernet.com> wrote:
>>You are right about the two being two separate jobs; I consider myself
>>a test engineer, but because I mention in my resume that I have been
>>responsible for creating test plans and test cases, I get many offers
>>for a QA Engineer position. As a test engineer, I never get
>>involved (except, mostly as an observer) in software reviews,
>>walkthroughs, etc. I am surprised as to how many people are not
>>aware of the difference (including some manager that want to interview
>>me for QA work)
>Hold on there. A software tester should not be muzzled at a review!
>The tester should participate fully, and should take on special roles
>such as looking for testability issues.
>When someone starts helping to define how the reviews are conducted
>and auditing the results, then they're in the realm of Quality
>Assurance.
>--
>Danny Faught -- HP-Convex -- Software Test Alchemist
>"Everything is deeply intertwingled." (Ted Nelson, _Computer Lib_)
There's a debate ongoing in comp.software-eng regarding the difference
between software engineers and programmers. I think a similar debate
on Test (or QA) Engineers and Testers would be equally pertinent.
Referring to the Britannica 'Micropaedia' article on "Engineering", I
find that "it is ... common to all branches of engineering that
academic training must begin with a thorough grounding in ...
mathematics." Moreover, " All engineers must have a positive interest
in the translation of the theoretical into the practical." Let's see
how this works.
I've had extensions done to my house. This has involved a substantial
excavation into a hillside. At the appropriate time (i.e., *before*
beginning design, let alone construction), the architect called in an
engineer with regard to the retaining wall. The engineer used a
penetrometer to determine the hardness of the rock, and other
instruments to determine aqueous flow rates, etc.-- i.e., he worked
out the numbers. He then crunched these with other numbers -- the
known properties of concrete, steel, and gravity -- to design the
correct height, thickness, foundation, reinforcement, and batter for
the retaining wall, and to specify the size (quantities) of materials
required.
The wall was built according to his engineered specification. It
hasn't yet disclosed any production bugs (I mean it hasn't fallen over
yet).
The same engineer did similar work on designing a roof truss and a
number of other details. Engineers work by knowing the properties of
the materials they work with, their strengths and weaknesses, not just
in qualitative terms ("granite = strong, greywacke = medium, chalk =
weak") but in numbers (Mohrs scale). We might say that engineering is
the practical exploitation of known scientific data (numbers!).
A lot of numbers are known regarding programming practices, but (from
some years' experience as a software quality control consultant and
trainer) far from *widely* known. Scales such as cyclomatic
complexity, cohesion, and coupling tell us lots about the software
they're embedded in, provided we know how to measure them and what the
measurements mean. To my mind, programmers who aren't using these
well-understood, well-verified properties of software to help design
and validate their code are programmers, not engineers (of any kind).
Nor does it help that, in contradistinction to *all* other branches of
engineering, "software engineers" typically see documentation as
incidental and secondary rather than primary and fundamental (see
Parnas, Madey, and Iglewski, "Precise Documentation of
Well-Structured Programs", IEEE Transactions ..., 20:12, 12/94).
By these criteria, I haven't yet met a software engineer in New
Zealand (though I've worked with "software engineering" graduates, as
well as people who hold the "software engineer" title at work).
The relevance of these comments to this newsgroup is, of course, that
similar observations apply to testing. Cyclomatic complexity, for
example, is just as much a tester's tool as it is a programmer's. A
test engineer should understand the strengths and weaknesses of
coverage-based testing, both for code and for functions, and how to
compute the risks (see Horgan, London, and Lyu, "Achieving Software
Quality with Testing Coverage Measures," COMPUTER, 9/94).
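(As a refresher -- and this is just an illustrative sketch of mine, not
anything from Horgan et al. -- McCabe's cyclomatic complexity for a
single-component flowgraph is simply V(G) = E - N + 2, and it counts the
linearly independent paths, hence a sensible minimum number of test cases
for basis-path coverage. A wee sketch in Python, with the flowgraph
invented by hand for the example; real tools derive it from the code:)

    # Cyclomatic complexity V(G) = E - N + 2*P for a control flowgraph.
    def cyclomatic_complexity(edges, nodes, components=1):
        """Return V(G) = E - N + 2*P."""
        return len(edges) - len(nodes) + 2 * components

    # A routine with one if/else and one loop: 7 nodes, 8 edges.
    nodes = ["entry", "if", "then", "else", "loop", "body", "exit"]
    edges = [("entry", "if"), ("if", "then"), ("if", "else"),
             ("then", "loop"), ("else", "loop"),
             ("loop", "body"), ("body", "loop"), ("loop", "exit")]
    print(cyclomatic_complexity(edges, nodes))   # -> 3 independent paths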
With regard to the translation of theory into practice, a "test
engineer", to qualify for the title, needs to have been *trained* in
formal test requirements elicitation and representation techniques,
such as cause-effect analysis, equivalence partitioning,
Karnaugh-Veitch analysis, finite-state representations, control and
data flowgraphing, and a host of others -- not just have read about
them, but understood the theoretical basis of each, their strengths,
weaknesses, and best applications. Most of these, of course, are
covered in Beizer's classic, "Software Testing Techniques," but let's
not forget that most of them are also 15 to 20 years old (or much
older, considering some of them arose in circuitry testing pre-WWII).
So a test engineer will also be constantly reading the technical
papers relevant to his/her topic, seeking the BCPs (Best Current
Practices), and looking for experimental validation (or refutation) of
new, recent, or old theoretical positions. (See, for example, Grady,
"Successfully Applying Software Metrics," COMPUTER, 9/94.)
I wouldn't want anyone to think I'm preaching this from the arrogant,
protectionist position of someone who has all these qualities and
doesn't want any wannabees muscling in. Although I have the
effrontery to teach this topic, "Testing" (and nobody else in New
Zealand or Australia seems to be teaching it as a formal discipline,
not even at University level), I have no formal training myself, and
don't always understand the maths. Frankly, I don't qualify as a Test
Engineer, and have never claimed to be one. (I'm a Professional
Tester, and will admit to being a Test Consultant, Analyst and
Designer, as well as Trainer.) But I have stood on the shoulders of
giants, so to speak (Beizer, Hetzel, Mosley, Myers, Royer ...), and
gained a vision of What Ought To Be.
Having said this much, I'd also like to weigh in on the "what's QA?"
question. This is being actively pursued in another thread, so I'll
just say here that, in my courses and my practice, I stick to the
ANSI/IEEE definitions, which agree with what is generally held
*outside* the software arena. The definitions boil down to:
* TESTING means "quality control"
* QUALITY CONTROL measures the quality of a product
* QUALITY ASSURANCE measures the quality of processes used to create
a quality product.
I'd remind anyone who reads this far, but especially Enrique Duran,
that, in software development, there are many *products* besides
object code (Requirement Specs, Software Specs, System Designs, Module
Designs, Database Designs, Source Code ...), and that every one of
these should be regarded as having bugs in every part of it until
*testing* shows otherwise. Unless you're using highly formal
specification languages (probably an essential part of being a
"software engineer", really!), testing any of these is going to mean
some form of review (I prefer the term, "Inspection", capitalised,
meaning Fagan Inspection).
With regard to Inspections, then, Danny Faught hits the bullseye with
his remark that testers *should* be involved in inspecting, i.e.
testing, all the above documents, if only for testability issues.
(But do they know how to recognise or define "testability" in
specifications, designs, and code? That's another thread --
everything is deeply intertwingled! -- and what I've seen there wasn't
promising.)
Danny's not quite dead centre, though, 'cos he missed the point that
testware products (Test Plans, Test Design Specs, Test Case Specs,
Test Procedure Specs, etc, to go through the ANSI standard list) are
*also* software products, are *also* created by bug-prone processes,
and are *also* to be held as buggy until shown otherwise. So, to
round out this dissertation, a Test Engineer, by whatever title, most
certainly should be participating in Inspections *of his or her own
test products*.
I hope I haven't insulted anyone with these views. Unlike many
posters, though, I can safely say that my comments *do* reflect the
views of the organisation I work for, since I (and my wife) own it!
-- Don Mills -- Software Tester From Hell -- Macroscope Services Ltd
-- Wellington, New Zealand
-- <don...@voyager.co.nz>
I should echo Beizer's motto - I can't cover everything under the
sun. :-)
Yes, test products should be created with the same engineering
standards as the products we ship to customers. But you must provide
a mechanism to prevent an infinite regress of testing.
In this neck of the woods, I have advocated testing at least your test
tools, and I'd like to test the test cases for the potential for false
passes and false failures. And certainly designs, design reviews,
inspections and all the upstream activities apply to tests.
Downstream testing of the tests seems to me to be very similar to MIS
testing - they both involve testing an internal product, and they both
usually get overlooked.
>I hope I haven't insulted anyone with these views. Unlike many
>posters, though, I can safely say that my comments *do* reflect the
>views of the organisation I work for, since I (and my wife) own it!
On the contrary, your insights are appreciated.
But when they [test bugs] occur, the test group loses face. And when you're
trying to make a case that the developers should be doing their own
regression testing, knowing that the defect density in the tests is
equal to or greater than the defect density in the product, the
argument doesn't get very far. There should be at least a 50%
probability that a test failure is due to a product bug, preferably
80%+.
I would like to emphasize the importance of reviewing the possibility of
false passes. On the other hand, you can usually defer worrying about
false failures until they actually occur.
______________________________________________________________________
Bret Pettichord Unison Software Software Testing Hotlist
br...@unison.com Austin, Texas http://www.io.com/~wazmo/qa
Danny Faught wrote:
>But when they [test bugs] occur, the test group loses face. And when you're
>trying to make a case that the developers should be doing their own
>regression testing, knowing that the defect density in the tests is
>equal to or greater than the defect density in the product, the
>argument doesn't get very far. There should be at least a 50%
>probability that a test failure is due to a product bug, preferably
>80%+.
I reply:
It seems to me that you're setting up a situation where you are basically
claiming that you are going to have to write better programs than your
developers. I don't think you can sustain this. If you can, your company
should have your testers program and your programmers test!
Generally test code is less compact and less complex than the code it
tests. This may make it less error-prone, but there is more of it!
The biggest way to lose face is to have a test that should have reported a
bug, but failed silently and allowed it to be shipped.
A false failure may lead your programmers to not want to run your tests. A
false pass will lead the other testers to not want to run your tests.
Do you actually have a testsuite where more than 50% of the failures are
product bugs? Or is this just your goal?
Bret
You seem to be perpetuating the attitude that causes people to think
that test developers are second-class citizens, and are less capable
programmers than product developers. Independent of the defect
density in the product, I have taken measures to help my group improve
the quality of their tests. This has improved the morale of the test
group, besides the obvious benefits of having better tests. Some of our
engineering practices are noticeably more advanced than those used in
the product development group. This is partially because of our own
passion for quality, and partially because we're not on the critical
path as often and can put more effort toward infrastructure.
I'm not alone in this opinion. Rodney Wilson stated in _Unix Test
Tools and Benchmarks_ that the quality of the tests must be greater
than the quality of the product. And Boris Beizer has refused to
allow product development rejects to join his test group.
>Generally test code is less compact and less complex than the code it
>tests. This may make it less error-prone, but there is more of it!
Yes, having compact test cases makes them much less error-prone,
because the hardest problems occur in the larger programs. If your
group has trouble cranking out the volume of test code required while
maintaining a decent quality level, you should be doing automatic code
generation.
>Do you actually have a testsuite where more than 50% of the failures are
>product bugs? Or is this just your goal?
At some stages, yes. The cycle is something like this - the functional
tests are developed in parallel with the product. When both have at
least a skeleton of functionality working, the test developer executes
the tests, typically finding that most of them fail. Several test
bugs are fixed, and several product bugs are reported. By the time
the tests and product are ready for checkin, a baseline is established
where the reason for all test failures is documented in a database.
At this point the tests are used for regression testing. Some test
bugs will be uncovered because they were masked by other bugs. And
product regressions or intermittent or configuration-related failures
will turn up new test failures. As the product interface is changed
without proper notification to the test group, test "bugs" will have
to be fixed so as to conform to the new interface.
If the product stabilizes, few test failures will show up from this
point on. If the product continues to change, test issues due to
previously masked bugs or interface changes will continue to show up.
So the density and type of test bugs will vary over the lifetime of
the tests.
What do you mean by automatic code generation? Capture-Replay? 4th generation
programming language?
|==================================================================|
| Carlos Vicens | B-Tree Verification Systems, Inc. |
| cvi...@BTREE.COM | 5929 Baker Rd, Suite 475 |
| (VOICE) 612-930-4135 | Minnetonka, MN 55345-5955 |
| (FAX) 612-936-2187 | |
|==================================================================|
I don't have much experience in that area, so maybe you should tell
me. :-) In cases where the code follows a common pattern, you should
be able to use some sort of automation to generate it, or maybe stuff
some of the functionality into a library. Then when you fix a bug in the
boilerplate, you fix it everywhere.
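To make that concrete -- a rough sketch only, with every name invented
here, and in Python just because it's compact -- the "library" approach
might look like this, so a fix to the checking logic lands in one place:

    # testlib.py -- hypothetical shared boilerplate for test scripts.
    # Fix a bug in check() once and every test that calls it is fixed.
    def check(case_name, actual, expected):
        """Compare one result against its expectation and report PASS/FAIL."""
        if actual == expected:
            print("PASS %s" % case_name)
            return True
        print("FAIL %s: expected %r, got %r" % (case_name, expected, actual))
        return False

    # An individual test then shrinks to data plus a call; the value of
    # `actual` would normally come from driving the product under test.
    actual = len("hello")
    check("strlen_hello", actual, 5)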
And I'm sorry for acting like I knew what I was talking about. :-)
The claims I've heard about automatically generating testcases from a
design spec don't seem to be very practical in most situations. But I
have heard from people who have used scripts to automatically generate
test code that follows a common pattern. Unfortunately, the generated
code is often checked in instead of continuing to generate the code
dynamically. This makes maintenance difficult, but avoiding the
problem means your generator has to handle every situation it runs
into without manual editing later.
There is a big difference between test cases and test code; I might have
misinterpreted your posting, but it seems you are using both nouns to
cover the same thing.
There are quite a few tools that generate test cases based on, say,
functional specs. This is quite easy to do, especially if you treat the
file as ASCII text and extract the various headings.
But generating test CODE? What do you mean by test code? This presumably
is utilised by some tool or within a harness, or is it like a test
object in OOD? I would be interested in your thoughts, either direct or
posted to this group.
Regards
Chris
--
__________________________________________________
Chris Lawton Texas Instruments Software
Advanced Technology Group, World Wide R&D,
Test Code - Code to be executed in a test tool/station/harness
to automate the execution of a test. This setup is frequently used for
regression testing.
Automated Test Code Generation - A tool to generate "Test Code" from
a formal specification or model.
Automated Test Data Generation - A tool or process to generate test cases
or test data to be used for manual testing or as input to an automated
test station. This also requires a formal specification or model as input.
"Utility" code generators - Sometimes is not feasible to create a test
library and pass parameters to represent several similar situations.
This could be because of limitations in the test tool (for example
using an In Circuit Emulator or debugger macro language to automate a
test). In this case a "utility" code generator can be written in AWK or
similar language to create the test code.
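A toy example of such a "utility" generator, purely illustrative (the
paragraph above suggests AWK; the sketch below uses Python, and the
harness command language it emits is made up):

    # Generate repetitive test code from a table of cases.  Each row
    # becomes one block of code for a hypothetical harness language.
    cases = [
        ("baud_9600",  "set baud 9600",  "status OK"),
        ("baud_19200", "set baud 19200", "status OK"),
        ("baud_0",     "set baud 0",     "status ERR_RANGE"),
    ]

    for name, command, expected in cases:
        print("test %s" % name)
        print("    send    \"%s\"" % command)
        print("    expect  \"%s\"" % expected)
        print("end")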
I use "test case" to refer to the smallest invokable test entity.
You're right that it's ambiguous to talk about generating such things
if you don't specify whether you're generating them from a product
functional spec or a test functional spec.
It appears that your definition is different. Please elaborate.
The term "test code" betrays my propensity toward automated testing.
I'm assuming that a test design or functional spec is not the final
product needed for executing a test case, but that you have to
actually implement a test program in some sort of scripting or
programming language. If there's a standard term for this, I'm all
ears, especially if there's a term which can apply to both manual and
automated tests.
Kevin Nennecke
Morgan Stanley & Co.
May I take a moment and perhaps add some spin to this discussion?
It is hitting on an area that I'm interested in. While it's more
philosophic than scientific, I'd like to hear your opinions.
The issue is "what is a test case?". Like Danny, I used to think of a
test case as the smallest invokable entity. And sometimes I still do.
However I sometimes find myself in a situation where I need something
smaller than a test case. I've been using the term "test condition"
for this. For example, in a lot of editors you have the SAVE AS option.
Clicking on SAVE AS should result in a specific action, usually a menu
with a fill-in element for a new filename.
But what if you have not yet loaded a file? In this case, SAVE AS is
often greyed out and clicking on it has no effect. You might say that
the results of the test case "Click on SAVE AS" depends upon a previous
condition "File opened". I would consider this as two test cases, one
with the two test conditions "File opened, SAVE AS clicked on" and
the other with "File not opened, SAVE AS clicked on".
I normally trace all the "test conditions" for the software and use
that to generate test cases where each test case is a string of unique
test conditions. This is a lot of work but it has been very helpful
in two ways.
 1 - I have always uncovered some "requirements errors" when I've done this.
2 - It seems to be a good way to cover the legal behavior space of the
software under test.
It also sheds light on strategies for destructive testing, though it's not
comprehensive in that area.
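Just to show the bookkeeping I mean (a minimal sketch, with the condition
names invented for the example, and Python used only as a convenient
shorthand):

    # Enumerate test cases as combinations of traced test conditions.
    from itertools import product

    conditions = {
        "file_opened":  [True, False],
        "doc_modified": [True, False],
    }
    action = "click on SAVE AS"

    names, values = zip(*conditions.items())
    for combo in product(*values):
        state = ", ".join("%s=%s" % (n, v) for n, v in zip(names, combo))
        print("test case: [%s] then %s" % (state, action))

That prints four test cases, each a string of unique test conditions plus
the action under test.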
So what do y'all think? I'm sure others have hit upon this and I'd like
to hear from you.
- Steve Driscoll
Yes, I usually cover more than one test condition per test case. Most
test harnesses have too much overhead to limit a test case to a single
test condition. I call each independent section of the test case a
"subtest".
>I normally trace all the "test conditions" for the software and use
>that to generate test cases where each test case is a string of unique
>test conditions. This is a lot of work but it has been very helpful
>in two ways.
>
> 1 - I have always uncovered some "requirements errors" when I've done this.
> 2 - It seems to be a good way to cover the legal behavior space of the
> software under test.
You'll see arguments on both sides of this. Purists, especially those
with standards testing experience, want you to isolate each test
condition. Pragmatists prefer to use more complex test cases because
you'll find some bugs that can only be found in more realistic
scenarios. The best approach lies somewhere in between, probably
starting with some basic isolated test conditions and moving on to more
complicated test cases later in the suite.
> The issue is "what is a test case?". Like Danny, I used to think of a
> test case as the smallest invokable entity. And sometimes I still do.
<snip>
> I would consider this as two test cases, one
> with the two test conditions "File opened, SAVE AS clicked on" and
> the other with "File not opened, SAVE AS clicked on".
I recently found myself leading a software test group of 10 people,
where I was the only one with any formal background in software testing.
All of my engineers had different ideas of what constituted a test case.
IEEE defines test case as: "A set of test inputs, execution conditions,
and expected results developed for a particular objective, such as to
exercise a particular program path or to verify compliance with a
specific requirement." [IEEE Std 610.12-1990]
Beizer, Marick, Kaner, and Perry weren't much help in this area -
everyone either seems to assume that the definition is obvious or else
skirts around the issue -- or else I just couldn't find what I was
looking for.
So, building upon the IEEE definition, I came up with a simple
definition for a test case that all of my engineers agree on.
A test case has the following components:
1. Configuration (usually hardware or hardware/software)
2. Initial State (usually of the software)
3. Input vector (whatever inputs are sent into the system)
4. Execution Procedure (this becomes a test script design when
automating)
5. Output vector (whatever outputs are expected from the system)
It is important to note that *any* time one or more of items 1-4 change,
a new test case has been generated. (If only item 5 changes, look again
-- or realize that you have a non-deterministic system, which is
inherently untestable!)
This definition makes for consistent counting (metrics) as well as a
common understanding and language.
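For what it's worth, here is one way the definition can be pinned down as
a record so that the counting is mechanical -- a sketch only, in Python,
with the example values invented:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TestCase:
        configuration: str       # 1. hardware / hardware-software setup
        initial_state: str       # 2. state of the software at the start
        inputs: tuple            # 3. input vector
        procedure: str           # 4. execution procedure / script design
        expected_outputs: tuple  # 5. output vector

    tc1 = TestCase("Win3.1, 8 MB RAM", "editor idle, no file open",
                   ("click on SAVE AS",), "manual", ("SAVE AS greyed out",))
    tc2 = TestCase("Win95, 8 MB RAM", "editor idle, no file open",
                   ("click on SAVE AS",), "manual", ("SAVE AS greyed out",))
    # Item 1 differs, so these count as two distinct test cases.
    print(tc1 == tc2)   # -> False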
Rob Schultz
--
Rob Schultz Tel: +1 847 632 7307
Motorola CIG Fax: +1 847 632 2431
1475 W Shure Dr Page: +1 800 759 8888
Arlington Heights, IL 60004 PIN 412 6118
g12...@email.mot.com OR sch...@cig.mot.com
********Test cases as objects*********
Interesting, has anybody yet used object modelling to define test cases?
Simplistically, a test case is an instantiation of the class Test_Case
with defined entry/exit criteria, etc. Any thoughts?
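Off the top of my head, the simplest object rendering might be something
like this (a sketch in Python rather than a worked methodology, all names
invented), with the entry/exit criteria as overridable methods:

    class Test_Case:
        def entry_criteria_met(self):    # e.g. environment/state checks
            return True
        def execute(self):               # test body, supplied by subclasses
            raise NotImplementedError
        def exit_criteria_met(self):     # e.g. cleanup done, logs captured
            return True
        def run(self):
            if not self.entry_criteria_met():
                return "BLOCKED"
            verdict = "PASS" if self.execute() else "FAIL"
            return verdict if self.exit_criteria_met() else "ERROR"

    class Save_As_Greyed_Out(Test_Case):
        def execute(self):
            # would drive the editor here; stubbed for the sketch
            return True

    print(Save_As_Greyed_Out().run())    # -> PASS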
Chris
__________________________________________________
Chris Lawton Texas Instruments Software
>> In article <5722ln$n...@feep.rsn.hp.com>, Danny R. Faught (fau...@convex.hp.com) wrote:
>> : I use "test case" to refer to the smallest invokable test entity.
>> [snip]
>> The issue is "what is a test case?". Like Danny, I used to think of a
>> test case as the smallest invokable entity. And sometimes I still do.
>> [snip]
>> I would consider this as two test cases, one
>> with the two test conditions "File opened, SAVE AS clicked on" and
>> the other with "File not opened, SAVE AS clicked on".
>>
>> I normally trace all the "test conditions" for the software and use
>> that to generate test cases where each test case is a string of unique
>> test conditions.
>> [snip]
>> - Steve Driscoll
Steve,
What you are describing seems to me to be the derivation of test cases from
use cases. A use case is a sequence of user actions and system
reactions that represents a unique business function. Use cases contain
conditions which indicate choices of actions that can be made by the user.
A use case with no conditions has only one path (i.e., from entry point to
exit point). A use case with conditions has a path for each combination of
conditions. For testing based on use cases, a test case can be derived for
each path. If you view a test case as being one or more Causes resulting
in one or more Effects, then each use-case test case is a series of
mini test cases (I call them test steps), each of which is a Cause and
an Effect.
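A rough sketch of that framing (the editor example borrowed from Steve's
post, everything else invented; Python just as notation):

    # One test case per use-case path; each path is a list of
    # (Cause, Effect) test steps.
    paths = {
        "save_as_with_file_open": [
            ("open an existing file",   "file contents displayed"),
            ("click on SAVE AS",        "filename dialog appears"),
            ("enter new name, confirm", "file written under new name"),
        ],
        "save_as_no_file_open": [
            ("start editor, no file",   "empty workspace shown"),
            ("click on SAVE AS",        "option greyed out, no effect"),
        ],
    }

    for case_name, steps in paths.items():
        print("test case %s:" % case_name)
        for i, (cause, effect) in enumerate(steps, 1):
            print("  step %d: do <%s>  expect <%s>" % (i, cause, effect))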
Regards
Stephanie
---------------------------------------------------------------------------
Steve Colebrooke & spc...@sky-consulting.co.nz Mobile: +64 25 840 854
Stephanie Young sky...@sky-consulting.co.nz Mobile: +64 25 840 853
Sky Consulting Ltd Sky...@sky-consulting.co.nz Tel: +64 4 477 2343
944 Ohariu Valley Rd, Ohariu Valley,Wellington,NZ Fax: +64 4 477 2342
---------------------------------------------------------------------------
Chris Cavallucci
Chris Lawton <chr...@ti.com> wrote in article <329EE0...@ti.com>...
> Robert M Schultz wrote:
> [snip]
> ********Test cases as objects*********
Comments anyone?
--
Robert M Schultz <sch...@cig.mot.com> wrote in article
<329C51EC...@cig.mot.com>...
> A test case has the following components:
>
> 1. Configuration (usually hardware or hardware/software)
> 2. Initial State (usually of the software)
> 3. Input vector (whatever inputs are sent into the system)
> 4. Execution Procedure (this becomes a test script design when
> automating)
> 5. Output vector (whatever outputs are expected from the system)
>
>
Richard S. Hendershot LANshark Systems, Inc.
Software Testing Engineer Columbus, OH 43230
rhen...@lanshark.com http://www.lanshark.com
There is no limit to the input stream - it might be keypresses (we don't
have any in our system), data streams coming down some sort of comm link
(T1, serial, ethernet, radio, line voltages, messages from other systems,
etc.), or any other type of input stimuli. Since our product is embedded
fairly deeply, I haven't considered keystrokes or other terminal/GUI
types of inputs.
I would be happy to listen to your disagreements, however. :-)
> Still, I think you have a very good point that
> test-cases must include configuration and state specifications... but I
> wouldn't *specify* that one might be hardware while the other generally
> software. Building on this, I'd think Configuration to be a quantity of
> resources (OS type, rev.; Software rev; Printer type/rev, etc.).
What you say is absolutely true. The primary difference here is the base
physical configuration of the system under test vs the state the software
(and/or hardware) is in at the start of the test. For example, in a PC
world, a test might be run on a system with Win3.1 and another test with
Win95. This would qualify as a second test based on item #1 (different
configuration). Again, the point is to be able to differentiate between
test cases, in order to make accurate and consistent counts of the number
of test cases identified, written, and executed.
> And
> shouldn't State be a specification of the previous test-cases having been
> executed to get to that point?
Well, that's one way of getting there. The point is to be able to re-create
the test exactly, in order to facilitate the real problem in testing -
problem isolation. Sometimes it is easier to execute all previous tests
to get to the initial state, but if the sequence is long or involved, it
can be much quicker to generate the initial state directly. Remember, if
a test does not uncover a bug, there is no more work to do. If there is
a problem, though, we need to be able to determine what went wrong - was
it a configuration problem, was the test case incorrect, is the code wrong,
is the spec wrong? Resolving problems quickly is key to shortening the
test cycle.
> Comments anyone?
Of course :-)
> Robert M Schultz <sch...@cig.mot.com> wrote in article
> <329C51EC...@cig.mot.com>...
>
> > A test case has the following components:
> >
> > 1. Configuration (usually hardware or hardware/software)
> > 2. Initial State (usually of the software)
> > 3. Input vector (whatever inputs are sent into the system)
> > 4. Execution Procedure (this becomes a test script design when
> > automating)
> > 5. Output vector (whatever outputs are expected from the system)
--
I was only referring to the "What is a test-case and what is not" thread
which has been running here for a while. I feel the debate centers around
what each of us would permit as the input range. Some would allow any number
of inputs so long as they achieved a predetermined function, while others
would allow only a stricter, and smaller, set of inputs to the app under test.
I, personally, favor the former due to its being an easily understood
perimeter: "if it seems to perform a function, it's a test-case. If it
performs several of these, it's a test-suite."
As I remember, you posed this:
> 1. Configuration (usually hardware or hardware/software)
> 2. Initial State (usually of the software)
> 3. Input vector (whatever inputs are sent into the system)
> 4. Execution Procedure (this becomes a test script design when
> automating)
> 5. Output vector (whatever outputs are expected from the system)
>
...as an agreed-upon coordination system which seemed to have minimised
this debate about what constituted a test-case. I'd guess that the decrease
in conflict might have most to do with: 1) having put things in an
organised structure and 2) separating State from Configuration. That's the
other half of the debate, no? "Should a test-case be responsible for
setting things up?"
>
> > Still, I think you have a very good point that
> > test-cases must include configuration and state specifications... but I
> > wouldn't *specify* that one might be hardware while the other generally
> > software. Building on this, I'd think Configuration to be a quantity of
> > resources (OS type, rev.; Software rev; Printer type/rev, etc.).
>
> What you say is absolutely true. The primary difference here is the base
> physical configuration of the system under test vs the state the software
> (and/or hardware) is in at the start of the test. For example, in a PC
> world, a test might be run on a system with Win3.1 and another test with
> Win95. This would qualify as a second test based on item #1 (different
> configuration). Again, the point is to be able to differentiate between
> test cases, in order to make accurate and consistent counts of the number
> of test cases identified, written, and executed.
Absolutely! Two test-cases, as you say.
>
> > And shouldn't State be a specification of the previous test-cases
> > having been executed to get to that point?
>
> Well, that's one way of getting there. The point is to be able to re-create
> the test exactly, in order to facilitate the real problem in testing -
> problem isolation. Sometimes it is easier to execute all previous tests
> to get to the initial state, but if the sequence is long or involved, it
> can be much quicker to generate the initial state directly. Remember, if
> a test does not uncover a bug, there is no more work to do. If there is
> a problem, though, we need to be able to determine what went wrong - was
> it a configuration problem, was the test case incorrect, is the code wrong,
> is the spec wrong? Resolving problems quickly is key to shortening the
> test cycle.
Good point. I would not wish to *require* a series of test-cases as a
preamble to a test-case, unless that were the only way to reconstruct the
exact state by which a defect was reproducible. At which point we could
see that those preamble test-cases *still* had a job to do even though they
might not, in and of themselves, demonstrate a test-subject's failure ;-)
>
> > Comments anyone?
>
> Of course :-)
Please pardon my delay - it was nuts here Friday.
-rsh