kmentor
Oct 21, 2011, 4:04:49 PM
to Learning Registry Developers List
Seems that it (QA) is naturally integral throughout the product life
cycle and that it should be augmented by automated testing. Perfect
world aside, there are constraints of budgets, people, schedules, and
all of that inconvenient stuff.
We've no GUI and the like, which reduces the human testing effort
tremendously. The service nature of the product lends itself well to
automation. There are some natural weak links often unintentionally
built into automated testing, though, in spite of best efforts to
avoid them.
AFAIK at the moment, when we're talking about CI, we're speaking of
something that we might set up, for example, to take each day's work
and bake a fresh build in the morning. That build, if successful, would
automatically be put through the appropriate automated testing, all of
which yields a report from which the day's tasks flow. If the build is
usable, folks use it internally for testing and the ongoing dev work.
Or something along those lines.
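That daily cycle could be sketched in miniature like this. A Python
sketch only: the step names, the make targets, and the report format
are all my assumptions, not a description of our actual setup.

```python
import subprocess


def run_steps(steps, cwd="."):
    """Run each (name, command) step in order, stopping at the
    first failure; return [(name, returncode), ...] for the
    steps that were attempted."""
    results = []
    for name, cmd in steps:
        code = subprocess.run(cmd, cwd=cwd).returncode
        results.append((name, code))
        if code != 0:
            break
    return results


def summarize(stamp, results):
    """Format step results into the morning report lines."""
    return [f"{stamp} {name}: {'PASS' if code == 0 else 'FAIL'}"
            for name, code in results]


# A fresh morning run might look like this (commands are placeholders):
nightly = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
]
# report = summarize("2011-10-21", run_steps(nightly))
```

If the report ends in all PASS lines, the build gets handed to humans
for the smoke test and the day's dev work; a FAIL line stops the line.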
Right now the product is relatively small. If it is successful, then
going forward it will necessarily grow in size, features, complexity,
and history. That likely makes it harder to manage, update, and
automate testing. More options, more ways of combining things, etc.,
require more creativity in testing and in interpreting results than
automation provides. The automation result will tell you whether the
spec was met literally. It may tell you that it's all right to go
ahead and use it on your own machine. It won't tell you whether, once
installed, the product is of any use.
In the daily grind, as Steve mentioned, after the automated tests
pass, someone would put it through a scripted smoke test which lets
the team know that yes, if you install this it is likely to be of use
and it won't explode.
My concern isn't that automation doesn't catch it all. My concern is
that the code, the coding, and the code testing are performed by the
same folks, which may reduce the effectiveness of all three.
In the world of wordsmiths and publications, it's not that writers
can't edit or that editors can't write. It's just that having one
edit their own writing really doesn't seem to work out. Software is
technically more complex in its tasks, but the reasoning behind task
separation is the same.
If the dev team's Smith is responsible for the new Widget feature,
it's likely he'll also create its tests. Aware of the problems with
developers writing their own tests, he earnestly works to write fair,
honest tests with eyes wide open. Self-awareness of human tendencies,
Smith believes, allows them to be kept at bay.
All well and good intentioned, but Smith is likely to write tests
that ensure a pass in spite of it all. Smith is "part of the system",
as are "Smith's code" and "Smith's test". Smith: Code, Coder, Tester,
Approver. There is no independent voice stating that the system has
earned their faith.
This doesn't mean Smith shouldn't write a test, or that it shouldn't
be included in an automated run. It's just that its reliability as a
quality metric might not be the be-all, end-all.
At the risk of sounding cliché, one who is fluent in "logical
software think" thinks differently than most folks. Given the team's
penchant for quality and design, I'd anticipate the expression of that
logical approach in both the product and the test code. When it comes
to testing, that logic may have little in common with the tack taken
by the remaining 99 percent of humanity, and as such be less valuable.
Someone pounding the keys to test the daylights out of it, twist it,
and poke at things based on experience and hunches can make a big
difference in the number of bugs logged. With that Someone and dev
working together from the outset to create design specs, test plans,
etc., misunderstanding is reduced. When it comes to what is being
built, what it is expected to do, and how it might be used, starting
with assumptions that are similar rather than independent smooths the
track that lies ahead.
Below is an overly simple example that may be instructive. Or not.
Here, Smith from Dev is tasked to write a function to count cats.
Jones from Test suggests there is a bug in the Cat Counter code:
********************************************************************************
EXAMPLE 1
********************************************************************************
-- Cat Counter
-- Smith. 21 Oct 2011.
--
-- PURPOSE: Given the number of cats in basket A, and the
-- number of cats in basket B, return the total
-- number of cats.
--
-- ACCEPTS: 'basketA' as the number of cats in basket A
-- 'basketB' as the number of cats in basket B
--
-- RETURN: Sum of the cats.
-------------------------------------------------
CountCats(basketA, basketB)
{
    return(basketA + basketB)
}
-------------------------------------------------
Jones: I get an ugly error when I try to count cats.
Smith: It passed the tests. Can I check your test?
Jones: Oh yes, please do.
Smith: There's your problem: you have 6 cats in basket A and 3.5 cats
in basket B. You can't have half of a cat.
Jones: Why not?
Smith: Because it doesn't make sense. When did you last see 1/2 of a
cat?
Jones: I can't say for sure, but there's definitely 1/2 of a cat in
Basket B.
So.....
Smith is correct that Earth is not inhabited by fractional cats.
Jones is correct that there shouldn't be an ugly error if one enters
3.5 rather than 3. "The spec", Jones points out, "says that it counts
cats. Nothing there about whole cats only."
Thus....
Automated Test Entity scores it 100/100.
Jones scores it at 67/100.
Ship it? Maybe. Maybe not. The point being that hands-on use of the
software found a bug that the automated tests never tested for,
because it apparently made no sense to do so.
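For what it's worth, the fix Jones's bug suggests is a small one:
validate the inputs and fail with a clear message, then fold Jones's
case into the automated suite so the lesson sticks. A Python sketch;
the function name and the error wording are mine, not the spec's.

```python
def count_cats(basket_a, basket_b):
    """Return the total number of cats in baskets A and B.

    Rejects fractional or negative counts with a clear message
    instead of an ugly error further down the line.
    """
    for name, count in (("basketA", basket_a), ("basketB", basket_b)):
        # bool is a subclass of int, so screen it out explicitly
        if not isinstance(count, int) or isinstance(count, bool) or count < 0:
            raise ValueError(f"{name} must be a whole, non-negative "
                             f"number of cats, got {count!r}")
    return basket_a + basket_b


# Smith's spec-literal case still passes:
assert count_cats(6, 3) == 9

# Jones's hands-on case now fails loudly and legibly:
try:
    count_cats(6, 3.5)
except ValueError as err:
    print(err)  # basketB must be a whole, non-negative number of cats, got 3.5
```

None of which settles whether 3.5 cats should be rejected or rounded;
that's a spec conversation for Smith, Jones, and the team. It does
mean the next Jones gets a sentence instead of a stack trace.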