Test Automation? Pashaw!

George

Apr 4, 2009, 2:14:53 PM
to Watir General
It seems that I've been encountering more and more people in my workplace
(and, alas, even within my own QA team!) who are not sold on test
automation. From what I've learned so far, automation will never cover
100% of what needs to be tested, but that doesn't negate the need for it.

Another frustration: I've been tasked with writing automation scripts as
part of my year-end goals. However, I haven't been assigned hours in my
work week to do them! All of my script development has happened after
hours and on weekends (notice I'm posting this on a Saturday!).

Has anyone else run into naysayers? How can I convince the decision-
makers that this is a worthwhile effort?

Paul Rogers

Apr 4, 2009, 4:22:15 PM
to watir-...@googlegroups.com
Get a copy of Lisa Crispin's and Janet Gregory's book. There's lots of good stuff in there about this kind of situation.

Paul

Chris McMahon

Apr 5, 2009, 12:14:51 AM
to Watir General


On Apr 4, 2:22 pm, Paul Rogers <paul.rog...@shaw.ca> wrote:
> Get a copy of Lisa Crispin's and Janet Gregory's book. There's lots of good
> stuff in there about this kind of situation.

And may I point out in particular (ahem) page 260 sidebar "Giving
Testers Better Work". Automation exercises function; people exercise
aesthetic judgments. Conflating or confusing the two is a dire
mistake.

Chuck van der Linden

Apr 5, 2009, 8:11:26 PM
to Watir General
I've run into both. I've seen orgs where the mandate 'automate
everything' came down and everyone had to become an SDET (software
development engineer in test) or find other work somewhere else.
I've also seen places that, having been burned by automation snake oil
(see James Bach's writing for good examples of this), had a strong
mindset of 'automation doesn't work'.

In my mind it's just a tool: a powerful tool if used wisely, but an
expensive and frustrating one if not used properly. In terms of
tools, if manual testing is a drill, then automation is a hammer.
While it might be possible to somehow use one to do the job of the
other, it's not generally the best idea.

It seems to me that you are in a trap. You've been given a task but
not empowered to complete it. That's frankly not fair to you, as you
are being set up in a situation where you have to work basically
unpaid overtime or fail in your goals.

One potential way out is to get yourself a quick win: demonstrate some
success with automation (and learn how long it takes you to do it), then
demand that they either give you the time or remove the task, since you
are in a no-win situation otherwise.

Test automation works best, IMHO, in one of the following situations:

1) The same test repeated frequently, e.g. daily unit tests, build
validation tests, acceptance tests (if you are using them as a way to
track progress and show what's working and what's not, instead of just
at the end).
2) The same test repeated with different data, e.g. combinatorial
testing (like testing the Word 'font settings' UI), or testing an input
field with a large number of 'valid' vs. 'invalid' values to see that
they are all properly accepted or rejected (see the sketch just after
this list).
3) Tests that are not humanly feasible or possible (load testing,
performance testing).
4) Utilities to save time (does it take 2 hours to set up your test
environment and load it with data? automate it) or to improve the
testing process (is the environment setup complicated, with lots of
settings that need to be set the same way to allow test results to be
replicated?).
5) Regression suites which are boring as hell to execute manually over
and over.
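
To make (2) a bit more concrete, here's a rough sketch of what a
data-driven Watir check might look like. Everything specific in it (the
URL, the field names, the error message) is invented, so treat it as a
shape rather than a recipe:

    require 'watir'   # classic Watir (1.6-era) API; Watir::IE.new also works

    # Each entry pairs an input value with whether the app should accept it.
    CASES = [
      ['jane@example.com', :valid],
      ['not-an-email',     :invalid],
      ['',                 :invalid]
    ]

    browser = Watir::Browser.new
    CASES.each do |value, expectation|
      browser.goto 'http://myapp.example.com/signup'        # invented URL
      browser.text_field(:name, 'email').set value          # invented field name
      browser.button(:name, 'submit').click
      rejected = browser.text.include?('Please enter a valid email')  # invented message
      passed   = (expectation == :valid) ? !rejected : rejected
      puts "#{passed ? 'PASS' : 'FAIL'}: #{value.inspect} expected to be #{expectation}"
    end
    browser.close

Once the skeleton is there, adding another case is one more line of
data, which is exactly where automation pulls ahead of a human clicking
through the same form again and again.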

One catch is that many of these rarely find bugs after the test is
first written, and since a tester's goal is generally to find bugs,
that can be a problem.

If I'm looking for a quick win, I want something that can benefit the
entire team, so something like a useful utility script (if you have a
need for one) or a BVT-type test to save people from wasting time on
bad builds might be your best bet. The tests of type 3 are often
complicated to set up and may require a specialized environment to run
them, so that wouldn't be my first choice. OTOH, managers LOVE graphs
and numbers, so something that reports on the product's performance and
can be run on each build might be a way to gain some good visibility.
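
For example, even something as crude as timing a few key pages on every
build and appending the numbers to a file gives you something graphable.
A rough sketch (the URLs and file name are invented):

    require 'watir'
    require 'benchmark'

    # Invented URLs; the point is just to record page-load times per build.
    PAGES = ['http://myapp.example.com/login',
             'http://myapp.example.com/search']

    browser = Watir::Browser.new
    File.open('perf_history.csv', 'a') do |log|
      PAGES.each do |url|
        seconds = Benchmark.realtime { browser.goto(url) }
        log.puts "#{Time.now},#{url},#{'%.2f' % seconds}"
      end
    end
    browser.close

Feed a few weeks of that file into a spreadsheet chart and you have the
kind of graph managers ask to see again.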

What is it you are assigned to do in terms of scripting? Can you find
something in there that you believe might represent a potential 'quick
win' and/or proof of concept?

George

Apr 6, 2009, 12:54:14 PM
to Watir General
Right now, I'm creating simple regression tests that we can run after
every patch/version update. And, you're right - there isn't much of a
chance of these tests failing. We're automating this because it's
boring to do manually.

As for the quick win, I have already completed quite a few scripts
for a couple of our web apps. My manager has actually been a great
advocate. The challenge for me as a beginner is that I'm constantly
seeing ways I can refactor the code to make it better, or add
functionality (such as writing the results to an Excel file), so I may
be working a little too long on these. Perhaps with the introduction
of the Watircraft framework, the process of writing scripts can be a
little more streamlined.
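
Something like this is what I have in mind for the Excel piece, by the
way - just a rough sketch, and the file name and sample results are made
up. Writing a CSV avoids driving Excel itself, since Excel opens .csv
files directly:

    require 'csv'   # Ruby standard library

    # Invented sample results; in practice these would come from the test run.
    results = [
      ['login regression',  'PASS', '2009-04-06 12:30'],
      ['search regression', 'FAIL', '2009-04-06 12:32']
    ]

    CSV.open('regression_results.csv', 'w') do |csv|
      csv << ['Test', 'Result', 'Run at']   # header row
      results.each { |row| csv << row }
    end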

Truth be told, I actually don't mind the unpaid overtime. Learning
Ruby/Watir is a lot of fun and it doesn't even seem like work to me.
It's just unfortunate that official work time can't be allotted to this
effort.

Lisa Crispin

Apr 6, 2009, 1:49:04 PM
to watir-...@googlegroups.com
Hi George,
I appreciate the plugs for the book from Paul and Chris (and it's the contributions from them and other folks like them that make the book helpful!).

Does your team do any regression testing? If so, and if you don't have these tests automated, who is doing the manual regression tests?

What has worked for me in the past is to get support from the team coach/manager that the whole team should be responsible for quality and testing. When it's time to do manual regression testing, the entire team participates. Since you will have more and more regression tests as you deliver more and more new functionality, it will soon become obvious to everyone why automating these tests would be really helpful.

In my experience, it takes a commitment from everyone on the development team, not only the testers, to deliver high-quality software. Programmers don't want to deliver a crummy product either. If you can get everyone talking about this, identifying pain points and how to mitigate them, you may find support for automation efforts.

What language is your app under test written in? As much as I like Watir, you might get better buy-in if you get the whole team to agree on the most appropriate automation tools for your situation. While the Java programmers on my team went along with using Watir, and all bought the PickAxe book so they could help with the scripts, they've never been as sold on it as they are on other tools we use, such as FitNesse, where they can write the automation fixtures in Java.

Be patient and take baby steps. Try to get the whole team engaged in an automation effort. I highly recommend _Fearless Change_ by Rising and Manns; it has really helpful patterns for introducing changes such as this.
-- Lisa
--
Lisa Crispin
Co-author with Janet Gregory, _Agile Testing: A Practical Guide for Testers and Agile Teams_ (Addison-Wesley 2009)
http://lisacrispin.com

Jeff Fry

Apr 6, 2009, 2:04:37 PM
to watir-...@googlegroups.com
This conversation tends to be quite unproductive when it becomes "is
or isn't test automation useful." It's a lot like "Are books useful?"
Books won't solve any problems on their own. There are plenty of
problems that books will not help with a bit. There are /some/
problems where /the right/ books /well applied/ might add tremendous
value.

Similarly with testing, I like to look at what problems need to be
solved. Then I like to think about each of them and consider solutions.
Is one of your problems basic functionality breaking when code is
changed? Unit tests /might/ be a great solution to your problem. Do you
suspect intermittent production failures are the result of concurrency
issues? Likewise, computer-assisted tests might be your best bet for
reproducing the problem in house.

I think discussing the relative costs and benefits of automated
browser-based regression testing is a good idea, and getting real
experience reports helps a lot. Even within this specific area, there
may be some problems that are helped by automated browser-based tests
and others where the cost is too high.

By the way, I wrote a bit about some of the most compelling cases
where I've used Watir over the past five years here:
http://testingjeff.wordpress.com/2008/04/15/why-do-you-use-watir/

Getting time added into the schedule for test automation is a
different question, but one that /might/ become easier if you're able
to focus the conversation around solutions to particular problems.

Cheers,
Jeff
--
Jeff Fry

http://testingjeff.wordpress.com
http://associationforsoftwaretesting.org

Chuck van der Linden

Apr 6, 2009, 4:21:39 PM
to Watir General
Write the results to HTML on a network share everyone can access, and
send mail when the test finishes to say the results are available (or
better yet, embed the report in the mail). Or link to it from SharePoint
or your team wiki or whatever collaboration tools you are using.

(See the HTML reporting class in the examples for a starting point.)
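
Something along these lines is enough to get started - a minimal sketch,
where the test names, results, and share path are all placeholders for
your own:

    # Turn a hash of test results into a color-coded HTML table
    # and drop it on a network share the whole team can read.
    results = { 'login regression' => 'PASS', 'search regression' => 'FAIL' }

    rows = results.map do |name, outcome|
      color = (outcome == 'PASS') ? 'green' : 'red'
      "<tr><td>#{name}</td><td style='color: #{color}'>#{outcome}</td></tr>"
    end.join("\n")

    html = "<html><body><h1>Regression results: #{Time.now}</h1>" +
           "<table border='1'><tr><th>Test</th><th>Result</th></tr>#{rows}</table>" +
           "</body></html>"

    # '//server/qa-share' is a placeholder for whatever network path your team uses.
    File.open('//server/qa-share/regression_report.html', 'w') { |f| f.write(html) }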

Run them really frequently (like if there is an automated nightly
'full build') so you catch a problem as soon after it occurs as
possible.

Then pray to your favored god(s) that a dev messes up and the tests
expose the problem, because the instant that happens, management
starts to see the value in what you've done. Seeing the tests pass
is a 'feel good' and gives managers a nice 'warm fuzzy' feeling, but
demonstrating that the tests can catch a regression that might have
otherwise gone out the door is a whole level better than that
(because that kind of mistake can be potentially costly).

The best way to silence naysayers is with results. If the results
happen to be a nicely formatted HTML page, with the company logo on it
and nice color-coded PASS and FAIL markers, so much the better, since
managers eat that stuff up.

The more visible your work is (provided you are doing good work), and
the higher up the food chain in your org it is seen, the more likely
you are to get support for what you are doing.

George

Apr 6, 2009, 8:59:45 PM
to Watir General
Good tips, Chuck. We currently have SharePoint (although we're
getting rid of it in the future...not sure why, though), so I'll need
to figure out how to link to the HTML report from there. I'll look
into this...thanks!

Alister Scott

Apr 7, 2009, 1:31:19 AM
to Watir General
You could always define your tests in a wiki and update the results
directly there! It's a good way to get everyone involved in what you
are running and what the results are.
Some more info on my blog: http://watirmelon.wordpress.com/category/wiki/
(shameless plug!)

Cheers,
Alister Scott
Brisbane, Australia
http://watirmelon.wordpress.com/