
Best practices question


David Landsman

Apr 26, 2007, 4:30:47 PM
When creating a complicated web-based application from scratch, how
many testers per developer would be considered a best practice? I
have heard 1.5 testers for every developer. What are your thoughts on
this?

Phlip

Apr 26, 2007, 9:14:35 PM
David Landsman wrote:

> When creating a complicated web-based application from scratch, how
> many testers per developer would be considered a best practice? I
> have heard 1.5 testers for every developer.

Manual testing?

> What are your thoughts on
> this?

How many unit tests do the developers write?

For example, at my day-job the ratio of test to code lines is 1:1. We have
no formal testers; each change gets reviewed by our lead programmer and then
the executive who requested the change. We write a very complex suite of
websites, and we put it live - directly from our desktops - more than once a
day. No test website or other overhead is required.

We use Ruby on Rails, so it comes with complete test rigs. To add any
feature, we just extend the existing test rig.
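For readers who haven't seen the kind of test rig Phlip means, here is a
minimal sketch of a Rails-style model unit test. The `Article` model and its
title validation are hypothetical examples, and plain `minitest` (Ruby's
bundled test framework) stands in for the full Rails test harness:

```ruby
require 'minitest/autorun'

# Stand-in for an ActiveRecord model with a presence validation.
# In a real Rails app this would be `validates_presence_of :title`.
class Article
  attr_reader :title

  def initialize(title)
    @title = title
  end

  # An article needs a non-blank title to be valid.
  def valid?
    !title.nil? && !title.strip.empty?
  end
end

class ArticleTest < Minitest::Test
  def test_requires_a_title
    refute Article.new('').valid?, 'blank title should be invalid'
    assert Article.new('Best practices').valid?
  end
end
```

Adding a feature then means adding a test method alongside the code that
implements it, which is how a shop keeps the test-to-code line ratio near 1:1.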

Anything less than this situation...

- a fully dynamic language
- Model View Controller
- automated schema migrations
- wall-to-wall unit tests
- completely streamlined build scripts

...is a Worst Practice - it is throwing good money after bad.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html


Michael Bolton

Apr 27, 2007, 3:25:34 AM

- If you want to find all the bugs, 100 testers per developer would be
much better than 1.5 testers per developer. Your customers would
consider this impressive (if they were willing to pay), but your CFO
would freak out.
- If you want to keep costs low, 0 testers would be much better than
1.5 testers per developer. It might even work for you --see Phlip's
posting--but do you have the same confidence in your developers that
he has in his?
- If you want to keep costs really low, 0 testers per 0 developers
would be better yet.

We haven't yet talked about the skills of the testers or the
developers involved. We haven't talked about the business domain and
the attendant levels of risk. We haven't talked about whether you
account for test managers and admins as testers.

We haven't talked about the ugliness that would ensue if you took this
"best practice" literally on a team of three developers--which half of
the fifth tester would you want to keep? (Hint: pick the end with
the head.)

One of my points is that this is an unanswerable question without more
context information--experience with the company and the developers in
the business and technical domains? budget? schedule? co-location
of developers and testers? mission of testing? See
http://www.developsense.com/articles/Comparitively%20Speaking.pdf

Another point is that, irrespective of the answers to the questions
above, "best practice" is a meaningless marketing term, except for
the meaning "something that our very large and expensive consulting
company is promoting because it worked for us (or we heard it worked
for someone) zero or more times". "Industry best practices" is even
worse. What industry? If you're developing Web-based billing systems
for a medical imaging company, are you in the "Web" industry, the
"software" industry, the "medical services" industry, or the
"financial industry"?

Phlip proposes a model that seems to work for him, his programmers,
his company, and (in particular) the executive who requested the
change--who, in one way of thinking, is the only person who matters.
As a professional tester and teacher of testers, I can see risks in
this approach, but the risk is apparently tolerable for them at this
time. They gain confidence--which is reasonable to them--by their
development practices. Their customers will eventually tell them if
it turns out that they need testers at all, but apparently, so far so
good.

What skilled testers do is to help to mitigate the risk of not knowing
something about the system that we would prefer to know. So: instead
of matching it to the number of developers, try asking "What do we
want to know that we might not find out otherwise? What tasks might
be involved in finding that stuff out? Who would we like to assign
to those tasks?"

And if you're still stuck, get a tester (just one skilled tester) to
help you to ask and answer those questions.

---Michael B.

H. S. Lahman

Apr 27, 2007, 10:42:37 AM
Responding to Phlip...

> For example, at my day-job the ratio of test to code lines is 1:1. We have
> no formal testers; each change gets reviewed by our lead programmer and then
> the executive who requested the change.

Just out of curiosity, who writes the acceptance tests if you have no
formal testers?


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
in...@pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH

H. S. Lahman

Apr 27, 2007, 10:50:25 AM
Responding to Landsman...

Basically I agree with Bolton that there is no magic number for all of
his reasons and more.

However, I would add that in the end what counts is the defect rate. If
the shop is using testing to improve reliability and the released defect
rate is unacceptable, you need more testers. If the shop is using
testing to monitor the development process and the testing defects found
in your fault categories are too small to be statistically significant,
then you need more testers.
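Lahman's "statistically significant" point can be made concrete with a
confidence interval on a defect rate. This is an illustrative sketch, not
anything from the thread: the defect and test counts are made up, and the
Wilson score interval is one standard way to bound a proportion.

```ruby
# 95% Wilson score interval for a proportion (defects per test run).
def wilson_interval(defects, tests, z = 1.96)
  p      = defects.to_f / tests
  denom  = 1 + z**2 / tests
  center = p + z**2 / (2 * tests)
  margin = z * Math.sqrt(p * (1 - p) / tests + z**2 / (4 * tests**2))
  [(center - margin) / denom, (center + margin) / denom]
end

# 3 defects found in 50 test runs: the interval on the true defect rate
# spans roughly 2%..16% -- far too wide to draw conclusions about the
# development process. Ten times the testing narrows it considerably.
low, high = wilson_interval(3, 50)
```

When the interval is that wide, the defect counts in a fault category tell
you almost nothing; either more testing or coarser categories are needed.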

Phlip

Apr 27, 2007, 7:07:41 PM
H. S. Lahman wrote:

> Responding to Phlip...
>
> > For example, at my day-job the ratio of test to code lines is 1:1. We have
> > no formal testers; each change gets reviewed by our lead programmer and then
> > the executive who requested the change.
>
> Just out of curiosity, who writes the acceptance tests if you have no
> formal testers?

We are doing XP; the acceptance tests there are Customer Tests, and we
aren't writing them.

If we were doing them (if we were life- or money-critical or
something), the team would write them, in step with writing the
Developer Tests and code. Teams working in harder spaces would have
more clearly defined tester roles.

--
Phlip

H. S. Lahman

Apr 28, 2007, 10:31:44 AM
Responding to Phlip...

>>>For example, at my day-job the ratio of test to code lines is 1:1. We have
>>>no formal testers; each change gets reviewed by our lead programmer and then
>>>the executive who requested the change.
>>
>>Just out of curiosity, who writes the acceptance tests if you have no
>>formal testers?
>
>
> We are doing XP - Customer Tests.

Wow. I thought XP had finally given up on that and now advocated an
independent QA team writing the acceptance tests as an agent for the
customer.
