
Life without a beta test


Cem Kaner

Jan 21, 1996
Todd Bradley (to...@rainbow.rmii.com) wrote:
: I have never worked on any software project without a beta
: test release, but I'm now in the position of trying to figure
: out how to assure the quality of a product whose development
: schedule does not allow for a beta test. Does anybody out
: there have any useful suggestions or pointers to info on how
: to best work around not having a beta release?

: Todd.
: --
: Todd Bradley -- A7 Audio Research Lab -- to...@rmii.com
: Supreme Ruler of the Galaxy and Administrator of boulder.general

I think there are several ways of living without some types of beta
tests. But I can't make useful suggestions without knowing some more
information.

What kind of product are you making? What kind of market? (For example,
is this a hardware/software consumer product?)

What would your objectives normally be in a beta test phase? What is it
that you think that you are missing?

--
-----------------------------------------------------------------
CEM KANER JD, PhD. Attorney / ASQC-Certified Quality Engineer
read Kaner, Falk & Nguyen, TESTING COMPUTER SOFTWARE (2d Ed. VNR)
1060 Highland Court #4, Santa Clara 95050 408-244-7000
-----------------------------------------------------------------


Todd Bradley

Jan 22, 1996
Cem Kaner (cemk...@netcom.com) wrote:

: I think there are several ways of living without some types of beta
: tests. But I can't make useful suggestions without knowing some more
: information.

: What kind of product are you making? What kind of market? (For example,
: is this a hardware/software consumer product?)

It's a new major release of an existing software product--a
cross-platform GUI development framework. The market is software
developers creating portable GUI programs.

: What would your objectives normally be in a beta test phase? What is it
: that you think that you are missing?

Normally, the objectives would be to get preliminary customer
feedback on the software and documentation for the new features
and to field test the software for backwards compatibility
with existing applications.

Cem Kaner

Jan 22, 1996
Todd Bradley (to...@rainbow.rmii.com) wrote:
: Cem Kaner (cemk...@netcom.com) wrote:

: : I think there are several ways of living without some types of beta
: : tests. But I can't make useful suggestions without knowing some more
: : information.

: : What kind of product are you making? What kind of market? (For example,
: : is this a hardware/software consumer product?)

: It's a new major release of an existing software product--a
: cross-platform GUI development framework. The market is software
: developers creating portable GUI programs.

: : What would your objectives normally be in a beta test phase? What is it
: : that you think that you are missing?

: Normally, the objectives would be to get preliminary customer
: feedback on the software and documentation for the new features
: and to field test the software for backwards compatibility
: with existing applications.

Here are a couple of suggestions, extending a theme laid out in this
thread by tada...@aol.com.

First, to get preliminary customer feedback, I would hire a few members
of your target market and have them work at your site on projects of
their, or your, choosing. Whenever they run into a problem or a point of
confusion, they leave the office they're working in and show you the
problem immediately. Stage their arrivals: don't have them all start at
the same time; start one or two earlier and the others later.

Second, field testing for backward compatibility appears impossible if you
have no time for a beta cycle. But you have your support records from the
previous release. You can (a) regression test around the complained-about
issues and (b) (perhaps?) get code from developers (your customers) who
had an unusual amount of trouble last time. If you got that code, could
you test your compatibility with it?

Joe Strazzere

Jan 22, 1996
In article <4duk4a$m...@natasha.rmii.com>, to...@rainbow.rmii.com says...

>
>Cem Kaner (cemk...@netcom.com) wrote:
>
>: I think there are several ways of living without some types of beta
>: tests. But I can't make useful suggestions without knowing some more
>: information.
>
>: What kind of product are you making? What kind of market? (For example,
>: is this a hardware/software consumer product?)
>
>It's a new major release of an existing software product--a cross-platform
>GUI development framework. The market is software developers creating
>portable GUI programs.

Given the type of product you are developing, the lack of a Beta could
really hurt!

If this framework is itself a cross-platform GUI development tool - do
you use it internally? Do you "eat your own dogfood"? You might be able
to turn this into an "internal Beta" if you carefully plan for an
appropriate level of internal distribution and careful feedback.

>: What would your objectives normally be in a beta test phase? What is it
>: that you think that you are missing?
>
>Normally, the objectives would be to get preliminary customer
>feedback on the software and documentation for the new features
>and to field test the software for backwards compatibility
>with existing applications.

Clearly, you need that feedback.
Could you find some "friendly customers" (prior Beta sites?) and send
them an "unofficial Beta"?
Could you solicit customers for their existing applications (or applets)
which you could use internally to test for backwards compatibility?

What is the justification for not planning a Beta?
What are the expected savings?
What are the expected risks?
Will you let us know how this turns out?

Best of luck,
--
Joe Strazzere
Director, Quality Assurance and Support
Softbridge, Inc.
[home of the Automated Test Facility -
client-server testing tools for Windows, '95, 'NT, and OS/2]


Charles Nichols

Jan 22, 1996
In <4drhfs$8...@natasha.rmii.com> to...@rainbow.rmii.com (Todd Bradley)
writes:
>
>I have never worked on any software project without a beta
>test release, but I'm now in the position of trying to figure
>out how to assure the quality of a product whose development
>schedule does not allow for a beta test. Does anybody out
>there have any useful suggestions or pointers to info on how
>to best work around not having a beta release?

Ouija board.


Philip Stuntz

Jan 24, 1996
>I have never worked on any software project without a beta
>test release, but I'm now in the position of trying to figure
>out how to assure the quality of a product whose development
>schedule does not allow for a beta test. Does anybody out
>there have any useful suggestions or pointers to info on how
>to best work around not having a beta release?

>Todd.


>--
>Todd Bradley -- A7 Audio Research Lab -- to...@rmii.com
>Supreme Ruler of the Galaxy and Administrator of boulder.general

I have never worked without a beta either, but the last release we
shipped might as well have gone without one. The reason is an assumption
we made: beta testers are not going to find anything (aside from
configuration-related items) that you cannot find internally, given
sufficient effort.

Most firms simply won't put that much effort into testing. When we
were doing the planning, we came to a crucial understanding: our
clients are not in business to test software, they are in business
to (xxx - sell insurance, in our case). Since our system manages the
entire financial side of the client, they are literally shut down on
failure.

I can't really help you on the schedule, though - the "iron triangle"
of features, resources, and schedule still held for us. Since we had
a fixed feature list and fixed resources, management decided to let
the schedule slide. As project manager, I had us build the product
nightly for the last 6 months and test daily, and we got into a steady
state. We used test coverage statements to tell us what was not tested
yet. With hard data on the rate of new bug discovery, our fix rate, and
our accuracy %, I could give accurate ship dates to management 4 months
out. The key for me was that they accepted those dates, and chose not
to ship before we were ready. That is not often the case.
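The burn-down arithmetic described above can be sketched in a few lines.
This is a minimal sketch, not the poster's actual method; the bug counts
and rates are hypothetical, since the post gives no real figures:

```python
# Hypothetical burn-down projection: open bugs, weekly arrival rate,
# claimed weekly fix rate, and the fraction of fixes that really stick.

def weeks_to_ship(open_bugs, arrival_rate, fix_rate, fix_accuracy):
    """Estimate weeks until the open-bug count reaches zero."""
    effective_fixes = fix_rate * fix_accuracy       # fixes that hold up
    net_burn_down = effective_fixes - arrival_rate  # bugs retired per week
    if net_burn_down <= 0:
        raise ValueError("bug count is not shrinking; no ship date")
    return open_bugs / net_burn_down

# 120 open bugs, 10 new per week, 40 claimed fixes per week, 90% accurate:
print(round(weeks_to_ship(120, 10, 40, 0.9), 1))  # 120 / (36 - 10) -> 4.6
```

The accuracy factor is the point: at 90% accuracy, 40 claimed fixes per
week retire only 36 bugs, and the 10 new arrivals come out of that.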


Gary A Feldman

Jan 24, 1996

In article <00001b88...@msn.com> Phil_...@msn.com (Philip Stuntz) writes:

...


> Getting the hard data from rate of new bug discovery and our fix rate
> and accuracy %, I could give accurate ship dates to management 4
> months out. The key for me was that they accepted those dates, and
> chose not to ship before we were ready. That is not often the case.

Could I trouble you to elaborate on this? (Actually, I suspect you could
get a good paper out of it to present at one of the testing/quality
conferences.)

The questions that come to my mind:

What do you mean by accuracy %?

How do you compensate for differences in testing effort that affect
the validity of the bug discovery rates? (If someone goes on vacation,
will that affect the rate, assuming the test system isn't a fixed automated
system?)

How did you extrapolate current arrival and fix rates to a ship date?

Thanks,

Gary


Todd Huffman

Jan 26, 1996

|> In <4drhfs$8...@natasha.rmii.com> to...@rainbow.rmii.com (Todd Bradley)
|> writes:
|> >
|> >I have never worked on any software project without a beta
|> >test release, but I'm now in the position of trying to figure
|> >out how to assure the quality of a product whose development
|> >schedule does not allow for a beta test. Does anybody out
|> >there have any useful suggestions or pointers to info on how
|> >to best work around not having a beta release?

There are many companies that do not use beta tests, or use very few.
I've worked for two of them.

There is a straightforward answer here. You should be motivated to build
up your in-house testing capability to do a very good job. A system test
group with people who are expert in product usage would probably be
needed. Perhaps your pre-sales people can use the product for benchmarks
to get some extra usage. You will want to be very much on top of testing
and test automation in order to make up for the lack of a beta test.

I believe that many companies use a beta test to find things that
they should have found themselves. The product is released to beta when
the schedule says it is time rather than when the product is ready.

--
----------------------------------------------------------------------------
Todd Huffman (503)685-1812
Mentor Graphics Corporation or 1-800-592-2210 ext. 1812
8005 S.W. Boeckman Road
Wilsonville, OR 97070-7777 email: todd_h...@wv.mentorg.com

Bbeizer

Jan 27, 1996
>|> In <4drhfs$8...@natasha.rmii.com> to...@rainbow.rmii.com (Todd Bradley)
>|> writes:
>|> >schedule does not allow for a beta test. Does anybody out
>|> >there have any useful suggestions or pointers to info on how
>|> >to best work around not having a beta release?

There are some good and bad reasons for doing beta testing.

Bad Reasons:

#1. To do the testing that should have been done in house, especially
unit testing by developers.

#2. To do system testing that should have been done in house.

#3. To learn if you have developed the right product, right features,
right feel, etc. that should have been learned by early prototyping and
earlier market surveys.

The reality is that for well-tested products, beta testing doesn't buy all
that much (but see below). If beta testing finds a lot of bugs, then the
product was released prematurely. If good internal testing is done prior
to release, beta testing doesn't find many bugs and is not even worth the
administrative costs for so-called free testers.

However, here are three good Reasons:

#1. The beta test myth is so strong that any software developer
(especially of pc software products) who would publicly admit to
downgrading beta testing would be pilloried beyond reason by the trade
press. Beta testing, whether technically effective or not, is mandatory
PR these days.

#2. Configuration issues. No vendor can test all possible
hardware/software configurations in-house, or even make a statistically
good stab at it. The in-house test teams use configurations that perforce
have biases--even if they try to do some configuration compatibility
testing. It's really tough to know what's out there: 500+ disc
drives, 200+ motherboards, 500+ modems, disc controllers, sound cards,
etc. The combinatorics quickly get into the trillions and quadrillions.
Beta testing is an attempt to sample those configurations and look for
configuration sensitivity bugs. Unfortunately, beta testers tend to be
atypical, power users. Even so, it does seem to pay off--after all,
300,000 beta testers on WIN95 represents only a 0.35% sample of
86,000,000 Microsoft user configurations.
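A back-of-the-envelope multiplication shows how fast those numbers grow.
Only the drive, motherboard, and modem counts come from the post; the
other component counts are assumptions for illustration:

```python
import math

# Per-component variety in the installed base; the last three counts
# are assumed, not from the post.
component_counts = {
    "disc drives": 500,
    "motherboards": 200,
    "modems": 500,
    "disc controllers": 100,  # assumed
    "sound cards": 100,       # assumed
    "video cards": 200,       # assumed
}

# One of each component per machine -> multiply the counts.
total = math.prod(component_counts.values())
print(f"{total:,} possible configurations")  # 100,000,000,000,000 (1e14)
```

Even with these rough counts the product is a hundred trillion distinct
configurations, which is why sampling via beta sites is the only option.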

#3. Localization to other languages. Even your in-house translators to
the Pangrovinian language might not know that reading the menu line from
right to left spells out the worst possible Pangrovinian obscenity, or
that the first letters of every choice on the EDIT menu, reading top
down, form the acronym of the primary anti-government terrorist group.
Your in-house tester left Pangrovinia 15 years ago and isn't up on the
latest political stuff.

Boris Beizer



Charles Nichols

Jan 27, 1996
In <4ecrk5$j...@newsbf02.news.aol.com> bbe...@aol.com (Bbeizer) writes:

{List of good reasons deleted}

To which I would add under good reasons to do a Beta Test...

4. It is cheaper than not doing one. If your processes are
coordinated, then all departments (manufacturing, marketing, finance,
etc.) will be ready (or think they are ready) at the conclusion of your
Alpha test. The Beta test can be thought of as a trial run (test) of
your product readiness using a small production run. If there are
serious or fatal flaws in the product, it is better for your Beta sites
to discover them when you're churning out 50 or 100 widgets rather than
50,000.

A side note on Microsoft: I believe that their extraordinarily long
Beta test and number of Beta testers are atypical. Microsoft has
essentially shifted much of its Alpha testing to its Beta test.
Microsoft can get away with it for much the same reason that Ford was
once able to tell the customer that he could have any color car that he
wanted - as long as it was black.

Charles Nichols


Dave Klemp

Jan 30, 1996
Charles Nichols (cnic...@ix.netcom.com) wrote:

: In <4ecrk5$j...@newsbf02.news.aol.com> bbe...@aol.com (Bbeizer) writes:

: {List of good reasons deleted}

: To which I would add under good reasons to do a Beta Test...

: 4. It is cheaper than not doing one. If your processes are
: coordinated, then all departments (manufacturing, marketing, finance,
: etc.) will be ready (or think they are ready) at the conclusion of your
: Alpha test. The Beta test can be thought of as a trial run (test) of
: your product readiness using a small production run. If there are
: serious or fatal flaws in the product, it is better for your Beta sites
: to discover them when you're churning out 50 or 100 widgets rather than
: 50,000.

Let me add that Beta is an excellent way to test a company's readiness
to support production: to provide customer support, handle product
inquiries, etc. In small companies with little or no track record of
meeting marketplace demands, this "test" of the company's processes is
very valuable.

dave klemp

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
David W Klemp 408-429-9221
Santa Cruz Associates da...@klemp.com
2 Hollins Drive
Santa Cruz, CA 95060


Philip Stuntz

Jan 31, 1996

>...
>> Getting the hard data from rate of new bug discovery and our fix rate
>> and accuracy %, I could give accurate ship dates to management 4
>> months out. The key for me was that they accepted those dates, and
>> chose not to ship before we were ready. That is not often the case.

>Could I trouble you to elaborate on this. (Actually, I suspect you could
>get a good paper out of it to present at one of the testing/quality
>conferences.)

>The questions that come to my mind:

>What do you mean by accuracy %?

>How do you compensate for differences in testing effort that affect
>the validity of the bug discovery rates? (If someone goes on vacation,
>will that affect the rate, assuming the test system isn't a fixed automated
>system?)

>How did you extrapolate current arrival and fix rates to a ship date?

1) Accuracy percentage is the percentage of times that the developer
says it is fixed and testing proves it to be so. The negative here
(the developer says it's fixed but it's not) has to be considered in
estimating - if 90% of items are really fixed, 10% of any fix rate
generates a new problem (or rather, one that must be revisited). I
found that circulating team fix rates created a lot of positive peer
pressure. I did not circulate those beyond the team, since others
(QA, management, etc.) would misinterpret them.
2) I kept discovery rates by tester. We are a small shop, and I know
that tester X finds entirely different errors than tester Y. Both
types are needed, but Y may find fewer but more subtle problems.

3) I'm not sure I understand how this is a problem. I know the number
of known errors, and I know from historical data the rate at which they
are being added per week and the rate at which they are being fixed per
week. From there, I used an Excel template with some polynomial
regression to predict the future. It seems simple enough, and it proved
accurate (although we've only been keeping data this carefully on this
project, so I wouldn't say that it will always hold).

As to writing a paper about this, I just finished shipping a very
large project, and I think I would rather have some down time than
too much writing time.

Phil
