
Testing New Games


ChicagoDave

Nov 20, 2006, 10:23:30 PM
I wanted to bring up the subject of beta-testing.

I'm sure most of us are familiar with the common methods of acquiring
and interacting with IF beta testers. Whether you use the beta testing
website or not, you usually send game files privately to a handful of
people, receive reports of varying length and quality in return, and
then this goes around and around until you decide to say "Hey! I'm
done!"

I suspect there are people here that have done things in a more
scientific manner, and I'd like to know if that's true. Has anyone gone
overboard in how they set up the testing with their testers? Have they
created walkthroughs and run-scripts so that people can test particular
areas? Do they ask for specific things to be tested?

I don't think handing someone a game and just letting them "play" it is
really beta-testing. Or there are two kinds of testing, "play-testing"
and "functional-testing". There may be more types of testing though,
such as "design-testing" where someone is focused on the logic and
cohesion of a game. There might be someone focused on grammar and
spelling. There also could be someone focused on fact-checking if the
game has elements that _can_ be fact-checked.

There's a lot of work in getting a game out the door and testing is a
critical element. I'm wondering if this is a topic that hasn't been
scrutinized enough and whether we could develop better recommendations for
authors on the best ways to test for different things.

Or am I wrong? Is it good enough to find 2, 3, or 4 good "play-testers"
and work through the different aspects using standard cycles of
testing?

Comments?

David C.

d...@pobox.com

Nov 20, 2006, 10:52:31 PM

On Nov 20, 10:23 pm, "ChicagoDave" <david.cornel...@gmail.com> wrote:
> I wanted to bring up the subject of beta-testing.
>
> I'm sure most of us are familiar with the common methods of acquiring
> and interacting with IF beta testers. Whether you use the beta testing
> website or not, you usually send game files privately to a handful of
> people, receive reports of varying length and quality in return, and
> then this goes around and around until you decide to say "Hey! I'm
> done!"
>
> I suspect there are people here that have done things in a more
> scientific manner, and I'd like to know if that's true. Has anyone gone
> overboard in how they set up the testing with their testers? Have they
> created walkthroughs and run-scripts so that people can test particular
> areas? Do they ask for specific things to be tested?
>
> I don't think handing someone a game and just letting them "play" it is
> really beta-testing. Or there are two kinds of testing, "play-testing"
> and "functional-testing". There may be more types of testing though,
> such as "design-testing" where someone is focused on the logic and
> cohesion of a game. There might be someone focused on grammar and
> spelling. There also could be someone focused on fact-checking if the
> game has elements that _can_ be fact-checked.
>
> There's a lot of work in getting a game out the door and testing is a
> critical element. I'm wondering if this is a topic that hasn't been
> scrutinized enough and whether we could develop better recommendations for
> authors on the best ways to test for different things.

Well, many of these things are SOP for testing in the software
industry. When scrutinising a work, assigning roles to scrutineers
(you look for mistakes in writing, you look for mistakes in fact, you
look for continuity problems) has proved very effective at improving
the quality of software.

Text adventures are a little unusual for software though (and like
videogames in that respect). Whilst they have requirements, the
boundaries of what is acceptable and what is not are more blurry. It's
pretty clear that you want to precisely specify the behaviour of a
nuclear reactor controller, but not a game. Should this NPC say that
when confronted with this object? Should there be an extra puzzle, or
should this one be simplified?

>
> Or am I wrong? Is it good enough to find 2, 3, or 4 good "play-testers"
> and work through the different aspects using standard cycles of
> testing?

With a little more rigour in the testing protocol, I'm pretty sure some
of the more blatant bugs in some of the competition's entries could
have been found (automatically) before the competition deadline.

However, having said all that, you have to appreciate the quality of
the works already produced. I encourage people to discuss their
testing methods more openly in order to gather best practice.

drj

Jim Aikin

Nov 21, 2006, 12:03:07 AM
> I suspect there are people here that have done things in a more
> scientific manner, and I'd like to know if that's true. Has anyone gone
> overboard in how they set up the testing with their testers? Have they
> created walkthroughs and run-scripts so that people can test particular
> areas? Do they ask for specific things to be tested?

After the first round, I did request of a couple of second-round testers
that they speed through the opening section in order to pound on the later
sections. But my project is so totally non-linear that it's hard to say
where "the opening section" ends. The game includes a walkthrough and hints,
so some people have availed themselves of one or both -- which is good,
because there were a few bugs in the hints!

> I don't think handing someone a game and just letting them "play" it is
> really beta-testing. Or there are two kinds of testing, "play-testing"
> and "functional-testing". There may be more types of testing though,
> such as "design-testing" where someone is focused on the logic and
> cohesion of a game. There might be someone focused on grammar and
> spelling. There also could be someone focused on fact-checking if the
> game has elements that _can_ be fact-checked.

I feel grateful to have found a few testers who were able to address several
or all of these (admittedly disparate) elements. I didn't try to regiment
it. For one thing, I'm not sure an all-volunteer community would support
much regimentation. If testing isn't fun, why do it? For another, if you
handed a game to one person and said "you're a design-tester, not a
grammar-tester," how would you know you'd made the right choice? Maybe that
person is strong in grammar and weak in design!

Most of my testers did a lot more than "play-testing." Most of them went
back to specific areas and tried a variety of commands to see if they could
break the software. That's sort of a game-within-a-game, which is cool.

> Or am I wrong? Is it good enough to find 2, 3, or 4 good "play-testers"
> and work through the different aspects using standard cycles of
> testing?

I suspect that each work of IF is a bit different, and may require different
cycles of testing. I'm not sure how standardized the test process could be
made. But possibly I'm just naturally messy. (If you saw my source code,
you'd know....)

--JA


Mike Snyder

Nov 21, 2006, 12:26:41 AM
"ChicagoDave" <david.c...@gmail.com> wrote in message
news:1164061410.0...@m73g2000cwd.googlegroups.com...

> I wanted to bring up the subject of beta-testing.
>
> I suspect there are people here that have done things in a more
> scientific manner, and I'd like to know if that's true. Has anyone gone
> overboard in how they set up the testing with their testers? Have they
> created walkthroughs and run-scripts so that people can test particular
> areas? Do they ask for specific things to be tested?

I usually make several suggestions to my testers for the kinds of things I'm
looking for and the kind of feedback I need. So far, for me, it hasn't been
a lot more organized than that. It's definitely not the same for me as it is
for my day job, where the test department has checklists and bug reports and
procedures to follow. Some testers work out better than others, but almost
all feedback (especially with a transcript) is helpful.

> I don't think handing someone a game and just letting them "play" it is
> really beta-testing. Or there are two kinds of testing, "play-testing"

Some general play-testing is good, though. It shows what a "normal" player
might see -- one who's not actively looking for problems.

> and "functional-testing". There may be more types of testing though,
> such as "design-testing" where someone is focused on the logic and
> cohesion of a game. There might be someone focused on grammar and
> spelling. There also could be someone focused on fact-checking if the
> game has elements that _can_ be fact-checked.

Most volunteers are amateurs. Professionals might not be apt to volunteer.
:)

> There's a lot of work in getting a game out the door and testing is a
> critical element. I'm wondering if this is a topic that hasn't been
> scrutinized enough and whether we could develop better recommendations for
> authors on the best ways to test for different things.

It's definitely a critical element. I don't know if anybody has bothered to
compare, but I bet the games that rank the highest in IFComp are the ones
with the most/best beta testing. I think an "okay" game just seems *better*
if it's well-polished.

I think most of my testers just wanted to play the game. Few are patient
enough to *really* poke hard at it, but those *are* the kinds of testers I
appreciate most.

> Or am I wrong? Is it good enough to find 2, 3, or 4 good "play-testers"
> and work through the different aspects using standard cycles of
> testing?

Here's what I recommend (and from talking with Jason Devlin before and
during the comp, I know it's the same kind of system he uses). This isn't
what I've *always* done, but it's becoming more and more the norm.
Especially this year, I realized that I put too many testers into phase 1.
Anyway, here's what I think works best for IFComp-sized games. (Bigger games
will, of course, need more testers and probably the ability to jump into
later segments.)

Phase 1 -- get two people to test. Any more than this, and you're just
wading through lots of transcripts that are uncovering the same major flaws
over and over. I'd say get just one, but there *will* be findings that don't
overlap. This is where you weed out the largest problems and make the
biggest changes.

Phase 2 -- get six new people to test. This version should be pretty stable,
since the bigger problems were weeded out in phase 1. Here, you'll uncover the
more obscure bugs, and bugs in the things you changed/fixed from the first
phase. The goal here is to polish up the game for what *might* be a
releasable version.

Phase 3 -- get two new people *and* (if possible) one or two of the phase 1
or 2 testers. (It's usually hard for testers to help a lot the second time
around, but they'll at least remember what did or didn't work and notice
(maybe) when it changes). The goal here is to make sure your game really
*is* as polished as you think. These new testers will find problems, but
they *should* be minor problems -- things that either aren't real bugs, or
are just requests for better implementation. Be careful what you change here
unless you're willing to commit to a *4th* phase. And a 4th phase is
basically just another 3rd phase.

And remember that you can test and polish forever, because testers *will*
contradict each other. I had testers want TTS to do certain things that
others were glad it *didn't* do. I made changes in the first beta that
satisfied requests that later testers didn't like. So don't get into the
trap of going in circles. You will never have a perfect game. Never. Because
even if it is, people will find *something* you should have implemented but
didn't. And this will seem like a big deal to them, because the rest of the
implementation will lead to higher expectations. So... when it's good
enough, stop. :)

--- Mike.


Emily Short

Nov 21, 2006, 12:56:02 AM

ChicagoDave wrote:
> I wanted to bring up the subject of beta-testing.
>
> I'm sure most of us are familiar with the common methods of acquiring
> and interacting with IF beta testers. Whether you use the beta testing
> website or not, you usually send game files privately to a handful of
> people, receive reports of varying length and quality in return, and
> then this goes around and around until you decide to say "Hey! I'm
> done!"

I usually do tell my testers a few things about what I'm looking for:
namely, that they should send complete transcripts, that they should
put some identifying marker before their comments, and that any UI bug
report should be accompanied by information about the OS and
interpreter they're using. I also invite them to comment, outside the
transcript, on any aspect of the game they want to discuss. (I tend to
work with the same testers on multiple projects, so they get used to
these requests.)
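
(For I7 authors wondering how to accept such markers without a parser
error: an out-of-world comment action takes only a couple of lines.
Something like this -- a sketch from memory, untested, with the action
name and wording my own:

    Commenting is an action out of world applying to one topic.
    Understand "* [text]" as commenting.
    Carry out commenting: say "(Noted.)"

The tester's note still appears in the transcript as the command they
typed, which is what makes it searchable later.)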

I read the whole transcripts, not just the bits marked with comments,
but sometimes it's helpful to have those marks in there anyway, so that
I can search through and confirm that I have addressed everything.

I don't rely entirely on email, either: I tend to chat with testers
about any design concerns, so we can go back and forth about how these
things might be fixed.

> I suspect there are people here that have done things in a more
> scientific manner, and I'd like to know if that's true. Has anyone gone
> overboard in how they set up the testing with their testers? Have they
> created walkthroughs and run-scripts so that people can test particular
> areas?

If I've got a script to check, there's no need to make my beta-tester
run it; I can automate that myself. (Unless I don't trust their
interpreters and want to verify those, but for z-machine games that's
rarely an issue.)

I7 makes automated testing easier than it was before, since I can
develop mini test scripts and a skein for finishing the game as I write
it, and then verify things as I go along. The I7 testing apparatus
actually encourages two kinds of automated testing:

1) testing with the 'Test foo ...' commands, which are great for
building a rigorous test of a small subsection of the game; and

2) the skein, which is great for verifying a complete runthrough.
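
A test command is a single line in the source -- something like this
(I'm typing from memory, so treat the details, and the names, as a
sketch):

    Test helmet with "wear helmet / smell / listen" in the Throne Room.

Typing TEST HELMET at the game's prompt then runs that whole sequence
starting from the named room, and I can skim the output for anything
broken.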

I think the ideal thing -- though I didn't take this up until recently
-- is to write your test commands at the same time that you add a new
piece of functionality to your game, so you can automatically verify
everything you just added. Even the process of coming up with the test
command can help focus your mind on edge cases and possible problems.
In Bronze I wrote a number of these to deal with the behavior of the
helmet and sensory perceptions, where I was worried that I'd break the
system in subtle ways. (Some of the test commands were cut from the
released source because I was starting to run out of compile room, and
Glulx wasn't yet a viable setting. But they were there during
development.)

For Bronze, Floatpoint, Glass, et al., I also kept a skein that plays
through to all the possible endings, and re-checked each ending
immediately before distributing the game to anyone. This at least
guarantees that the endings are all reachable -- though there's still
the chance that players will do things in a different order and trigger
outcomes you didn't expect. (So it would be better to write still more
versions of these skeins, I suppose. Mercifully, in I7 it's not too
hard to grab bits of the skein and move them around, which makes it
easier to construct a consistent set of variations. The more linear the
game, the easier it is to do a thorough job of this, but even for a
nonlinear one, it's still much, much better than nothing.)

Some of this is stuff one could do with another language, too; I'm just
accustomed to the I7 IDE, at this point.

> I don't think handing someone a game and just letting them "play" it is
> really beta-testing.

Different testers have different approaches to this. Some play the game
much as if they were playing a released game; others do their best to
break things, especially anything that looks unusual or tricky to code,
even at the expense of a natural play experience. I usually try to
assemble a testing team that includes both types of tester. The former
give useful information about pacing, puzzle difficulty, and estimated
play-time. The latter turn up more simulation bugs and flaws in the
detail work.

I don't recall ever asking anyone to concentrate on one specific aspect
of a game, but I think it works better to bring in people who will by
inclination look at something in a particular way. If you're looking
for testers with particular strengths or interests, you can perhaps
advertise for them specifically, or ask around on ifMUD for
recommendations. (Other authors have also occasionally asked me if I
knew a tester who was especially good at X or Y.)

> Or there are two kinds of testing, "play-testing"
> and "functional-testing". There may be more types of testing though,
> such as "design-testing" where someone is focused on the logic and
> cohesion of a game.

Often when I have part of the game implemented and am wondering whether
the devices in it are going to work, I show it to someone and talk over
the design. I find it useful to do that before the game is finished, so
that there's time to correct any major structural problems. That's the
alpha stage. I then tend to have at least two rounds of beta work.

I find I use more testers than I used to, just as a matter of course --
this is partly because I've been writing larger and/or more
experimental games lately, but also partly because I learned (the hard
way) how difficult it is for even a very dedicated group of two or
three people to catch everything.

Also, though this may seem obvious: it's horribly risky to release a
game that hasn't been played to completion by someone (preferably at
least two someones) other than the author. You may think that it's in
pretty good shape because your testers have seen 85% of the thing, and
if they'd had a little more time, they would have gotten to the ending
for sure. Bad idea. A drop in quality at the end of a game is a sure
sign of a rushed product, and it's very annoying to players to have a
piece fall apart just when they were really absorbed in it.

This whole process takes time: I try to schedule at least a couple of
weeks for the final beta of a comp game, and a month or two for
something more ambitious.

> There might be someone focused on grammar and
> spelling.

I usually do my own spell-checking, either with I7's built-in
spell-checker or (before that) by dumping the game text and checking it
with another application. It usually doesn't seem like it's worth
devoting a whole beta-tester to the job of editing prose, though
sometimes my normal testers will catch a few mistakes.

On the other hand, I've played some games written by non-native
speakers of English that could have used an editor focused just on the
language. I'm not sure whether that would be best served by someone
playing and commenting, though; it might be better to have someone
proofread a dump of the game text, to make sure that they get to all of
it. (That's in addition to the more traditional play-testing to make
sure sensible commands are covered: guess-the-verb problems are even
more likely to occur when the author of the game isn't a native speaker
of the input language.)

> There also could be someone focused on fact-checking if the
> game has elements that _can_ be fact-checked.

Again, the need for that would depend on the type of game. I do
sometimes pick testers who will be alert to particular kinds of
mistakes I'm worried about making, though.

Anyway, I guess what most of this boils down to is: assess what your
game's liabilities are likely to be, then direct some attention
(beta-tester attention or automated testing or both) to each of those
issues. And pretty much everyone needs to make sure that the game can
be played to the finish (and to all its endings, if there are
multiples). Automatic script verification helps with that, but it is
not a substitute for having testers play through the game to the end.

Urbatain

Nov 21, 2006, 10:54:30 AM
Sometimes you can only get the beta-testing you are capable of getting.

So sometimes a simple play-the-game test is better than nothing.

But I'm with you: there are always beta-testers specialized in certain
areas. For instance, I'm a design tester, and maybe a technical one,
because of my knowledge of Inform and IF programming in general. So I
think the best number of people to test a brief game is 5 or 6: two
for design, two to break the parser, and one or two casual or novice
players. A large game needs more like 10. Famous people like Emily can
get even more than that, of course :) -- please don't misunderstand
me, that's the way to go -- but people starting out in IF always have
trouble getting their works well tested.

See you.

Urbatain.

ChicagoDave

Nov 21, 2006, 5:52:46 PM
ChicagoDave wrote:
> I wanted to bring up the subject of beta-testing.

Okay, those were great answers. Now I'd like to ask the _testers_ how
they go about doing their thing.

So, all you testers out there: how do you do your thing? Do you have any
special methodology that you sort of "fall into" when testing IF games?

David C.

quic...@quickfur.ath.cx

Nov 21, 2006, 6:18:59 PM
[...]

I've only betatested one game so far, but since the author was quite
happy with what I did, I think I'm at least doing *something* right. :-)

The way I tested the game was to first play through it as though I were
a normal player. This did uncover some of the more obvious rough spots
which I duly noted, as well as minor quibbles like spelling, etc. It
also let me comment on some of the larger-scale questions about the
game, such as story flow, the quality of the endings, etc. (It was
harder for me to comment on these later, since I was "spoiled" once I'd
seen the game all the way through.)

After this initial run, I went back and re-tested the game in a very
thorough way---I'm an examine-rat, and it shows. I examined everything,
tried every action I thought should be reasonable in the game, tried
different combinations of actions, etc. This uncovered a
number of holes in the implementation which the author had overlooked,
and also revealed some show-stopping bugs which I brought to the
author's attention.

Then I went into stress-testing mode, where I tried my best to break the
game. I deliberately tried to come up with the worst possible combinations
of actions that could trigger bugs, such as doing "dumb" (unexpected)
things, nonsensical actions, or variations of actions that had
previously triggered bugs. This did reveal one or two serious bugs that
had been overlooked, including a problem that wasn't completely resolved
by a previous bug fix.

One of the principles that I tried to keep was that every time the author
came back with a new version of the game, I would go back and retest
previously buggy areas to make sure that there were no further problems
there. I think this worked fairly well in sorting out problematic areas.


QF

--
Mediocrity has been pushed to extremes.

Mike Snyder

Nov 21, 2006, 7:03:18 PM
"ChicagoDave" <david.c...@gmail.com> wrote in message
news:1164131566.5...@k70g2000cwa.googlegroups.com...

I've tested a few, so I guess I have an opinion here too. :)

I try to do whatever the author asks, above all. In the few works of IF I've
tested, I've never really been given clear instructions. Aside from running
a transcript with comments, I guess it's just been up to me.

I start a transcript, and note the current time and date. I add comments as
I go along, for anything I feel like commenting about. This could be a bug,
or something that's *not* a bug which I liked, or just a comment about my
surroundings (for instance, "having some distractions, will come back in a
moment."). I don't know how useful that is to the author, but my hope is
that it'll help them know what was going on while I tested (did I miss a
clue I wouldn't have otherwise, etc.). I periodically make note of the
current time again, just in case they're interested in checking to see how
long it took me to get from one part to the next (it'd be cool if you could
put a transcript in "beta test" mode, and the interpreter would
automatically note the current time periodically in the transcript). I guess
I treat the transcript like a conversation, and I tell the author what's
going on as I play -- things I'm reminded of or comparisons that come to
mind, etc. Maybe that will help it make sense when something goes wrong
later -- it's easier to see that I was on the wrong track because I
mentioned what my ideas were earlier.

I also save and undo a lot. If I think an action might have consequences,
I'll usually try it, and then either undo or restore afterwards. I try to go
down as many paths as I find. If I think something *might* prove to be a
problem, but I can't think of a good way to test it at that point (such as
leaving an object behind earlier in the game), I'll note that in the
transcript. The author can either disregard it, or realize maybe there *is*
something they didn't anticipate. In addition to making mistakes on purpose,
I'll often try multiple commands for the same action. If I can "get flower"
I'll usually undo and try "pick flower" or "pluck flower" -- and whatever
else comes to mind. I sometimes do this several times, exploring different
ways of doing the same action, just to make sure I've covered the things
other players might try.
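
(From the author's side, the fix for a missing phrasing is usually
tiny. In Inform 7, I gather it can be a single line -- something like
this, which I'm writing from memory as a sketch rather than tested
code:

    Understand "pluck [something]" as taking.

-- but the author only knows to add that line after a tester tries the
verb and hits a parser error.)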

I try to be more critical of the writing when testing, so if something
doesn't seem to flow quite right, I mention that too. In this regard, it's a
lot like how I review. I also look for things that just aren't handled, but
probably should be. These are less vital, but they can really help a game
feel well-implemented.

---- Mike.


ChicagoDave

Nov 21, 2006, 9:47:57 PM
> Mike Snyder wrote:
> (it'd be cool if you could put a transcript in "beta test" mode, and the
> interpreter would automatically note the current time periodically in
> the transcript)

I've thought of similar things, which brings up a new idea.

Wouldn't it be interesting to have an interpreter specifically designed
for beta-testing?

I can imagine an interpreter that you could interact with by using
terp-based meta commands, and that would, upon request, spit out a report
with very detailed information about comments, time, current state,
etc. Something like this:

In the Grail Room Dave reports, "The description could use some work
where the exits are concerned."

In the Map Room Dave found an execution error, "*** ERROR *** String
missing" by entering the command "> EXAMINE FLOOR".

There could be some interesting things done with a test focused
interpreter.

David C.

Greg Boettcher

Nov 26, 2006, 6:15:50 PM
ChicagoDave wrote:
> I don't think handing someone a game and just letting them "play" it is
> really beta-testing. Or there are two kinds of testing, "play-testing"
> and "functional-testing". There may be more types of testing though,
> such as "design-testing" where someone is focused on the logic and
> cohesion of a game. There might be someone focused on grammar and
> spelling. There also could be someone focused on fact-checking if the
> game has elements that _can_ be fact-checked.
>
> There's a lot of work in getting a game out the door and testing is a
> critical element. I'm wondering if this is a topic that hasn't been
> scrutinized enough and whether we could develop better recommendations for
> authors on the best ways to test for different things.
>
> Or am I wrong? Is it good enough to find 2, 3, or 4 good "play-testers"
> and work through the different aspects using standard cycles of
> testing?

I think that "functional-testing" is best carried out by the author
himself as part of alpha testing. Thorough, frequent, and careful
alpha-testing.

When my Introcomp entry was in beta-testing, the only
"functional-testing" issue I told my testers to look out for was a
particular problem that I had not yet been able to pin down the source
of on my own. That way, my testers were able to focus on more
high-level aspects of the game.

That's my two cents...

Greg

K M

Nov 27, 2006, 5:18:45 AM
Looking back at my transcripts of the two games I beta-tested, I was
assessing them on three levels:

1) Basic spelling, punctuation, and grammar. "It's" errors and
accidental double periods are really minor, in the Grand Scheme, but a)
they distract me from the rest of the game and b) they're easy to
correct, so why not?

2) Break-the-parser kinds of things: are all reasonable synonyms for
nouns and verbs in place? Are all major things and most minor things
examinable? When a wrong action is tried, does the error message give
a hint what the "correct" manipulation of the object/puzzle might be?

3) Does the game as a whole flow and make sense? Can someone who
speaks English (but who is not a native speaker) follow and enjoy the
game? Do the characters have actual personalities, or are they so
bland any of them would deliver the same line of exposition? Are the
puzzles well-integrated into the story?

(As an aside: In both the games I beta-tested, I actually had very
little to contribute for 1) and 2), because the authors had polished
their games so very nicely before they even reached the beta-test
stage. Dept. 3) primarily reminded me that my
small-compared-to-mean-R*IFer brain has difficulty wrapping itself
around time paradoxes!)

d...@pobox.com

Nov 27, 2006, 10:56:20 AM

I haven't done that much beta-testing, but...

I'm a professional programmer. At one level I view text adventure
beta-testing as like other sorts of software testing. Which is to say
it's a battle of wits. A programmer has created a program and claimed
that it has a particular specification; when I'm testing, it's up to me
to find ways in which the program does not in fact meet the
specification. Text adventures aren't quite the same, but I still go
about trying to break them in a very similar way.

I think I can sniff out code. Any special code that the author has
written, above and beyond the library code, is ripe for bugs (code ==
bugs). So if I find something in a game and it hints of "author wrote
special code for this" then I'll go a bit crazy and try and test it to
death. Example: a leaky object that can hold both water and objects
separately. What happens if you try both? What happens if you fill it
with water and then put in another container? What happens if you
empty it into another container when it has water in it? etc etc.
Sometimes you have to be very cunning.

Surprisingly often repeating a special action (BREAK BOX WITH HAMMER)
won't have been picked up (a special case of special code).
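
The underlying bug is usually a response that doesn't check state. In
I7 terms the repair is often just a property plus a guard rule --
roughly like this (a sketch, untested, object names invented):

    The crate can be broken or unbroken. The crate is unbroken.
    Instead of attacking the broken crate, say "It's already in pieces."

Without the guard, the second BREAK prints the same triumphant smashing
message as the first.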

I tend to test in one of a few "modes": "breaking things" mode,
"grammar nazi" mode, "consistency" mode. In consistency mode, smell
seems to be the easiest one to pick on. If I notice I can SMELL CORPSE and
get an interesting response, I'll start smelling everything else to see
what I get. Has the author implemented smell consistently or only when
it was crucial to the plot? Custom verbs often combine problems of
consistency and special code. Are objects that change state correctly
described in all their states?

One mode I don't have is "plausibility" mode -- well, I have it, but
it's weak. It's easy for me not to notice that it's implausible that
there should be a key in the cupboard in the barn behind a heavy cask.
Even when I do deliberately think about such things, it's hard for me to
raise them as issues.

Sometimes I'll even use my "just trying to play the game" mode.

I'll also try and take a step back and look at some of the higher
issues. Does the writing carry conviction and voice? Are the parts of
the writing that are supposed to be dramatic actually dramatic? Does
the dramatic flow of the plot work? Is the PC well motivated? Are the
NPCs? Are the NPCs actually distinguishable from each other and from
cardboard -- are they, in fact, characters?

I think there's a sort of Maslow's hierarchy thing going on too. It's
no use complaining about poorly motivated NPCs when the work is full of
bad spelling and punctuation mistakes. It's no use complaining that
the puzzle structure is too simple if there are too many "*** Run-time
problem P7"s.

I also use a Mac, so I like to think that by testing on a Mac I offer
a bit more coverage of the wide variety of interpreter
platforms.

I find being a tester is quite a responsibility. If I tested a game
and it was released with a glaring "** Run-time problem" within the
first 5 minutes of play then that reflects badly on me. Why didn't I
find that? Unfortunately the best a tester can do is send their list
of whinges and curses to the author and hope that they're responsive.

drj

Jim Aikin

Nov 27, 2006, 6:44:04 PM
> I think I can sniff out code. Any special code that the author has
> written, above and beyond the library code, is ripe for bugs (code ==
> bugs). So if I find something in a game and it hints of "author wrote
> special code for this" then I'll go a bit crazy and try and test it to
> death. Example: a leaky object that can hold both water and objects
> separately. What happens if you try both? What happens if you fill it
> with water and then put in another container? What happens if you
> empty it into another container when it has water in it? etc etc.
> Sometimes you have to be very cunning.

David is not just venting hot air here. He's good at it. He found plenty of
nasty little problems in my soon-to-be-released game. He did, in fact,
manage to put a large object into a small suitcase!

--Jim Aikin


Al

Nov 27, 2006, 9:19:59 PM

Jim Aikin wrote:

>
> David is not just venting hot air here. He's good at it. He found plenty of
> nasty little problems in my soon-to-be-released game. He did, in fact,
> manage to put a large object into a small suitcase!


It's called thinking outside the box.

Jerome West

Nov 28, 2006, 1:36:02 PM

Although that's a worst case scenario.

0 new messages