Recently, I have been thinking about what features a good I-F system
should have. In this post, I want to focus on the use of 'incremental
techniques' in I-F systems. To begin with a motivating example, I
have always liked the approach of MUD-like systems to development: add
new objects on-the-fly, changing and experimenting until everything
works as expected. This cannot be done (easily) with current systems,
such as TADS or Inform, since they are too 'static'. But can a
similar technique still be used in such systems?
Most I-F programs can be described as turning an input sequence (e.g.
player commands and key-presses) into output (e.g. the screen
contents). Viewed from another angle, the output the player sees at a
given time depends on two factors: the program, and the input sequence
so far. Both of these are frequently changed, the former by the I-F
author during development, the latter by the player/reader during
play.
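To make that model concrete, here is a toy sketch (in Python; the names and the one-room 'game' are invented, not taken from TADS or Inform). The whole visible output is just a function of the program and the input sequence so far:

```python
# A toy model of the claim above: output = f(program, input sequence).
# Everything here is invented for illustration.

def run(program, inputs):
    """Replay the entire input sequence through the program."""
    state = program["initial_state"]()
    output = []
    for command in inputs:
        state, text = program["step"](state, command)
        output.append(text)
    return output

# A one-room 'game' with a lamp that can be turned on.
toy_program = {
    "initial_state": lambda: {"lamp": "off"},
    "step": lambda state, cmd: (
        ({**state, "lamp": "on"}, "The lamp is now on.")
        if cmd == "turn on lamp"
        else (state, "Nothing happens.")
    ),
}

print(run(toy_program, ["look", "turn on lamp"]))
```

Changing either the program or the input sequence changes the output; the question in the rest of this post is how to avoid recomputing it all from scratch.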
During play, the player gradually changes the input sequence, usually
by adding new commands and key-presses at the end. But it is also
conceivable that the player edits the input sequence somewhere in the
middle. (This is a very useful feature when you forgot to take an
item that proves to be essential later on. Weather, where is thy
sting?) The output changes when the input does, as prescribed by the
program.
During development, the author gradually changes the I-F program.
This is done in many different ways, probably using text editors,
compilers, and other tools. A new version of the program is then
tested by giving it some input sequence, and checking the resulting
output against the author's expectations. Again, this testing is done
in a myriad ways, but one method seems to be used often: fix a certain
input sequence (e.g. a 'walk-through'), and change the program until
the corresponding output is correct.
Both cases have something in common: one item is frequently updated,
and another must be updated to match. If the input or program updates
are small, often the output updates are small too. Thus it might be
possible to compute the output update for a given input or program
change with little effort, obviating the need for e.g. save files and
recompilation. This is essentially what 'incremental techniques' are
all about.
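To sketch the idea (in Python, with invented names), suppose, and this is the hard part in a real system, that the complete game state after each turn can be snapshotted cheaply. Then an edit anywhere in the input sequence only costs a replay from the first changed command, not from the start:

```python
# A sketch of incremental replay. The 'program' and all names are
# invented for illustration.

class CounterGame:
    """Toy 'program': the only verb is 'add', which bumps a counter."""
    def initial_state(self):
        return 0
    def step(self, state, command):
        if command == "add":
            return state + 1, f"Counter is now {state + 1}."
        return state, "Nothing happens."

class IncrementalRunner:
    def __init__(self, program):
        self.program = program
        self.inputs = []
        self.states = [program.initial_state()]  # states[i]: before turn i
        self.output = []
        self.replayed = 0        # turns recomputed on the last update

    def set_inputs(self, new_inputs):
        # Find the first turn at which the input sequence differs.
        i = 0
        while (i < min(len(self.inputs), len(new_inputs))
               and self.inputs[i] == new_inputs[i]):
            i += 1
        # Discard everything after the common prefix, replay the rest.
        del self.states[i + 1:]
        del self.output[i:]
        self.inputs = list(new_inputs)
        self.replayed = len(new_inputs) - i
        for cmd in new_inputs[i:]:
            state, text = self.program.step(self.states[-1], cmd)
            self.states.append(state)
            self.output.append(text)
        return self.output

runner = IncrementalRunner(CounterGame())
runner.set_inputs(["add", "look", "add"])   # full run: 3 turns replayed
runner.set_inputs(["add", "add", "add"])    # edit turn 2: only 2 replayed
print(runner.replayed, runner.output[-1])
```

The same trick applies to program changes: if the change cannot affect the first N turns, the snapshot before turn N+1 is still valid.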
How would that work out in practice? Suppose we have coded the first
stage of a story, and we now want to add a second part. We begin at
the beginning, and 'walk through' to the location where the second
part must begin. Currently, that location is a dead end. Now we want
to place a locked door there, to be opened with a key found in the
initial part. To do that, we add a door object to the source code for
our program. If we want to see and test the result in TADS or Inform,
we have to recompile the source, and walk through the initial stage
again. In an incremental system, the small program change is
converted directly to the corresponding output change, which probably
only concerns the last paragraphs of the output.
Such an incremental system allows development a la MUD: the effect of
small changes can be viewed directly, without the overhead of
recompilation and rerunning part of an input sequence. Experimenting
and debugging are consequently made easier.  The reader is also given
more room for experimentation: what happens if I do that before this?
what if I first push the button in the other room before solving this
20-disc towers of Hanoi puzzle? etc.
Remains the question of designing an incremental I-F system. I must
admit that I don't know much about that. But maybe this idea can be
of use to those contemplating the design of their own Incredibly
Fabulous I-F system.
Musingly,
<><
Marnix
--
Marnix Klooster
kloo...@dutiba.twi.tudelft.nl
>Remains the question of designing an incremental I-F system. I must
>admit that I don't know much about that. But maybe this idea can be
>of use to those contemplating the design of their own Incredibly
>Fabulous I-F system.
When I last looked into incremental compilation in any detail (the late
80's), the literature on the topic was growing rapidly. Back then, at
least, my take was that incremental compilation turns out to be a lot
harder than you'd expect. I suspect that 99% of the time you'd spend
writing an incremental IF compiler would be on the incremental part. :)
On the other hand, you can get a lot of the capabilities you're looking for
from an interpreted system. With an interpreted lisp or scheme based IF
language, you could playtest the game for a few turns, then break into the
debugger and execute arbitrary code --- anything from "give the player this
object" to "replace the current definition of the Trophy Room's 'down'
method with the following code". The nice thing about lisp is that its
syntax is so simple that the debugger can easily accept the entire lisp
language; i.e., you can do anything in the debugger that you can in the
language itself. This is very powerful indeed. (Contrast with C, where
you have to write (and debug!) custom debugging routines that specifically
deal with some subset of your program.)
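The kind of session I mean, sketched in Python rather than lisp (the Trophy Room and its 'down' method are invented, not from any real game):

```python
# A sketch (Python standing in for lisp) of patching a running game.

class Room:
    def __init__(self, name):
        self.name = name
    def down(self, player):
        return "You can't go that way."

trophy_room = Room("Trophy Room")
print(trophy_room.down("player"))    # the old behaviour

# Now 'break into the debugger' and redefine the method on the fly;
# the change takes effect immediately for the running session.
def new_down(self, player):
    return "A trapdoor creaks open, and you climb down into the cellar."

Room.down = new_down
print(trophy_room.down("player"))    # the new behaviour, no restart
```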
Of course, none of this precludes compilation. Production lisp systems
have compilers too, and interpreted and compiled functions coexist happily.
To my mind, the main sticking point is portability of the compiled
binaries. Compiling to a virtual machine (like TADS and Inform do) is the
standard solution, but this leaves a lot to be desired performance-wise.
I've personally brought TADS to its knees on lesser machines :), and Inform
users seem to go to incredible lengths to keep Inform's code simple enough
to run quickly on the Z machine. (Much of this work has been Graham's, of
course.)
When it comes down to it, this is my main criticism of current IF
development tools: in this day and age no one should be spending any effort
writing nasty code to make things fast, or worrying about limits on the
number of objects, or faking things like 2D arrays. These are neanderthal
restrictions for a development system in 1996.
Dealing with these kinds of low-level issues should be the IF language
designer's problem. Game authors using the language shouldn't have to
think about them. In that sense, I think your incremental IF system ideas
are right on target, because they imply a much higher level of abstraction.
Dave Baggett
__
d...@ai.mit.edu
"Mr. Price: Please don't try to make things nice! The wrong notes are *right*."
--- Charles Ives (note to copyist on the autograph score of The Fourth of July)
The growth of the literature on incremental compilation has continued, and
currently there is a rather large 'scientific' foundation for the concept.
So there is much to learn from, but still the major part of writing such a
system is in the 'incrementality', if you're doing it by hand. Again, there
is much work on tools
and support for generating incremental environments, all the way back to the
Cornell Program Synthesizer (for PL/CS, a dialect of PL/I, around the late
70's; a rather simple but elegant approach). To me it seems, however, that
the tools of today
are bulky and generate largish systems which are not appropriate for the kind of
machines that we are usually aiming for (even if those are constantly getting
bigger and better by the year).
So my conclusion is that the size (when going automatic) and the work (when
implementing it by hand) are currently prohibitive. Not that we should stop aiming
for it...
(As it says in the Alan manual: "One day...")
> To my mind, the main sticking point is portability of the compiled
> binaries. Compiling to a virtual machine (like TADS and Inform do) is the
> standard solution, but this leaves a lot to be desired performance-wise.
> I've personally brought TADS to its knees on lesser machines :), and Inform
> users seem to go to incredible lengths to keep Inform's code simple enough
> to run quickly on the Z machine. (Much of this work has been Graham's, of
> course.)
>
This is not inherently so. The performance problem in many systems of this kind is
often caused (or at least not lessened) by the low-level design of the virtual
machine. If we design a virtual machine that has instruction sets similar to the
real processors, the penalty *will* be 5-10 times, because you have to 'run the
processor in software'.
On the other hand, designing a virtual machine with a much higher level (see
below), having only instructions which 'mean more', will also mean that you
will be able to optimize the interpreter to perform those more complex tasks
as efficiently as possible (as opposed to trying to make the interpreter
perform those trivial tasks as efficiently as possible ;-).
Again, to be able to do this you *must* have a 'language' that contains those
higher level constructs.
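A crude illustration of where the overhead goes (a Python sketch; the opcodes are invented): zeroing a hundred cells with low-level instructions costs a hundred fetch-decode-dispatch rounds, while one high-level instruction does the whole task inside the interpreter's host language:

```python
# Invented opcodes, just to show where the dispatch overhead goes.

def run(code, memory):
    """A tiny interpreter; returns the number of instructions dispatched."""
    dispatched = 0
    for op, arg in code:
        dispatched += 1                 # fetch/decode/dispatch overhead
        if op == "SET":                 # low-level: store into one cell
            index, value = arg
            memory[index] = value
        elif op == "FILL":              # high-level: a whole loop at once
            lo, hi, value = arg
            for i in range(lo, hi):
                memory[i] = value
    return dispatched

mem1, mem2 = [9] * 100, [9] * 100
low_level = [("SET", (i, 0)) for i in range(100)]   # 100 dispatches
high_level = [("FILL", (0, 100, 0))]                # 1 dispatch
print(run(low_level, mem1), run(high_level, mem2))  # -> 100 1
assert mem1 == mem2                                 # same final memory
```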
> When it comes down to it, this is my main criticism of current IF
> development tools: in this day and age no one should be spending any effort
> writing nasty code to make things fast, or worrying about limits on the
> number of objects, or faking things like 2D arrays. These are neanderthal
> restrictions for a development system in 1996.
>
> Dealing with these kinds of low-level issues should be the IF language
> designer's problem. Game authors using the language shouldn't have to
> think about them. In that sense, I think your incremental IF system ideas
> are right on target, because they imply a much higher level of abstraction.
>
Right on the spot! (Except that incrementality doesn't have anything to do
with higher level abstractions...) Much of the design of a higher level
language is in examining the way you would implement the functionality you
want in a 'normal' language, extracting the patterns and conventions that
you normally have to use, and turning those into concepts and constructs of
the new language. In this way you will also have a language that can be
'understood' better by the compiler and thus can also be checked for
higher-level semantic errors.
But there are also some trade-offs that you have to take into
consideration, because this also makes the language less general. So it
will be harder to do things that the language designer hadn't expected,
thought of, or had even designed out of the possibilities of the language.
In this respect a higher level language and its designer need help from
the users to find ideas for features that are missing, and to find
problems and restrictions with the current language design. (hint, hint!)
So a higher level language *will* change with time, but as a user one
should always remember that behind a language (or any system, for that
matter) there is a basic view of the intent and usage of the language; it
might not be possible to force every 'necessary feature' into it!
As the principal designer of the Alan language I can only say that this has been
our ultimate goal.
/Thomas
--
"Little languages go a long way..."
(ThoNi of ThoNi&GorFo Adventure Factories in 1985)
------------------------------------------------------------------------
Thomas Nilsson Phone Int.: (+46) 13 12 11 67
Stenbrötsgatan 57 Phone Nat.: 013 - 12 11 67
S-582 47 LINKÖPING Email: th...@softlab.se
SWEDEN alan-r...@softlab.se for info
------------------------------------------------------------------------
On 15 Feb 1996, Marnix Klooster wrote:
And lo, there was the CISC/RISC religious war again. :-)
To be explicit: the more high-level constructs you build into the
virtual machine (and thus the language), the more screwed you are when
you realize that you left something out. Either you update the virtual
machine spec frequently, in which case you might as well write in C,
or you write more and more code that ignores the machine-level complex
constructs.
Actually, Thomas Nilsson says this lower down in his post, but I want
to give the obvious examples from my own experience, which is the
Z-machine. The Z-machine has really only one high-level structure,
which is the object. Objects can contain other objects, they have
properties (arbitrary arrays of data), they have attributes (boolean
values). This is nicely optimized for IF. And what are the chief
complaints about the Z-machine? The standard object isn't enough for
some purposes. Not enough properties, not enough attributes, want to
allocate objects at run-time.
Nobody ever complains that the "add" or "test-and-branch" opcodes
aren't powerful enough for modern IF. :-)
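For the record, that object structure is roughly this (a sketch of the shape of the thing, not the actual byte layout; the attribute and property numbers are invented):

```python
# A rough sketch of the Z-machine object model described above: a tree
# of objects, each with numbered properties (arbitrary data) and
# numbered boolean attributes.

class ZObject:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.properties = {}      # property number -> arbitrary data
        self.attributes = set()   # attribute numbers currently set
        if parent is not None:
            parent.children.append(self)

living_room = ZObject("Living Room")
lamp = ZObject("brass lantern", parent=living_room)
lamp.attributes.add(5)            # say attribute 5 means 'light source'
lamp.properties[17] = ["brass", "shiny"]   # say property 17: adjectives

print(lamp.parent.name, 5 in lamp.attributes)   # -> Living Room True
```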
--Z
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
> > On the other hand, designing a virtual machine with much higher level (see
> > below), having only instructions which 'mean more' will also mean that you will
> > be able to optimize the interpreter to perform those more complex tasks as
> > efficiently as possible (as opposed to trying to make the interpreter
> > perform those trivial tasks as efficiently as possible ;-).
> >
> > Again, to be able to do this you *must* have a 'language' that contains those
> > higher level constructs.
>
> And lo, there was the CISC/RISC religious war again. :-)
>
I thought that war was over, I thought RISC won?!?! ;-)
But seriously, I think this is true: the RISC ideas have entered the CISC
scene, at least when it comes to designing processors. Look at the
recommendations from Motorola *not* to use the new, more complex addressing
modes; instead they say that a few instructions using the fast, simple,
optimized addressing modes will be faster...
> To be explicit: the more high-level constructs you build into the
> virtual machine (and thus the language), the more screwed you are when
> you realize that you left something out. Either you update the virtual
> machine spec frequently, in which case you might as well write in C,
> or you write more and more code that ignores the machine-level complex
> constructs.
>
Granted, except that I don't think you might as well write in C. There are at
least three different things hiding in this bush.
1) The efficiency
Assuming that we have decided to use the virtual machine approach (you
don't have to; the Alan compiler generated C source in its early versions),
the efficiency of the execution is the efficiency of the interpreter. An
interpreter can do more work per unit of time if it avoids, as much as
possible, decoding instructions etc. (there is a lot an interpreter has to
do that can't be considered productive work). So an interpreter for a
virtual machine that has higher level instructions *can* be faster, QED.
The RISC arguments don't really apply here, since we'd have a hard time
introducing many of the current hardware optimisations in processors
(instruction pipelines, optimized wiring for register to register moves,
branch prediction caches, etc.) that make RISC processors faster than
CISC. (Some are of course appropriate, and already present, like caching.)
2) The compatibility
The higher level virtual machine certainly has the problem you describe - being
screwed by your own inability to foresee every future requirement...
Upgrading the virtual machine specification (and its implementation) is no great problem
unless you have a wide "user" base, e.g. other interpreters, hosts of programs
(games in our case).
Personally I think a more significant problem with each IF authoring
language having its own virtual machine is that they are incompatible, in
exactly the way that you can't take Microsoft Office for Windows and run it
on your Amiga (or Mac either, for that matter). A while back I was actually
toying with the idea of building a new backend for the Alan compiler that
would produce Z-machine data.
3) Language level
It is my firm belief that a higher level of "languages", or tools if you
prefer, is required to support any activity requiring adaptation of
computer-based systems to particular needs. We'll never get over the
software "crisis" before we have grown past the "blacksmith" era. (An
analogy, referring to the fact that a long time ago you could visit the
local blacksmith for any needs in the field of metal products; today there
is no such generic "metal industry", yet we still look for generic
"programmers"...)
I think we must allow the specialists tools that suit their needs, and not
force them to become programmers (in C, C++, Ada, Java or any other
*programming* language). We have a long way to go yet, but if we stick to
"real programming", standardised programming languages, backwards
compatibility, and (dare I say it?) C++!, we will never get there...
And now back to the issue at hand: a higher level language is of course
possible to implement on any level of execution machine, virtual or real;
it only differs in the effort it takes. A virtual machine, especially one of
higher level, is usually built to match the language because that makes it easy
to generate the code, but I am sure that I could generate code for the special
purpose processors used in telephone exchanges to run Alan adventures ;-)
But as a professional language and compiler designer I must also add that
the clear design of a higher level virtual machine is a great help in
understanding the language you're designing (or compiling), and which
semantic restrictions must be imposed to help the "programmer" avoid
pitfalls. (Usually the reasoning goes: "if you can't understand how to
implement a particular variation of a construct, or what it does and
means, that particular aspect is probably wrong, and should not be used".)
Comments are most welcome, but perhaps we are (I, at least, am) getting a
bit off the topic of this newsgroup...
/Thomas
>> Again, to be able to do this you *must* have a 'language' that contains those
>> higher level constructs.
>
>And lo, there was the CISC/RISC religious war again. :-)
Except that here the CISC people do have a better case, I think. What
really kills you with virtual machines is the instruction fetching and
decoding time. The more CISC your virtual machine is, the less time you
spend decoding instructions (proportionally).
>To be explicit: the more high-level constructs you build into the
>virtual machine (and thus the language), the more screwed you are when
>you realize that you left something out.
Right on.
>And what are the chief complaints about the Z-machine?
Um, how 'bout "by today's standards it sucks"? Seriously: it's been
retrofitted with so much junk so many times that it's a complete mess.
It's very much tied to a particular way of thinking about adventure games.
Its design makes it inherently slow for anything too far from the Zork
ideal.
When it was designed, it was a very sensible (even inspired) way of dealing
with the tiny machines of the day. But it really doesn't extend well into
the 32 bit world and beyond.
>Not enough properties, not enough attributes, want to allocate objects at
>run-time.
More precisely: that there even *are* such limits.
Ironically, what started out as a best-of-both-worlds design is now really
a worst-of-both-worlds design. The designers worked within the constraints
of 8-bit microcomputers. Despite the fact that, practically speaking,
these constraints are gone, we still have to deal with them every time we
write an Inform program.
The TADS virtual machine has similar problems, though not as bad (yet),
because it was designed 10 years later. The TADS VM is designed to support
real mode DOS, so it worries about being able to swap objects in and out of
memory. Now that we have virtual memory even on microcomputers, such
assumptions in the VM add nothing but overhead.
This is getting perilously close to the same old boring flame war, but
I want to try to clarify my point, which is: the limited-ness of a given
machine design is proportional to how high-level its constructs are,
*not* how old it is.
Ok, sure, Z-machine objects are limited to X attributes. This sucks. X
is small; this sucks too. If I were in charge of designing the sequel to
the Z-machine, I would allow an infinite number of attributes, or at
least 2^16 of them. But this is not the point. There will always be
things we haven't thought of. Ten years later, people would be
complaining that my Z-machine sequel didn't do any of a dozen things
that none of us have even thought of today.
The point is: it's easier to change the programming language than the
hardware, it's easier to change the libraries than the programming
language, and it's easier to change the program than the libraries.
Thus, a virtual machine -- fixed in design and flexible in capacity --
has tremendous advantages both for game designers and game players.
My data point of supporting evidence is that I didn't find Legend to
be particularly slow. The machine I was playing it on was three years
old -- an ancient hulk :-)
You have said (I believe) that the games you want to write will take
so much memory, so much CPU power, so much general testosterone, that
they'll grind a minimum-configuration TADS machine into the dirt.
That's fine. But I think the effort would be better spent writing a
TADS game that requires a fast machine, than writing an IBM executable
that will run well on a medium-level IBM machine. Machines will speed
up, but new executables have to be ported.
Maybe I *should* design the Z-machine part 2. 32-bit addresses, and
rip the support for objects out of the opcode level and into a
library. Same user-interface capacities as V5 -- a fixed-width status
window and a scrolling styled-text buffer. It would be trivial to port
Inform to compile to the new machine format -- just change some
opcodes to function calls. Then I'd just have to recompile a few
games, to demonstrate that they haven't gotten significantly slower,
and we'd be in business. Hopefully for more than ten years.
Yes, you claim that it *would* be significantly slower. That's why I
want to try it and see.
Footnote: I am *not* advocating doing this soon. I'm not advocating
*doing* it. I'm not even going to *think* about it until after this
year's competition. And I'd recommend that we have at least two
screaming arguments on the newsgroup, and let them settle out -- twice
-- before the specs are decided.
Don't forget to remove the dictionary and tokenize too, if you're
going for real minimalism :-)
--
Matthew T. Russotto russ...@pond.com russ...@his.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."
> ... I didn't find Legend to
> be particularly slow. The machine I was playing it on was three years
> old -- an ancient hulk :-)
Ya know, I didn't find it to be "too slow" either. In fact, I thought that
the text pacing was remarkably reminiscent of the Infocom games I used to
play on the Apple ][c. Sort of like a simulated disk access pause after
entering every room. I thought it was cool. :)
daved nault
>This is getting perilously close to the same old boring flame war...
I'm not sure what you mean here: TADS v. Inform? I didn't mean it that
way; in fact, I cited shortcomings of the TADS VM as well.
RISC v. CISC? I think this discussion is mostly unrelated to that, for the
reasons Thomas gave: the real meat of the RISC approach (pipelining,
predictive branches, using the limited silicon for registers instead of
microcode, etc.) is not relevant to virtual machine design.
Actually, most of what you wrote in this latest post I agree with. Perhaps
we're in violent agreement? :)
>Ok, sure, Z-machine objects are limited to X attributes. This sucks. X
>is small; this sucks too. If I in charge of designing the sequel to
>the Z-machine, I would allow an infinite number of attributes, or at
>least 2^16 of them.
My answer to this is: why keep attributes? Why lock yourself into
attributes, properties, etc.? A lot of what the Z-Machine imposes on you
seems awfully low-level --- there to save precious space (which is not
nearly so precious 15 years later) or register bittage (which we have
plenty of).
>But this is not the point. There will always be things we haven't thought
>of. Ten years later, people would be complaining that my Z-machine sequel
>didn't do any of a dozen things that none of us have even thought of
>today.
Yes, and I don't think this is necessarily inherent in a good IF virtual
machine, despite Thomas' well-reasoned arguments to the contrary. Here's
another concrete example: there are now *two* VM-based Scheme
implementations. In both cases, the VM's are "high-level", in the sense
that they know about fundamental Scheme operations, not just "test carry
bit and branch if 0" kinds of things.
Why couldn't such a VM become the next IF virtual machine? Then you're
locked only into a Scheme-based worldview, which limits you to... well
absolutely anything you'd like to do, frankly.
The problem with this is that, based on my limited personal experience with
these VM Schemes (VSCM and Scheme48) and with this newsgroup, I don't think
we're quite at the point where the VM Schemes will support all the machines
that IF fans want to play games on, performance-wise. And what a
tremendous waste of time it would be if one were to spend months or years
on a really nice Scheme-based IF system, and then (as with WorldClass),
almost no one were to use it!
>My data point of supporting evidence is that I didn't find Legend to
>be particularly slow.
But lots of people did. And many people flamed me for it, refused to play
the game, or dwelled only on technical issues like performance rather than
the real *content* of the work. You'll notice that though Legend is highly
regarded by quite a number of people (5 stars from Baf --- who-hoo!), it's
almost never mentioned as a game to try (contrast with Jigsaw, Curses,
Christminster, and Theatre, which are almost constantly touted here, and
rightly so).
Though this doesn't make much difference to me personally, it *does* argue
that people will not tolerate games that *require* recent hardware to run.
And perhaps also that people still feel a certain affinity for the
Z-Machine itself. (Or maybe you've all just changed your minds about the
game...)
>You have said (I believe) that the games you want to write will take
>so much memory, so much CPU power, so much general testosterone, that
>they'll grind a minimum-configuration TADS machine into the dirt.
Well, I wouldn't put it so hyperbolically, but in general I do think that
we're better off requiring fast machines in return for better (higher
abstraction) development systems.
>That's fine. But I think the effort would be better spent writing a
>TADS game that requires a fast machine, than writing an IBM executable
>that will run well on a medium-level IBM machine. Machines will speed
>up, but new executables have to be ported.
Agreed, 100 percent.
>Footnote: I am *not* advocating doing this soon. I'm not advocating *doing*
>it. I'm not even going to *think* about it until after this year's
>competition. And I'd recommend that we have at least two screaming
>arguments on the newsgroup, and let them settle out -- twice -- before the
>specs are decided.
We don't need a committee-designed VM --- that would never be good. Also,
forget Infocom for a moment. Forget the Z-Machine. If these things never
existed, *then* what VM would you design? I'm sure it wouldn't look
anything like the Z-Machine, because this is 1996, and the rules are very
different! So don't just slap some more gunk onto the poor Z-Machine.
Hasn't it been through enough already?
OK, I'll say it: I *do* think we need a new VM and a new IF language. But
finding the right balance between performance and level of abstraction (or
extensibility, if you will) is difficult enough that I'm not eager to
rush out and start coding yet. (I also have less than zero free time...)
}The problem with this is that, based on my limited personal experience with
}these VM Schemes (VSCM and Scheme48) and with this newsgroup, I don't think
}we're quite at the point where the VM Schemes will support all the machines
}that IF fans want to play games on, performance-wise. And what a
}tremendous waste of time it would be if one were to spend months or years
}on a really nice Scheme-based IF system, and then (as with WorldClass),
}almost no one were to use it!
There's another popular VM out there-- Java's. I _KNOW_ it can handle
adventure games, because it's possible to write a Z-machine
interpreter in it. I previously suggested (in jest) an Inform->Java
bytecode compiler. Maybe it isn't such a bad idea. Or a TADS->Java
compiler, for that matter. This lets IF people off the hook on
performance-- if the current hype keeps up for any amount of time,
we'll have just-in-time Java compilers for all major platforms. That,
of course, is the main sticking point -- it's still not going to
support non-recent machines.
Besides, as you suspect, some of us are nostalgic for the Z-machine.
Hey, it's pretty rare to have something to be nostalgic over at my age, so
I might as well take advantage of it. :-)
>>My data point of supporting evidence is that I didn't find Legend to
>>be particularly slow.
>
>But lots of people did. And many people flamed me for it, refused to play
>the game, or dwelled only on technical issues like performance rather than
>the real *content* of the work. You'll notice that though Legend is highly
>regarded by quite a number of people (5 stars from Baf --- who-hoo!), it's
>almost never mentioned as a game to try (contrast with Jigsaw, Curses,
>Christminster, and Theatre, which are almost constantly touted here, and
>rightly so).
That is *really* a shame. I played and enjoyed Legend greatly - you have
my heartfelt compliments on it. Quite a work! I couldn't play it from
Windoze (no skin off my nose; I'm a DOS programmer), but what the heck?
Yes, I do have a 486, although a somewhat slow one (at the time... I have
a 486DX2/80 in it now! tee hee!), with "only" 4MB of RAM. I'll try
playing it on my mother-in-law's 386SX/20 and see how it goes (will it run
on my father-in-law's 286? we'll see!).
Slowness is in the eye of the beholder. Truly. I feel truly ancient
among the college kids that show up here, because I remember when hand
calculators became affordable for those of us who don't have 6-digit
incomes. I remember 4K personal computers; I remember later when 8080A
1MHz chips were considered screaming fast (and my systems programmer
husband had an unholy glee in revving it up to 2MHz by putting in a faster
clock). I regularly poke around on my father-in-law's 286 just to remind
myself that there was a time when I thought that was *fast*.
So Legend is a little slow on older machines. So what? This is supposed
to be flame-worthy? This is worth refusing to play an excellent game?
Let's put our emphasis where it belongs: on the quality of the game
itself.
bonni
coming soon - 1996 IF Competition entry
__ __
IC ! XC ! bonni mierzejewska "The Lone Quilter"
---+--- ! u6...@wvnvm.wvnet.edu
NI ! KA ! Kelly's Creek Homestead, Maidsville, WV
: >This is getting perilously close to the same old boring flame war...
: I'm not sure what you mean here: TADS v. Inform? I didn't mean it that
: way; in fact, I cited shortcomings of the TADS VM as well.
Probably Andrew means Z-machine sux vs. Z-machine rulez.
: Why couldn't such a VM become the next IF virtual machine? Then you're
: locked only into a Scheme-based worldview, which limits you to... well
: absolutely anything you'd like to do, frankly.
I have a fairly good idea of why Scheme is good for IF (besides the
availability of the VM implementations), but for the rest of the group,
how about a little lesson, Dave? I would try to dive in, but I would get
details wrong and leave plenty out.
: >My data point of supporting evidence is that I didn't find Legend to
: >be particularly slow.
: But lots of people did. And many people flamed me for it, refused to play
: the game, or dwelled only on technical issues like performance rather than
: the real *content* of the work. You'll notice that though Legend is highly
: regarded by quite a number of people (5 stars from Baf --- who-hoo!), it's
: almost never mentioned as a game to try (contrast with Jigsaw, Curses,
: Christminster, and Theatre, which are almost constantly touted here, and
: rightly so).
It's my favorite, and I encourage everyone to try it, but that's not
important. I don't believe it was the performance issues alone that
made Legend hard to accept. As someone pointed out recently, TRX.EXE had
a lot to do with it. I wouldn't be surprised if you lost 100 users to
that thing. When I first downloaded Legend, I had to run it under
Windows in order to get it to work. It ran, but at a speed so low it
wasn't worth my time.
I didn't return to Legend until months later, and played it on a Mac at
work. It took about half an hour to hook me, and I was wedded to the
game for the next week. I consider myself remarkably tolerant, though,
and figure that most users would not have given the game a second chance
after having it repeatedly crash the first time around.
Reminds me of the products and technologies that could be tremendously
useful but are doomed for the next fifty years because their introduction
was handled poorly.
I realize I'm making it sound like you flubbed Legend's intro, Dave, but
I don't mean to put the blame on you. I realize the bind you were in.
My point: Legend's obscurity can't possibly be related to the quality of
the game, unless my tastes are far outside the mainstream (always a
possibility).
Matthew
>I have a fairly good idea of why Scheme is good for IF (besides the
>availability of the VM implementations), but for the rest of the group,
>how about a little lesson, Dave?
Like all languages, lisp (and Scheme, a minimal lisp dialect) has
advantages and disadvantages. The primary advantage to lisp is that lisp
programs are themselves lisp data structures. This sounds abstract, but it
has important practical implications: lisp programs can change themselves
at compile time. Specifically, lisp has powerful macro facilities that
really allow you to make lisp into a new language -- the best language for
the program you're writing.
Using macros, you can implement a TADS-like object oriented facility on top
of lisp in about two pages of code. The object-oriented syntax then
becomes, essentially, as much a part of lisp as lisp's primitives.
Let's assume that someone writes an IF language on top of lisp, using
macros called "defobject", "make-instance", and so forth. Then someone
programming in this IF language can further build it up to meet his needs.
So, if he's making a game that has many, many kinds of (say) keys, he could
use macros to make a new macro "defkey" with simple syntax, but which
expands to some complicated "defobject" declaration.
Note that there is no run-time overhead to this; macros are expanded at
compile time. So you can make your syntax as high-level as you'd like,
with no performance overhead. (But more on performance in a moment.)
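To make that concrete, here's a toy sketch. (Caveat: "defobject" here
is my own five-line stand-in, built on closures and message-passing;
a real IF library's defobject would do far more. "defkey" shows the
technique itself.)
-------------------------------------------------------------------------------
;; A minimal "defobject": an object is a closure that maps a
;; property name to its value.
(define-syntax defobject
  (syntax-rules ()
    ((_ name (prop val) ...)
     (define name
       (lambda (msg)
         (case msg
           ((prop) val) ...
           (else (error "unknown property:" msg))))))))

;; "defkey" is a thin macro over defobject: it fills in the
;; boilerplate every key shares, so defining a key is one line.
(define-syntax defkey
  (syntax-rules ()
    ((_ name adjs desc)
     (defobject name
       (class 'key)
       (adjectives adjs)
       (description desc)))))

(defkey gold-key '("gold") "A small gold key.")

(gold-key 'adjectives)   ; => ("gold")
(gold-key 'class)        ; => key
-------------------------------------------------------------------------------
The expansion happens entirely at compile time, so by the time the game
runs, every defkey has already become a plain defobject.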
Another advantage to lisp is that, because the syntax is so simple, it is
very easy to interpret. The natural way to use lisp is from an
interpreter. Imagine an IF system where you could run your game and
then type any parenthesized lisp expression at the command prompt and
immediately see the results. E.g.:
-------------------------------------------------------------------------------
>north
You're in a room. Sure is nifty here. There's a gold key lying
on the ground.
>get gold key
I don't see any "gold key" here.
>(listcontents) [call a lisp function that tells us what's here]
gold-key
[list of other stuff in this location deleted]
>(gold-key 'adjectives) [ask the key for its adjective list]
("bronze" "tarnished") [oops!]
>(set-method gold-key 'adjectives '("gold")) [set adjectives list]
("gold") [lisp listener tells us it set it]
>get gold key
Taken.
-------------------------------------------------------------------------------
This assumes a particular method calling syntax, which is fairly arbitrary.
But the point is that a lisp interpreter is so simple that you could easily
add the ability to execute arbitrary code from your prompt.
This is great for debugging, but also for development. You can test cases
easily. For example: "What if I put the lead bar into the bucket?" Just
type in the code to do this at the prompt and see what happens.
Another nice thing is that every lisp construct can return a value. In C,
code blocks can't return values. In lisp they do. But it goes beyond that.
In lisp, you can write the equivalent of the following pseudo-C code:
x = if (i) 3 else 2;
C veterans will recognize this as
x = i ? 3 : 2;
But whereas C makes a specific shorthand for this, lisp's functional style
is totally general. You could write something like (again, pseudo-C):
x = {
int i;
/* does our array contain the magic number 23? */
for (i = 0; i < 100; i++)
if (a[i] == 23)
return 0;
/* no 23 in our array -- safe for now! */
return 1;
};
where "return" means "set the value of this block to the following value
and exit the block".
It's not easy to see how this capability can make programs simpler
syntactically (the same is true of C's ?: operator), but once you've used
it you can't believe you ever programmed without it.
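For the record, here is how the two pseudo-C fragments above come out
in actual Scheme, so you can see it's not hypothetical. (The named let
plays the role of the value-returning block.)
-------------------------------------------------------------------------------
(define i #t)
(define x (if i 3 2))              ; if is an expression: x is 3

;; The array-scanning block: does our vector contain the magic 23?
(define a (make-vector 100 0))
(vector-set! a 57 23)

(define safe
  (let loop ((i 0))
    (cond ((= i 100) 1)               ; no 23 in our array -- safe for now!
          ((= (vector-ref a i) 23) 0) ; found it
          (else (loop (+ i 1))))))    ; here safe is 0
-------------------------------------------------------------------------------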
Of course, there are drawbacks to lisp as well. For one, the simple
parenthesized syntax takes some getting used to, and makes certain kinds of
expressions, like very complex if-then-else constructions, harder to read
than their Algol-language equivalents. (In my opinion, at least. Many
lispers disagree. I think it's a matter of familiarity, and most
programmers these days are very familiar with Algol-style imperative
languages.)
Second, it's easy to write a very elegant, totally inefficient lisp
program. And I believe that functional languages like lisp are inherently
more difficult to write good compilers for, though such claims are
basically impossible to prove.
So why is lisp a good choice for IF? Because its advantages are just the
things we want for IF, and its drawbacks are not very relevant to IF. IF
doesn't need to be overly concerned with efficiency to be good (unlike 3D
video games, for example, where the speed of your polygon engine is
everything). And text adventures generally involve pretty simple
algorithms (at least as far as someone programming a game in TADS or Inform
is concerned --- writing such a system in the first place can be tricky),
so the initial opacity of the syntax to those brought up on Algol-like
languages should be only a minor barrier to writing a typical adventure
game.
And why is Scheme an especially good lisp dialect for IF? Because it
offers all the nice things I mentioned above, but is *tiny*. Scheme
development systems are small, and so are the executables they produce.
This is not true of many lisp dialects, like Common Lisp. CL is an
incredible development system, but only if you can tolerate 8 megabyte
executables. We don't want that for IF.