
Natural Language Interfaces


Jeff Nyman

Sep 7, 2006, 4:12:07 PM
I am honestly not trying to start any battles about this, but I am curious:

Is Inform 7 considered to be utilizing natural language *processing* or
natural language *programming*?

It seems to me that the natural language aspect is simply an interface,
particularly when you realize that the actual code that executes is in
Inform 6 format (and thus most definitely not natural language). In that
sense, it would seem that the "front-end" (the ni program) of Inform 7 is
really an elaborate natural language parser that processes those constructs
into code.

In a similar vein, I was reading about something called NaturalJava as a
natural language interface that would convert to Java code. This is not,
however, considered natural language programming from what I have read.

If it is the case that the Inform 7 natural language component is simply an
interface, there is no reason (save time and effort) that such interfaces
could not be built and tacked onto TADS 3 or Hugo or any other language, for
that matter. (Leaving aside, for now, the issue of whether that is a good or
bad idea.)

I am just curious if people feel what I said here is accurate about the
distinction.

- Jeff


Neil Cerutti

Sep 7, 2006, 4:30:40 PM
On 2006-09-07, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com>
wrote:

I guess I agree with the distinction you are making, and an
Inform 7 could be written that targeted TADS instead. But
I'm not sure what you're getting at. Even if Inform 7 compiled
directly to machine code, you could still theoretically write a
version of it that compiled to TADS instead.

--
Neil Cerutti

Andrew Plotkin

Sep 7, 2006, 5:20:37 PM
Here, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com> wrote:
> I am honestly not trying to start any battles about this, but I am curious:
>
> Is Inform 7 considered to be utilizing natural language *processing* or
> natural language *programming*?

What is the difference?

I7 is programming, so "natural language programming" certainly seems
to be an answer to your question. On the other hand, I7 has a kind of
world model expressed as relations, which is one of the things
natural-language processing is supposed to create. On the other other
hand, it's a more precise and deterministic model than
natural-language systems usually deal with.



> It seems to me that the natural language aspect is simply an interface,
> particularly when you realize that the actual code that executes is in
> Inform 6 format (and thus most definitely not natural language).

The actual code that executes is in Z-machine format. I'm not sure
whether that bolsters your point or not. :) Every program on any
computer is *some* kind of rigid, simplistic machine code at the
bottom level.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*
When Bush says "Stay the course," what he means is "I don't know what to
do next." He's been saying this for years now.

Jeff Nyman

Sep 7, 2006, 7:04:19 PM
"Andrew Plotkin" <erky...@eblong.com> wrote in message
news:edq2f5$g77$3...@reader2.panix.com...

> What is the difference?

Well ... I guess that is, to some extent, my question as well. I see a lot
of debate here about whether or not "natural language programming" is a good
way to go. But if that is not, in fact, what Inform 7 is really doing, then
that seems to be a side issue. (To me it then just becomes more of an
interface debate, almost like debating the merits of the Inform 7 IDE and
the TADS 3 Workbench and the ADRIFT interface, for example.)

I have routinely seen people state that the natural language aspects are
just one of the benefits of Inform 7, the other two apparently being a
rules-based structure and relations. I have also seen rules.t and relation.t
(both for TADS 3) and it seems that this core functionality can certainly be
replicated. So that leaves the natural language aspect standing on its own,
and the one which has apparently given rise to a lot of the debate.

I could envision someone writing a TADS 3 interface that let you type in
natural language and that got translated (parsed?) into the code that is
currently in relation.t and rules.t. I am not sure that this would be
natural language programming, however. It just seems to me that this would
be a clever interface that parses natural language into code.

As to why it matters: it probably does not. I just know I have seen people
talking about the issues with the natural language (some don't like it) and
the problematic issues for non-English users. Yet if the natural language of
Inform 7 is just an interface, then it should be possible to code directly
in the underlying "new Inform 6": meaning, using the rules and relations but
not the natural language. (That separation seems easier to me if you are
dealing with just a natural language *processor/parser*, rather than the
natural language *programming* being intrinsic. These debates came up with
the Metafor tool, from what I understand, as well.)

- Jeff


Adam Thornton

Sep 7, 2006, 8:20:50 PM
In article <edq2f5$g77$3...@reader2.panix.com>,

Andrew Plotkin <erky...@eblong.com> wrote:
> The actual code that executes is in Z-machine format. I'm not sure
> whether that bolsters your point or not. :) Every program on any
> computer is *some* kind of rigid, simplistic machine code at the
> bottom level.

Neuron #34501387244 cries! And by "cries," I mean "fires."

Adam

Jim Aikin

Sep 7, 2006, 8:32:44 PM

"Jeff Nyman" <jeffnyman_nospam@nospam_gmail.com> wrote

> I have routinely seen people state that the natural language aspects are
> just one of the benefits of Inform 7, the other two apparently being a
> rules-based structure and relations. I have also seen rules.t and
> relation.t (both for TADS 3) and it seems that this core functionality can
> certainly be replicated.

I have only a cursory (and rapidly eroding) acquaintance with I7 -- so I'm
curious. Could someone define "rules-based structure" and "relations" for me
in technical terms?

My suspicion is that there would be no particular technical obstacle to
doing both of these things in I6. But I could conceivably be wrong. Just to
be clear, my question is not, "Is it easier to do these things with the I7
code syntax?" My question is, what ARE they at a technical level, and can
that (whatever it is) be fully replicated using I6 routines and variables?

> I could envision someone writing a TADS 3 interface that let you type in
> natural language and that got translated (parsed?) into the code that is
> currently in relation.t and rules.t. I am not sure that this would be
> natural language programming, however. It just seems to me that this would
> be a clever interface that parses natural language into code.

Well, that's a bit like saying that C++ is just a clever interface that
parses a bunch of symbols into assembly language. True, but irrelevant.
What's of interest to the programmer is precisely the interface, it seems to
me.

--Jim Aikin


Jeff Nyman

Sep 7, 2006, 9:18:05 PM
"Jim Aikin" <rai...@musicwords.net> wrote in message
news:MS2Mg.23172$kO3....@newssvr12.news.prodigy.com...

> My suspicion is that there would be no particular technical obstacle to
> doing both of these things in I6. But I could conceivably be wrong.

That is sort of my implied question. If the natural language aspect of
Inform 7 is solely a processor to convert natural language to source code,
then these things *are* already done in Inform 6, since that is what the
ultimate source code is. Ultimately that is one of the things I am trying to
determine: can all the language features that are claimed to be vast
improvements in Inform 7 be coded directly without the natural language? If
so, then that (would seem) to indicate how intrinsic the natural language is
to the language itself.

> Well, that's a bit like saying that C++ is just a clever interface that
> parses a bunch of symbols into assembly language. True, but irrelevant.

I'm not sure it is entirely irrelevant, though. C++ goes directly from
source code to native code. Even Java goes directly from source code to
bytecode. Inform 7 goes from natural language "code" to Inform 6 source code
to z-code. I am trying to determine if the natural language is a necessary
component or just one that was tacked on as an interface. That, to me, has
always been the distinction between a natural language processor and true
natural language programming. That theoretical distinction, in this case,
means less to me than whether or not someone could utilize the constructs of
I7+I6 (I'm not even sure what to call it) without having to utilize the
natural language.

> What's of interest to the programmer is precisely the interface, it seems
> to me.

To a certain extent, I agree. That's why I am curious if the natural
language is *just* an interface, or if it is part interface and part
intrinsic to the language itself.

- Jeff


Jim Aikin

Sep 8, 2006, 12:09:58 AM

"Jeff Nyman" <jeffnyman_nospam@nospam_gmail.com> wrote in message
news:m6KdnaKzW-k6WZ3Y...@comcast.com...

>> What's of interest to the programmer is precisely the interface, it seems
>> to me.
>
> To a certain extent, I agree. That's why I am curious if the natural
> language is *just* an interface, or if it is part interface and part
> intrinsic to the language itself.

Yes, I think that might be an interesting question. To generalize a bit, are
there computer constructs that can ONLY be defined and coded using a
"natural language" programming language?

I'm not an expert, God knows, but I suspect that any programming construct
may, in theory, be realizable with any decently powerful programming
language. Of course, defining "decently powerful" is non-trivial.

For example, I'm pretty sure one COULD write object-oriented code in BASIC,
up to a point. It would be mind-numbingly inefficient, but one could perhaps
produce code that had essentially the same functional structure as OOP code
in C++. But there wouldn't be any classes, constructors, or public and
private functions; it would be up to the BASIC programmer to simulate all
that stuff.

If that's the case, then programming in C++ rather than BASIC is ultimately
about ease of use rather than about pure functionality.

To use an example that's closer to home, Inform doesn't have any
floating-point variable type. But I'm pretty sure there's a library
extension that implements floating-point. If there isn't, I'm sure a
computer science major could hack one together in about an hour.

So the question becomes, what type(s) of complex programming constructs are
made easier by the I7 interface? And conversely, what are its limitations?

The only one I'm certain of is that I7 doesn't have multiple inheritance.
Now, I happen to think multiple inheritance is neat, especially in IF, where
one might want (to borrow an example from the DM4) an object that's both a
bird of prey and a treasure.

Even without multiple inheritance, you could give such an object all of the
properties of both classes. What you can't do is test whether it's a member
of either one class or the other and get the desired answer. Instead of the
I7 equivalent of ...

if ((noun ofclass BirdOfPrey) && (noun ofclass Treasure))

... you have to run the test in some other manner. For instance, you could
give every object in the game a bitflag that contained your secret schematic
for the things that would otherwise be classes, and then do something like
this (pseudocode):

if ((noun has BirdOfPrey_bit) && (noun has Treasure_bit))

It's not impossible, it's just more work. And it may even be theoretically
provable that no programming language requires less work for the programmer
in all conceivable situations. So the desirability of a programming language
has to be evaluated against the question, "What type of program do I want to
write?" Graham's hope, I'm sure, is that I7 will be significantly easier to
use for people who want to write IF. That may prove to be the case. It may
also prove to be the case that the natural language interface (or rather,
_a_ natural language interface) offers advantages in other situations as
well.

Time will tell.
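The bitflag workaround sketched above translates directly into any language with bitwise operators. Here is a hypothetical Python rendering (the flag and object names are illustrative, not from any actual library):

```python
# Simulating multiple-inheritance membership tests with bitflags:
# each would-be class gets one bit, and an object sets every bit
# that applies to it.
BIRD_OF_PREY = 1 << 0
TREASURE = 1 << 1

class Thing:
    flags = 0  # no category bits by default

class MalteseFalcon(Thing):
    flags = BIRD_OF_PREY | TREASURE  # both categories at once

noun = MalteseFalcon()
# the equivalent of: if ((noun has BirdOfPrey_bit) && (noun has Treasure_bit))
is_treasure_bird = bool(noun.flags & BIRD_OF_PREY) and bool(noun.flags & TREASURE)
```

Not impossible, just bookkeeping the programmer maintains by hand instead of the class system.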

But I'm still hoping to read an answer to my question about rules and
relations.

--JA

Adam Thornton

Sep 8, 2006, 12:21:26 AM
In article <q26Mg.16631$1f6....@newssvr27.news.prodigy.net>,
"Jim Aikin" <rai...@musicwords.net> wrote:

> Yes, I think that might be an interesting question. To generalize a bit, are
> there computer constructs that can ONLY be defined and coded using a
> "natural language" programming language?

Any Turing-equivalent language is, well, equivalent.

However, if your target architecture doesn't have infinite memory,
practically not.

I suspect, however, that if you targeted Glulx, you could create an I6
compiler in I7.

Adam

Andrew Plotkin

Sep 8, 2006, 12:36:17 AM
Here, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com> wrote:
>
> > Well, that's a bit like saying that C++ is just a clever interface that
> > parses a bunch of symbols into assembly language. True, but irrelevant.
>
> I'm not sure it is entirely irrelevant, though. C++ goes directly from
> source code to native code.

How do you know? The earliest C++ compiler generated C intermediate
code. Your current C++ compiler could theoretically do the same. If it
did, would it change your opinion of the design of C++ as a language?

All computer languages worth their salt have equivalent power; we've
known that since the first electronic computers were built. Comparison
and judging of languages always comes from the standpoint of what they
make *simple*, not what they make possible.

In the case of I7, the obvious comparison is not to I6. The I7
compiler adds a lot of code structure that doesn't have a clean I6
API. That is, it would be awkward to invoke it as an I6 programmer, as
a simple I6 library. This is simply because I6 is poorly-suited to
APIs for complex data structures. So it's easier to sort it all out at
compile-time and generate the appropriate routines mechanically. If
that's something "tacked on as an interface", it's a *smart* interface
that does a lot of work for you.

(I imagine, without having looked, that TADS is dynamic enough a
language to offer relations and rule programming as a library. This is
not to minimize the importance of "compile-time" consolidation,
consistency-checking, and optimization of the rule model. But in
principle, this could happen at game startup time.)

Anyway: the comparison you want is with a more formal, mathematical
syntax for expressing I7's relations and rule/condition model. (A
mathematical syntax is certainly what I had in mind when I started
thinking about rule-based programming, a couple of years ago.) This
would indeed be "a different interface to I7 functionality" -- the
programmer would be expressing the same relations and rules, and the
compiler would be doing the same work with them.

The question is whether it's worthwhile for the real-world-like
relations of containment, classification, etc to be "naturally"
represented with our real-life language for those relations -- or
whether the ambiguity of natural language throws too many monkey
wrenches into the process to bother with it.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

If the Bush administration hasn't thrown you in military prison without trial,
it's for one reason: they don't feel like it. Not because you're patriotic.

na...@natecull.org

Sep 8, 2006, 12:40:54 AM

Adam Thornton wrote:
> Any Turing-equivalent language is, well, equivalent.
>
> However, if your target architecture doesn't have infinite memory,
> practically not.


I've never understood that 'the full Turing Machine spec requires
infinite storage' argument.

Wouldn't any algorithm requiring infinite storage also take infinite
time to traverse it (it's serial, not randomly-addressable, after all),
and therefore, never halt? And Turing Machine programs are all about
the halting.

Seems to me the only kind of TM you could build that actually used all
of that infinite tape would be by definition a broken one.

Andrew Plotkin

Sep 8, 2006, 12:46:09 AM
Here, na...@natecull.org wrote:
>
> Adam Thornton wrote:
> > Any Turing-equivalent language is, well, equivalent.
> >
> > However, if your target architecture doesn't have infinite memory,
> > practically not.
>
>
> I've never understood that 'the full Turing Machine spec requires
> infinite storage' argument.

The full Turing Machine spec requires *unbounded* storage. That is,
any given program (which halts) will only use a finite amount of
memory. But for any given amount of memory, there's a program which
needs one more bit.

I rather disagree with Adam, though. In *practical* terms, computer
languages only have a finite amount of memory available, and they are
equivalent anyway.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

na...@natecull.org

Sep 8, 2006, 12:59:08 AM

Andrew Plotkin wrote:
> The question is whether it's worthwhile for the real-world-like
> relations of containment, classification, etc to be "naturally"
> represented with our real-life language for those relations -- or
> whether the ambiguity of natural language throws too many monkey
> wrenches into the process to bother with it.

One of the syntactic ambiguities I'm wrestling with just at the moment,
for a toy game just trying to model a few streets near my house, is the
I7 concept of '<room> is <direction> of/from <room>' vs the more
low-level I6 'the <direction> exit from <room> goes to <room>'.

The difference being that, as far as I can tell, I7's directional
statements automatically define both the forward and reverse exit,
while sometimes that really is not what you want to say, and so you
have to manually remember to cancel every implied reverse exit with a
'<direction> from <room> is nowhere'.

(My example being a square with a ring road, where the exits going
around the square imply walking and turning to follow the curve of the
road, rather than a simple compass direction.)

I *think* for the most part I7's compiler is somehow guessing correctly
what I want to do, but I'm never quite sure, and I'd sure like it to
either tell me exactly what one-way exits it is generating from my
ambiguous direction statements, or give me a clear syntax for saying
'this is a one-way exit that does not imply a simple transitive,
reversible compass-direction relationship'.

I'm not sure how you'd remove that ambiguity in pure English, since we
really don't have a clean concept as far as I know for 'one-way exit'
in most of our directional language. It is generally assumed as part of
our common sense that if X is north of Y, then Y is by definition south
of X. But that's a piece of *our* common sense that is not actually
correct in the IF world-model, where rooms and exits are discrete
entities and compass directions are not global realities, but unique
and transient for each room. ( It would be correct in most MMORPG and
FPS worlds, where the map is much more static - but IF is a special
case in this as in so many other respects.)

na...@natecull.org

Sep 8, 2006, 1:10:08 AM
By way of example:

"Square"

Main Square is a room.

Square South is south of Main Square.

Square East is east of Main Square.

[So far, so good. Here it gets tricky.]

Square South is south of Square East.

[It's not, of course - it's actually south-west, as the crow flies. But
you start walking south and turn west half-way. What I want is a way to
say unambiguously 'it's actually south-west by the compass, but that's
not accessible by foot; to get to it you go south'. Is this where the
'of' vs 'from' difference becomes important? ]
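The one-way-exit situation can be stated without ambiguity in a conventional data structure. A toy Python model of the square above (this is an illustration, not I7's actual map representation):

```python
# Each room stores only its own outgoing exits, so nothing is implied
# in reverse: an exit from A to B says nothing about getting back.
exits = {
    "Main Square":  {"south": "Square South", "east": "Square East"},
    "Square East":  {"west": "Main Square", "south": "Square South"},
    "Square South": {"north": "Main Square"},  # deliberately no exit back east
}

def go(room, direction):
    """Return the destination room, or None if there is no such exit."""
    return exits.get(room, {}).get(direction)
```

Here going south from Square East works, but no reverse exit exists unless one is stated explicitly, which is exactly the asymmetry that the English-like phrasing makes hard to express.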

Adam Thornton

Sep 8, 2006, 1:13:29 AM
In article <1157690454.0...@i42g2000cwa.googlegroups.com>,
na...@natecull.org wrote:

Strictly infinite, no.

Arbitrarily large, yes. Where "arbitrarily large" may well be larger
than any finite amount of memory you choose to give it. Indeed, it
might, for instance, be larger than A(5,5) cells, where A is Ackermann's
function[0]; in which case it would come nowhere near close to fitting
into the universe.

Hence, algorithms can take an arbitrarily long time to complete,
assuming a finite amount of time per Turing machine move.

So, strictly speaking, not infinite, but "too big to compute given the
amount of energy/time/space available in the universe," sure.

Adam

[0] The Ackermann function is recursive, but not primitive recursive.
Here is its definition:

A(m,n) = { n+1                if m = 0
         { A(m-1,1)           if m > 0 and n = 0
         { A(m-1,A(m,n-1))    if m > 0 and n > 0

A(0,0) = 1
A(0,1) = 2
A(1,0) = 2
A(1,1) = A(0,A(1,0)) = 1 + (A(1,0)) = 1 + 2 = 3
A(2,0) = A(1,1) = 3
A(2,1) = A(1,A(2,0)) = A(1,3)
A(1,2) = A(0,A(1,1)) = A(0,3) = 4
A(1,3) = A(0,A(1,2)) = A(0,4) = 5 (= A(2,1))
.....

A(4,1) = 65533

I'll leave A(5,5) as an exercise for the reader.
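The three-case definition in the footnote translates directly into a recursive function. A Python sketch (small inputs only; the values and the recursion depth explode quickly):

```python
def A(m, n):
    """Ackermann's function, straight from the three-case definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Matches the hand-computed values above: A(1,3) = 5, so A(2,1) = 5.
```

Even A(4,1) = 65533 needs tens of thousands of nested calls; A(5,5) is far beyond any physical computation.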

Adam Thornton

Sep 8, 2006, 1:24:58 AM
In article <edqsih$dfa$3...@reader2.panix.com>,

Andrew Plotkin <erky...@eblong.com> wrote:
> I rather disagree with Adam, though. In *practical* terms, computer
> languages only have a finite amount of memory available, and they are
> equivalent anyway.

I can probably find you a problem that's eminently tractable in C, but
that requires more memory to implement in Brainf*ck than some
not-terribly-ancient computer I can find has. Particularly of the class where
the easiest solution is to write a C compiler in Brainf*ck. Which does
bring to mind another thing Turing machines are lacking: I/O. Sure, you
can always glue a memory-mapped I/O spec to a Turing machine, but I/O in
the real world often does come with limits on response time, and I
suspect that, say, generating a video display, if you did it with a
two-state Turing machine, would perhaps require the head of the machine
to move faster than the speed of light if you were going to represent,
say, a 640x480x256 display as a sequential range of Turing machine tape
and were going to require it to display arbitrary bitmaps at 60Hz, even
if you made your cell size, say, 1nm wide.

Wait, what did this have to do with IF?

Adam

Raksab

Sep 8, 2006, 4:49:11 AM
From what I understand, I7 is, in fact, a natural language *interface,*
sort of a user-friendly way to approach I6 code. It's more of a tool
for writing programs than a program unto itself (though I'm sure it
took quite a bit of computer code to create!)

When you type natural language source into I7 and hit the compile
button, it converts the sentences you typed into code, interpreting
your sentences as the same kinds of rules that are understood by I6.
Since the interface is a computer program, it needs to have things in
black and white. It isn't as good at processing "natural language" as
a human mind is, so you have to word your rules veeerrry specifically,
or it won't understand what you want. The interface is,
understandably, pretty intolerant of natural language synonyms humans
tend to think little of. (For example, if you want to put an item in
your game, you must not categorize it as an "item," "article," or
"object," and *especially* don't call it a "noun"! You have to say
that it is a "thing." If you call it an "object," I7 will spit your
code back with an error message.)

If your sentences are properly formatted and clearly understood by the
program, it compiles and runs... based on its I6 programming. (Then,
of course, the software nicely makes it all up in hexadecimal so the
computer can read it, starts the program, and displays the game on the
other side of the screen.)

I keep referring to I7 as an "interface," because that's what I think
of it as. The underlying IF parser is the I6 parser: the main
difference is the amazingly useful "skin," which some people have a
much easier time handling than straight-up computer code. (I'm one of
those people. Straight computer code tends to make me a little
intimidated. Even though NL coding is just as picky in its way, I find
it easier to work with.)

I think it's sort of like the difference between old Windows and
MS-DOS. The guts of the program are still similar (well, sort of), but
there's this interface in between. You can still open the underlying
program and use it to tinker with the complex internal workings (though
I'm not sure how to fiddle with the Inform library I7 comes with), but
if you're just looking to use the program and not directly alter any of
it, you can stick to the interface.


So, I suppose, TADS or other systems could conceivably have a big chunk
of programming added so they could convert "natural language" into
their understood rules. It'd be a huge undertaking, though, of course.

rpresser

Sep 8, 2006, 10:10:33 AM

Forget A(5,5). I'll pay a large amount of money for someone to write
out the value of A(4,3). No power towers, please; I'll allow scientific
notation, no more.

(Amusing coincidence: I tried to look up the Ackermann function on
Wikipedia, and was told:

Wikipedia has a problem
Sorry! This site is experiencing technical difficulties.
Try waiting a few minutes and reloading.
(Can't contact the database server: All servers busy)

Perhaps it was trying to calculate A(4,3) for me.)

Neil Cerutti

Sep 8, 2006, 10:29:16 AM

Abelson and Sussman must have thought it was pretty funny to
instruct readers of SICP to figure out what function A(0,n),
A(1,n), A(2,n), etc., computed. If one hadn't heard of the
Ackermann function at the time, as I hadn't, one wasted a whole
lot of pencil lead. ;)

--
Neil Cerutti
We're not afraid of challenges. It's like we always say: If you
want to go out in the rain, be prepared to get burned.
--Brazilian soccer player

Andrew Plotkin

Sep 8, 2006, 11:20:41 AM
Here, Jim Aikin <rai...@musicwords.net> wrote:
>
> So the question becomes, what type(s) of complex programming constructs are
> made easier by the I7 interface? And conversely, what are its limitations.
>
> The only one I'm certain of is that I7 doesn't have multiple inheritance.
> Now, I happen to think multiple inheritance is neat, especially in IF, where
> one might want (to borrow an example from the DM4) an object that's both a
> bird of prey and a treasure.
>
> Even without multiple inheritance, you could give such an object all of the
> properties of both classes. What you can't do is test whether it's a member
> of either one class or the other and get the desired answer.

You are describing a limitation but I think you're coming at it from
the wrong angle.

Given I7's set of concepts, the only reason *to* test whether X is a
member of class ("kind") Y is in when deciding what rules apply to X.
But this is merely one kind (no pun) of object description. A rule can
have any description as a conditional, and descriptions are not
limited to the strict tree model of single-inheritance OO.

When you're deciding how to structure your I7 world, the primary
question is not "what class is this" (as it would be in, say, Java)
but "how shall I describe this". If a category of things fall
naturally into a tree, then "kinds" are a useful tool for describing
them. If not, then you use either/or properties or some other I7 tool.
As you note (forgive me for covering ground that you understand, but I
want to clarify this):

> For instance, you could
> give every object in the game a bitflag that contained your secret schematic
> for the things that would otherwise be classes, and then do something like
> this (pseudocode):
>
> if ((noun has BirdOfPrey_bit) && (noun has Treasure_bit))
>
> It's not impossible, it's just more work.

But it *isn't* more work. Look at the two approaches:

[Kinds -- third line is not valid I7:]
A bird-of-prey is a kind of thing.
A treasure is a kind of thing.
*A treasure-bird is a kind of bird-of-prey and a kind of treasure.
The Maltese Falcon is a treasure-bird.

[Properties:]
A thing can be bird-of-prey.
A thing can be treasure.
Definition: a thing is a treasure-bird if it is bird-of-prey and it
is treasure.
The Maltese Falcon is a bird-of-prey treasure thing.

The second is slightly wordier, but only slightly, and it's not more
complicated in I7 terms. (It is slightly awkward, because I'm using
"bird-of-prey" as an adjective for parallel's sake.)

Also note that the "Definition" line would only be necessary if you
have rules that apply specifically to treasure-birds. In the naive
"multiple inheritance" case, you'd have rules for birds-of-prey and
rules for treasures, and they'd both apply to the Maltese Falcon --
you wouldn't need the description at all.

(Side note: it is also theoretically possible that the I7 compiler
could be extended to allow "The Maltese Falcon is a treasure-bird", in
the second approach. This would relieve the awkwardness of "is a
bird-of-prey treasure thing". The compiler can't in general create
objects based on adjectives, but some cases are logically
determinable.)
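The property-based approach amounts to testing adjectives rather than positions in a class tree. A Python analogy (illustrative only; the second object is invented for contrast):

```python
# Each thing carries a set of either/or properties ("adjectives").
# A description like "treasure-bird" is just a predicate over those
# properties, independent of any inheritance hierarchy.
things = {
    "Maltese Falcon": {"bird-of-prey", "treasure"},
    "plain sparrow": {"bird-of-prey"},
}

def is_treasure_bird(name):
    """True when the thing has both adjectives at once."""
    return {"bird-of-prey", "treasure"} <= things[name]
```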

Now, I originally said "you are describing a limitation", and then
went on to deny the limitation you described. :) What is the real
limitation? I think it is this: the I7 standard rules, having
inherited a lot of the structure of the I6 library, overuse I7 kinds.
You can't make a player-character who is a door. "Player-character"
and "door" *could* have been non-kind descriptions, but the rules form
them as kind descriptions, and changing that *is* a lot of work.

More generally: descriptions are not subject to the sort of
customization that action rules are. You can mess with what doors do,
but you can't mess with what a door is.

The I7-natural way to write a standard ruleset that can be messed
with in this way: base everything off of either/or properties. It's
legal to say "A thing can be doorlike. A thing is usually not
doorlike." and then, for a particular game, "The player is doorlike".

But that just happens to be the I7 mechanism which is sufficiently
flexible! I think the real answer is that every part of the system --
descriptions, library messages, function calls, the lot -- should be
rules, and every rule should be customizable. Then this whole
discussion would go away: you'd look at the standard definitions of
player-characters and doors, and write an *exception* to them! End of
problem.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

If the Bush administration hasn't shipped you to Syria for interrogation,

ChicagoDave

Sep 8, 2006, 11:56:57 AM
> Andrew Plotkin wrote:
> Anyway: the comparison you want is with a more formal, mathematical
> syntax for expressing I7's relations and rule/condition model. (A
> mathematical syntax is certainly what I had in mind when I started
> thinking about rule-based programming, a couple of years ago.) This
> would indeed be "a different interface to I7 functionality" -- the
> programmer would be expressing the same relations and rules, and the
> compiler would be doing the same work with them.
>
> The question is whether it's worthwhile for the real-world-like
> relations of containment, classification, etc to be "naturally"
> represented with our real-life language for those relations -- or
> whether the ambiguity of natural language throws too many monkey
> wrenches into the process to bother with it.

Which makes people wonder if relational databases are best served with
the ANSI-SQL standard or would some other syntax be more effective. I
have always "sensed" that IF could use a dose of database capability
behind it and this falls into what I7 does with relations. Not from a
storage perspective (which is what I had always considered), but from
a code generation perspective. Well maybe Andrew's point about complex
data structures is the same thing.

The I7 natural language provides the means to act on sets of data as
well as doing this against the state of the virtual machine. I'm sure
that this all could be presented in a lower level programmatic way and
certainly should be able to be described mathematically, but for me at
least, the natural language syntax, much like ANSI-SQL, gives me a
better feel for what I'm doing.

Of course I've worked with the occasional SQL expert who has a
mathematics background and will actually write out their set theory in
equations before they even tackle SQL.

I'm not sure it matters what the top level syntax and the bottom level
syntax is, does it? There are pre-processors that do optimization and
then the compiler does optimizations, and then the run-time environment
can do even more optimizations. What's the difference?

David C.

Jeff Nyman

unread,
Sep 8, 2006, 12:56:12 PM
to
"ChicagoDave" <david.c...@gmail.com> wrote in message
news:1157731017.7...@h48g2000cwc.googlegroups.com...

> I'm not sure it matters what the top level syntax and the bottom level
> syntax is, does it? There are pre-processors that do optimization and
> then the compiler does optimizations, and then run time environment
> can do even more optimizations. What's the difference?

On the "what's the difference?" question, I am not sure. That is partly what
I wanted to see, in terms of how people felt about the natural language
component and, more particularly, how it actually was implemented in terms
of Inform 7. It seemed like the top-level vs. bottom-level syntax did make
quite a difference for a while since the natural language (or "pseudo-NL", as
I heard it referred to) was such a topic of debate. And rightfully so, I
would think. If Microsoft released, say, .NET 3.0 with an entirely natural
language interface, I would imagine it would make a difference to a lot of
people. (Or if Sun did something similar with Java.) Now imagine if they
said that the natural language interface was the *only* way to write your
.NET or Java code going forward.

(From an optimization standpoint, as you are discussing, perhaps there is
ultimately no difference.)

On this newsgroup, I have heard some who say they would prefer to stick with
Inform 6 (to get away from the natural language) but who hope Inform 6 will
be updated with the features of Inform 7. So clearly to those people, it
makes a difference. (Yet, if Inform 7 is really just parsing down to Inform
6, that would suggest you could write the same constructs in Inform 6
directly, without using Inform 7. That is part of what I was trying to
determine.)

Also, to those who say they cannot use Inform 7 for non-English games, it
seems the top-level syntax does make a difference.

I should probably add that I find Inform 7 as a whole a fascinating project
and I am very glad it is here as it is constructed so that issues like this
can be discussed.

- Jeff


Andrew Plotkin

unread,
Sep 8, 2006, 1:13:17 PM
to
Here, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com> wrote:
>
> On this newsgroup, I have heard some who say they would prefer to stick with
> Inform 6 (to get away from the natural language) but who hope Inform 6 will
> be updated with the features of Inform 7. So clearly to those people, it
> makes as difference. (Yet, if Inform 7 is really just parsing down to Inform
> 6, that would suggest you could write the same constructs in Inform 6
> directly, without using Inform 7. That is part of what I was trying to
> determine.)

You cannot. This is a confusion between the natural-language features
of I7 and the rule/relation features.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

Making a saint out of Reagan is sad. Making an idol out of Nixon ("If the
President does it then it's legal") is contemptible.

Adam Thornton

unread,
Sep 8, 2006, 3:02:00 PM
to
In article <1157724632.9...@i3g2000cwc.googlegroups.com>,

"rpresser" <rpre...@gmail.com> wrote:
> Forget A(5,5). I'll pay a large amount of money for someone to write
> out the value of A(4,3). No power towers, please;

As Frank Zappa put it, "I can take about an hour on the Tower of
Power..."

Adam

ChicagoDave

unread,
Sep 8, 2006, 4:49:22 PM
to
> Andrew Plotkin wrote:
> Here, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com> wrote:
> >
> > On this newsgroup, I have heard some who say they would prefer to stick with
> > Inform 6 (to get away from the natural language) but who hope Inform 6 will
> > be updated with the features of Inform 7. So clearly to those people, it
> > makes as difference. (Yet, if Inform 7 is really just parsing down to Inform
> > 6, that would suggest you could write the same constructs in Inform 6
> > directly, without using Inform 7. That is part of what I was trying to
> > determine.)
>
> You cannot. This is a confusion between the natural-language features
> of I7 and the rule/relation features.
>

Well, it would be possible to augment the Inform 6 syntax to have new
features that implemented the relational (and other) constructs in an
I6 manner....

It would require the I7 compiler logic be copied/moved down to the I6
compiler, but would essentially have the same output as we have now.

I think it would be hard to develop the NL relations syntax in an I6
format, but probably not impossible.

David C.

steve....@gmail.com

unread,
Sep 8, 2006, 7:24:27 PM
to

Andrew Plotkin wrote:
> Here, Jeff Nyman <jeffnyman_nospam@nospam_gmail.com> wrote:
> > I am honestly not trying to start any battles about this, but I am curious:
> >
> > Is Inform 7 considered to be utilizing natural language *processing* or
> > natural language *programming*?
>
> What is the difference?

It's a very clear and very meaningful difference, and essential for
appreciating what I7 (as an experiment) can teach us about system
design.

NLParsing takes as its object true natural language (that is, ordinary
language originally designed for human consumption), and tries to make
some (imperfect) sense of it; where it fails it's the fault of the
program, and the concept of failure is quite different than with a
programming language.

With NLProgramming, on the other hand, the language is formal-symbolic
coded instruction, addressed to a machine, which the machine by
definition understands perfectly; any failure of communication is the
fault of the human user.

I think you're well enough aware of the important issues in the
discussion, so perhaps by clarifying the terms I'm not answering your
real question. Perhaps you can develop your question?

steve....@gmail.com

unread,
Sep 8, 2006, 8:14:24 PM
to
Jeff Nyman wrote:

> Is Inform 7 considered to be utilizing natural language *processing* or
> natural language *programming*?

It easily qualifies as NLProgramming, but it is only NLProcessing (or
NLParsing, same thing) if you have a very particular (and I think very
strange) concept of what is "natural language."

For example, the grammar correction mechanism of your word-processor
uses NLParsing, as does the machine translation provided by
babelfish.com. Think also of data mining (e.g., the tools Google et
al. use to analyze your email, so as to better target advertisements,
and whatever other nefarious stuff they're up to). In these cases, the
computer is not the addressee.

When the computer is the addressee, it's normally called NLProgramming.
AppleScript and I7 for example. The computer is the addressee, and the
content, the meaning, is totally codified instruction.

One immediate difference: in the first case, any failure is a failure
on the part of the program; in the latter case, any failure is a
failure of the user, for example, to get the syntax right.

> It seems to me that the natural language aspect is simply an interface,
> particularly when you realize that the actual code that executes is in
> Inform 6 format (and thus most definitely not natural language). In that
> sense, it would seem that the "front-end" or the ni program of Inform 7, is
> really an elaborate natural language parser that processes those constructs
> into code.

You're right. But Java (or I6 or TADS) is also simply an interface, and
indeed we say that they are "parsed" also. Really any modern
programming language (natural or normal) is a symbolism for describing
some desired behavior, parsed by a lower-level machine.

> In a similar vein, I was reading about something called NaturalJava as a
> natural language interface that would convert to Java code. This is not,
> however, considered natural language programming from what I have read.

It would be NLProgramming if it is language addressed to the machine as
instruction (and if you'd say that it looks English-readable).

> If it is the case that the Inform 7 natural language component is simply an
> interface, there is no reason (save time and effort) that such interfaces
> could not be built and tacked onto TADS 3 or Hugo or any other language, for
> that matter. (Leaving aside, for now, the issue of whether that is a good or
> bad idea.)

That's correct.

steve....@gmail.com

unread,
Sep 8, 2006, 8:32:47 PM
to
Andrew Plotkin wrote:
> I imagine [...] TADS is dynamic enough a

> language to offer relations and rule programming as a library.

Well as you say, it's not what a language makes *possible* but what a
language makes *easy* (or, at least, substantially easier). That said,
yes, TADS 3 makes this kind of thing very easy. So easy, in fact, that
I continually forget how hard this stuff might be for users of other
systems, and thus a great deal of the benefit of I7.

> This is
> not to minimize the importance of "compile-time" consolidation,
> consistency-checking, and optimization of the rule model. But in
> principle, this could happen at game startup time.

Importantly, TADS 3 provides a pretty good mechanism for compile-time
calculation. Basically anything you can perform at game startup you
can, if you like, perform at compile time.

But to actually produce what you're talking about -- I'd need to take a
class or two in graph analysis.

na...@natecull.org

unread,
Sep 10, 2006, 5:26:02 AM
to

ChicagoDave wrote:
> I think it would be hard to develop the NL relations syntax in an I6
> format, but probably not impossible.

A large part of the problem would probably depend ultimately on what VM
such an updated I6 compiler was targeting. If we were still requiring
all I6 syntax to be compilable to Zcode, then we're probably stuck with
the limitation of a maximum of 5 (I think) parameters to any given
function. That limit plus, I believe, some kind of weird funkery with
Zmachine stack frames that I don't entirely understand, plus a maximum
of 64K dynamic memory space total, makes it extraordinarily difficult
to implement any kind of arbitrarily-complex recursive data type in I6,
or any serious heap storage allocator.

Some of the parameters in, e.g., I7's object-specification phrases ('now
every green cow owned by the first visible red angry dog is bashful')
are sufficiently complicated that they really need
dynamically-allocated heap storage to represent at runtime. I7 gets
around this by not representing them at runtime and instead only
allowing these kind of phrases in special contexts, and then compiling
them to sort of specialised objectloops.
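[The compilation strategy Nate describes can be sketched in a few lines of Python (a simplified object spec, invented names; this is not I7 or I6 output). Rather than building a runtime value for the set of matching objects, the "compiler" emits a specialised loop whose conditions exist only in the code:]

```python
# A toy world: each object is a dict of properties.
world = [
    {"kind": "cow", "colour": "green", "owner": "farmer", "bashful": False},
    {"kind": "cow", "colour": "red",   "owner": "farmer", "bashful": False},
    {"kind": "dog", "colour": "green", "owner": "farmer", "bashful": False},
]

# "now every green cow owned by the farmer is bashful" compiles to
# something like this objectloop -- the object specification is never
# represented as a runtime value, only as the tests in the if:
for obj in world:
    if obj["kind"] == "cow" and obj["colour"] == "green" \
            and obj["owner"] == "farmer":
        obj["bashful"] = True

print([o["bashful"] for o in world])   # [True, False, False]
```

[Which is exactly why such phrases are restricted to special contexts: there is no first-class value to pass around or store.]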

It would be very very very very nice if one could treat these kind of
entities as first-class runtime addressable objects in I7, but I think
to do so would require abandoning the Z-machine, and the community's
not quite ready to do that yet. For both sentimental and practical
reasons.

It would be a fun engineering challenge, though. Given upper RAM which
can only contain strings and compiled functions, a limit of 5
parameters to any function, and 64K of lower RAM scratch space which
must also contain every gameworld object (every Thing kind) in the
entire game - implement a universal data representation API!

(I'd start with a global array implementing a tiny heap, a function for
every <objectspec> to load it onto the heap... wait, that won't cope
with variable adjectives.... ennh... will to live fading rapidly...)

na...@natecull.org

unread,
Sep 10, 2006, 5:57:38 AM
to

ChicagoDave wrote:

> > Andrew Plotkin wrote:
> > The question is whether it's worthwhile for the real-world-like
> > relations of containment, classification, etc to be "naturally"
> > represented with our real-life language for those relations -- or
> > whether the ambiguity of natural language throws too many monkey
> > wrenches into the process to bother with it.
>
> Which makes people wonder if relational databases are best served with
> the ANSI-SQL standard or would some other syntax be more effective. I
> have always "sensed" that IF could use a dose of database capability
> behind it and this falls into what I7 does with relations.

This is where I think the computing world lost a great deal when logic
programming died stillborn at the end of the 80s and object-oriented
programming became king. I think SQL is only a tiny glimpse of what
Codd's relational vision was all about, and logic programming was a
somewhat larger expression of the same idea. Instead, the idea of
relations - of dealing with collections instead of individual data
entities, and doing so in a way that erased the line between explicit
and inferred/calculated data - sort of got half-built, then stayed in a
database ghetto, unfriendly for general programming, while objects,
powerful as they are, remained procedural rather than declarative, and
OO analysis and design ended up recreating the old CODASYL navigational
database model with slightly improved data typing. To the point that we
now build these huge enterprise OO systems that take hundreds of lines
of Java or VB.NET code to replicate the functionality of what three
lines of SQL could do, and only as a last resort and with much distaste
and wiping of hands do they touch the icky *database, eww!* just to
persist the domain model - element by element - and quickly run away
back to object-land. And you still can't write a custom query, and you
sure can't reuse the domain model objects in any other application, and
no way in hell can you ever evolve your domain model, say add a field
or two, once you've carved it in OO stonework and sealed the API.

Or at least that's how it seems to me. I guess there's a huge semantic
gap between the OO mantras of encapsulation and separation of concerns,
and the global intertwining of state that logic programming and
relational modelling lead to, and we don't necessarily want to go back
completely to the pre-OO days, but... darnit, we seem to keep taking
two steps forward and three steps back each time we switch computing
paradigms.

I await the discovery of megafroodish intersprongly logiplexing
sometime around 2025.

na...@natecull.org

unread,
Sep 10, 2006, 6:21:34 AM
to

n...@natecull.org wrote:
> I await the discovery of megafroodish intersprongly logiplexing
> sometime around 2025.

I also think that REST is a third partial angle on the same thing that
Prolog and SQL are looking at. And probably bits of pure functional
programming as well. Something about a) having a small number of
standard predictable verbs implemented everywhere, and pushing the bulk
of the data/program into the space of relationships between nouns
(rather than the OO approach of making verbs be fluid and situational
to each noun), b) always, always, ALWAYS, in a low-level,
non-overridable manner, providing a safe 'read' verb which is
guaranteed never to change the state of a system, and c) being able to
not care whether an entity is a single data element, a collection, or
some kind of 'virtual' updatable set of partial results - transparency
between data, functions/methods, and containers.

Oh, for what it's worth, let's throw in APL (though I've never used it,
but again it seems to have that idea of sets rather than individuals at
its heart) and virtualisable operating systems / hypervisors (since so
many problematic elements of managing and running a server just
magically vanish once you can reduce that server's entire state vector
to a simple file and apply backup/restore techniques to it - again,
it's that thing about treating functions as data, I think. What's an OS
but a very large function/closure or object with some state? But most
OSes and object systems don't allow backtracking of calculations at
arbitrary points, say, after deleting that file last Thursday, or after
applying a service pack that mysteriously went bad. But virtualise the
OS and foom, you can manually backtrack anything. Very very useful and
something the OS kernel/language VM base Should Just Do(tm) already.
But none of them do. Why not? And what would a language look like that
had 'save state and backtrack safely' as a fundamental operation?)

Andrew Plotkin

unread,
Sep 10, 2006, 10:27:18 AM
to
Here, na...@natecull.org wrote:
> But virtualise the
> OS and foom, you can manually backtrack anything. Very very useful and
> something the OS kernel/language VM base Should Just Do(tm) already.
> But none of them do. Why not? And what would a language look like that
> had 'save state and backtrack safely' as a fundamental operation?)

@save_undo / @restore_undo?

Not quite flexible enough (you can only go backwards, only one step)
but the notion could be extended.

A few weeks ago, I was very seriously thinking of leveraging VM-undo
(or an extended VM-undo) as a solution to your NPC-planning problem.
Does the NPC go through the door? @save_undo, trigger the action, see
if it succeeded; if not, @restore_undo and test a different route!

I think you'd need to custom-design a programming environment for
efficient checkpoint-and-revert. The Z/Glulx model, where the
interpreter scans all of memory for changes, is never going to be fast
enough.

(Does T3 have a smart undo system? By "smart", I mean that the speed
and storage cost of a checkpoint operation are proportional to the
number of state changes since the last one. Change one variable,
super-fast checkpoint.)

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

Bush's biggest lie is his claim that it's okay to disagree with him. As soon as
you *actually* disagree with him, he sadly explains that you're undermining
America, that you're giving comfort to the enemy. That you need to be silent.

Andrew Hunter

unread,
Sep 10, 2006, 11:04:15 AM9/10/06
to
On 2006-09-10 15:27:18 +0100, Andrew Plotkin <erky...@eblong.com> said:

> Here, na...@natecull.org wrote:
>> But virtualise the
>> OS and foom, you can manually backtrack anything. Very very useful and
>> something the OS kernel/language VM base Should Just Do(tm) already.
>> But none of them do. Why not? And what would a language look like that
>> had 'save state and backtrack safely' as a fundamental operation?)
>
> @save_undo / @restore_undo?
>
> Not quite flexible enough (you can only go backwards, only one step)
> but the notion could be extended.

On the Z-Machine, at least, multiple levels of undo have always been a
possibility: the one-step limit is a restriction enforced by the Inform
library. frotz actually supports around 500 levels of undo (Zoom
only supports 5: I've never bothered to extend this owing to Inform's
restriction).

> A few weeks ago, I was very seriously thinking of leveraging VM-undo
> (or an extended VM-undo) as a solution to your NPC-planning problem.
> Does the NPC go through the door? @save_undo, trigger the action, see
> if it succeeded; if not, @restore_undo and test a different route!
>
> I think you'd need to custom-design a programming environment for
> efficient checkpoint-and-revert. The Z/Glulx model, where the
> interpreter scans all of memory for changes, is never going to be fast
> enough.

The Z-Machine's system may be more efficient than you think: a
@save_undo only really needs to store about 70k of data: not amazingly
cheap, but not incredibly expensive either.

> (Does T3 have a smart undo system? By "smart", I mean that the speed
> and storage cost of a checkpoint operation are proportional to the
> number of state changes since the last one. Change one variable,
> super-fast checkpoint.)

You're going to have a problem designing a 'smart' undo system, in that
you'll need to add tracking to all memory and stack operations, which
is not a huge overhead for an individual instruction, but which is
going to slow down the interpreter quite a lot over the course of its
execution. This is only worth it if undo becomes such a common
operation that it occurs, on average, before around 30-60k worth of
stack and memory operations, possibly less depending on the efficiency
of your logging algorithm and writable memory size.

The Inform library does actually modify main memory quite a lot via the
various object manipulation commands, but the thing that's going to be
really noticeable is stack operations: function calls and intermediate
results go there, and you could easily reach the point where the
overall degradation of performance far exceeds the gains you get with
having a fast @save_undo.

(Then there's the question of how fast @restore_undo would be able to
run through the logs, which would further reduce the benefits)

Andrew.

Andrew Plotkin

unread,
Sep 10, 2006, 11:21:11 AM9/10/06
to
Here, Andrew Hunter <and...@logicalshift.demon.co.uk> wrote:
> On 2006-09-10 15:27:18 +0100, Andrew Plotkin <erky...@eblong.com> said:
>
> > Here, na...@natecull.org wrote:
> >> But virtualise the
> >> OS and foom, you can manually backtrack anything. Very very useful and
> >> something the OS kernel/language VM base Should Just Do(tm) already.
> >> But none of them do. Why not? And what would a language look like that
> >> had 'save state and backtrack safely' as a fundamental operation?)
> >
> > @save_undo / @restore_undo?
> >
> > Not quite flexible enough (you can only go backwards, only one step)
> > but the notion could be extended.
>
> On the Z-Machine, at least, multiple level undo has always been a
> possibility: one step only is restriction enforced by the Inform
> library only. frotz actually supports around 500 levels of undo (Zoom
> only supports 5: I've never bothered to extend this owing to Inform's
> restriction).

Oh, that's not what I meant. To really use this as a programming tool,
you need to be able to go back multiple steps *in one jump*. Also keep
a tree of checkpoints, rather than a simple chain.
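[A minimal sketch of that shape (Python, invented names; snapshots via `copy.deepcopy` rather than a real VM save): each checkpoint records its parent, and `jump` reverts to any saved node in a single step.]

```python
import copy

class CheckpointTree:
    def __init__(self, state):
        self.state = state
        self.saved = {}          # label -> (snapshot, parent label)
        self.current = None
    def checkpoint(self, label):
        self.saved[label] = (copy.deepcopy(self.state), self.current)
        self.current = label
    def jump(self, label):
        """Revert to any saved node in one jump, not step-by-step."""
        snapshot, parent = self.saved[label]
        self.state = copy.deepcopy(snapshot)
        self.current = label

tree = CheckpointTree({"turn": 0})
tree.checkpoint("start")
tree.state["turn"] = 5
tree.checkpoint("branch-a")
tree.jump("start")               # back several steps in one jump
tree.state["turn"] = 1
tree.checkpoint("branch-b")      # a sibling branch off "start"
print(tree.state["turn"])        # 1
```

[With full snapshots this is trivially correct but expensive, which is why the thread keeps circling back to change-log schemes.]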



> > A few weeks ago, I was very seriously thinking of leveraging VM-undo
> > (or an extended VM-undo) as a solution to your NPC-planning problem.
> > Does the NPC go through the door? @save_undo, trigger the action, see
> > if it succeeded; if not, @restore_undo and test a different route!
> >
> > I think you'd need to custom-design a programming environment for
> > efficient checkpoint-and-revert. The Z/Glulx model, where the
> > interpreter scans all of memory for changes, is never going to be fast
> > enough.
>
> The Z-Machine's system may be more efficient than you think: a
> @save_undo only really needs to store about 70k of data: not amazingly
> cheap, but not incredibly expensive either.

I am sadly familiar with it. And a programming system that has to save
70K of data as a calculational step has issues. But, since I'm
fantasizing anyway, I don't see this idea as limited to the Z-machine.
So 70K is not a limit I'm planning on.



> > (Does T3 have a smart undo system? By "smart", I mean that the speed
> > and storage cost of a checkpoint operation are proportional to the
> > number of state changes since the last one. Change one variable,
> > super-fast checkpoint.)
>
> You're going to have a problem designing a 'smart' undo system, in that
> you'll need to add tracking to all memory and stack operations, which
> is not a huge overhead for an individual instruction, but which is
> going to slow down the interpreter quite a lot over the course of it's
> execution.

This is why I brought up T3 -- I think it *does* that already. (Or
rather, that's how I remember T2 working. And I assume the same idea
was used, in a cleaned-up way, for T3.)

> This is only worth it if undo becomes such a common
> operation that it occurs, on average, before around 30-60k worth of
> stack and memory operations, possibly less depending on the efficiency
> of your logging algorithm and writable memory size.

That does, off the cuff, sound reasonable. Strangely. I'm talking
about doing this as a bedrock element of action planning; and I
picture an action as thumping a dozen object properties, typically.

> The Inform library does actually modify main memory quite a lot via the
> various object manipulation commands, but the thing that's going to be
> really noticable is stack operations

I remember MJR posting about his division of T3 state into transient
and persistent. The stack was transient. This struck me as insane, but
it certainly solves the problem you're talking about. :)

(Although, of course, that model is a lot more suited to IF
once-per-player-turn undo than to my speculative action-planning undo.
You really do want local variables and the stack for that.)

Krister Fundin

unread,
Sep 10, 2006, 12:22:13 PM
to

"Andrew Plotkin" <erky...@eblong.com> skrev i meddelandet
news:ee17c6$p0p$2...@reader2.panix.com...

> A few weeks ago, I was very seriously thinking of leveraging VM-undo
> (or an extended VM-undo) as a solution to your NPC-planning problem.
> Does the NPC go through the door? @save_undo, trigger the action, see
> if it succeeded; if not, @restore_undo and test a different route!
>

As it happens, Steve Breslin and I were recently discussing this very
idea. I wrote a few lines of TADS 3 code as an experiment. It's just a
function that takes an action and a callback, sets up a savepoint,
executes the action silently, returns the result of the callback and
then triggers an undo. It seemed to me at first like a very dangerous
thing to do, but having tried it out, I think it should be safe as
long as the caller knows that the action won't have any non-undoable
side-effects.

The problem isn't really the technical side, though, but how much
information you want the planner to have, since this sort of undo test
would make the NPC pretty much omniscient.

> (Does T3 have a smart undo system? By "smart", I mean that the speed
> and storage cost of a checkpoint operation are proportional to the
> number of state changes since the last one. Change one variable,
> super-fast checkpoint.)

It's sort of like that, I think. Each change to an object is stored in
an undo record, so fewer changes should take less time to undo. I
believe that the storage cost is pretty much constant, since all undo
records are kept in a kind of fixed-size cyclic array, and a savepoint
is just a special marker in this array.

-- Krister Fundin

Mike Roberts

unread,
Sep 10, 2006, 3:32:26 PM
to
"Andrew Plotkin" <erky...@eblong.com> wrote:
> (Does T3 have a smart undo system? By "smart", I mean that
> the speed and storage cost of a checkpoint operation are
> proportional to the number of state changes since the last one.
> Change one variable, super-fast checkpoint.)

The T3 undo implementation is basically the same as tads 2's. Undo is
stored as a change log, and a checkpoint is just a special log entry that
serves as a marker. So creating a checkpoint is a constant-speed
operation - it's simply a marker insertion. Applying undo is proportional
to the number of operations since the last checkpoint, since you play back
the change log in reverse order until you hit a checkpoint marker. So
basically all the overhead of undo is distributed uniformly through
execution - although there's no undo saved for changes to objects that the
program designates as 'transient', and since the stack is always a transient
object, there's no undo overhead for things like making a function call or
writing to a local variable.

There's some execution-time cost, obviously, but in practice it seems
acceptable - probably because property updates are a small percentage of
overall ops in a typical program. It's a relatively space-efficient
approach; in the typical case you touch only a small fraction of memory
between checkpoints, so the memory required for a checkpoint is small. In
the worst case, it tops out at a constant factor away from the approach
where you save all of writable memory at each checkpoint, since it's only
necessary to save the first update to a given obj.prop since the last
checkpoint; subsequent writes to a given obj.prop are irrelevant for obvious
reasons.
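[The scheme Mike describes can be sketched in a few lines of Python (invented names; this is a concept sketch, not the T3 internals): a checkpoint is just a marker appended to the log, setting a property records its old value only on the first write per obj.prop since the last marker, and undo plays the log back in reverse.]

```python
log = []                  # entries: ("mark",) or ("set", obj, prop, old)
seen = set()              # (id(obj), prop) already logged since last mark

def savepoint():
    log.append(("mark",))         # constant-time checkpoint
    seen.clear()

def set_prop(obj, prop, value):
    key = (id(obj), prop)
    if key not in seen:           # only the *first* write per obj.prop
        seen.add(key)             # since the last mark needs logging
        log.append(("set", obj, prop, obj.get(prop)))
    obj[prop] = value

def undo():
    while log:
        entry = log.pop()
        if entry[0] == "mark":
            break
        _, obj, prop, old = entry
        obj[prop] = old           # play the change log back in reverse
    seen.clear()

lamp = {"lit": False, "fuel": 10}
savepoint()
set_prop(lamp, "lit", True)
set_prop(lamp, "fuel", 9)
set_prop(lamp, "fuel", 8)         # second write to fuel: not logged again
undo()
print(lamp)                       # {'lit': False, 'fuel': 10}
```

[The `seen` set is the "subsequent writes are irrelevant" optimization; the cost of a checkpoint itself is a single marker insertion.]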

I think the main cost of the approach is the added complexity in the VM, and
the correspondingly greater opportunity for bugs. Saving all non-transient
writable memory between checkpoints is a heck of a lot simpler and less
error-prone.

--Mike
mjr underscore at hotmail dot com


Hook

unread,
Sep 10, 2006, 9:22:14 PM
to
On 10 Sep 2006 02:57:38 -0700, na...@natecull.org wrote:
>This is where I think the computing world lost a great deal when logic
>programming died stillborn at the end of the 80s and object-oriented
>programming became king. I think SQL is only a tiny glimpse of what
>Codd's relational vision was all about, and logic programming was a
>somewhat larger expression of the same idea. Instead, the idea of
>relations - of dealing with collections instead of individual data
>entities, and doing so in a way that erased the line between explicit
>and inferred/calculated data - sort of got half-built, then stayed in a
>database ghetto, unfriendly for general programming, while objects,
>powerful as they are, remained procedural rather than declarative, and
>OO analysis and design ended up recreating the old CODASYL navigational
>database model with slightly improved data typing. To the point that we
>now build these huge enterprise OO systems that take hundreds of lines
>of Java or VB.NET code to replicate the functionality of what three
>lines of SQL could do, and only as a last resort and with much distaste
>and wiping of hands do they touch the icky *database, eww!* just to
>persist the domain model - element by element - and quickly run away
>back to object-land. And you still can't write a custom query, and you
>sure can't reuse the domain model objects in any other application, and
>no way in hell can you ever evolve your domain model, say add a field
>or two, once you've carved it in OO stonework and sealed the API.

Having glimpsed logic programming a considerable time ago, and now
making a living out of design and occasional code cutting in the realm
of "sort of OOP" systems, I felt that the main stumbling block was
complexity. By that I mean procedural and OOP systems are simpler to
write, although good OOP programmers still take time to develop. If
the development effort had gone into IDEs and similar tools oriented
towards logic programming then we'd have flexible and sophisticated
development harnesses available now, of course.

The problem with databases is pretty fundamental and largely one of
perception. People around me who are otherwise good developers insist
in claiming that the database of choice (DB2 or MySQL here) is object
oriented and stores objects largely because they have fallen in love
with the term "object".

The whole IT industry has a history of falling in love with
techniques, each of which is better than the previous, each of which
will clear the backlog of work that most coding shops have. It's an
almost religious fervour, and is quite cute to watch after the first
few instances have worn off.

Hook

na...@natecull.org

unread,
Sep 10, 2006, 9:59:54 PM
to

Andrew Plotkin wrote:

> Here, na...@natecull.org wrote:
> > But none of them do. Why not? And what would a language look like that
> > had 'save state and backtrack safely' as a fundamental operation?)
>
> @save_undo / @restore_undo?
>

Possibly. Certainly undo and save/restore are both examples of a
similar kind of thing, and most game environments really are their own
little VM/OS. Haskell monads, I think, also are similar to undo
log-files, though they seem to work forward rather than backward. I'm
wondering if there's some kind of simple programming abstraction which
would allow one to easily create little 'virtual jails' for every
object - kind of like a CVS environment for one's entire language / OS.
And then use that for massively parallel / massively speculative
execution of stuff, somehow. The point being that everything from a
variable assignment to a function call to a file to a pipe to a process
to a game session ought to be just an instance of a single kind of
'thing' that can be instantiated or rolled back as one wishes, either
manually (like undo) or automatically (like Prolog backtracking). With
built-in backup/restore/version-checking at every level, using the same
mechanism.
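As a toy sketch in Python (every name here is hypothetical, and this is nothing like a real implementation), that single kind of 'thing' might look like a value wrapped in a stack of snapshots:

```python
import copy

class Versioned:
    """A value wrapped in a stack of snapshots, so any change can be
    rolled back - a toy stand-in for @save_undo / Prolog backtracking."""
    def __init__(self, value):
        self.value = value
        self._history = []

    def checkpoint(self):
        # Save a deep copy of the current state.
        self._history.append(copy.deepcopy(self.value))

    def rollback(self):
        # Restore the most recent snapshot.
        self.value = self._history.pop()

world = Versioned({"lamp": "off", "door": "closed"})
world.checkpoint()
world.value["lamp"] = "on"   # speculative change...
world.rollback()             # ...reverted as if it never happened
```

If everything from a variable to a game session were a Versioned-like thing, checkpoint/rollback would be the one universal mechanism.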

Though I/O - or any kind of message send outside of an object, really -
is what breaks reversibility, since you can't necessarily safely
restore one object past the point where it sent a message with the
potential to change the state of another object, unless you're willing
to throw away transactional atomicity, or the environment in which both
objects exist makes guarantees about behaviour somehow - which is where
I keep coming back to the idea of a small set of globally available,
system-provided, unchangeable verbs, some of which are guaranteed never
to change state of other objects.

I suspect I'm ranting wildly and not terribly coherently. Need to stop
reading C2 Wiki and Lambda The Ultimate, it gives one mad-science
ambitions.

steve....@gmail.com

unread,
Sep 11, 2006, 1:08:23 PM9/11/06
to
Nate Cull wrote of:

> a CVS environment for one's entire language / OS.

Have you used CVS? I don't think it means what you think it means. CVS
is a mechanism to compare diffs, to make sure that the diffs are not
overlapping, and to grant priority and responsibility for the
resolution of conflicts.

> And then use that for massively parallel / massively speculative
> execution of stuff, somehow.

I'm with you when you have 1) a grounded idea and a vague target; or 2)
a vague idea but a clear objective. But here I'm hearing a
weakly-considered idea and a vague target.

You're a much better programmer than I, and you're probably engaged in
a (public but) direct exchange with Plotkin (who is also a much better
programmer than I), so who am I to say anything? I definitely do not
want to distract or detract from intelligent discussion.

I love Icarus: it's great, toying with genius to a fault. But if he
starts criticising airplanes it gets a bit silly. If only
waxy-feathered flight hadn't died stillborn, because it was,
fantastically, so badly misunderstood by everyone involved....

> I suspect I'm ranting wildly and not terribly coherently. Need to stop
> reading C2 Wiki and Lambda The Ultimate, it gives one mad-science
> ambitions.

Word. Question -- what catapult propelled this flight? I7's
enthusiastic break with traditional programming symbolism? Did I7 open
rule-based programming in a popular enough way that you could get into
it, and once in your interest exploded?

steve....@gmail.com

unread,
Sep 11, 2006, 1:29:35 PM9/11/06
to
Hook wrote:

> The problem with databases is pretty fundamental and largely one of
> perception. People around me who are otherwise good developers insist
> in claiming that the database of choice (DB2 or MySQL here) is object
> oriented and stores objects largely because they have fallen in love
> with the term "object".

Not the term, but the concept. The concept of "object", specifically
"abstract programmatic object" is great. An OO programmer will
legitimately think in terms of this powerful metaphor and coding
device. Faced with a rule-based model, the OO programmer will think of
how it works according to his metaphor, how he would implement it, for
example. Indeed most modern implementations of RO are in OO. (Even if
not, you can probably understand the idea.)

> The whole IT industry has a history of falling in love with
> techniques,

Oh no, you're challenging the IT industry: too radical for me, sorry
must abort!

> each of which is better than the previous, each of which
> will clear the backlog of work that most coding shops have. It's an
> almost religious fervour, and is quite cute to watch after the first
> few instances have worn off.

Instances probably in the minor sense, but if we're comparing
programming paradigms like OO or RO, would you say the same thing?
These are larger movements, and perhaps not the same as catchy
techniques.

na...@natecull.org

unread,
Sep 11, 2006, 6:10:46 PM9/11/06
to

steve....@gmail.com wrote:
> Nate Cull wrote of:
> > a CVS environment for one's entire language / OS.
> Have you used CVS? I don't think it means what you think it means. CVS
> is a mechanism to compare diffs, to make sure that the diffs are not
> overlapping, and to grant priority and responsibility for the
> resolution of conflicts.

But also the pervasive storage of prior versions of files such that
changes can be reverted, no?

I'm not actually talking about *how* all these systems work, because
they're all very different. I'm interested in general underlying
principles and approaches that might suggest something that is
otherwise very hard to explain, even to myself.

>
> > And then use that for massively parallel / massively speculative
> > execution of stuff, somehow.
>
> I'm with you when you have 1) a grounded idea and a vague target; or 2)
> a vague idea but a clear objective. But here I'm hearing a
> weakly-considered idea and a vague target.

Certainly. It's much less of an idea and more of a dull ache, a sense
that there is something quite important and extremely obvious still
missing that one can't articulate because the cognitive infrastructure
to describe it isn't there. But one can feel its lack.

> I love Icarus: it's great, toying with genius to a fault. But if he
> starts criticising airplanes it gets a bit silly. If only
> waxy-feathered flight hadn't died stillborn, because it was,
> fantastically, so badly misunderstood by everyone involved....

Ornithopters are cool, though. Also zeppelins. And Lifters.


> Word. Question -- what catapult propelled this flight? I7's
> enthusiastic break with traditional programming symbolism? Did I7 open
> rule-based programming in a popular enough way that you could get into
> it, and once in your interest exploded?

It was I7, partly. More that it reawakened my interest in inference,
logic and massively concurrent systems that I had on first encountering
Prolog and Occam and things as a teenager in the 1980s. I was a huge AI
fan as a kid. Still am, but AI stopped being cool long ago.

This ache I've had about traditional programming languages didn't just
start yesterday, but I've learned to subdue it and Just Get On With
Life. I've learned to accept that things in IT are always vastly harder
than they need to be, and our tools always crazily limited in stupid
and unnecessary ways. Because change is hard, and breaks things, and
usually not worth the risk.

I don't have to like it, though.

na...@natecull.org

unread,
Sep 11, 2006, 10:05:05 PM9/11/06
to

steve....@gmail.com wrote:
> Not the term, but the concept. The concept of "object", specifically
> "abstract programmatic object" is great. An OO programmer will
> legitimately think in terms of this powerful metaphor and coding
> device. Faced with a rule-based model, the OO programmer will think of
> how it works according to his metaphor, how he would implement it, for
> example. Indeed most modern implementations of RO is in OO. (Even if
> not, you can probably understand the idea.)

A slightly more coherent critique of the OO paradigm than my confusing
rants is this paper I've found by Shajan Miah, 1997. Despite it being
nine years old, it encapsulates a lot of the issues I think are
important in the debate and that I think also apply particularly to IF
- especially the points about fuzzy class membership, relations and
ad-hoc queries, and how strict immutable class hierarchies do not map
cleanly onto real world objects.

I7's object specification phrases, for instance, seem to me to be an
example of ad-hoc queries.

na...@natecull.org

unread,
Sep 11, 2006, 10:08:33 PM9/11/06
to

n...@natecull.org wrote:
> A slightly more coherent critique of the OO paradigm than my confusing
> rants is this paper I've found by Shajan Miah, 1997.

... and it would help if I included the URL.

http://members.aol.com/shaz7862/critique.htm

"Critique of the Object Oriented Paradigm: Beyond Object-Orientation
By Shajan Miah
Date: 14th May, 1997"

Kevin Forchione

unread,
Sep 12, 2006, 12:27:00 AM9/12/06
to
<na...@natecull.org> wrote in message
news:1158026705.0...@d34g2000cwd.googlegroups.com...
>
<snip>

> I7's object specification phrases, for instance, seem to me to be an
> example of ad-hoc queries.

I am somewhat confused by this statement, since surely the I7 object
specification maps directly to an OO object definition. Or am I missing
something?

--Kevin


John Roth

unread,
Sep 12, 2006, 8:35:07 AM9/12/06
to

That's basically unreadable, besides being almost 10 years old. Let's
see. Ten years ago Java was just being adopted, c++ was the great
shining light rather than the obese monster that many people see today,
etc. etc. etc. Time has moved on, and the OO paradigm is still with us,
and shows no sign of obsolescence.

There are, granted, a lot of problems in practical software
development. Most of them have solutions, despite the fact that the
practices causing the problems are still being vigorously pushed by a
lot of vendors and taught in a lot of universities. Sigh.

The reason that OO is still with us is very simple: OO scales
reasonably well. Procedural didn't, and logic programming generally
doesn't either. I have yet to hear of a really major system (10 million
loc +) written in Prolog or a functional language.

The basic problem that I see Graham Nelson facing in his quest is that
he's not using a lot of linguistic theory. There's a really major chunk
having to do with "frames", whatever the name that the linguist wants
to use. The concept is that a word has a specific meaning only in
context, and that the context has a structure. For example,
commercial-transaction has a structure that includes buyer, vendor,
goods, money, buy, sell, marketplace and a great deal more.
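In toy form (a Python sketch only; real frame systems in linguistics are far richer, and these slot names are just guesses at the example), a frame is a word's meaning as a structured context of named slots:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A word's meaning as a structured context: named slots
    that other concepts fill in."""
    name: str
    slots: dict = field(default_factory=dict)

# "buy" only means something inside a commercial-transaction frame:
transaction = Frame("commercial-transaction", {
    "buyer": None, "vendor": None, "goods": None, "money": None,
})
transaction.slots["buyer"] = "player"
transaction.slots["goods"] = "brass lantern"
```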

I'm not at all sure how he'd use it though, since he's welded I7 to I6
and the z-machine (well, also Glulx). What's needed is an entirely
different theory of how to organize the linguistic part of an
interactive fiction.

John Roth

Andrew Plotkin

unread,
Sep 12, 2006, 1:59:04 PM9/12/06
to
Here, John Roth <John...@jhrothjr.com> wrote:
>
> The reason that OO is still with us is very simple: OO scales
> reasonably well. Procedural didn't, and logic programming generally
> doesn't either. I have yet to hear of a really major system (10 million
> loc +) written in Prolog or a functional language.

Scalability for rule-based programming is my biggest concern. I'm sure
that if I figure it out, OO will vanish into mist like the
stepping-stone that it is. :)



> I'm not at all sure how he'd use it though, since he's welded I7 to I6
> and the z-machine (well, also Glulx).

I don't agree with that reasoning. The way I7 deals with context is
part of the language design, the front end. It ought to be completely
independent of how it generates code, or even with the way the library
is built (i.e., in I6 code).

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

Just because you vote for the Republicans, doesn't mean they let you be one.

Shadow Wolf

unread,
Sep 12, 2006, 3:49:55 PM9/12/06
to
"Kevin Forchione" <ke...@lysseus.com> wrote in news:iGqNg.9442$cw.7616
@fed1read03:

He means phrases like "All the green boxes". Not an OO object definition at
all.
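Roughly, the shape of the distinction in a Python sketch (not I7 semantics, just an illustration): a class definition fixes what a box *is* once, while a specification phrase is a query evaluated over whatever happens to be true right now.

```python
class Box:
    def __init__(self, color, is_open=False):
        self.color = color
        self.is_open = is_open

world = [Box("green"), Box("red"), Box("green", is_open=True)]

# Ad-hoc queries: membership depends on current properties,
# not on a class hierarchy declared in advance.
green_boxes = [b for b in world if b.color == "green"]
open_green_boxes = [b for b in green_boxes if b.is_open]
```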

--
Shadow Wolf
shadowolf3400 at yahoo dot com
Stories at http://www.asstr.org/~Shadow_Wolf
AIF at http://www.geocities.com/shadowolf3400


na...@natecull.org

unread,
Sep 12, 2006, 9:11:56 PM9/12/06
to

John Roth wrote:
> The reason that OO is still with us is very simple: OO scales
> reasonably well. Procedural didn't, and logic programming generally
> doesn't either. I have yet to hear of a really major system (10 million
> loc +) written in Prolog or a functional language.

Well, for a start functional languages seem to me to be quite different
beasts from logic languages, despite sharing some similarities - and
sort-of-pure functional languages like Haskell seem to be shambling
toward the mainstream at an alarming rate. Linspire Linux has made
Haskell its core scripting language, for instance. Will be interesting
to see how they make out. I don't really grok Haskell at all - I'd like
to, but it fries my brain - but I wish it well.

I agree that Prolog seems to have scalability problems, but I'm not
convinced that those are necessarily intrinsic to logic programming per
se rather than to Prolog itself as an implementation. The ideas behind logic
programming seem to me to be very similar to relational theory, only
made Turing-complete. And relational databases seem to have stood up
remarkably well to the test of time - despite having a global scope,
they're very easy to modify and evolve and link together and tolerant
of missing data, issues that decentralised object systems still
struggle with.


> The basic problem that I see Graham Nelson facing in his quest is that
> he's not using a lot of linguistic theory. There's a really major chunk
> having to do with "frames", whatever the name that the linguist wants
> to use. The concept is that a word has a specific meaning only in
> context, and that the context has a structure. For example,
> commercial-transaction has a structure that includes buyer, vendor,
> goods, money, buy, sell, marketplace and a great deal more.


Yeah, that's where I'm thinking some kind of scoping or chunking
mechanism for logic/relational could be the missing piece in my dream.
And maybe it turns out that once you do that you do get objects. But
I'm not convinced that objects need the ability to run arbitrary
imperative code. If they were purely declarative entities, with
(potentially recursive) relationships specified between them... maybe
it'd end up looking something like the Semantic Web, perhaps, rather
than SOAP. That is: a network of entities making and retracting
assertions about other entities, with specific interpretations being
dependent on the client.

(That's one thing objects don't do well - allowing multiple views of
knowledge. Knowledge in OO is generally embedded in a procedural
framework that assumes One True Way of accessing facts - by tracing
collections from node to node. Like old-school Unix mail bang-paths for
addresses. But we don't use those for mail routing anymore. Why?
Because they're brittle, and a global DNS namespace is both easier to
understand and makes services easier to modularise and update without
breakage. So maybe doing a similar thing with our data - creating a
sort of global dataspace - would also be sensible. And that's where
you're getting back into the realm of logic and relational theory,
which do assume globally-true predicates. But at the same time, if you
bracketed that with the understanding that this is the viewpoint or
assertion of just one entity... maybe you'd get the best of both
worlds.)
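A toy of that 'network of entities making and retracting assertions' (the shape is hypothetical; real Semantic Web stores use RDF triples and are far more involved):

```python
class Dataspace:
    """A global set of (subject, predicate, object, asserted_by) facts
    that can be asserted, retracted, and queried per viewpoint."""
    def __init__(self):
        self.facts = set()

    def assert_(self, subj, pred, obj, by):
        self.facts.add((subj, pred, obj, by))

    def retract(self, subj, pred, obj, by):
        self.facts.discard((subj, pred, obj, by))

    def query(self, pred, by=None):
        # A client can filter down to the assertions of one entity,
        # or see every viewpoint at once.
        return {(s, o) for (s, p, o, b) in self.facts
                if p == pred and (by is None or b == by)}

ds = Dataspace()
ds.assert_("door", "state", "open", by="troll")
ds.assert_("door", "state", "closed", by="player")
```

Globally true predicates, but bracketed as the viewpoint of whoever asserted them.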

>
> I'm not at all sure how he'd use it though, since he's welded I7 to I6
> and the z-machine (well, also Glulx). What's needed is an entirely
> different theory of how to organize the linguistic part of an
> interactive fiction.

Hmm, meaning what? A special language for the programmatic generation
of text?

John Roth

unread,
Sep 13, 2006, 10:39:06 AM9/13/06
to

Andrew Plotkin wrote:
> Here, John Roth <John...@jhrothjr.com> wrote:
> >
> > The reason that OO is still with us is very simple: OO scales
> > reasonably well. Procedural didn't, and logic programming generally
> > doesn't either. I have yet to hear of a really major system (10 million
> > loc +) written in Prolog or a functional language.
>
> Scalability for rule-based programming is my biggest concern. I'm sure
> that if I figure it out, OO will vanish into mist like the
> stepping-stone that it is. :)

Rule based systems have an interesting history. At
one time they were "the thing" in AI, but like so many
things in AI they turned out to have really fatal flaws.
DEC eventually replaced the first major rule based
system, the PDP-11 configurator, with a more
conventional system. In fact, it would never have
been written if the original dev team had actually
understood the problem. One of the problems is that
it required really smart people (Ph.D. level) to maintain
it.

Experts don't use rules. Rules are good training
devices to get someone started in a field so they
can do some stuff and get some successes, but
what real experts in any field do can't be described
by rules. Some experts will tell you "these are the
rules", but if you look, they don't do it that way
themselves.

> > I'm not at all sure how he'd use it though, since he's welded I7 to I6
> > and the z-machine (well, also Glulx).
>
> I don't agree with that reasoning. The way I7 deals with context is
> part of the language design, the front end. It ought to be completely
> independent of how it generates code, or even with the way the library
> is built (i.e., in I6 code).

The problem is runtime parsing. If you want to
give the player a substantially better experiance,
you need to have a whole lot more power at
run time than you have now.

The notion of providing a run time system
that's substantially less powerful than the one
the author uses is, IMO, a deeply flawed legacy
of the time when systems were much less
powerful than they are today.

The notion of a specialized run-time IF engine
has outlived its usefulness. Most systems
these days ship with a Java runtime, so
you don't have the "but it's too big a download"
whine I hear whenever someone is defending
the z-machine.

Sure, Graham could have scrapped the I6
library and started over rather than simply
restructuring it. But he didn't.

John Roth

Andrew Plotkin

unread,
Sep 13, 2006, 1:52:31 PM9/13/06
to
Here, John Roth <John...@jhrothjr.com> wrote:
>
> Andrew Plotkin wrote:
> > Here, John Roth <John...@jhrothjr.com> wrote:
> > >
> > > The reason that OO is still with us is very simple: OO scales
> > > reasonably well. Procedural didn't, and logic programming generally
> > > doesn't either. I have yet to hear of a really major system (10 million
> > > loc +) written in Prolog or a functional language.
> >
> > Scalability for rule-based programming is my biggest concern. I'm sure
> > that if I figure it out, OO will vanish into mist like the
> > stepping-stone that it is. :)
>
> Rule based systems have an interesting history. At
> one time they were "the thing" in AI, but like so many
> things in AI they turned out to have really fatal flaws.
> [...]

>
> Experts don't use rules. Rules are good training
> devices to get someone started in a field so they
> can do some stuff and get some successes, but
> what real experts in any field do can't be described
> by rules. Some experts will tell you "these are the
> rules", but if you look, they don't do it that way
> themselves.

Implementing an IF world is a programming task, not a real-world task,
so I don't see the relevance of this at all. (Experts don't use simple
procedural algorithms either, but they're clearly good enough for many
IF games.)



> > > I'm not at all sure how he'd use it though, since he's welded I7 to I6
> > > and the z-machine (well, also Glulx).
> >
> > I don't agree with that reasoning. The way I7 deals with context is
> > part of the language design, the front end. It ought to be completely
> > independent of how it generates code, or even with the way the library
> > is built (i.e., in I6 code).
>
> The problem is runtime parsing. If you want to
> > give the player a substantially better experience,
> you need to have a whole lot more power at
> run time than you have now.

You're talking about a linguistic approach to the game parser? I have
nothing against this, but it's orthogonal to the kind of problem that
I'm interested in (and that I7 added to the Inform table).

(This isn't quite true: I7 does unify the player's descriptions of
things with the author's descriptions. It's still true that the author
gets a more powerful language, of course. But I thought you were
talking about contextual models to describe *programming rules* -- the
problem of scalability, as opposed to game interaction models. We did
get into this from scalability, right?)

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

When Bush says "Stay the course," what he means is "I don't know what to
do next." He's been saying this for years now.

Adam Thornton

unread,
Sep 14, 2006, 12:54:31 AM9/14/06
to
In article <1158064507.2...@e63g2000cwd.googlegroups.com>,
"John Roth" <John...@jhrothjr.com> wrote:

> The basic problem that I see Graham Nelson facing in his quest is that
> he's not using a lot of linguistic theory. There's a really major chunk
> having to do with "frames", whatever the name that the linguist wants
> to use.

http://www.xkcd.com/c114.html

Adam

Adam Thornton

unread,
Sep 14, 2006, 12:56:36 AM9/14/06
to
In article <1158158346.2...@h48g2000cwc.googlegroups.com>,

"John Roth" <John...@jhrothjr.com> wrote:
> it required

> really smart people
> (Ph.D. level)

These are *so* not equivalence classes.

>to maintain it.

Adam

steve....@gmail.com

unread,
Sep 14, 2006, 6:38:33 AM9/14/06
to
Adam Thornton wrote:
> http://www.xkcd.com/c114.html

It's funny because it's true. See also...

http://www.xkcd.com/c91.html

John Roth

unread,
Sep 14, 2006, 10:59:16 AM9/14/06
to

Shrug. When you need people in the top few
percentiles to maintain a system, it's not
maintainable. It doesn't really matter what
you call them.

John Roth
>
> Adam

John Roth

unread,
Sep 14, 2006, 11:06:17 AM9/14/06
to

Right. I'm mostly looking at where the bottlenecks
are, and having to live with a hacked-up runtime
that was originally designed for a very limited
virtual machine (in today's sense), and with ideas
of what can be done that are 15 years out of
date, seems to be a significant limit.

In 1990 I had a 128K Macintosh. The z-machine
would have fit - barely. (Even then it was bigger than
any of the first machines I worked on in 1965).

What was a major wonder 15 years ago is a
ball and chain today.

John Roth

John Roth

unread,
Sep 14, 2006, 11:16:51 AM9/14/06
to

Andrew Plotkin wrote:
> Here, John Roth <John...@jhrothjr.com> wrote:
> >


> > > > I'm not at all sure how he'd use it though, since he's welded I7 to I6
> > > > and the z-machine (well, also Glulx).
> > >
> > > I don't agree with that reasoning. The way I7 deals with context is
> > > part of the language design, the front end. It ought to be completely
> > > independent of how it generates code, or even with the way the library
> > > is built (i.e., in I6 code).
> >
> > The problem is runtime parsing. If you want to
> > give the player a substantially better experiance,
> > you need to have a whole lot more power at
> > run time than you have now.
>
> You're talking about a linguistic approach to the game parser? I have
> nothing against this, but it's orthogonal to the kind of problem that
> I'm interested in (and that I7 added to the Inform table).

I'm not so sure. I've been in this business for over 40 years,
and one of the things I've learned is that, if you understand
the domain you're writing a program for, the program
simplifies as time goes on, and changes seem to slide
in like they're on greased tracks.

More and more complexity is a symptom of not
understanding the domain properly, and rule based
programming is a symptom of complexity.

I'd compare it to a "Hail Mary pass" in American
football. The quarterback is about to get sacked, and
there's no receiver open. Throw the ball anyway.
"Hail Mary, Mother of God..."

John Roth

Neil Cerutti

unread,
Sep 14, 2006, 11:56:18 AM9/14/06
to
On 2006-09-14, John Roth <John...@jhrothjr.com> wrote:
> I'm not so sure. I've been in this business for over 40 years,
> and one of the things I've learned is that, if you understand
> the domain you're writing a program for, the program simplifies
> as time goes on, and changes seem to slide in like they're on
> greased tracks.
>
> More and more complexity is a symptom of not understanding the
> domain properly, and rule based programming is a symptom of
> complexity.

Perhaps there's a simpler way to provide the arbitrary
hookability that rules (we think) will provide. The OO systems
provide "enough" hooks. Is there a general framework, besides a
complete rule-based system, that's hookable? Something like CLOS
probably does everything that might be needed, but that's a lot
of complexity.
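A minimal sketch of 'arbitrary hookability' without a full rule engine (Python; all the names are made up):

```python
from collections import defaultdict

class Hooks:
    """Named hook points; handlers run in order until one
    claims the event by returning True."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        self._handlers[event].append(handler)

    def fire(self, event, *args):
        for handler in self._handlers[event]:
            if handler(*args):
                return True    # event handled; stop the chain
        return False           # fall through to default behaviour

hooks = Hooks()
hooks.on("before-take", lambda obj: obj == "anvil")  # veto taking the anvil
```

Whether chains like this stay manageable as they multiply is exactly the scalability question.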

> I'd compare it to a "Hail Mary pass" in American football. The
> quarterback is about to get sacked, and there's no receiver
> open. Throw the ball anyway. "Hail Mary, Mother of God..."

That seems like a good analogy for the state we're approaching.
As a result of some of the discussion here, I tried a few PROLOG
tutorials. I probably didn't stick with it long enough, but it
seemed, mostly, a hellish way to write recusive routines.

--
Neil Cerutti
Strangely, in slow motion replay, the ball seemed to hang in the
air for even longer. --David Acfield

quic...@quickfur.ath.cx

unread,
Sep 14, 2006, 3:31:49 PM9/14/06
to
On Thu, Sep 14, 2006 at 05:56:18PM +0200, Neil Cerutti wrote:
[...]

> As a result of some of the discussion here, I tried a few PROLOG
> tutorials. I probably didn't stick with it long enough, but it seemed,
> mostly, a hellish way to write recursive routines.
[...]

Well, that's not quite what PROLOG was designed for. :-) The idea behind
PROLOG is a descriptive approach to programming, where the computer does
most (ideally all) of the deductive (algorithmic) work for you based
upon a set of model descriptions you give it and a description of the
output that you want, rather than a prescriptive approach such as the
common imperative paradigm, where you tell the computer how to do
something to achieve the result you want.
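In miniature (Python standing in for Prolog here; a toy one-step deduction, nothing like a real resolution engine):

```python
# Descriptive: state the facts and a rule; let the engine derive the rest.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def derive_grandparents(facts):
    # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    return {("grandparent", x, z)
            for (p1, x, y1) in facts if p1 == "parent"
            for (p2, y2, z) in facts if p2 == "parent" and y1 == y2}

# We never wrote a loop saying *how* to find tom's grandchild;
# the shape of the rule did the algorithmic work.
derived = derive_grandparents(facts)
```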

As to whether this approach is better for IF programming, I'm on the
fence. Personally, I program in a decidedly procedural/prescriptive way,
so it would take me a lot of effort to become fluent in the PROLOG
paradigm.


QF

--
Skill without imagination is craftsmanship and gives us many useful
objects such as wickerwork picnic baskets. Imagination without skill
gives us modern art. -- Tom Stoppard

Neil Cerutti

unread,
Sep 14, 2006, 4:02:23 PM9/14/06
to
On 2006-09-14, quic...@quickfur.ath.cx <quic...@quickfur.ath.cx> wrote:
> On Thu, Sep 14, 2006 at 05:56:18PM +0200, Neil Cerutti wrote:
> [...]
>> As a result of some of the discussion here, I tried a few PROLOG
>> tutorials. I probably didn't stick with it long enough, but it seemed,
>> mostly, a hellish way to write recursive routines.
> [...]
>
> Well, that's not quite what PROLOG was designed for. :-) The
> idea behind PROLOG is a descriptive approach to programming,
> where the computer does most (ideally all) of the deductive
> (algorithmic) work for you based upon a set of model
> descriptions you give it and a description of the output that
> you want, rather than a prescriptive approach such as the
> common imperative paradigm, where you tell the computer how to
> do something to achieve the result you want.

I found it didn't work like that for me. I couldn't even begin to
write a set of rules for computing a factorial without starting
from a recursive function, and then translating that into Prolog.
Just writing down the list of rules defining factorial will not
work. At least, I couldn't make the connection.

I'll have to go back and try again. Scheme seemed
incomprehensible to me the first time I played Andrew's _Lists
and Lists_, too.

> As to whether this approach is better for IF programming, I'm
> on the fence. Personally, I program in a decidedly
> procedural/prescriptive way, so it would take me a lot of
> effort to become fluent in the PROLOG paradigm.

A hybrid approach seems best, to me. There's lots of stuff to do
in IF that's very naturally expressed as a list of commands for
the interpreter to carry out.

--
Neil Cerutti
We don't necessarily discriminate. We simply exclude certain
types of people. --Colonel Gerald Wellman

Kevin Forchione

unread,
Sep 15, 2006, 12:36:25 PM9/15/06
to
"Adam Thornton" <ad...@fsf.net> wrote in message
news:adam-067B22.2...@fileserver.fsf.net...

> In article <1158158346.2...@h48g2000cwc.googlegroups.com>,
> "John Roth" <John...@jhrothjr.com> wrote:
>> it required
>
>> really smart people
>> (Ph.D. level)
>
> These are *so* not equivalence classes.

Lol, the myth persists. Somehow it's assumed that, for instance, the man who
has specialized in the sexual revolution in golden age spanish theatre is
more intelligent than the average joe. That's a little like assuming that a
CEO adds value to a company's bottom line.

--Kevin


John Roth

unread,
Sep 15, 2006, 12:39:27 PM9/15/06
to

Neil Cerutti wrote:
> On 2006-09-14, quic...@quickfur.ath.cx <quic...@quickfur.ath.cx> wrote:
> > On Thu, Sep 14, 2006 at 05:56:18PM +0200, Neil Cerutti wrote:
> > [...]
> >> As a result of some of the discussion here, I tried a few PROLOG
> >> tutorials. I probably didn't stick with it long enough, but it seemed,
> >> mostly, a hellish way to write recursive routines.
> > [...]

> I found it not work like that for me. I couldn't even begin to


> write a set of rules for computing a factorial without starting
> from a recursive function, and then translating that into Prolog.
> Just writing down the list of rules defining factorial will not
> work. At least, I couldn't make the connection.

I don't know about logic languages, but one of the
very first examples in most functional language
tutorials is how to write a factorial.

The principle seems to be to write the rule that
defines the general case (one line) and then
write the rule(s) that define the special case(s).
That's one additional line for a factorial to define
the starting conditions.
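Written that way in Python (one clause per line of the mathematical definition):

```python
def factorial(n):
    if n == 0:                       # special case: 0! = 1
        return 1
    return n * factorial(n - 1)      # general case: n! = n * (n-1)!
```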

The trick is to go back to the mathematical
definition, which doesn't mention recursion
at all. You only need recursion if you're attempting
to prove that the equations make any sense in
the real world.

And I've never done any logic programming.

Having installed Ghostscript and GSview,
I managed to read the paper. It's, um,
interesting, and it seems like it's pretty
distinct from what Graham's doing with
Inform 7. In other words, it could arguably
be a target for Inform 7, in the same way
that the z-machine and Glulx are targets.

John Roth

Neil Cerutti

unread,
Sep 15, 2006, 1:09:53 PM9/15/06
to
On 2006-09-15, John Roth <John...@jhrothjr.com> wrote:
>
> Neil Cerutti wrote:
>> On 2006-09-14, quic...@quickfur.ath.cx <quic...@quickfur.ath.cx> wrote:
>> > On Thu, Sep 14, 2006 at 05:56:18PM +0200, Neil Cerutti wrote:
>> > [...]
>> >> As a result of some of the discussion here, I tried a few PROLOG
>> >> tutorials. I probably didn't stick with it long enough, but it seemed,
>> >> mostly, a hellish way to write recursive routines.
>> > [...]
>
>> I found it didn't work like that for me. I couldn't even begin to
>> write a set of rules for computing a factorial without
>> starting from a recursive function, and then translating that
>> into Prolog. Just writing down the list of rules defining
>> factorial will not work. At least, I couldn't make the
>> connection.
>
> I don't know about logic languages, but one of the
> very first examples in most functional language
> tutorials is how to write a factorial.

Yup, and this irritates some aficionados. Factorial is very easy
to express functionally. You get a good impression of the
expressiveness, but a bad impression of the efficiency.

> The principle seems to be to write the rule that defines the
> general case (one line) and then write the rule(s) that define
> the special case(s). That's one additional line for a factorial
> to define the starting conditions.
>
> The trick is to go back to the mathematical definition, which
> doesn't mention recursion at all. You only need recursion if
> you're attempting to prove that the equations make any sense in
> the real world.
>
> And I've never done any logic programming.

Perhaps Prolog is a special case. The reason I found it
complicated to compose the rules was that you need to keep
Prolog's resolution rules in mind constantly when designing your
rules. Prolog is going to check your rules in a defined order,
and without accounting for that, your program won't work.
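[Editor's note: a sketch of my own, not from the thread, making the order-sensitivity concrete. Drop the N > 0 guard from a factorial clause and the first answer is still right, but on backtracking Prolog re-resolves the general clause against N = 0 and recurses through -1, -2, ... until the stack runs out.]

```prolog
fact(0, 1).
fact(N, F) :-        % missing the "N > 0" guard
    N1 is N - 1,
    fact(N1, F1),
    F is N * F1.

% ?- fact(5, F).         first answer: F = 120
% ?- fact(5, F), fail.   does not terminate: the second clause keeps
%                        matching N = 0, -1, -2, ...
```

The program's meaning as a set of logical rules hasn't changed much, but its behavior under Prolog's fixed resolution strategy has, which is exactly the trap described above.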

There are probably other logic programming languages that are
more forgiving.

> Having installed Ghostscript and GSview, I managed to read the
> paper. It's, um, interesting, and it seems like it's pretty
> distinct from what Graham's doing with Inform 7. In other
> words, it could arguably be a target for Inform 7, in the same
> way that the z-machine and Glulx are targets.

Yes. They have a few things in common, but not a lot.

--
Neil Cerutti
It might take a season, it might take half a season, it might
take a year. --Elgin Baylor

Adam Thornton
Sep 17, 2006, 2:04:44 AM
In article <1158230313.6...@k70g2000cwa.googlegroups.com>,

c132 describes my life eerily well.

Adam

Richard Bos
Sep 18, 2006, 5:13:06 AM
Neil Cerutti <hor...@yahoo.com> wrote:

> On 2006-09-14, quic...@quickfur.ath.cx <quic...@quickfur.ath.cx> wrote:
> > Well, that's not quite what PROLOG was designed for. :-) The
> > idea behind PROLOG is a descriptive approach to programming,
> > where the computer does most (ideally all) of the deductive
> > (algorithmic) work for you based upon a set of model
> > descriptions you give it and a description of the output that
> > you want, rather than a prescriptive approach such as the
> > common imperative paradigm, where you tell the computer how to
> > do something to achieve the result you want.
>
> I found it didn't work like that for me. I couldn't even begin to
> write a set of rules for computing a factorial without starting
> from a recursive function, and then translating that into Prolog.
> Just writing down the list of rules defining factorial will not
> work. At least, I couldn't make the connection.

:-) Well, that's not quite what Prolog was designed for.

It never was intended to do general programming. It _can_, but not, as
you've found, comfortably or easily. It's meant for the Programming of
Logic. Basically, deduction from rules and facts. Unfortunately, most
problems are not as easily reducible to such premises as the optimism of
the heyday of expert systems would lead us to believe. For those few
programs that can, Prolog works fine.
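[Editor's note: a toy sketch of my own, not from the thread, of the kind of program Prolog is comfortable with: a handful of facts plus a rule, and the deduction comes for free.]

```prolog
% Facts.
parent(tom, bob).
parent(bob, ann).
parent(bob, pat).

% Rule: G is a grandparent of C if G is a parent of some P
% who is in turn a parent of C.
grandparent(G, C) :-
    parent(G, P),
    parent(P, C).

% ?- grandparent(tom, ann).   succeeds
% ?- grandparent(tom, X).     X = ann ; X = pat
```

No algorithm is spelled out anywhere; the query is answered purely by deduction from the rules and facts, which is the case where Prolog shines.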

Richard

Richard Bos
Sep 18, 2006, 5:13:07 AM
Neil Cerutti <hor...@yahoo.com> wrote:

> On 2006-09-15, John Roth <John...@jhrothjr.com> wrote:
>
> > The trick is to go back to the mathematical definition, which
> > doesn't mention recursion at all. You only need recursion if
> > you're attempting to prove that the equations make any sense in
> > the real world.
> >
> > And I've never done any logic programming.
>
> Perhaps Prolog is a special case. The reason I found it
> complicated to compose the rules was that you need to keep
> Prolog's resolution rules in mind constantly when designing your
> rules. Prolog is going to check your rules in a defined order,
> and without accounting for that, your program won't work.

Actually, Prolog as designed didn't specify any order, so in each rule
you had to take into account that any other rule might not have
triggered first. One of the consequences of this is that for each rule,
you have to fully specify the preconditions, even for the last dinky
default one. Another is that perfect Prolog would be the greatest thing
ever for parallel processing.
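[Editor's note: a sketch of my own, not Richard's, of what "fully specify the preconditions" looks like in practice. When every clause carries a mutually exclusive guard, no clause depends on the others having failed first, so the rules stay correct under any resolution order.]

```prolog
% Each clause states its full precondition, including the
% "dinky default" last one, so clause order is irrelevant.
sign(N, negative) :- N < 0.
sign(N, zero)     :- N =:= 0.
sign(N, positive) :- N > 0.
```

Compare a version whose last clause reads simply sign(_, positive): that one is only correct if Prolog is guaranteed to have tried (and failed) the other two clauses first.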

This was soon found not to work, either, and was abandoned in favour of
the fixed order. In theory, this would last until a better system could
be found that would allow random-order and even parallel processing
again. It never was.

Richard

Gene Wirchenko
Sep 20, 2006, 9:52:44 PM
Adam Thornton <ad...@fsf.net> wrote:

How do you find this stuff?
http://www.xkcd.com/c91.html
is pretty good, too.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
