
Prof. Dijkstra on NL-programming

steve....@gmail.com

Jul 8, 2006, 5:15:54 PM
Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
explain his position, available here...

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html

... Or you can just search for the Dijkstra essay entitled "On the
foolishness of 'natural language programming'."

Enjoy!

====

On the other end of the spectrum, researchers at MIT are doing
something very much like NLProgramming: an NLProgramming "visualizer."
See the story here...

http://www.trnmag.com/Stories/2005/032305/Tool_turns_English_to_code_032305.html

I'm with Dijkstra, but hey, good luck!

James Cunningham

Jul 8, 2006, 5:42:30 PM
On 2006-07-08 17:15:54 -0400, steve....@gmail.com said:

> Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> explain his position, available here...
>
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html
>
> ... Or you can just search for the Dijkstra essay entitled "On the
> foolishness of 'natural language programming'."
>
> Enjoy!

Of course, he also said that "object-oriented programming is an
exceptionally bad idea which could only have originated in California."
This bodes poorly for Mr. Roberts, I'm afraid, who not only authored
TADS but is also (as I understand it) from California.

And - perhaps hyperbolically, but nonetheless with a straight face -
Dijkstra also said that anyone who began programming with BASIC may as
well give up now; they're corrupted for life. Mr. Plotkin - if he did
not begin with BASIC he programmed in it early, if my knowledge of his
history is correct. I doubt that we should expect much from him.

But you, sir, you - ! Never fail to set me straight.

Best,
James

Jeff Nyman

Jul 8, 2006, 8:45:07 PM
<steve....@gmail.com> wrote in message
news:1152393354.3...@m79g2000cwm.googlegroups.com...

> Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> explain his position, available here...
>
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html

I am not sure what this really says, though, as far as anything related to
I7 goes, because there is also the matter of scale of application. As the essay
says: "In order to make machines significantly easier to use, it has been
proposed (to try) to design machines that we could instruct in our native
tongues."

In reference to something like I7, this is very different. I7 is not trying
to make our "machines" as a whole instructable in our "native tongues." It
is taking one, limited domain (interactive fiction) and putting a particular
type of expressive medium atop it. The whole science of parsers is based on
a compromise between what people find easy to input and what computers find
easy to deal with, and that compromise centers on designing a language that
fits a particular context. (Whether this approach is successful with
Interactive Fiction remains to be seen.)

I would definitely agree that natural language interfaces have
their limitations (at least currently) and are probably not valid in various
domains. Again, though, I think it very much depends on the type of
application you are talking about. I think time will tell how this approach
works in the arena of Interactive Fiction. (For example, I could imagine
something like I7's interface being used more as a sort of prototyping tool
to get started rather than as the sole means of programming a work of
interactive fiction.)

Personally, I do not see a lot of substance in that essay that would make me
agree with the "foolishness" contention, at least as some sort of blanket
condemnation. The comment at the end says "from one gut feeling..." and from
that gut feeling it is believed that machines programmed in "native tongues"
would be difficult to make and use.

- Jeff


rpresser

Jul 8, 2006, 8:51:25 PM

steve....@gmail.com wrote:
> Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> explain his position, available here...
>
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html
>
> ... Or you can just search for the Dijkstra essay entitled "On the
> foolishness of 'natural language programming'."
>

"He added, 'He's written some books.' It was a very Aristotelian way of
putting it ...."

-- A. E. Van Vogt, _The World of Null-A_

(If my point is unclear, I am attempting to poke fun at Mr. Breslin
(Dr.? Apologies, I do not know your preferred form of address) for
appealing to authority. It seems to be a sign that he realizes he
has failed to convince us on the merits of his arguments alone. The
irony of my quoting yet another august personage in order to poke fun
just makes it sweeter.)

steve....@gmail.com

Jul 8, 2006, 11:49:32 PM
Jeff Nyman wrote:

> I am not sure what this really says, though, as far as anything related to
> I7 goes, because there is also the matter of scale of application.

What is true of the general is not necessarily true of the particular.
Okay, fair enough, but on the other hand, dude, it's related and it's
relevant, and you have got to see that. What makes I7 a special case?

> The whole science of parsers is based on
> a compromise between what people find easy to input and what computers find
> easy to deal with, and that compromise centers on designing a language that
> fits a particular context.

I think you mean compilers, not parsers. NLParsers take language
initially intended for human consumption, and try to make some sense of
it. NLProgramming takes language intended as instruction to a machine.
This is a big and important difference.

> (Whether this approach is successful with
> Interactive Fiction remains to be seen.)

I fear that I7 has not been and will not be judged objectively. Because
it is associated with the brand-name "Inform," some celebrity, and some
(I must say) misleading and otherwise rhetorically charged advertising,
it does not simply present a product and allow the outcome to be
decided with neutral fairness. People defend I7 with bad
arguments, and attack serious criticism with bad arguments. This seems
more the issue: whether I7's advertising can sufficiently obscure its
shortcomings until people get so committed to it that the question is no
longer asked.

> Personally, I do not see a lot of substance in that essay that would make me
> agree with the "foolishness" contention, at least as some sort of blanket
> condemnation.

He's speaking very generally, it's true. We can distinguish two types
of NLProgramming: one which acknowledges that its subset of NL is a
conventional programming syntax, and another which attempts to present
its syntax as natural language. I think we could respond to his
argument by saying that the latter is foolish, but NL-esque syntax,
considered as programming syntax, is fine, even preferable in some
cases. I don't think his argument applies to that.

I would add that I7 *is* that, despite its pretenses. I suspect that
Dijkstra would say that I7 is alright, except its designer screwed up
the syntax somewhat trying to make it appear to be NL, which is plain
foolish.

> The comment at the end says "from one gut feeling..." and from
> that gut feeling it is believed that machines programmed in "native tongues"
> would be difficult to make and use.

No, you're misreading the last sentence, admittedly a rather tricky
sentence (and one perhaps badly translated -- I don't know if the
original text is English). Anyway, he's saying that he doesn't want to
see NLProgramming happen because he knows it's a bad direction to go in:
e.g., it's very difficult to use NL to specify a bug-free program. He's
saying he's hopeful that it won't become popular because NLProgramming
compilers will be difficult to make.

Andrew Plotkin

Jul 8, 2006, 11:58:59 PM
Here, James Cunningham <jameshcu...@google.com> wrote:
>
> And - perhaps hyperbolically, but nonetheless with a straight face -
> Dijkstra also said that anyone who began programming with BASIC may as
> well give up now; they're corrupted for life. Mr. Plotkin - if he did
> not begin with BASIC he programmed in it early, if my knowledge of his
> history is correct.

BASIC it was. I vividly remember wandering across the playground in
1980 with the Applesoft programming manual. The wire-bound one.

> I doubt that we should expect much from hiim.

I have spent my life wandering across the beaches of the ocean of
Truth; picking up here and there a pretty pebble, and then sticking it
up my nose or in my ear. Chalk up another one to brain damage,
Charlie!

--Z ("The moving cursor having writ can erase or copy all of it.")

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*
If the Bush administration hasn't shipped you to Syria for interrogation,
it's for one reason: they don't feel like it. Not because you're patriotic.

Andrew Plotkin

Jul 9, 2006, 12:30:18 AM
Here, steve....@gmail.com wrote:
>
> I fear that I7 has not been and will not be judged objectively.
> [...]
>
> I would add that I7 *is* [...], despite its pretenses.

Have you even noticed that you can't go two posts in a row without
framing the debate in personal terms? Graham pretends this, I7
pretends that, we are in love with I7, the claims in favor of I7 are
advertising. When we disagree with you we are silly, deluded,
incapable of making an argument.

I have yet to see you support one of your arguments with anything
other than a sincere declaration that you are right. Repeating
yourself is not serious criticism. Neither is changing the subject to
"why are you people so confused".

--Z

vaporware

Jul 9, 2006, 4:51:53 AM
steve....@gmail.com wrote:
[...]

> He's speaking very generally, it's true. We can distinguish two types
> of NLProgramming: one which acknowledges that its subset of NL is a
> conventional programming syntax, and another which attempts to present
> its syntax as natural language. I think we could respond to his
> argument by saying that the latter is foolish, but NL-esque syntax,
> considered as programming syntax, is fine, even preferable in some
> cases. I don't think his argument applies to that.
>
> I would add that I7 *is* that, despite its pretenses. I suspect that
> Dijkstra would say that I7 is alright, except its designer screwed up
> the syntax somewhat trying to make it appear to be NL, which is plain
> foolish.

I don't see much pretense here:

"No computer does [understand English], and Inform does not even try to
read the whole wide range of text: it is a practical tool for a
particular purpose, and it deals only with certain forms of sentence
useful to that purpose. Inform source text may look like "natural
language", the language we find natural among ourselves, but in the end
it is a computer programming language. Many things which seem
reasonable to the human reader are not understood by Inform.
...
So it is not always safe to assume that Inform will understand any
reasonable instruction it is given: when in doubt, we must go back to
the manual."

vw

steve....@gmail.com

Jul 9, 2006, 6:57:16 AM
Andrew Plotkin wrote:
> I have yet to see you support one of your arguments with anything
> other than a sincere declaration that you are right.

Then all I can say is that you have not been paying attention. I have
explained my logic in detail. Having done that, and upon reading such a
post as yours, how can I think anything other than that you are being
quite silly, and for emotional reasons uninterested in discussing the
idea seriously?

Now on the one hand, you can reply however you want, that's fine. But
on the other hand, you are responding to a point I was making: that I
am concerned that I7 is not now being judged objectively, for a number
of reasons I list, and that I think this is a problem for the genre -- a
large set of authors picking up a language on (at least arguably)
misleading pretenses, a large set of theorists offering emotional
rather than rational discussion. Posts such as this one of yours,
another emotional attack against rational arguments, only underline the
problem.

Graham Nelson

Jul 9, 2006, 7:04:07 AM
steve....@gmail.com wrote:
> Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> explain his position, available here...
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html

I doubt if Dijkstra would consider this one of his more important
essays, but it's always a pleasure to read the Old Masters. When we say
"Dijkstra thinks", we ought to remember that this is the present
historic - I don't recognise this piece, but my guess is that Dijkstra
wrote it about twenty-five years ago.

If we leave aside the saloon-bar stuff about declining literacy
standards - kids today! - his position appears to be that (a)
programmers are weak people who dislike to have to write code which
passes strict tests, that (b) they therefore prefer compilers which are
incapable of spotting their mistakes, that (c) natural language is
inherently imprecise, and therefore that (d) programmers might like to
write in natural language but should not be allowed to.

A characteristic Dijkstra "you need to be saved from yourselves"
argument. One sees what he means, but while I agree that (a), (b) and
(c) entail (d), I can't see much truth in any of (a), (b) or (c).
Programmers don't like to code for overly fussy compilers, but they do
like good error detection: today, unlike in Dijkstra's time,
programmers have overwhelmingly chosen to use type-checked languages
rather than not. But even in the 1970s, I doubt if programmers ever
really preferred to write programs which gave wrong answers rather than
to have to deal with pesky compiler errors. In any case, it is not true
that a natural language system is necessarily more easy-going (the
Inform 7 compiler type-checks and looks for about four times as many
error conditions as the Inform 6 one, which does not type-check; and
Inform 7, unlike Inform 6, has no "goto" and only a single exit point
from loops). Perhaps most relevantly, I don't agree that natural
language is necessarily imprecise: in some domains it is very precise,
in others not. In some domains it is a wise programming language, in
others a foolish one.

steve....@gmail.com

Jul 9, 2006, 7:12:25 AM
vaporware wrote:
> [...]
> > He's speaking very generally, it's true. We can distinguish two types
> > of NLProgramming: one which acknowledges that its subset of NL is a
> > conventional programming syntax, and another which attempts to present
> > its syntax as natural language. I think we could respond to his
> > argument by saying that the latter is foolish, but NL-esque syntax,
> > considered as programming syntax, is fine, even preferable in some
> > cases. I don't think his argument applies to that.
> >
> > I would add that I7 *is* that, despite its pretenses. I suspect that
> > Dijkstra would say that I7 is alright, except its designer screwed up
> > the syntax somewhat trying to make it appear to be NL, which is plain
> > foolish.
>
> I don't see much pretense here:
> [quoting from the introduction to the main manual]

> "No computer does [understand English], and Inform does not even try to
> read the whole wide range of text [etc.]

First, I am defending I7 from Dijkstra's complaint, by arguing, with
Nyman, that the argument against NLProgramming may not apply in this
special case. The case may be special because I7 *is not* (of course)
an NLP, but only uses English-esque language for an absolutely and
directly determinate programming syntax.

The argument may indeed apply, however, to the degree that I7 attempts
to be an NLP. I have read the conceptual development of I7, and at
times it attempts to be an NLP -- probably why the syntax is overloaded
and context-sensitive, for instance. By my understanding of Dijkstra's
argument, it would seem to apply to I7 only insofar as I7 mistakes
itself for NL. Dijkstra does not argue there's anything wrong with
using English words as operators, or even English grammatical
constructions as syntactic patterns. But it's absolutely necessary to
recognize them as operators and programming patterns, to recognize that
NL is a misappellation. I believe that I7 steps beyond this, and that's
where it would seem that Dijkstra would begin to object.

steve....@gmail.com

Jul 9, 2006, 7:38:55 AM
Graham Nelson wrote:
> I doubt if Dijkstra would consider this one of his more important
> essays, but it's always a pleasure to read the Old Masters.

Yes indeed.

> [H]is position appears to be that (a)


> programmers are weak people who dislike to have to write code which
> passes strict tests, that (b) they therefore prefer compilers which are
> incapable of spotting their mistakes, that (c) natural language is
> inherently imprecise, and therefore that (d) programmers might like to
> write in natural language but should not be allowed to.
>
> A characteristic Dijkstra "you need to be saved from yourselves"
> argument.

Well, it's clear you understand the basic thrust, but it's more
sophisticated than this, of course.

> Programmers don't like to code for overly fussy compilers, but they do

> like good error detection[.]

His argument is not against programmers in general, but against a
certain lazy-headed breed. These misguided souls will *confuse* (not
equate) bad error-checking with ease-of-use. They will prefer to
program badly, even produce code that does not always do what is
intended, rather than take the trouble to get it right.

I don't know the literature, so this is a wild guess, but I bet
Dijkstra wouldn't like rule-oriented programming for roughly the same
reason: a buggy knowledge-base will run, get things right most of the
time, but sometimes get things wrong -- where, for example, precedence
means something other than intended. (How common a problem this is in
I7 I don't know.)

> Perhaps most relevantly, I don't agree that natural
> language is necessarily imprecise: in some domains it is very precise,
> in others not. In some domains it is a wise programming language, in
> others a foolish one.

I think we're confusing an NL which describes what the program should
do with NL-ish syntax which translates directly into machine
instruction. What you say is true of the latter, but again, I don't
think that's what he's talking about.

vaporware

Jul 9, 2006, 8:06:22 AM
steve....@gmail.com wrote:
[...]

> The argument may indeed apply, however, to the degree that I7 attempts
> to be an NLP. I have read the conceptual development of I7, and at
> times it attempts to be an NLP -- probably why the syntax is overloaded
> and context-sensitive, for instance. By my understanding of Dijkstra's
> argument, it would seem to apply to I7 only insofar as I7 mistakes
> itself for NL. Dijkstra does not argue there's anything wrong with
> using English words as operators, or even English grammatical
> constructions as syntactic patterns. But it's absolutely necessary to
> recognize them as operators and programming patterns, to recognize that
> NL is a misappellation.

Who doesn't recognize that? Do you have a concrete example of I7
"mistaking itself for NL"?

vw

Daryl McCullough

Jul 9, 2006, 9:16:10 AM
steve....@gmail.com says...

>Now on the one hand, you can reply however you want, that's fine. But
>on the other hand, you are responding to a point I was making, that I
>am concerned that I7 is not now being judged objectively, for a number
>of reasons I list, and why I think it's a problem for the genre -- a
>large set of authors picking up a language on (at least arguably)
>misleading pretenses, a large set of theorists expressing emotional
>rather than rational discussion. Posts such as this one of yours,
>another emotional attack against rational arguments, only underline the
>problem.

I would say that your response certainly underlines the problem.
You consider your posts to be rational arguments, and the posts
of others to be emotional attacks. That's exactly what Andrew
was complaining about.

The fact is, your posts are anything but rational arguments,
and Andrew's post was certainly *not* an emotional attack.

The reason that your posts fail to be rational argument is
that they are full of subjective *judgments* on your part.
You judge something to be pretentious, or to be an emotional
attack. Your judgments along these lines are almost always
wrong, and they contribute nothing to the rational argument
you claim to be making.

Calling a thing "pretentious" is not conveying any
information (or at least, not much information) about
the thing; it is conveying *your* emotional reaction
to that thing.

--
Daryl McCullough
Ithaca, NY

John Roth

Jul 9, 2006, 9:45:22 AM
steve....@gmail.com wrote:
> Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> explain his position, available here...
>
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html
>
> ... Or you can just search for the Dijkstra essay entitled "On the
> foolishness of 'natural language programming'."
>
> Enjoy!
>
> ====
>
> On the other end of the spectrum, researchers at MIT are doing
> something very much like NLProgramming, a NLProgramming "visualizer."
> See the story here...

I probably shouldn't feed the troll, but...

The late Prof. Dijkstra was well known for taking
extreme positions and expounding on them in
rather absolute terms. Most of those positions have
turned out to be wrong, at least in practical terms.

He was, for example, a very vocal advocate of
proof techniques as being superior to testing.
While there is certainly a place for mathematical
proof techniques in professional software development,
there are few, if any, examples of low defect
software of any size that have been written
_only_ using proof techniques, while testing has
developed to the point where one can make a
reasonable estimate of the remaining defect density
(defects per thousand lines of code).

His dislike of natural language techniques was
partially a result of COBOL, and partially a result
of his dislike of anything that couldn't be defined
with suitable mathematical rigor.

You can probably find a Dijkstra quote disparaging
just about anything that isn't super mathematically
rigorous.

John Roth

Daphne Brinkerhoff

Jul 9, 2006, 10:37:04 AM

Graham Nelson wrote:
> steve....@gmail.com wrote:
> > Dijkstra thinks NL-Programming is foolish, and wrote a short essay to
> > explain his position, available here...
> > http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html
>
> I doubt if Dijkstra would consider this one of his more important
> essays, but it's always a pleasure to read the Old Masters. When we say
> "Dijkstra thinks", we ought to remember that this is the present
> historic - I don't recognise this piece, but my guess is that Dijkstra
> wrote it about twenty-five years ago.

It looks as though that's essentially correct -- the archive seems to
be sorting things chronologically, and this piece is between ones from
April 1978 and July 1978.

--
Daphne

Andrew Plotkin

Jul 9, 2006, 10:53:35 AM
Here, steve....@gmail.com wrote:
> Andrew Plotkin wrote:
> > I have yet to see you support one of your arguments with anything
> > other than a sincere declaration that you are right.
>
> Then all I can say is that you have not been paying attention. I have
> explained my logic in detail. Having done that, and upon reading such a
> post as yours, how can I think anything other than that you are being
> quite silly, and for emotional reasons uninterested in discussing the
> idea seriously?

You *can't* think anything but that I am being silly. I agree. :)

But I have been discussing all the ideas involved in I7 seriously. See
all my *long* I7 posts -- the ones in reply to people other than you.
They do not, you may be able to acknowledge, conform to your story of
"irrationally devoted masses defending I7 from all attacks."

> Posts such as this one of yours, another emotional attack against
> rational arguments, only underline the problem.

That wasn't an emotional attack; it was a personal attack. By which I
mean precisely: I am discussing your rhetorical habits. I am not
discussing I7 at all, in this post or the previous one. It's all about
you.

This is, of course, exactly what I'm accusing you of: talking about
the people rather than about IF or IF design. We are, in fact,
embedded in the same morass, and if someone walked into the newsgroup
on this thread he would (I'm sure) see us as equally inane. But at
least I'm not claiming that this inanity proves something about I7.

(I could go on with an account of the main branch of this thread, but
then this would be turning into a long post. "Appeal to authority" and
"straw man" have already been noted by other posters.)

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

If the Bush administration hasn't thrown you in military prison without trial,

Graham Nelson

Jul 9, 2006, 1:07:54 PM
John Roth wrote:
> You can probably find a Dijkstra quote disparaging
> just about anything that isn't super mathematically
> rigorous.

Yes, indeed. What I find most interesting about Dijkstra is the way
that he succeeded in moving the debate entirely onto his terms; many
people disagreed with him in the 1970s, but generally did so on his own
terms. To pick an antagonist of Dijkstra's with similar rhetorical
gifts (and similar undoubted greatness), Knuth argued against
Dijkstra's proposed requirements for coding by attempting to prove that
certain algorithms might take a fractionally larger number of steps to
execute if these requirements were accepted. But they both saw it as a
mathematical discipline, which is interesting, I think. Perhaps one
might advance the sociological sort of explanation that computer
science was then emerging as an independent discipline from mathematics
and/or engineering, and beginning to found its own departments, in
freshly-built buildings on campus: there was a fierce clutching at
anything which could be taken for a new theory of this new subject.

Which is not to say that Dijkstra was wrong, of course. It's just that
the practice of programming has completely changed. Nobody today tries
either to prove the correctness of (any substantial amount of) code, to
dry-run it with dummy data, to shave 0.5% off its running time, to
write up proofs that the number of moves of the disc drive head is
minimal, etc. Today we write middleware, and we care far more about
clarity and manageability than performance, theoretical or actual.
(Actually this was starting to be true in Dijkstra's time, too, but -
like Knuth - he was only a practitioner for the fun of it.)

steve....@gmail.com

Jul 9, 2006, 1:57:30 PM

vaporware wrote:
> > But it's absolutely necessary to
> > recognize them as operators and programming patterns, to recognize that
> > NL is a misappellation.
>
> Who doesn't recognize that? Do you have a concrete example of I7
> "mistaking itself for NL"?

I am glad indeed that you see it so clearly, but no, in fact it would
seem that we're in the minority.

Two examples.

First, if you read Graham's whitepaper, you'll recognize that he's
using the term "natural language" to cover both meanings: real natural
language like we're using in this conversation, and natural language
which has been appropriated and transformed into a rigorous formal
symbolism for machine instruction. When pressed, he does not
acknowledge the (obvious to us) fact that these are two distinctly
different conceptions of NL, and that I7 participates in the latter
only. Instead he writes, for instance, "I do not accept that the adoption of
a simplified grammar necessarily obviates naturality." Perhaps the next
example sheds light on the motivation for this peculiar argumentation.

Second, consider the I7 language itself. If its project was the
appropriation and transformation of NL into a formal symbolism, it
would not bend over backwards to sound like English. It would take
English as far as it's useful, but I doubt it would have overloaded
syntax, context-sensitive symbols, multiple phrasings, and much other
syntactic sugar (which, judging by some of the questions we've heard on
the language, is troubling to users generally -- certainly it's
confusing as hell to me). Instead, it places an attempt at mimicking
true natural language over the needs of a formal symbolism. How far
this influences the shape of the target language is difficult to
estimate, but it sounds to me like a bad design decision, however much
I can appreciate its ambition.

Now, there may be more to this question: the distinction we're agreeing
on (between the two types of NL-esque programming) might not be
absolute (counter-intuitive though that be). I'd be interested in
hearing more from the other side of the argument. But at this point I
hope you can understand what I'm talking about.

rpresser

Jul 9, 2006, 4:12:58 PM

Here's the BibTeX entry for that piece, from the same site. It doesn't
give a month, but it does say 1978:

http://www.cs.utexas.edu/users/EWD/indexBibTeX.html

EWD667. On the foolishness of ``natural language programming'', Edsger
W. Dijkstra

@unpublished{EWD:EWD667,
author = "Edsger W. Dijkstra",
title = "On the foolishness of ``natural language programming''",
year = "1978",
note = "circulated privately",
url = "http://www.cs.utexas.edu/users/EWD/ewd06xx/EWD667.PDF",
fileSize = "87 KB"
}

Adam Thornton

Jul 9, 2006, 6:02:35 PM
In article <e8q0oq$ijs$1...@reader2.panix.com>,

Andrew Plotkin <erky...@eblong.com> wrote:
>Have you even noticed that you can't go two posts in a row without
>framing the debate in personal terms? Graham pretends this, I7
>pretends that, we are in love with I7, the claims in favor of I7 are
>advertising. When we disagree with you we are silly, deluded,
>incapable of making an argument.

A conjugation for you:

*I* am a rational being. *YOU* are foolish and deluded. *HE* is a
syphilitic halfwit who must be removed from his academic post before he
corrupts still more innocent young minds, and ideally then sodomized by
the drooling zombie of Wittgenstein.

Adam

Duncan Harvey

Jul 9, 2006, 6:19:39 PM
Adam Thornton <ad...@fsf.net> wrote:

> *HE* is a syphilitic halfwit who must be removed from his academic post
> before he corrupts still more innocent young minds, and ideally then
> sodomized by the drooling zombie of Wittgenstein.

"Drooling"? Cease these emotional attacks on Wittgenstein and his
corpse.

--
Duncan Harvey

Graham Nelson

Jul 9, 2006, 6:25:42 PM
steve....@gmail.com wrote:

> vaporware wrote:
> > Who doesn't recognize that? Do you have a concrete example of I7
> > "mistaking itself for NL"?
>
> I am glad indeed that you see it so clearly, but no, in fact it would
> seem that we're in the minority.

I'm not sure the two of you do agree on that, but never mind.

> First, if you read Graham's whitepaper,

"White paper", if we must, though I do not title it so.

> Second, consider the I7 language itself. If its project was the
> appropriation and transformation of NL into a formal symbolism, it
> would not bend over backwards to sound like English.

"Appropriation" is a loaded term, don't you think? I mean, do
we accuse C of "appropriating" the standard mathematical
notation for formulae in the way it writes expressions? Clearly
C lacks the great range and flexibility of actual mathematical
notation, yet it gains great naturality and flexibility from using
a subset of it - using infix rather than prefix or postfix operators
for addition, using the conventional + sign in the same shape
we are used to, using brackets with the same distributivity
conventions used in mathematics, etc. Is that "appropriation"?
I would say it is "adoption".

> Now, there may be more to this question: the distinction we're agreeing
> on (between the two types of NL-esque programming) might not be
> absolute (counter-intuitive though that be).

Could you suggest why this distinction exists? I mean, clearly
there are big differences between the subset of English we are
typing and reading now, and the subset of English recognised
by Inform. But what makes them categorically different, as you
evidently believe? For instance, is mathematical notation
categorically different from the syntax of arithmetic expressions
in a typical programming language?

rpresser

Jul 9, 2006, 6:35:44 PM

Adam Thornton wrote:

> A conjugation for you:
>
> *I* am a rational being. *YOU* are foolish and deluded. *HE* is a
> syphilitic halfwit who must be removed from his academic post before he
> corrupts still more innocent young minds, and ideally then sodomized by
> the drooling zombie of Wittgenstein.
>
> Adam

I believe the previous post was a forgery. It was not posted by Adam
Thornton; in fact it was a rare reappearance of The Pissing Bandit.

All hail TPB!

vaporware

Jul 9, 2006, 8:52:32 PM
steve....@gmail.com wrote:
[...]

> Two examples.
>
> First, if you read Graham's whitepaper, you'll recognize that he's
> using the term "natural language" to cover both meanings: real natural
> language like we're using in this conversation, and natural language
> which has been appropriated and transformed into a rigorous formal
> symbolism for machine instruction. When pressed, he does not
> acknowledge the (obvious to us) fact that these are two distinctly
> different conceptions of NL, and that I7 participates in the latter
> only. Instead he writes, for instance, "I do not accept that the adoption of
> a simplified grammar necessarily obviates naturality." Perhaps the next
> example sheds light on the motivation for this peculiar argumentation.

I confess that I've only skimmed the white paper, but this objection
seems irrelevant. The white paper is not I7. The motivations and
reasoning behind I7's design are not I7. I7 is a language, library,
compiler, and IDE.

> Second, consider the I7 language itself. If its project was the
> appropriation and transformation of NL into a formal symbolism, it
> would not bend over backwards to sound like English. It would take
> English as far as it's useful, but I doubt it would have overloaded
> syntax, context-sensitive symbols, multiple phrasings, and much other
> syntactic sugar (which, judging by some of the questions we've heard on
> the language, is troubling to users generally -- certainly it's
> confusing as hell to me).

No? I believe HyperCard had many of those features (certainly multiple
phrasings), but no one mistook it for natural language.

In fact, IF parsers themselves have those features. Overloaded syntax
and context-sensitive symbols: compare "up" in PICK UP THE BOX vs. GO
UP, or "take" in TAKE BOX vs. TAKE INVENTORY. Multiple phrasings: TAKE
BOX vs. GET BOX vs. CARRY BOX, or WEAR COAT vs. PUT ON COAT vs. PUT
COAT ON. Syntactic sugar: "then", "the", etc. But we all realize the
parser is just matching patterns, not understanding English, right?

IF parsers support these features not because we're supposed to think
they understand English, but to avoid frustrating the player by making
him guess or memorize particular syntax. It'd be easy to write a parser
where each possible command could only be typed one way (easier than
modern parsers, certainly) and each word only had one meaning, and
players had to look up the exact syntax for each command they wanted to
type. But that'd be a step backward, not forward.
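
To make the pattern-matching point concrete, here's a tiny Inform 7
sketch of my own -- untested, with an invented room and objects -- that
wires extra player phrasings onto existing actions, exactly as a
hand-written verb grammar would:

[Untested illustration; the room and objects are invented.]
The Lab is a room. The box is in the Lab. The coat is in the Lab.
The coat is wearable.

Understand "grab [things]" as taking.
Understand "don [something]" as wearing.

The resulting parser still has no idea what "grab" means in English; it
simply matches one more pattern onto the taking action.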

> Instead, it places an attempt at mimicking
> true natural language over the needs of a formal symbolism. How far
> this influences the shape of the target language is difficult to
> estimate, but it sounds to me like a bad design decision, however much
> I can appreciate its ambition.

Again, this is not an interesting objection. Criticizing I7 because of
your perception of the design decisions behind it, when you can't even
point to any influence those decisions had on the language, is like
criticizing C because of Kernighan and Ritchie's political views. (C is
for Commie, that's good enough for me!)

This seems to be a common thing in your posts about I7, criticizing the
people or the attitudes involved rather than the language itself. It's
a hedge: no matter how the language changes, you can always point to
some thought someone had in the past, some claim made in a Usenet
thread, etc. to justify your opposition.

vw

Rachel H.

Jul 9, 2006, 10:02:26 PM
> if someone walked into the newsgroup
> on this thread he would (I'm sure) see us as equally inane.

Speaking as someone who is (rather) new to this group ... it all comes
across as quite bizarre, but even from a dabble here and there, it seems
clear to me that Mr. B has an axe to grind, likes to argue, and prefers not
to listen. Combined with his surety that he is always correct ... well, it's
not so interesting to read.

What is extra super-size bizarre is that everyone keeps responding to him!
I keep wondering why the troll is baited repeatedly. No answers so far.

Rachel


Andrew Plotkin

Jul 9, 2006, 10:20:36 PM
Here, Rachel H. <rem...@comcast.net> wrote:
>
> What is extra super-size bizarre, is that everyone keeps responding to him!
> I keep wondering why the troll is baited repeatedly.

Serious answer: I don't think of Breslin as a troll. We've had actual
trolls. They're the ones who try to bait *us*.

When I reply to Breslin, I'm not baiting him; I'm trying to talk to
him. Perhaps I will not succeed. If not, well, I try to ensure that I
spend time posting about IF too.

--Z

--
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*

If the Bush administration hasn't shipped you to Syria for interrogation,

Ian D. Bollinger

Jul 10, 2006, 1:07:58 AM
Also, the concept of what a programming language is and can be has
changed quite a bit since the article in question was written. What is
appropriate for a domain-specific programming language may be
completely different than what is appropriate for a general programming
language. Implementing algorithms that would be trivial in other
languages can be extremely complicated in I7. This, however, isn't a
fault but a trade-off: sacrificing expressiveness for constructs
intended for interactive fiction. (Of course, I7 also inherits some
limitations from I6--such as the lack of a real string type--that are
unhelpful for the intended domain.)

NeoWolf

Jul 10, 2006, 5:42:00 AM
Speaking as someone else fairly new, I've got to concur. But it's
pretty amusing still to see someone trying so hard.

Quintin Stone

Jul 10, 2006, 9:52:17 AM
On Sat, 8 Jul 2006, James Cunningham wrote:

> And - perhaps hyperbolically, but nonetheless with a straight face -
> Dijkstra also said that anyone who began programming with BASIC may as
> well give up now; they're corrupted for life. Mr. Plotkin - if he did
> not begin with BASIC he programmed in it early, if my knowledge of his

> history is correct. I doubt that we should expect much from him.

Oh no! I started with BASIC also. Wait, technically I began programming
in Logo. Though I'm not sure whether that means I'm better off or so very
very worse off.

I'm guessing the latter.

==--- --=--=-- ---==
Quintin Stone "You speak of necessary evil? One of those necessities
st...@rps.net is that if innocents must suffer, the guilty must suffer
www.rps.net more." - Mackenzie Calhoun, "Once Burned" by Peter David

Robin Johnson

Jul 10, 2006, 10:18:34 AM
Quintin Stone wrote:
[Dijkstra says BASIC corrupts youth]

> Oh no! I started with BASIC also. Wait, technically I began programming
> in Logo. Though I'm not sure whether that means I'm better off or so very
> very worse off.
>
> I'm guessing the latter.

I started with Logo, and I have to say, I think it is a far better
introduction to programming than Basic. You think of things in terms of
functional relationships. Your program is a mathematical description,
not a brute-force series of steps. The colours and shapes, far from
being a symptom of babyishness, keep kids' attention while giving them
something cool and visible to do with their programs (they also show
you the beauty of geometry as a branch of maths, but I'm off on a
tangent here.)

I remember the angriest my dad ever got with me was when I used a
'goto' instruction.

If I hadn't been exposed to Logo at such an early age, I probably
wouldn't be as good a programmer. I might be better with girls though.

My kids are *so* learning Logo.
--
Robin Johnson

Neil Cerutti

Jul 10, 2006, 10:30:46 AM
On 2006-07-10, Robin Johnson <r...@robinjohnson.f9.co.uk> wrote:
> I started with Logo, and I have to say, I think it is a far
> better introduction to programming than Basic. You think of
> things in terms of functional relationships. Your program is a
> mathematical description, not a brute-force series of steps.
> The colours and shapes, far from being a symptom of
> babyishness, keep kids' attention while giving them something
> cool and visible to do with their programs (they also show you
> the beauty of geometry as a branch of maths, but I'm off on a
> tangent here.)
>
> I remember the angriest my dad ever got with me was when I used a
> 'goto' instruction.
>
> If I hadn't been exposed to Logo at such an early age, I
> probably wouldn't be as good a programmer. I might be better
> with girls though.
>
> My kids are *so* learning Logo.

Consider using "Simply Scheme" as an introduction to computer
science. The authors were heavily influenced by Logo when
creating the "meta"-Scheme that's used to introduce programming
(by the end of the book, the "meta" part is explained and
jettisoned in favor of real Scheme.)

--
Neil Cerutti
Low Self-Esteem Support Group will meet Thursday at 7 to 8:30
p.m. Please use the back door. --Church Bulletin Blooper

steve....@gmail.com

Jul 10, 2006, 11:42:20 AM

Graham Nelson wrote:
> > If [I7's] project was the

> > appropriation and transformation of NL into a formal symbolism, it
> > would not bend over backwards to sound like English.
>
> "Appropriation" is a loaded term, don't you think?

Not at all. I did not mean misappropriate, or I would have said so. I
am describing how one might most productively and amicably understand
I7, not how one might dismiss it. Indeed, if this (or something like
it) were how its NL-aspect were described by its designer, I don't
think there would be any confusion.

> I would say it is "adoption".

Fine. "Adoption" then.

> [C]learly


> there are big differences between the subset of English we are
> typing and reading now, and the subset of English recognised
> by Inform. But what makes them categorically different, as you
> evidently believe?

I'm not a big fan of "common sense" -- it's so often wrong, and it's
generally a stand-in for critical thought. I could say that I have a
"gut feeling" that I7's use of English is categorically (or I would
say, qualitatively) different from natural English. But this a
position, not an argument.

When one attempts to argue for an intuition, the argument is almost
always weaker than the intuition itself -- there are many ways to go
about arguing that computer-language is truly different from human
language, probably all of them incomplete, and certainly all of them
deconstructable.

(As you are aware, I have attempted to explain the difference in terms
of addressee; the degree of formalism; the distinction between the
communication of meaning and the instruction of a machine; and the
reduction to the target language.)

Perhaps the argument will be more interesting if we attempt to explain
why we have an intuition that there's a substantial difference, and
perhaps why it's interesting to attack this intuition. (And I believe
we can, at least, agree on the existence of that intuition.)

Some questions however, which will probably help disentangle the
discussion: at what point does a computer language become English? If
we say that "A is equivalent to B" is natural language, what should we
say about "A=B"? Does this entail that the difference is one of
"notation" only, or that all programming languages are natural also?
Or...

> For instance, is mathematical notation
> categorically different from the syntax of arithmetic expressions
> in a typical programming language?

... perhaps you could explain what you think on that question. I would
think it's pretty common-sensical (i.e., apparent, but not necessarily
to be trusted) that the latter uses the notation of the former, but the
fact that you think it's a particularly useful question is to me a
clear sign that I'm missing an important subtlety.

Graham Nelson

Jul 10, 2006, 2:47:33 PM
steve....@gmail.com wrote:
> Some questions however, which will probably help disentangle the
> discussion: at what point does a computer language become English? If
> we say that "A is equivalent to B" is natural language, what should we
> say about "A=B"?

I think this is a fair question to ask, yes. My answer would be
that if all one is doing is writing "A is equivalent to B", then
indeed there's not much difference between that and "A := B"
or "A = B". One moves considerably further onto the ground
of natural language if one writes "A is B", where "is" means not
only identity but also a partaking of properties - compare
"A is a car" and "A is red". For good or ill, traditional programming
languages tend not to have operators with the range of meanings
that "to be" has. I would say that good diagnostics for naturality
would include the presence of such verbs, and of tenses, a
distinction between common and proper nouns, the ability to
describe sets of things by means of some predicated condition
which the elements have in common, and so forth. It is
certainly possible to envisage a traditional computer programming
language with many, even all, of these characteristics. C++
arguably distinguishes common and proper nouns rather well.
But there is also a feature of a different character: the ability
for a native English speaker to read back the text and have
a reasonable grasp of its meaning.
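
As a quick, untested sketch (the kind, the property and the objects
are invented for illustration), the same copular verb does several of
these jobs in Inform 7 source:

[Untested sketch; all of the names here are invented.]
A vehicle is a kind of thing. A vehicle can be red.

The Garage is a room. The jalopy is a vehicle in the Garage. The
jalopy is red.

"Is" assigns a kind in one sentence and asserts a property in the
next - just the range of meanings of "to be" described above.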

> Does this entail that the difference is one of
> "notation" only, or that all programming languages are natural also?

I believe not, for the reasons above, really.

> > For instance, is mathematical notation
> > categorically different from the syntax of arithmetic expressions
> > in a typical programming language?
>
> ... perhaps you could explain what you think on that question. I would
> think it's pretty common-sensical (i.e., apparent, but not necessarily
> to be trusted) that the latter uses the notation of the former, but the
> fact that you think it's a particularly useful question is to me a
> clear sign that I'm missing an important subtlety.

I draw this comparison to suggest that mathematical notation -
a highly expressive language, containing a mixture of social
convention and precise rules, and capable of enormous
flexibility - is not something which could be part of any
programming language in full. Yet many languages, such as C,
Fortran, etc., adopt a core of mathematical notation, because
it is legible, feels natural to use, and is well adapted to
one of the core domains of the language - arithmetic calculation.
I don't think we would accuse such languages of supporting
only pseudo-mathematics: we would probably agree that they
incorporate a subset of mathematical notation, perhaps only
a modest one, but one that feels natural. It is easily learned,
it does what one thinks, etc.

Well, I suggest that if the core domain for a language is not
arithmetic but qualitative spatial relationships, places, items,
people and their knowledge, etc., then it makes perfectly
good sense to adopt a subset of "natural language", which
is as familiar and powerful in describing these situations as
is mathematical notation in describing numbers and their
manipulation.
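
A quick, untested sketch (rooms and objects invented for illustration)
of the sort of thing I mean, where the spatial and qualitative facts
read off almost directly:

[Untested sketch; the rooms and objects are invented.]
The Hall is a room. The Library is east of the Hall. The mahogany
desk is a supporter in the Library. The brass lamp is on the desk.
The librarian is a person in the Library.

Each sentence is the kind of description one might give in
conversation, and yet each compiles to a definite assertion about
the world model.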

Gregory Kulczycki

Jul 12, 2006, 1:22:04 AM
Graham Nelson wrote:

> Which is not to say that Dijkstra was wrong, of course. It's just that
> the practice of programming has completely changed. Nobody today tries
> either to prove the correctness of (any substantial amount of) code, to
> dry-run it with dummy data, to shave 0.5% off its running time, to
> write up proofs that the number of moves of the disc drive head is
> minimal, etc. Today we write middleware, and we care far more about
> clarity and manageability than performance, theoretical or actual.
> (Actually this was starting to be true in Dijkstra's time, too, but -
> like Knuth - he was only a practitioner for the fun of it.)

Has anyone *ever* proved that a real-world program is correct with
respect to a mathematical specification? Even when Knuth and Dijkstra
were at the top of their game, proofs of correctness were attempted
for small data structures or algorithms, not programs, and those were
done by hand. Knuth made a famous (in formal methods circles) comment
once: "Beware of bugs in the above code; I have only proved it
correct, not tried it." Note that the 'code' in Knuth's quote
referred to a data structure, not an entire application
(http://www-cs-faculty.stanford.edu/~uno/faq.html). So, in terms of
proving correctness, I don't think the practice of programming has
changed all that much -- it was never something that was practical.

That said, there are still researchers in academia, government, and
industry that are trying to make formal specification and verification
a practical reality. The ultimate goal would be a "verifying
compiler" -- a compiler that can automatically prove that a program is
correct with respect to its specification.
(http://vstte.ethz.ch/Files/hoare-misra.pdf)

There are many open questions about such a compiler: Could proofs be
fully automated or would they need human assistance? If the latter,
how much assistance? Would the benefits of such a tool outweigh the
drawbacks? (For example, someone still needs to write formal specs
for their program before it can be proved correct.) Can we develop
verifying compilers for all languages? If not, will programmers be
willing to adopt new languages that facilitate verification even if
they lack certain features of popular languages?

It's probably true that the easier it is to come up with a formal
(mathematically rigorous) semantics for a language, the more likely
that it will facilitate automatic formal verification. But I don't
see how making the syntax of Inform 7 look more like English will
necessarily complicate the semantics of the language. The semantics
of a language is complicated much more by things like pervasive
aliasing and dubious inheritance structures (hence the formalists'
frustration with OO languages) than it is by whether one uses 'is' or
'='. Making a programming language look more like a natural language
seems like a very reasonable idea, as long as programmers (writers)
know unambiguously what effect their code (text) will have.

Who knows, perhaps one day there will be a verifying compiler for
Inform 7 -- or a language very much like it. ;^)

Cheers,
Greg

P.S. Lojban is an interesting attempt to make a natural language that is
more like a programming language. (http://www.lojban.org/).

L. Ross Raszewski

Jul 12, 2006, 1:28:12 AM
On Wed, 12 Jul 2006 01:22:04 -0400, Gregory Kulczycki <gre...@vt.edu> wrote:
>
>That said, there are still researchers in academia, government, and
>industry that are trying to make formal specification and verification
>a practical reality. The ultimate goal would be a "verifying
>compiler" -- a compiler that can automatically prove that a program is
>correct with respect to its specification.
>(http://vstte.ethz.ch/Files/hoare-misra.pdf)


Seems to me that for this to be possible, the specification would have
to itself be a program. In which case it's vacuously true.

And all that does is reduce "this program is correct" to "this
specification is correct". Mind you, that might be an easier problem
for a human to solve (my undergrad research was in transformations
that turned one AI-complete problem into another AI-complete problem,
that was equally intractable by a computer but easier for a human).

L. Ross Raszewski

Jul 12, 2006, 1:30:20 AM
On Wed, 12 Jul 2006 01:22:04 -0400, Gregory Kulczycki <gre...@vt.edu> wrote:
>
>P.S. Lojban is an interesting attempt to make a natural language that is
>more like programming language. (http://www.lojban.org/).
>

Oh, right, also:

How many Lojbanists does it take to change a broken lightbulb?


Two. One to decide what to change it into, and one to work out what
kind of bulb produces broken light.

Gregory Kulczycki

Jul 12, 2006, 3:01:32 AM
L. Ross Raszewski wrote:

> Gregory Kulczycki wrote:
>>
>> That said, there are still researchers in academia, government, and
>> industry that are trying to make formal specification and verification
>> a practical reality. The ultimate goal would be a "verifying
>> compiler" -- a compiler that can automatically prove that a program is
>> correct with respect to its specification.
>> (http://vstte.ethz.ch/Files/hoare-misra.pdf)
>
>
> Seems to me that for this to be possible, the specification would have
> to itself be a program. In which case it's vacuously true.

In general, the specification is not executable, though some
specification languages (JML, CSpec) are *partially* executable.

> And all that does is reduce "this program is correct" to "this
> specification is correct". Mind you, that might be an easier problem
> for a human to solve (my undergrad research was in transformations
> that turned one AI-complete problem into another AI-complete problem,
> that was equally intractable by a computer but easier for a human).

That's the basic idea. The specification is meant to be an abstraction
that tells you (for example) what a method does, but lets you ignore
*how* it does it. An example might be:

public void sort(List s);
ensures ARE_PERMUTATIONS(#s, s) and IS_SORTED(s);

The specification says that the new value of s (s) is a permutation of
the old value of s (#s) and the new value of s is in sorted order.
Naturally the functions
ARE_PERMUTATIONS and IS_SORTED have to be defined somewhere. You can
make them executable, but it won't help you do the sorting.

One idea is that, if a verifying compiler ever becomes a reality,
writing specs for a module may become a practical alternative to
writing unit tests. Whether this is true or not, you still have the
problem (as you pointed out) of *validation* -- ensuring that your
mathematical specification really does capture what you or your
customer intended the program to do. So verification is not something
that can eliminate acceptance tests.

Greg


John Prevost

Jul 12, 2006, 3:06:42 AM
L. Ross Raszewski wrote:
> Seems to me that for this to be possible, the specification would have
> to itself be a program. In which case it's vacuously true.

You might want to take a look at this, and other resources that talk
about proof-carrying code: http://raw.cs.berkeley.edu/pcc.html

It really depends what kind of specification you're looking for. For
example, compilers that prove type safety (and infer types, etc. etc.)
have been around for quite a while. And most type systems are not
powerful enough to be considered general programming languages. :)

The PCC bit above talks about things a bit more powerful than that, but
still based on the idea of a "I respect this interface and am
guaranteed not to fail in certain ways" model typical of type checking.
The idea is that if you have a firm specification of an interface, the
compiler can test that you don't attempt anything that the interface
forbids. (You don't dereference pointers you weren't handed by the OS,
don't do math on pointers except for array dereference, don't do array
dereference without bounds checking, etc.) In the case of PCC, the
compiler also produces a succinct summary of the proof. Proving the
program may be expensive (especially if you have the object code but
not the source code), but comparing the object code of program and the
proof summary to the specification is quick and easy. (Compare this
with the idea of some forms of compression that work much harder at
encoding than at decoding.) And then once you know the code doesn't do
anything it's not supposed to, you can call that code as, say, a
low-level network filter that is provided by a user and gets loaded
into the operating system, and not worry that the user is also
wtfpwning your system.

In any case, proving that entire large programs do what they're meant
to do is generally not done (not least because most large programs are
not specified well enough.)

However, proving certain properties of entire programs (type safety is
a good example, although I've also seen proofs of runtime memory
behavior for real-time applications) or the overall correctness of
certain portions of programs isn't at all odd. (Data structure code,
for example, is re-used frequently, and usually comes with certain
guarantees that it will: return the correct values, satisfy certain
memory constraints, and satisfy certain runtime constraints.)

But, this is getting fairly far afield from the topic of this
newsgroup, so I'll shut up now after noting that I *would* be happier
if there were a formal specification of the "core language" of Inform
7. :)

John.

Graham Nelson

unread,
Jul 12, 2006, 5:39:18 AM7/12/06
to
Gregory Kulczycki wrote:
> (For example, someone still needs to write formal specs
> for their program before it can be proved correct.)

I did look at Eiffel-style design by contract in the early stages of
Inform 7, but it just isn't that kind of language, really (and in any
case I am sceptical, because writing good invariants and pre- and
post-conditions in Eiffel can be as difficult as writing the actual
code - it probably does increase accuracy, since what amount to two
different expressions of the same idea must agree, but at considerable
labour cost). In a sense, though, Inform does have formal
specifications, at least in a rudimentary way - if you write a rule
"Instead of taking an open container, ..." then you can be quite
certain that the noun for the current action is, indeed, an open
container: it is impossible for the rule to be called with bad input
data, so to speak.

> Who knows, perhaps one day there will be a verifying compiler for
> Inform 7 -- or a language very much like it. ;^)

Well, in a sense, the point of building the Skein and Transcript into
the interface, and the "Test ... with..." syntax into the language, was
to make testing an inherent part of the whole thing, rather than an
optional extra. I think this has worked pretty well - being able to
check in a matter of 20 minutes or so that all of the examples and
worked examples still compile and play out correctly has made debugging
the compiler a much more comfortable experience. (In a sense, IF is a
very easy domain for unit testing, because it's so easy to express
concisely what the input and output should be - this run of commands
should produce this transcript.)

John Roth

unread,
Jul 12, 2006, 10:57:27 AM7/12/06
to

Oddly enough, I've done three of those four things, and I'd
be very surprised if the relatively small number of people
who do optimized disk drivers (especially for high performance
data base systems) don't do the latter. However, your point
is in general correct even if it's got exceptions.

Since my intention was to label a resort to quoting
Dijkstra as a cheap shot (argument by irrelevant authority)
that essentially admits having lost the argument, I'm not going to
delve into this much further. "Not much further" isn't, however, "not at all".

What I see as the background here is that Dijkstra and the
others in his school overreached themselves. The problem of
doing a rigorous foundation for software development is very
hard. Knuth commented once that the effective software
developer does not have the same mindset as a mathematician:
mathematicians tend to see things with a great deal of focus,
where you lead up to a conclusion. In software, there are usually
many competing interests, sometimes spanning several
application domains and which have to be integrated with
the business (or other) objectives of whoever is funding the
project. The mathematical mindset may be applicable in
the details, but it fails badly when putting the entire thing
together.

What I'm beginning to see is a movement toward solving
smaller problems in a rigorous fashion. This seems to go
under the title of model checking, and there seems to be
some quite good work as long as you keep your objectives
modest.

In another direction, there's a subset of Ada that
uses additional annotations and a proof checker. It seems
to be highly regarded in areas where failure is simply
not an option.

John Roth

John W. Kennedy

unread,
Jul 12, 2006, 1:36:53 PM7/12/06
to
Gregory Kulczycki wrote:
> P.S. Lojban is an interesting attempt to make a natural language that is
> more like programming language. (http://www.lojban.org/).

Actually, Lojban is a fork of Loglan (<URL:http://www.loglan.org>),
which was designed not as a programming language (although it can be
parsed by lex/yacc, which most human languages, artificial or not,
cannot be), but as a human language based on predicate calculus, as an
experimental device to test the Sapir-Whorf hypothesis.

Ra mrenu ga morcea.
La Sokrates ga mrenu.
La Sokrates ga morcea.

--
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
-- Charles Williams. "Taliessin through Logres: Prelude"
"Lo norvia garni je la Logres
Pa tcidyjuo le gunti ne razpli gudkao lodfalji."

steve....@gmail.com

unread,
Jul 12, 2006, 5:45:43 PM7/12/06
to
John Roth wrote:
> Since my intention was to label a resort to quoting
> Dijkstra[...]

I was not quoting, merely referencing the text, which, as it turns out,
is even more interesting for our discussion than I anticipated.

> as a cheap shot (argument by irrelevant authority)

I wasn't taking a shot, and I certainly don't think Dijkstra's argument
is irrelevant to the ongoing discussion.

> that essentially admits having lost the argument

I have no argument against I7's adoption of NL as a formal syntax, nor
do I think Dijkstra's argument is an argument against that -- for
reasons I have explained and which would be perfectly clear to you if
you read my posting with any good faith whatsoever.

In any case, my argument is more complicated than that. -- But I'm not
interested in "winning" anything: my interest lies in the mutual
uplifting of our thinking, which I believe is already much closer to
agreement than one might judge from such irresponsible and inflammatory
rhetoric as you here display.

John Roth

unread,
Jul 12, 2006, 10:51:44 PM7/12/06
to

steve....@gmail.com wrote:
> John Roth wrote:
> > Since my intention was to label a resort to quoting
> > Dijkstra[...]
>
> I was not quoting, merely referencing the text, which, as it turns out,
> is even more interesting for our discussion than I anticipated.
>
> > as a cheap shot (argument by irrelevant authority)
>
> I wasn't taking a shot, and I certainly don't think Dijkstra's argument
> is irrelevant to the ongoing discussion.
>
> > that essentially admits having lost the argument
>
> I have no argument against I7's adoption of NL as a formal syntax, nor
> do I think Dijkstra's argument is an argument against that -- for
> reasons I have explained and which would be perfectly clear to you if
> you read my posting with any good faith whatsoever.

I haven't read it at all. Nor do I intend to. I have my own
opinions in the matter, and I don't consider this group
to be the appropriate place to debate them (which
isn't going to stop me).

My experience with Dijkstra quotes is that his opinions are 30 years
out of date, and he's usually quoted as an "authority"
to say that something won't work because it isn't formal
enough. That's what I mean by "argument by irrelevant
authority." I don't regard anything Dijkstra had to say in
the '70s as relevant to today's problems, other than to
note that history repeats.

From a purely engineering standpoint, I note that there
have been a fair number of quite successful uses of
natural language as interfaces to specific applications.
I may even commit one myself sometime; I've sent
around a quick memo on it in the relevant mailing list.

However, the logic approach does not generalize. A
generation of failures in AI demonstrate quite clearly
that once you get away from a well specified application
area all of the last half century of formal syntax and so
forth is simply useless. It's a dead end.

It's fairly obvious that trying to use natural language
as a general programming language doesn't work.
COBOL tried it, and eventually had to put in structured
programming constructs. Inform 7 has exactly the same
problem: as soon as you get to rules you have to use
the same constructs that programming languages
have used for the last half century or more. You
simply can't express nesting and recursion properly
in natural language, which should tell you exactly
how badly Chomsky is off base on claiming that
one of the hallmarks of human language is recursion.

John Roth

Damien Neil

unread,
Jul 13, 2006, 4:10:52 AM7/13/06
to
lrasz...@loyola.edu (L. Ross Raszewski) wrote:
> On Wed, 12 Jul 2006 01:22:04 -0400, Gregory Kulczycki <gre...@vt.edu> wrote:
> >
> >That said, there are still researchers in academia, government, and
> >industry that are trying to make formal specification and verification
> >a practical reality. The ultimate goal would be a "verifying
> >compiler" -- a compiler that can automatically prove that a program is
> >correct with respect to its specification.
> >(http://vstte.ethz.ch/Files/hoare-misra.pdf)
>
> Seems to me that for this to be possible, the specification would have
> to itself be a program. In which case it's vacuously true.

There is a school of thought (which I happen to subscribe to) which says
that program source code is exactly that: a formal specification.

- Damien
--
NewsGuy inserts spam in the .sigs of paying customers.

steve....@gmail.com

unread,
Jul 13, 2006, 10:46:40 AM7/13/06
to
Damien Neil wrote:
> L. Ross Raszewski wrote:
> > Gregory Kulczycki wrote:
> > >
> > >That said, there are still researchers in academia, government, and
> > >industry that are trying to make formal specification and verification
> > >a practical reality. The ultimate goal would be a "verifying
> > >compiler" -- a compiler that can automatically prove that a program is
> > >correct with respect to its specification.
> > >(http://vstte.ethz.ch/Files/hoare-misra.pdf)
> >
> > Seems to me that for this to be possible, the specification would have
> > to itself be a program. In which case it's vacuously true.
>
> There is a school of thought (which I happen to subscribe to) which says
> that program source code is exactly that: a formal specification.

I share the suspicion that specifying to the verifier what it's
supposed to verify is difficult to distinguish from writing the program
in the first place. However, I'm very much a novice in such matters,
and I wonder if there's more to it than that. Perhaps Gregory can
elaborate?

(The paper that Gregory referenced is interesting, if blue-sky and
perhaps a bit over-ambitious, but I do not think it answers the
question.)

Kevin Forchione

unread,
Jul 13, 2006, 12:30:52 PM7/13/06
to
"John Roth" <John...@jhrothjr.com> wrote in message
news:1152759104....@p79g2000cwp.googlegroups.com...

> It's fairly obvious that trying to use natural language
> as a general programming language doesn't work.
> COBOL tried it, and eventually had to put in structured
> programming constructs. Inform 7 has exactly the same
> problem: as soon as you get to rules you have to use
> the same constructs that programming languages
> have used for the last half century or more. You
> simply can't express nesting and recursion properly
> in natural language, which should tell you exactly
> how badly Chomsky is off base on claiming that
> one of the hallmarks of human language is recursion.

Sounds like you're in agreement with Steve on this point at least.

--Kevin


JDC

unread,
Jul 13, 2006, 2:12:57 PM7/13/06
to

I'm definitely out of my league here, but I think the distinction is
that a specification gives a way of verifying that the output is
correct, whereas the program actually has to figure out how to produce
output which satisfies the specification. The sorting example given
above is an example of this. Another would be:

Goal: Given a connected graph, output a spanning tree.

Specification: The output is a subgraph of the input graph which has no
cycles and includes all vertices in the original graph.

It is easier to check whether or not some output is a spanning tree for
a given input graph than it is to actually find a spanning tree. So it
should be more feasible to formally check the specification than to
check the actual program.
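
For instance, a checker for that specification is only a few lines in
any language. Here is a rough Python sketch (the representation of the
graph as a vertex set plus a set of two-element frozensets is just my
assumption for illustration):

def is_spanning_tree(vertices, graph_edges, tree_edges):
    # vertices: a set of vertex labels; graph_edges and tree_edges: sets
    # of frozenset({u, v}) pairs over distinct vertices.
    # 1. Every proposed edge must come from the input graph.
    if not tree_edges <= graph_edges:
        return False
    # 2. A tree on n vertices has exactly n - 1 edges ...
    if len(tree_edges) != len(vertices) - 1:
        return False
    # 3. ... and must reach every vertex; with n - 1 edges, connectivity
    #    also rules out cycles.
    adjacent = {v: set() for v in vertices}
    for edge in tree_edges:
        u, v = tuple(edge)
        adjacent[u].add(v)
        adjacent[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adjacent[v] - seen)
    return seen == vertices

Actually *finding* a spanning tree takes an algorithm (depth-first
search, Kruskal's, and so on); checking a candidate only takes the
three conditions above.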

-JDC

Richard Bos

unread,
Jul 13, 2006, 2:27:02 PM7/13/06
to
Gregory Kulczycki <gre...@vt.edu> wrote:

> Graham Nelson wrote:
>
> > Which is not to say that Dijkstra was wrong, of course. It's just that
> > the practice of programming has completely changed. Nobody today tries
> > either to prove the correctness of (any substantial amount of) code,

Tell that to my 'versity educators :-/

> > to dry-run it with dummy data,

Er... no? I know I do.

> > to shave 0.5% off its running time,

No, and alas, not to shave 50% off running time or resource consumption,
either. "You should buy a newer machine [read: than last year's] to run
this OS, never mind any useful program on it." Yes, Redmond, I'm looking
at you, damnit!

> > Today we write middleware,

Not all of us do. Middleware has to run on (and behind) something, after
all.

> > and we care far more about
> > clarity and manageability than performance, theoretical or actual.

I know cases currently in use (and new, too - this isn't a legacy system
I'm talking about) where this is very much not true. Let's just say that
"Enterprise", Web 2.0 techniques and XML have a lot to answer for.

> > (Actually this was starting to be true in Dijkstra's time, too, but -
> > like Knuth - he was only a practitioner for the fun of it.)
>
> Has anyone *ever* proved that a real-world program is correct with
> respect to a mathematical specification?

No, but that hasn't stopped them trying.

> That said, there are still researchers in academia, government, and
> industry that are trying to make formal specification and verification
> a practical reality. The ultimate goal would be a "verifying
> compiler" -- a compiler that can automatically prove that a program is
> correct with respect to its specification.

Proving that a program is correct with respect to its specification will
prove (in fact, has been proved repeatedly for years) to be downright
trivial compared to proving that a specification is correct with respect
to the real world.

Richard

raw...@gmail.com

unread,
Jul 14, 2006, 12:09:50 AM7/14/06
to
John Roth wrote:
> It's fairly obvious that trying to use natural language
> as a general programming language doesn't work.
> COBOL tried it, and eventually had to put in structured
> programming constructs. Inform 7 has exactly the same
> problem: as soon as you get to rules you have to use
> the same constructs that programming languages
> have used for the last half century or more. You
> simply can't express nesting and recursion properly
> in natural language, which should tell you exactly
> how badly Chomsky is off base on claiming that
> one of the hallmarks of human language is recursion.

Chomsky (and generative linguistics in general) is not concerned with
the kind of recursion you are -- the kinds of recursion that can occur
are much more general than the patterns seen in a programming language.
For example, if your formalism is a context free grammar, and you have
the productions: A -> BC, B -> AC, A -> a, B -> b, C -> c, this is a
formal language with recursion, but it is awfully hard to see how the
recursion illustrated here has anything to do with nesting in a
programming language. Nothing requires the kind of recursion in NL to
be a sensible kind of recursion for programming languages -- for
instance, it is extremely difficult to nest more than 2 "if"-clauses in
English in the same main clause, and commands like "for each book in
the bookshelf, write down the title" are already hard enough to
interpret without multiple "for"-clauses. But English can support
deeply embedded relative clauses (e.g. "the guy that knows a linguist
who eats ice cream that is made with coffee from this farm that his
brother owns in Guatemala"), a kind of recursion whose analogue in a
programming language I'm not sure of (perhaps a.b.c.d.e,
where a-d are objects in an OO language), as well as a range of other
kinds of recursion, again, very few analogous to anything commonly seen
in a programming language.
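
(To make the toy grammar concrete, here is a throwaway Python sketch --
purely my own illustration, nothing from the linguistics literature --
that randomly expands it, showing that the recursion is mutual between
A and B rather than anything like block nesting:

import random

GRAMMAR = {
    "A": [["B", "C"], ["a"]],
    "B": [["A", "C"], ["b"]],
    "C": [["c"]],
}

def expand(symbol):
    # Recursively rewrite a nonterminal; lowercase symbols are terminals.
    if symbol not in GRAMMAR:
        return symbol
    return "".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

print(expand("A"))   # e.g. "a", "bc", "acc", "bccc", ...

The derivations recurse, but the strings that come out -- a, bc, acc,
bccc, ... -- show nothing like the bracketed nesting of a programming
language.)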

Gregory Kulczycki

unread,
Jul 14, 2006, 12:10:27 AM7/14/06
to
steve....@gmail.com wrote:
> Damien Neil wrote:
>> L. Ross Raszewski wrote:
>>> Gregory Kulczycki wrote:
>>>>
>>>> That said, there are still researchers in academia, government, and
>>>> industry that are trying to make formal specification and verification
>>>> a practical reality. The ultimate goal would be a "verifying
>>>> compiler" -- a compiler that can automatically prove that a program is
>>>> correct with respect to its specification.
>>>> (http://vstte.ethz.ch/Files/hoare-misra.pdf)
>>>
>>> Seems to me that for this to be possible, the specification would have
>>> to itself be a program. In which case it's vacuously true.
>>
>> There is a school of thought (which I happen to subscribe to) which says
>> that program source code is exactly that: a formal specification.
>
> I share the suspicion that specifying to the verifier what it's
> supposed to verify is difficult to distinguish from writing the program
> in the first place. However, I'm very much a novice in such matters,
> and I wonder if there's more to it than that. Perhaps Gregory can
> elaborate?

The idea (as Ross and JDC alluded to in their posts) is that the
specification should be easier to understand than the program code
because the spec just needs to tell you *what* the program (or
component) does, not *how* it does it. I think of Javadocs as good
examples of informal specifications. A formal spec would not give you
any more information than a Javadoc would, it would just represent
that information mathematically. There's nothing inherently wrong
with saying that the Java code for a class is a kind of formal
specification because it does describe what a class does, but the
code gives you *too much* information. A good specification provides
a suitable abstraction for the class and hides the implementation
information that you don't need to know in order to use it.

> (The paper that Gregory referenced is interesting, if blue-sky and
> perhaps a bit over-ambitious, but I do not think it answers the
> question.)

The Hoare-Misra proposal is definitely blue-sky stuff. Still,
sometimes you need to push people a little bit for them to show you
what they can achieve. As an aside, it surprised me recently when a
room full (~20) of CS grad students had heard of Knuth and Dijkstra,
but not Tony Hoare. Hoare laid the foundations for formal reasoning,
concurrent programming, and programming language design. For better
or worse, he seems to be best known for a nifty little algorithm he
cooked up to help him look up dictionary words efficiently -- he
called it quicksort.

Greg

Damien Neil

unread,
Jul 14, 2006, 4:40:24 AM7/14/06
to
Gregory Kulczycki <gre...@vt.edu> wrote:
> The idea (as Ross and JDC alluded to in their posts) is that the
> specification should be easier to understand than the program code
> because the spec just needs to tell you *what* the program (or
> component) does, not *how* it does it. I think of Javadocs as good
> examples of informal specifications. A formal spec would not give you
> any more information than a Javadoc would, it would just represent
> that information mathematically. There's nothing inherently wrong
> about saying that the Java code for a class is a kind of formal
> specification because it does describe what a class does, but the
> code gives you *too much* information. A good specification provides
> a suitable abstraction for the class and hides the implementation
> information that you don't need to know in order to use it.

That depends on the language you're using, though.

Once upon a time, "sort this list" was a high-level specification.
Nowadays the program code to perform the operation is often shorter
than the English words describing it.

If the specification is good enough, it's generally easier to write a
compiler for it than it is to formally prove that your lower level code
implements the spec.

steve....@gmail.com

unread,
Jul 14, 2006, 9:33:41 AM7/14/06
to
Gregory Kulczycki wrote:

> The idea (as Ross and JDC alluded to in their posts) is that the
> specification should be easier to understand than the program code
> because the spec just needs to tell you *what* the program (or
> component) does, not *how* it does it.

I think the point was that the verifier has to figure out the "how"
anyway, so why not consider it a compiler, which compiles the spec
(which is its program), and do away with the program-to-be-verified
altogether?

One possible answer is that it's easy to write a spec that runs slow,
which we can use to check whether or not our program (which runs fast)
behaves as it's supposed to. But again, I'm very new to this, and I
expect I'm missing the point. I can only hope that my question makes
clear the confusion.

John Prevost

unread,
Jul 14, 2006, 1:54:34 PM7/14/06
to

steve....@gmail.com wrote:
> One possible answer is that it's easy to write a spec that runs slow,
> which we can use to check whether or not our program (which runs fast)
> behaves as it's supposed to. But again, I'm very new to this, and I
> expect I'm missing the point. I can only hope that my question makes
> clear the confusion.

That's a good model. Another one is when you have an old
implementation of something, as well as a new implementation that is
faster--but you want to be sure that the two implementations really
produce the same results.

You can also write a spec that describes what you want without
describing *how* to do it. (In fact, most specs of programs should do
that--it is a continual frustration of mine when somebody who's asked
to give me requirements for a program tells me how the program should
do its work, not what they want the results to be.) As an example, you
can describe an array that is sorted like so:

for all i in [1..n-1], array[i-1] <= array[i]

This doesn't describe how to sort an array, much less talk about
whether the output is a version of the input, but it does describe what
it means for an array to be in sorted order.

But in any case, the "slow program as a spec for a fast program" idea is
a pretty reasonable way to think about things, especially because slow
(or otherwise inelegant) programs can often be written much more easily.
Imagine the following definition of a queue:

Empty : 'a Queue.
Add : 'a -> 'a Queue -> 'a Queue.
Head : 'a Queue -> 'a.
Tail : 'a Queue -> 'a Queue.
Remove : 'a Queue -> ('a * 'a Queue).

Head Empty => Error.
Tail Empty => Error.
Head (Add x Empty) => x.
Tail (Add x Empty) => Empty.
Head (Add x z) => Head z.
Tail (Add x z) => Add x (Tail z).
Remove q => (Head q, Tail q).

The above definition describes exactly what a queue ought to do, in
terms of correctness. Any sequence of add and remove operations will
give you the "right" answer. However, there are very few systems in
which the above definition would produce an *efficient* implementation.
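
(To show just how directly those equations can be followed, here is a
deliberately naive transcription into Python -- my own rendering, for
illustration only:

class SimpleQueue:
    # A queue is either Empty (rest is None) or Add(item, rest), where
    # item is the most recently added element.
    def __init__(self, item=None, rest=None):
        self.item, self.rest = item, rest

    def is_empty(self):
        return self.rest is None            # only Empty has no rest

    def add(self, x):
        return SimpleQueue(x, self)         # Add x q

    def head(self):
        if self.is_empty():
            raise IndexError("Head Empty => Error")
        if self.rest.is_empty():
            return self.item                # Head (Add x Empty) => x
        return self.rest.head()             # Head (Add x z) => Head z

    def tail(self):
        if self.is_empty():
            raise IndexError("Tail Empty => Error")
        if self.rest.is_empty():
            return self.rest                # Tail (Add x Empty) => Empty
        return self.rest.tail().add(self.item)   # Tail (Add x z) => Add x (Tail z)

    def remove(self):
        return self.head(), self.tail()     # Remove q => (Head q, Tail q)

Head walks the whole chain and tail rebuilds it, so this is exactly the
kind of "correct but slow" implementation meant above.)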

Still, once you have one implementation that you know is correct, you
can move on to proving its equivalence with a more complicated
implementation--for example, by defining what it means for a queue
state in the two implementations to be equivalent, and then by proving
that:

for all (a : simple implementation value), (b : complex implementation
state): if a is equivalent to b, then "the value Add x a" is equivalent
to "the state of b after push x b".

And that's where the functional language families (or generally,
languages with more formal definitions) win over those languages that
are just sort of thrown together and made to work: it's easier to
mathematically describe the process of evaluation, which makes it
easier to prove what a program "means"--in this case, whether two
pieces of code always produce "equivalent" results.


On the other side of things, I am happy to admit that more
theoretically interesting languages are not necessarily better for
getting work done (though they aren't necessarily worse, either).

And, it's also not like functional programmers are all *that* much more
likely to prove that they've correctly implemented a queue. :)

John.

Adam Thornton

unread,
Jul 17, 2006, 1:03:10 AM7/17/06
to
In article <1152484544.2...@s13g2000cwa.googlegroups.com>,
rpresser <rpre...@gmail.com> wrote:
>I believe the previous post was a forgery. It was not posted by Adam
>Thornton; in fact it was a rare reappearance of The Pissing Bandit.
>
>All hail TPB!

Alas, no.

I believe TPB to have been imprisoned in Gitmo or somewhere more secret
and less salutary, as he was not sufficiently circumspect in expressing
his objections to the policies of those in power. Remember that, now,
in the US of A, dissent is treason and is punishable by vanishing. As,
I add, is only right and proper. Indeed it is unthinkable to imagine
that anything otherwise should be the case. All hail an untrammeled
executive! The only thing that stands between us and mere anarchy
loosed upon the world, eh?

Perhaps he will be released--or at least we will hear of his lamentable
fate--after the cessation of hostilities. I would miss him, except that
he has been declared an unperson, so I must merely ask, the Pissing Who?

Adam

Gregory Kulczycki

unread,
Jul 18, 2006, 12:06:29 AM7/18/06
to
John Prevost wrote:

> Imagine the following definition of a queue:
>
> Empty : 'a Queue.
> Add : 'a -> 'a Queue -> 'a Queue.
> Head : 'a Queue -> 'a.
> Tail : 'a Queue -> 'a Queue.
> Remove : 'a Queue -> ('a * 'a Queue).
>
> Head Empty => Error.
> Tail Empty => Error.
> Head (Add x Empty) => x.
> Tail (Add x Empty) => Empty.
> Head (Add x z) => Head z.
> Tail (Add x z) => Add x (Tail z).
> Remove q => (Head q, Tail q).

A nice algebraic specification. Here's a model-based specification of a
queue that might be more palatable to imperative programmers:

public interface Queue<Item> {

model MathString<Item> this;
initialization ensures |this| = 0;

public void enqueue(Item x);
ensures this = #this + [#x];

public Item dequeue();
requires |this| > 0;
ensures #this = [result] + this;

public int length();
ensures result = |this|;
}

The *model* clause states that a queue object can be viewed as a
mathematical string of items. Requires and ensures clauses give pre- and
post-conditions for each method. [x] represents a unary string containing
the object x, + denotes string concatenation, #x (in postconditions)
denotes the "old" value of x, and the keyword *result* represents the
value returned by the method.
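
The same contract can also be checked at runtime instead of proved. A
rough Python analogue (my own shorthand -- here the concrete state and
the mathematical-string model are deliberately the same plain list, so
the assertions line up one-for-one with the clauses above):

class CheckedQueue:
    def __init__(self):
        self.items = []                       # initialization: |this| = 0

    def enqueue(self, x):
        old = list(self.items)                # #this
        self.items.append(x)
        assert self.items == old + [x]        # ensures this = #this + [#x]

    def dequeue(self):
        assert len(self.items) > 0            # requires |this| > 0
        old = list(self.items)                # #this
        result = self.items.pop(0)
        assert old == [result] + self.items   # ensures #this = [result] + this
        return result

    def length(self):
        return len(self.items)                # ensures result = |this|

Of course, when the model and the implementation coincide like this the
checks are nearly vacuous, which circles back to Ross's objection; the
interesting case is checking a genuinely different, faster
implementation against the same model.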

Greg

Gene Wirchenko

unread,
Aug 11, 2006, 1:13:57 AM8/11/06
to
steve....@gmail.com wrote:

>Graham Nelson wrote:

[snip]

>> Programmers don't like to code for overly fussy compilers, but they do
>> like good error detection[.]
>
>His argument is not against programmers in general, but against a
>certain lazy-headed breed. These misguided souls will *confuse* (not
>equate) bad error-checking with ease-of-use. They will prefer to
>program badly, even produce code that does not always do what is
>intended, rather than take the trouble to get it right.
>
>I don't know the literature, so this is a wild guess, but I bet

...Dijkstra wouldn't like the things that I don't like.

>Dijkstra wouldn't like rule-oriented programming for roughly the same
>reason: a buggy knowledge-base will run, get things right most of the
>time, but sometimes get things wrong, where for example precedence
>means something other than intended. (How common a problem this is in
>I7 I don't know.)

Yup. My summary appears to be accurate.

And if I am mistaken, I will just call what I said a wild guess.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.

steve....@gmail.com

unread,
Aug 11, 2006, 10:24:41 AM8/11/06
to

Gene Wirchenko wrote:
> steve....@gmail.com wrote:
>
> >Graham Nelson wrote:
>
> [snip]
>
> >> Programmers don't like to code for overly fussy compilers, but they do
> >> like good error detection[.]
> >
> >His argument is not against programmers in general, but against a
> >certain lazy-headed breed. These misguided souls will *confuse* (not
> >equate) bad error-checking with ease-of-use. They will prefer to
> >program badly, even produce code that does not always do what is
> >intended, rather than take the trouble to get it right.
> >
> >I don't know the literature, so this is a wild guess, but I bet
>
> ...Dijkstra wouldn't like the things that I don't like.

Ha.

> >Dijkstra wouldn't like rule-oriented programming for roughly the same
> >reason: a buggy knowledge-base will run, get things right most of the
> >time, but sometimes get things wrong, where for example precedence
> >means something other than intended. (How common a problem this is in
> >I7 I don't know.)
>
> Yup. My summary appears to be accurate.

Your jokey signature indicates that you have some concept of how bias
works, but I honestly don't think it's only my bias at work here: I'm
picking up a pretty clear statement he's making (against bug-tolerance,
which amounts to automated debugging), and trying to apply it fairly to
another situation. It's a hypothesis you can prove or disprove.

If Dijkstra came out against logic programming, I think it's likely to
be for the abovementioned reason, that it's easy to make rules which
produce false inferences, and it's difficult to debug because it's
impossible to sufficiently mastermind the system.

If, however, Dijkstra came out in favor of logic programming, then I
expect we have not yet uncovered the real nuance of this part of his
argument against NLProgramming, as it seems to apply equally to logic
programming.

I'd love to hear from someone who knows the literature, the shape of
whatever logic-programming debate there was.
