
returning from a function (Ex: Re: some small proposed changes to standard)


Vassil Nikolov

Jul 22, 1999, 3:00:00 AM
to comp.la...@list.deja.com
Mark Stickel wrote: [1999-07-20 21:20 -0700]
(to the x3...@ai.sri.com list)

Among other items:

> 8. Add implicit (BLOCK DEFUN ...) around body of defuns
>
> (BLOCK function-name ...) is automatically added around the body of
> defuns now; this enables (RETURN-FROM function-name ...) to be used.
> It would be convenient for (BLOCK DEFUN ...) to be added as well, so
> that (RETURN-FROM DEFUN ...) can be used. This obviates the task of
> editing RETURN-FROM forms when the name of the function is changed
> and facilitates writing macros that produce defuns with RETURN-FROM
> forms.

Making any symbol (apart from NIL) a special case for RETURN-FROM
seems inelegant. Wouldn't it be better to have a macro called (say)
FRETURN which can be used only lexically within a function definition
(a DEFUN form, maybe a LAMBDA form as well) as if

(defun foo (...)
  ...)

is implicitly

(defun foo (...)
  (block #1=#:function
    (macrolet ((freturn (&optional value) `(return-from #1# ,value)))
      ...)))
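The proposed pattern can be tried by hand today; here is a minimal, self-contained sketch (the function name HALF-OR-BAIL and the keyword result are illustrative, not part of the proposal):

```lisp
;; Hand-expanded version of the proposed implicit FRETURN block.
;; The #1=#:... read-time label gives BLOCK and RETURN-FROM the
;; same fresh uninterned symbol, so no user block is captured.
(defun half-or-bail (x)
  (block #1=#:half-or-bail
    (macrolet ((freturn (&optional value) `(return-from #1# ,value)))
      (when (oddp x)
        (freturn :odd))   ; early exit without naming the function
      (/ x 2))))

;; (half-or-bail 7) => :ODD
;; (half-or-bail 8) => 4
```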


Vassil Nikolov
Permanent forwarding e-mail: vnik...@poboxes.com
For more: http://www.poboxes.com/vnikolov
Abaci lignei --- programmatici ferrei.

Vassil Nikolov

Jul 24, 1999, 3:00:00 AM
to comp.la...@list.deja.com
Vassil Nikolov wrote: [1999-07-22 20:29 +0300]

[...]


> FRETURN which can be used only lexically within a function definition
> (a DEFUN form, maybe a LAMBDA form as well)

[...]

In fact I was wrong to suggest LAMBDA forms since there might be
a macro that expands into a LAMBDA form which would shadow the
DEFUN and cause FRETURN to return not from the block intended by
the programmer.

As to FRETURN from DEFUN, here's an implementation in case
someone wants to try it out.

----- cut here -----
;;;; freturn -*- Mode: lisp; Package: lux -*-
;;; Purpose: FRETURN returns from a function.
;;; Exports: DEFUN, FRETURN.
;;; Created: VaN [1999 Jul 23]
;;; Author: Vassil Nikolov <vnik...@poboxes.com>
;;; Warning: no warranty!

#| Implementation:

(lux:defun foo (...)
  [[ declaration... | doc-string ]]
  form...)

->

(cl:defun foo (...)
  [[ declaration... | doc-string ]]
  (defun-body-block #:foo
    form...))

->

(cl:defun foo (...)
  [[ declaration... | doc-string ]]
  (block #1=#:foo
    (macrolet ((freturn (&optional result) `(return-from #1# ,result)))
      form...)))

|#


(cl:defpackage lisp-user-extensions
  (:use cl)
  (:nicknames lux)
  (:shadow defun)
  (:export defun freturn))
(cl:in-package lux)


;;; auxiliaries:

(cl:defun decl-or-doc-string-p (item tail)
  "Returns true if item is a declaration or a doc-string.
Used to process function bodies. The second argument is
the rest of the body after item."
  (typecase item
    (cons (eql 'declare (first item)))  ;no check for malformed ones
    (string tail)  ;a string is not a doc-string if it comes last
    (otherwise nil)))

(cl:defun body-forms (defun-body)
  "Returns the sublist of defun-body that contains just the
forms (skipping any declarations and doc-string)."
  ;; Does not check if there are two or more doc-strings.
  ;; I could do here with a function that is to MEMBER-IF as
  ;; MAPLIST is to MAPCAR.
  (do ((l defun-body (rest l)))
      ((endp l) '())
    (unless (decl-or-doc-string-p (first l) (rest l))
      (return l))))

(defmacro defun-body-block (name &body defun-body-forms)
  "Wraps a block around a function body from which FRETURN
may be used to return."
  `(block ,name
     (macrolet ((freturn (&optional (result 'nil))
                  `(return-from ,',name ,result)))
       ,@defun-body-forms)))


;;; exports:

(defmacro freturn (&optional (result 'nil))
  "Returns from the immediately enclosing function definition
introduced by LUX:DEFUN. Within (LUX:DEFUN FOO ...),
 (FRETURN form) === (RETURN-FROM FOO form).
Rationale: if the name of the function changes, FRETURN forms
don't need to change."
  (declare (ignore result))
  (error "FRETURN may be used only within a DEFUN form."))

(defmacro defun (name lambda-list &body dds+forms)
  "This extends CL:DEFUN by allowing the use of LUX:FRETURN to return
from the function."
  (let ((block-name (make-symbol (symbol-name name)))
        (forms (body-forms dds+forms)))
    `(cl:defun ,name ,lambda-list
       ;; declarations and doc-string:
       ,@(ldiff dds+forms forms)
       ;; forms (this is silly but harmless if there are none):
       (defun-body-block ,block-name ,@forms))))


;;; eof.
----- end -----
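A short sketch of how the package above would be exercised (a hypothetical session; CLASSIFY is an invented example, and the file is assumed to compile cleanly first):

```lisp
;; After loading the LUX file above:
(lux:defun classify (n)
  "Return a keyword describing the sign of N."
  (when (minusp n)
    (lux:freturn :negative))  ; expands via DEFUN-BODY-BLOCK's MACROLET
  (if (zerop n) :zero :positive))

;; (classify -3) => :NEGATIVE
;; (classify 0)  => :ZERO
;; (classify 9)  => :POSITIVE

;; Outside a LUX:DEFUN the global macro signals an error at
;; macro-expansion time:
;; (lux:freturn 1) => error "FRETURN may be used only within a DEFUN form."
```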

Kent M Pitman

Jul 24, 1999, 3:00:00 AM
Vassil Nikolov <vnik...@poboxes.com> writes:

> Vassil Nikolov wrote: [1999-07-22 20:29 +0300]
>
> [...]
> > FRETURN which can be used only lexically within a function definition
> > (a DEFUN form, maybe a LAMBDA form as well)
> [...]
>
> In fact I was wrong to suggest LAMBDA forms since there might be
> a macro that expands into a LAMBDA form which would shadow the
> DEFUN and cause FRETURN to return not from the block intended by
> the programmer.
>
> As to FRETURN from DEFUN, here's an implementation in case
> someone wants to try it out.

I'm pretty ambivalent about the conceptual goodness of this device.
A common reason to want it is that if you rename the function, you
have to rename the RETURN-FROM. Some people might find that painful,
but it's a protection against taking:

(defun foo (x)
  (+ (if (zerop x) (return-from foo 7) x) 3))

(defun bar (z)
  (+ (foo z) 2))

and then (by sloppy editing) grabbing the body and dropping it into

(defun bar (z)
  (+ ((lambda (x) (+ (if (zerop x) (return-from foo 7) x) 3)) z)
     2))

and having it just compile with no errors. With RETURN-FROM, the compiler
will flag the unknown block FOO, a reminder that you've moved to a new
context and need to identify the FOO boundary. If you'd instead used the
FRETURN idiom, you'd find 7, not 9, being returned from BAR when z is 0
once you substituted things, and you'd get no compilation warning that
this was going to happen.

In fairness, RETURN (vs RETURN-FROM) has this same problem. It's very
easy to substitute something that has a (RETURN ...) into a new context
that has an extra NIL block "helpfully" inserted and not get any warning
about the fact that it's a different NIL block. So it's not like the
language totally avoids things that cause problems of the kind I'm citing;
I just think it's well to learn from these problems and ask hard questions
about whether you want to institutionalize more of them...
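To make the RETURN hazard concrete, here is an illustrative sketch (CHECK and CHECK-THRICE are invented names, not from the thread):

```lisp
;; Intended behavior: RETURN exits DOLIST's implicit NIL block
;; as soon as a negative element is seen.
(defun check (xs)
  (dolist (x xs :all-ok)
    (when (minusp x) (return :bad))))

;; Sloppy paste: a DOTIMES was "helpfully" wrapped around the test.
;; RETURN now exits the *inner* NIL block (the DOTIMES), its value
;; is discarded, and the outer loop keeps going -- no warning at all.
(defun check-thrice (xs)
  (dolist (x xs :all-ok)
    (dotimes (i 3)
      (when (minusp x) (return :bad)))))

;; (check '(1 -2 3))        => :BAD
;; (check-thrice '(1 -2 3)) => :ALL-OK   ; silently different
```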


Tom Breton

Jul 24, 1999, 3:00:00 AM
Kent M Pitman <pit...@world.std.com> writes:

> A common reason to want it is that if you rename the function, you
> have to rename the RETURN-FROM. Some people might find that painful,
> but it's a protection against taking:
>
> (defun foo (x)
>   (+ (if (zerop x) (return-from foo 7) x) 3))
>
> (defun bar (z)
>   (+ (foo z) 2))
>
> and then (by sloppy editing) grabbing the body and dropping it into
>
> (defun bar (z)
>   (+ ((lambda (x) (+ (if (zerop x) (return-from foo 7) x) 3)) z)
>      2))

I just wanted to point out that this is a job for mechanical code
transformation, which doesn't forget about blocks. This of course
makes preserving comments desirable.

For the benefit of those who suggested there were no good uses for it.

FWIW, programming by cut-and-paste is always going to be at risk for a
multitude of "capture errors", so I'm not sure this is a valid
drawback to return-from.

--
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html

Kent M Pitman

Jul 25, 1999, 3:00:00 AM
Tom Breton <t...@world.std.com> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > A common reason to want it is that if you rename the function, you
> > have to rename the RETURN-FROM. Some people might find that painful,
> > but it's a protection against taking:
> >
> > (defun foo (x)
> >   (+ (if (zerop x) (return-from foo 7) x) 3))
> >
> > (defun bar (z)
> >   (+ (foo z) 2))
> >
> > and then (by sloppy editing) grabbing the body and dropping it into
> >
> > (defun bar (z)
> >   (+ ((lambda (x) (+ (if (zerop x) (return-from foo 7) x) 3)) z)
> >      2))
>
> I just wanted to point out that this is a job for mechanical code
> transformation, which doesn't forget about blocks. This of course
> makes preserving comments desirable.
>
> For the benefit of those who suggested there were no good uses for it.
>
> FWIW, programming by cut-and-paste is always going to be at risk for a
> multitude of "capture errors", so I'm not sure this is a valid
> drawback to return-from.

Cut and paste is going to be there for a long time to come. Not just
because of impoverished tools, either. It's often hard for people to
articulate subtle issues like this, and so people often over-aggressively
assume that formality should win the day, but Dick Waters once explained
this issue well to me. I'll try in some clumsy way to reproduce what he
said:

Emacs, he explained to me, is powerful exactly because it simultaneously
allows access to multiple representations at the same time, tolerating
quick switching back and forth between "s-expression commands", "word
commands", "character commands", "line commands", etc. So I can go
"up a line" even though Lisp has no "line structure". Or I can go forward
an s-expression even though text has no s-expression structure.
This leads to a very concise language for describing edits. Yes, it's
possible to have more rigid sexp-only or text-only tools do your editing,
but such tools are always going to have the problem that you can see an
"expression style" and cannot quickly use it because the more rigid tool
does not accommodate it, so you'll be forced to say something more clumsy.

Yes, a more rigid tool will be more likely correct. But so too will it
occasionally be less efficient to use in other ways. So people will resist
it.


Christopher B. Browne

Jul 25, 1999, 3:00:00 AM
On Sun, 25 Jul 1999 03:10:46 GMT, Kent M Pitman <pit...@world.std.com>
posted:
>Yes, a more rigid tool will be more likely correct. But so too will it
>occasionally be less efficient to use in other ways. So people will resist
>it.

The W3C "HTML editor," Amaya, takes much this approach. As far as I
can tell, it can only create "valid" HTML. Unfortunately, it is
almost unusable as such.

I'd draw three parallels:

a) Godel's "Incompleteness" theorem.

Bertrand Russell was trying to come up with a "Principia Mathematica,"
where some limited set of mathematical axioms could be used to
construct all the rest of the things that mathematics could prove.
That is, pretty much, the reductionist view where "If you come up with
a suitable set of perfect tools, you can build everything, and if you
have the right set of transitions, it's done perfectly."

Kurt Godel established that in any propositional system, there would
be true statements that could not be proven within the system's
context.

Thus, the idea is broken from a purely theoretical perspective.

I would suggest the connection that it may be *impossible* to create
an editor that *guarantees* that (for instance) an SGML document is
valid at every step.

b) ISO 9000 and "Dilbertian Managers"

Modern business enterprises have fallen into the same trap; they
figure that if you write up a suitable set of "Quality Management"
documents, whether ISO 900x or for "Malcolm Baldrige," you can, via
clearly defined procedures, prevent problems from occurring, and
thereby produce better results.

As with mathematics, where theorems sometimes prove useful results,
there *can* be merit to this, but, as Russell found, creating a
"Principia Manageria" is largely a fool's errand. Scott Adams touched
the pulse of this when he created Dilbert and his coworkers.

Technically illiterate managers hearing schemes for "preventing"
programming errors from other technically illiterate managers is a
good example of this.

Amaya isn't an example of this; it is an example of some researchers
trying out something that is academically interesting, but probably
not viable as a technology.

c) Mathematical programming and "relaxation methods."

One of the major classes of methods of solving mathematical
programming problems is that of Lagrangian relaxation, where the basic
idea is to take a problem that involves constraints and objectives,
and relax some of the constraints, transforming them into objectives.

This is particularly useful when you are trying to solve a problem
where you don't start with a feasible solution; you relax the problem
by saying "We'll assume that some initial value is good enough for
now, and then attach hefty costs to the violations of constraints so
as to encourage the solution scheme to move towards feasible
solutions."

In effect, this is what a good Lisp editor can do to help the
programmer; you can do whatever you like to the lines of text (which
can make the program incorrect, e.g. "infeasible"), but by providing
things like "intelligent" indentation and parenthesis counting, the
editor provides feedback to encourage the programmer to make the
program look "more correct," at least from a "syntax/form" point of
view. (There is obviously no solution to the problem of writing
Stupid Code.)

--
"Q: Someone please enlighten me, but what are they packing into NT5 to
make it twice the size of NT4/EE? A: A whole chorus line of dancing
paperclips." -- <mcg...@otter.mbay.net>
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/lsf.html>

Erik Naggum

Jul 25, 1999, 3:00:00 AM
* cbbr...@news.brownes.org (Christopher B. Browne)

| I would suggest the connection that it may be *impossible* to create an
| editor that *guarantees* that (for instance) an SGML document is valid at
| every step.

no, this is trivial to create. having such an "editor" would make design
of DTDs almost doable, since the "editor" would have to create a complete
document for you and then let the user make branch decisions, showing
which options were available at all times. (what's not so trivial is
making such an "editor" comfortable to use for humans.)

#:Erik
--
suppose we blasted all politicians into space.
would the SETI project find even one of them?

Craig Brozefsky

Jul 25, 1999, 3:00:00 AM
Erik Naggum <er...@naggum.no> writes:

> * cbbr...@news.brownes.org (Christopher B. Browne)
> | I would suggest the connection that it may be *impossible* to create an
> | editor that *guarantees* that (for instance) an SGML document is valid at
> | every step.
>
> no, this is trivial to create. having such an "editor" would make design
> of DTDs almost doable, since the "editor" would have to create a complete
> document for you and then let the user make branch decisions, showing
> which options were available at all times. (what's not so trivial is
> making such an "editor" comfortable to use for humans.)

What do you think of the approach of psgml-mode?

I find it works pretty well for putting together SGML documents. I've
written some fairly large ones with it, mostly analysis and
requirements specs for some of the larger projects my employer has
undertaken.

It offers you a list of elements that are valid at any point in the
document, but does not force the document to be valid at all times.
For instance when putting together a varlist in Docbook documents, I
usually just put open and close tags in as I need them, as opposed to
creating a valid structure for the varlist and then filling it in.
psgml-mode lets me do that and makes it easy to close up the structure
as I pass thru it. At various points the document is invalid, but it
does seem to follow my normal thought process for writing a document.

obLisp: I have some elisp code for inserting SGML templates into a
document. The idea was to provide our HTML monkeys who usually work
with BBEDIT or something similar a set of buttons for inserting large
hunks of HTML which they would then fill in. To make it even easier
on them, I also have code which will insert the template with elisp
code embedded in it, so as they insert it, it asks them questions
about what should go where. Overall psgml-mode is pretty sweet.

--
Craig Brozefsky <cr...@red-bean.com>
Free Scheme/Lisp Software http://www.red-bean.com/~craig
I say woe unto those who are wise in their own eyes, and yet
imprudent in 'dem outside -Sizzla

Tim Bradshaw

Jul 25, 1999, 3:00:00 AM
* Christopher B Browne wrote:
> On Sun, 25 Jul 1999 03:10:46 GMT, Kent M Pitman <pit...@world.std.com>
> posted:
> >Yes, a more rigid tool will be more likely correct. But so too will it
> >occasionally be less efficient to use in other ways. So people will resist
> >it.

> The W3C "HTML editor," Amaya, takes much this approach. As far as I
> can tell, it can only create "valid" HTML. Unfortunately, it is
> almost unusable as such.

> [Interesting stuff removed]

I think that it would be worthwhile looking at the Xerox
`SEdit' editor before dismissing structure editors. It is nearly 10
years since I used it, but I remember it as doing a really remarkably
good job. I don't want to claim that it was better than Emacs,
because I can't remember what it did exactly, but it was really a lot
better than any of the more recent structure editing tools I've seen
(mostly for HTML/SGML/XML).

Unfortunately I expect that it is entirely unavailable now, as the
machines it ran on were even more of a dead-end than the other Lisp
machines.

(The drawing package on those machines was also very good. The text
editor was rubbish though.)

--tim

Kent M Pitman

Jul 25, 1999, 3:00:00 AM
cbbr...@news.brownes.org (Christopher B. Browne) writes:

> On Sun, 25 Jul 1999 03:10:46 GMT, Kent M Pitman <pit...@world.std.com>
> posted:
> >Yes, a more rigid tool will be more likely correct. But so too will it
> >occasionally be less efficient to use in other ways. So people will resist
> >it.
>
> The W3C "HTML editor," Amaya, takes much this approach. As far as I
> can tell, it can only create "valid" HTML. Unfortunately, it is
> almost unusable as such.

> [...]


> I would suggest the connection that it may be *impossible* to create
> an editor that *guarantees* that (for instance) an SGML document is
> valid at every step.

Yes, people can tolerate intermediate state that is ill-formed if it
seams up ok at the end. These tools often don't understand that.
The shortest path between two states may go through an illegal state,
and a tool that can't manage that is expressionally hampered.

In Maclisp, there was an s-expression editing tool called the
"kludgey binford editor" that edited s-expressions in a token-based way
where you could say teco-like commands that inserted and deleted and
moved over s-expressions, but it also let you just insert a single
paren and then go somewhere else and insert another paren (not necessarily
at the same level) and then you could do a command that asked that it
re-compute paren matches and it would rationalize the tree to make the
two parens you had inserted become "real parens" (that is, make the
tree work). That's a cool accommodation to both structured and
non-structured. I think it was probably the case that INTERLISP
had things like this in its DWIM facility, but I never used it so don't know
for sure. But Emacs was much better because it basically always manages
the general case and only imposes syntax requirements when you issue
a syntax-oriented command (like forward-sexp). It's easy to make
"Save File" be a syntax-oriented command so you can't save ill-formed
buffers, but you don't have to make it so.

[I won't comment on the rest of your analogies but enjoyed reading them.]

Christopher B. Browne

Jul 25, 1999, 3:00:00 AM
On 25 Jul 1999 16:17:56 +0000, Erik Naggum <er...@naggum.no> posted:

>* cbbr...@news.brownes.org (Christopher B. Browne)
>| I would suggest the connection that it may be *impossible* to create an
>| editor that *guarantees* that (for instance) an SGML document is valid at
>| every step.
>
> no, this is trivial to create. having such an "editor" would make design
> of DTDs almost doable, since the "editor" would have to create a complete
> document for you and then let the user make branch decisions, showing
> which options were available at all times. (what's not so trivial is
> making such an "editor" comfortable to use for humans.)

(The latter, of "comfortability," is what I meant.)

Amaya may guarantee the generation of "valid" HTML. But if it's too
annoying to use, people won't use it, and will prefer some other tool
that is not so "guaranteed valid."

--
"It has every known bug fix to everything." -- KLH (out of context)
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>

Tom Breton

Jul 25, 1999, 3:00:00 AM

> Yes, a more rigid tool will be more likely correct. But so too will it
> occasionally be less efficient to use in other ways. So people will resist
> it.

Looks to me like you've jumped from "mechanical code transformation"
to "ONLY mechanical code transformation".

Kent M Pitman

Jul 25, 1999, 3:00:00 AM
Tom Breton <t...@world.std.com> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > Yes, it's
> > possible to have more rigid sexp-only or text-only tools do your editing,
> > but such tools are always going to have the problem that you can see an
> > "expression style" and cannot quickly use it because the more rigid tool
> > does not accomodate it, so you'll be forced to say something more clumsy.
>
> > Yes, a more rigid tool will be more likely correct. But so too will it
> > occasionally be less efficient to use in other ways. So people will resist
> > it.
>
> Looks to me like you've jumped from "mechanical code transformation"
> to "ONLY mechanical code transformation".

Because you were implying that I was using the wrong tool to do my
editing in response to my claim about FRETURN being a problem for
cutting and pasting. By suggesting that I should not ever cut and paste,
you are suggesting ONLY mechanical code transformation -- or, at minimum,
no textual/semantics-free code transformation. If you admit the possibility
of any such semantics-free transformation, you admit the possibility of
the error I was suggesting.

Tom Breton

Jul 26, 1999, 3:00:00 AM
Kent M Pitman <pit...@world.std.com> writes:

> >
> > Looks to me like you've jumped from "mechanical code transformation"
> > to "ONLY mechanical code transformation".
>
> Because you were implying that I was using the wrong tool to do my
> editing in response to my claim about FRETURN being a problem for
> cutting and pasting. By suggesting that I should not ever cut and paste,

I never said that. I pointed out that cut and paste will always have
risks.

Kent M Pitman

Jul 26, 1999, 3:00:00 AM
Tom Breton <t...@world.std.com> writes:

>
> Kent M Pitman <pit...@world.std.com> writes:
>
> > >
> > > Looks to me like you've jumped from "mechanical code transformation"
> > > to "ONLY mechanical code transformation".
> >
> > Because you were implying that I was using the wrong tool to do my
> > editing in response to my claim about FRETURN being a problem for
> > cutting and pasting. By suggesting that I should not ever cut and paste,
>
> I never said that. I pointed out that cut and paste will always have
> risks.

You said:

> FWIW, programming by cut-and-paste is always going to be at risk for a
> multitude of "capture errors", so I'm not sure this is a valid
> drawback to return-from.

I've just asserted that the use of cut-and-paste is inevitable
and appropriate. If the use of cut-and-paste is inevitable and
appropriate, then worrying about errors that result from it is
appropriate and necessary. To argue against this, you must argue
either that one must not use cut-and-paste (which I have offered
at least one argument to semi-refute) or you must acknowledge that
it's reasonable to worry about errors that people can plausibly
be expected to make.

I think the art and science of language design is not about the easy
part of just defining things to have clear semantics; it's about the
software equivalent of worrying that users will confuse floppy disk
drives with coffee cup holders, and understanding that the entire
burden of avoiding that problem is not in just getting better
customers but really coming to grips with what people really do, not
what you wish they would do. You're welcome to make your name
designing languages that don't acknowledge this truth, and I wish you
luck. I have staked my career and reputation on the idea that
understanding and accepting what people really want to do is important
and not to be neglected. Indeed, I believe that users often do things
that appear superficially stupid but that if you examine them with
more care you realize were not so foolish as they seem. It's easy to
trivialize out the parts of something you see someone do that don't
fit your model, but sometimes those parts are the important ones.

This example of cut-and-paste is an excellent example. Users are
risking something but gaining something else, and you can't reasonably
ask them to give up the risk without giving them a tool that offers
compensating benefit, which largely there are not. Consequently,
people will keep doing this and the languages *I* design will seek (to
the extent I can affect it, which is often hard to do in committee
processes) to cope with the reality of things.

Reini Urban

Jul 26, 1999, 3:00:00 AM
cbbr...@news.brownes.org (Christopher B. Browne) wrote:
>a) Godel's "Incompleteness" theorem.
>
>Bertrand Russell was trying to come up with a "Principia Mathematica,"
>where some limited set of mathematical axioms could be used to
>construct all the rest of the things that mathematics could prove.
>That is, pretty much, the reductionist view where "If you come up with
>a suitable set of perfect tools, you can build everything, and if you
>have the right set of transitions, it's done perfectly."
>
>Kurt Godel established that in any propositional system, there would
>be true statements that could not be proven within the system's
>context.
>
>Thus, the idea is broken from a purely theoretical perspective.
>
>I would suggest the connection that it may be *impossible* to create
>an editor that *guarantees* that (for instance) an SGML document is
>valid at every step.

but not broken for practical purposes:

the "Principia Mathematica" approach is wonderful, admirable and even
somewhat resembling lisp/scheme/forth.
with just a few primitives you build up a rich, closed world. even if
goedel smashed the completeness i don't care because this is just
philosophy. i don't need proofs about my world, self-consciousness, if
lisp is complete or not, if i'm complete ;) ... => obviously not.

if it's important i can prove it by using methods outside of the lisp or
editors or robots world.

to come to the editor:
it is too simple to give up saying that it is *impossible* because it
cannot be *proved*. halting problem and such. it can be proved but not
within the closed world assumption of either SGML or LISP or
whatsoever. a robot cannot prove its world view but it can reason and
do. it can be coded, it can process input and output to the best of its
world, that's the point. if it fails it must be rebooted :)

the editor cannot guarantee to stop or fall into an endless loop, but if
it stops it can be formally guaranteed that the input or output is
correct. you as meta reasoner can prove that. the editor does not care.

interesting thoughts.
that's btw mccarthy's formal reasoning world.
wish i could afford stanford.
--
Reini Urban
http://xarch.tu-graz.ac.at/autocad/news/faq/autolisp.html

Christopher Browne

Jul 27, 1999, 3:00:00 AM
On Mon, 26 Jul 1999 16:09:29 GMT, Reini Urban
<rur...@xarch.tu-graz.ac.at> wrote:
>cbbr...@news.brownes.org (Christopher B. Browne) wrote:
>>a) Godel's "Incompleteness" theorem.
>>
>>Bertrand Russell was trying to come up with a "Principia Mathematica,"
>>where some limited set of mathematical axioms could be used to
>>construct all the rest of the things that mathematics could prove.
>>That is, pretty much, the reductionist view where "If you come up with
>>a suitable set of perfect tools, you can build everything, and if you
>>have the right set of transitions, it's done perfectly."
>>
>>Kurt Godel established that in any propositional system, there would
>>be true statements that could not be proven within the system's
>>context.
>>
>>Thus, the idea is broken from a purely theoretical perspective.
>>
>>I would suggest the connection that it may be *impossible* to create
>>an editor that *guarantees* that (for instance) an SGML document is
>>valid at every step.
>
>but not broken for practical purposes:
>
>the "Principia Mathematica" approach is wonderful, admirable and even
>somewhat resembling lisp/scheme/forth.
>with just a few primitives you build up a rich, closed world. even if
>goedel smashed the completeness i don't care because this is just
>philosophy. i don't need proofs about my world, self-consciousness, if
>lisp is complete or not, if i'm complete ;) ... => obviously not.

This leaves open the question of what's useful, and what's *really*
useful.

What Godel established was that P.M. could not be a *complete* work.

This does not establish that the study of mathematics is dumb because
you can't have The Complete Book Of Mathematics, only that trying to
have The Complete Book of Mathematics is impossible.

In keeping with an attempt to keep this relevant, the most precise
analogy is probably that it is not possible to create a program editor
that guarantees that all programs written using it will terminate.

And from a more relevant standpoint, it seems to be a Pretty Difficult
Problem to build an editor that guarantees the syntactic correctness
of programs and all transformations thereof, whilst simultaneously
remaining *usable* by mere humans.

It seems to be more appropriate to build editors that don't maintain
such guarantees, but rather allow the programmer to make changes that
may temporarily "break" the code, but then provide tools that provide
maximum assistance to the user to help fix the problems.

[This .signature was randomly, and yet serendipitously, chosen...]
--
"If it can't be abused, it's not freedom. A man may be in as just
possession of truth as of a city, and yet be forced to surrender."
-- Thomas Browne
cbbr...@hex.net- <http://www.ntlug.org/~cbbrowne/computing.html>

Rob Warnock

Jul 27, 1999, 3:00:00 AM
Christopher B. Browne <cbbr...@hex.net> wrote:
+---------------
| The W3C "HTML editor," Amaya... can only create "valid" HTML.
| Unfortunately, it is almost unusable as such.
...
| a) Godel's "Incompleteness" theorem.
...
| I would suggest the connection that it may be *impossible* to create
| an editor that *guarantees* that (for instance) an SGML document is
| valid at every step.
+---------------

Interesting, but I think your analogy is slightly off. Better might
be to equate the legal steps in the editor to theorems (or maybe even
better, rules of inference) in the proof system. Then you'd get a
statement like this:

    While it is certainly possible to create an editor that guarantees
    that an initially valid SGML document remains valid after every
    subsequent allowed editing step, Godel's Proof suggests that there
    will always be some valid SGML documents which it is impossible
    to create using such an editor.


-Rob

-----
Rob Warnock, 8L-855              rp...@sgi.com
Applied Networking               http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.           Phone: 650-933-1673
1600 Amphitheatre Pkwy.          FAX: 650-933-0511
Mountain View, CA 94043          PP-ASEL-IA

Erik Naggum

Jul 27, 1999, 3:00:00 AM
* rp...@rigden.engr.sgi.com (Rob Warnock)

| While it is certainly possible to create an editor that guarantees
| that an initially valid SGML document remains valid after every
| subsequent allowed editing step, Godel's Proof suggests that there
| will always be some valid SGML documents which it is impossible
| to create using such an editor.

in what way is a mathematical proof a _process_ that resembles editing?

Gödel's Incompleteness Theorem isn't a trivial statement of a problem
with all sorts of complex systems. it is very carefully constrained to say
something about proofs _inside_ a complex system. it is possible to
prove any mathematical theorem, but you might have to add something from
outside the system to accomplish it. the incompleteness theorem states,
as far as I have understood it, that no system is able to prove all its
true statements within itself. I find that intuitively evident, because
"truth" is already a concept that is not contained fully by the system.
of course, I live in a post-Gödel time and lots of stuff that was very
far from intuitively evident at the time is intuitively evident today,
thanks in part to the trailblazing efforts of geniuses past.

however, Gödel's Incompleteness theorem is perhaps the single most
mis-popularized theorem from mathematics there is, probably because
people still believe in the pre-Gödel idea that truth is contained by
a system, such as and especially religions.

the above statement from Rob Warnock sounds like New Age flirting with
quantum physics to me. it sounds profound because what it flirts with is
profound, while it's really just unfounded and confused opinion.

in other words: I'd really like to see some evidence that there exists a
valid string in a regular syntax that cannot be produced by the grammar.

Lars Marius Garshol

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to

* Erik Naggum

|
| the incompleteness theorem states, as far as I have understood it,
| that no system is able to prove all its true statements within
| itself.

This is almost exactly my understanding as well, with two additions:

- there is a complexity limit for such systems beneath which you can
prove all true statements

- you can extend the system to a point where you can prove all true
statements, but then the system will contain inconsistencies

It is also worth noting that Gödel only proved this in a specific case
and never generalized the theorem to cover all formal systems.
However, as far as I know, no mathematicians at the time doubted that
this was possible or considered it necessary.

--Lars M.

Tim Bradshaw

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> While it is certainly possible to create an editor that guarantees
> that an initially valid SGML document remains valid after every
> subsequent allowed editing step, Godel's Proof suggests that there
> will always be some valid SGML documents which it is impossible
> to create using such an editor.
>

I think this is almost certainly false. Really the question is `given
a finite number of allowed transformations between legal strings of
some grammar G, can you start with some string A of that grammar and
get to any string B'. I need to be clearer about what I mean by
`allowed transformations', but I'll avoid that. I think that for SGML
grammars this is likely to be true.

Indeed, I think that for any grammar for which parsing does not pose a
halting problem this is true, because I can simply encode the grammar
rules as editing rules, and thus reduce the problem to parsing the
string. And I think that there is no halting problem parsing SGML.

--tim

Rob Warnock

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Erik Naggum <er...@naggum.no> wrote:
+---------------
| * rp...@rigden.engr.sgi.com (Rob Warnock)

| | While it is certainly possible to create an editor that guarantees
| | that an initially valid SGML document remains valid after every
| | subsequent allowed editing step, Godel's Proof suggests that there
| | will always be some valid SGML documents which it is impossible
| | to create using such an editor.
|
| in what way is a mathematical proof a _process_ that resembles editing?
+---------------

In a proof you start with a set of axioms and transform them step by step
with permitted rules of inference to produce theorems, which augment the
set of axioms. If by following this process recursively you derive the
thing-to-be-proved, then you've proved it to be true. If you derive the
negation of the thing-to-be-proved, then you've proved it to be false.
If you fail to do either, then... well, you've failed to do either.

In a ("fascist") structuring editor, you start with a valid structure
(or structured document, such as SGML), and step-by-step select & apply
individual permitted transformations, each one of which takes a valid
structure and produces another valid structure to replace it. If, through
this process, you manage to get the structure (or document) to the point
where you wanted to go (wherever that was), then you win. If you fail...
well, then you fail.

In the first case, Godel proved that (if the proof system is consistent)
there are some true statements that can be expressed within the system
that cannot be proved (from within the system) to be true (or false).

In the second case, I hypothesized that there are some legal SGML
documents that a human using a strict structuring editor cannot write
from scratch (from the grammar's start symbol), at least, not in any
practical sense.

I readily admit the analogy is a bit weak (but IMHO still more
appropriate than the Christopher Browne article I was replying to).

+---------------
| Godel's Incompleteness Theorem isn't a trivial statement of a problem
| with all sorts of complex systems. it is very carefully constrained to
| say something about proofs _inside_ a complex system.
+---------------

Yes, I know that. [Although Gregory Chaitin's restatement of it in
terms of "complexity" makes it somewhat more approachable, e.g.,
<URL:http://www.cs.auckland.ac.nz/CDMTCS/chaitin/georgia.html>
<URL:http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html>
<URL:http://www.cs.auckland.ac.nz/CDMTCS/chaitin/rov.html>
for starters...]

+---------------
| the above statement from Rob Warnock sounds like New Age flirting with
| quantum physics to me. it sounds profound because what it flirts with is
| profound, while it's really just unfounded and confused opinion.
+---------------

Again, it was in reply to Christopher Browne's article <URL:news:slrn7pl9fo.
tro.cb...@knuth.brownes.org> which claimed that Godel's proof implied:

    "...that it may be *impossible* to create an editor that *guarantees*
    that (for instance) an SGML document is valid at every step."

I certainly don't believe *that*! ...that is, if "SGML validity" has any
decidable meaning in the first place. (Which I surely *hope* it does, but
then, you're far more an expert on that than I.)

+---------------
| in other words: I'd really like to see some evidence that there exists a
| valid string in a regular syntax that cannot be produced by the grammar.
+---------------

Is SGML a "regular" syntax? And what exactly do you mean by "regular" here?
Certainly not "regular expression". "Context-free" perhaps? LL(k)? LALR(k)?
Or something more general still? (Hopefully no full context-sensitive?)
Anyway, not to bicker. That's not really the issue. What I was trying to
say to Browne was:

1. That *of course* it's possible to have an editor guarantee that an
SGML document is valid at every step [assuming "SGML validity" has any
meaning]. Simply run the document through your full "SGML validator"
after every editing step, and if it fails, beep, print an error message,
and forcefully undo the most recent editing step.

2. However, just because there may be an algorithmic way to validate (or not)
a piece of text purporting to be SGML, there's no guarantee that there's
any effective procedure for *getting* from "here" to "there" [where
"here" is the (presumably-valid) SGML you started the editing session
with and "there" is whatever goal you (a human) are trying to get to]
using the "editing steps" available in the editor.

3. I would claim that generating one production of the SGML grammar per
editing step is a totally unwieldy human interface, and anyway, how would
you *select* which one to generate (expand) at each step? (E.g., if SGML
has any ambiguity in the grammar at all, the combinatorics explode.)

[I'm assuming of course that you don't *have* the desired end result
already, otherwise you could just run it through an SGML parser and
then walk the parse tree in the editor step-by-step to "create" it...]

4. Thus, depending on the primitive transformation steps the editor *does*
permit you, there may well be valid SGML texts [that even pass the
automated validator in the editor!] that no human could use that editor
to write from scratch (from the SGML "start" symbol) in any practical sense.

The analogy to Godel's Proof seemed obvious to me (especially when
considering Chaitin's notion of "elegant programs" and their unprovability).

But maybe that still sounds like just more "New Age quantum-babble" to you.
If so, I apologize, and will shut up now.

Rob Warnock

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Tim Bradshaw <t...@tfeb.org> wrote:
+---------------
| rp...@rigden.engr.sgi.com (Rob Warnock) writes:
| > While it is certainly possible to create an editor that guarantees
| > that an initially valid SGML document remains valid after every
| > subsequent allowed editing step, Godel's Proof suggests that there
| > will always be some valid SGML documents which it is impossible
| > to create using such an editor.
|
| I think this is almost certainly false. Really the question is `given
| a finite number of allowed transformations between legal strings of
| some grammar G, can you start with some string A of that grammar and
| get to any string B'. I need to be more clear about what I mean by
| `allowed transformations', but I'll avoid that. I think that for SGML
| grammars that this is likely to be true.
+---------------

Unfortunately, `allowed transformations' *is* the issue, at least where
humans & editors are concerned. The point is *not* proving the validity
of an a priori given text, but starting with *nothing* (well, the SGML
grammar's "start" symbol) and writing something "useful" (whatever you
as a human consider a "useful" end-product to be) given only the `allowed
transformations' in a strict validating editor.

Think of it this way: Suppose you were asked to write a *huge* Lisp
program, but one of the rules was that you had to use an editor
that *forced* your program to be "legal Lisp" (including "validly"
compiling) after every single keystroke. E.g., imagine using Emacs
and putting a hook in the keyboard input processing that ran the
Lisp compiler after every keystroke[*] and if the compiler barfed
it would *undo* the keystroke! Do you actually think you could
do that? [I don't think *I* could!]


-Rob

[*] O.k., to make it easier we'll even allow "fat" keys and macros to
count as "one" keystroke. And maybe even allow a whole Lisp symbol...

Erik Naggum

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
* rp...@rigden.engr.sgi.com (Rob Warnock)

| In a proof you start with a set of axioms and transform them step by step
| with permitted rules of inference to produce theorems, which augment the
| set of axioms.

sure, but where are the axioms of editing? insert and delete? and how
are they related to the formal system under scrutiny? as far as I know,
SGML doesn't have such axioms to begin with, and production of a string
from a grammar is not quite a transformation, it's a _production_. so I
think this is a stretch.

| Again, it was in reply to Christopher Browne's article <URL:news:slrn7pl9fo.
| tro.cb...@knuth.brownes.org> which claimed that Godel's proof implied:
|
| "...that it may be *impossible* to create an editor that *guarantees*
| that (for instance) an SGML document is valid at every step."
|
| I certainly don't believe *that*! ...that is, if "SGML validity" has any
| decidable meaning in the first place. (Which I surely *hope* it does, but
| then, you're far more an expert on that than I.)

*snicker* yes, it has a decidable meaning. SGML ain't _that_ bad.

I found the statement you quote flimsy and impossible to argue against.

| Is SGML a "regular" syntax? And what exactly do you mean by "regular" here?

yes, SGML's DTDs define a regular syntax, "regular" being defined as
LL(0), or that any rule of grammar can only add tokens at the end of the
string.

| Certainly not "regular expression".

um, that's the idea. why "certainly not"?

| 1. That *of course* it's possible to have an editor guarantee that an
| SGML document is valid at every step [assuming "SGML validity" has any
| meaning]. Simply run the document through your full "SGML validator"
| after every editing step, and if it fails, beep, print an error message,
| and forcefully undo the most recent editing step.

<aside>and an SGML validator can return "this is not an SGML document" as
its only error message, which is more useful than most of the other error
messages you'll try to recover from.</aside>

| 3. I would claim that generating one production of the SGML grammar per
| editing step is a totally unwieldy human interface, and anyway, how would
| you *select* which one to generate (expand) at each step?

the DTD grammar is fairly well constrained. the approach isn't all that
useful for writing DTDs, only the element structure of the document.

consider a Lisp editor that does the same thing. suppose you have an
empty buffer, which is a fully valid Lisp source file. if you insert two
parens, (), you still have a valid Lisp source file, and I'll maintain
that invariant. inside the (), any function or special form in the
prevailing package is allowed. suppose you type a function name, the
appropriate number of argument slots should be indicated and the document
is at this point invalid until all slots have been filled in. optional
arguments could easily be indicated by a shaded slot or whatever.
keyword arguments could be indicated by a key which expands to the list
of available keywords when clicked on, like a menu of choices, etc.
suppose you type LET instead of a function name. a LET form is valid if
it has an empty binding clause and an empty body, so the buffer would
have to contain at least (LET ()). the inner () is straight-forward to
insert automatically, and any bindings inside it would know what they
were, too, which could even be used to "verify" that any free variables
were importable from the outer environment. that's how a Lisp
syntax-sensitive editor would work. SGML makes this easier, since it
allows far less choice.

| 4. Thus, depending on the primitive transformation steps the editor
| *does* permit you, there may well be valid SGML texts [that even pass the
| automated validator in the editor!] that no human could use that editor
| to write from scratch (from the SGML "start" symbol) in any practical
| sense.

I still don't buy this.

| But maybe that still sounds like just more "New Age quantum-babble" to you.
| If so, I apologize, and will shut up now.

ok. ;)

Tim Bradshaw

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Think of it this way: Suppose you were asked to write a *huge* Lisp
> program, but one of the rules was that you had to use an editor
> that *forced* your program to be "legal Lisp" (including "validly"
> compiling) after every single keystroke. E.g., imagine using Emacs
> and putting a hook in the keyboard input processing that ran the
> Lisp compiler after every keystroke[*] and if the compiler barfed
> it would *undo* the keystroke! Do you actually think you could
> do that? [I don't think *I* could!]

I believe I can do it if the question of whether a program validly
compiles is not a halting problem. For Lisp it probably is a halting
problem at least because of macros (because macros are
turing-equivalent, obviously). Whether it is a halting problem
without macros is a different case, I guess the answer is that it is
not (otherwise the language has some fairly obvious implementation
problems (:-)). Note that I am *not* requiring the editor to be just
a `type in a string' type editor, the basic operations in it would be
higher-level than that.

However this is a classic chinese-room argument -- while I can argue
that it is theoretically possible to do this, the number of moves
might be kind of large.

(the kinds of moves in the editor I guess would be

() -> (progn)
(progn . x) -> (progn (defun y ()) . x)
(defun () . x) -> (defun (y) . x)
(defun (x . y) . z) -> (defun (p x . y) . z)

and so on. Pretty painful!)

--tim

Rolf-Thomas Happe

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Erik Naggum writes (on G"odel's incompleteness theorem):
> outside the system to accomplish it. the incompleteness theorem states,
> as far as I have understood it, that no system is able to prove all its
> true statements within itself. I find that intuitively evident, because

To the best of my non-expert knowledge, G"odel's first incompleteness
theorem states that "no (consistent) system" can decide all
sentences, i. e. there is a sentence S such that neither S nor
its negation -S can be deduced from the axioms A. Now every consistent
set of sentences has a model, i. e. we can interpret constants by
individuals, function symbols by functions, relation symbols by
relations, so that all the sentences hold true. Since both A U {S}
and A U {-S} are consistent, we see that A has models such that S
holds true, and models such that -S holds true.
In G"odel's proof, S says "I am not provable", and models of A in
which S holds true are admittedly kind of strange. Nonetheless, if
`true' is understood as `holds in all models', then your summary of
G"odel's result isn't correct. (For this notion of truth G"odel's
completeness (sic) theorem applies: semantical consequence equals
syntactical consequence, in particular every "true" sentence is
provable.)
(I actually don't say that you're wrong, but only that the
unqualified `true' is rather misleading.)

> "truth" is already a concept that is not contained fully by the system.
> of course, I live in a post-Gödel time and lots of stuff that was very
> far from intuitively evident at the time is intuitively evident today,
> thanks in to the trailblazing efforts of geniuses past.
>
> however, Gödel's Incompleteness theorem is perhaps the single most
> mis-popularized theorem from mathematics there is, probably because
> people still believe in the pre-Gödel idea that truth is contained by
> a system, such as and especially religions.

Note that G"odel's incompleteness theorem has a semantical twin,
"Tarski's theorem", which states (roughly) that there is no definition
of truth for Peano arithmetics within Peano arithmetics.

rthappe


Rolf-Thomas Happe

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
I (Rolf-Thomas Happe) wrote:
In G"odel's proof, S says "I am not provable", and models of A in
which S holds true are admittedly kind of strange. Nonetheless, if
***

Should have been -S (not S), sorry.

rthappe


Pekka P. Pirinen

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:
> In the first case, Godel proved that (if the proof system is consistent)
> there are some true statements that can be expressed within the system
> that cannot be proved (from within the system) to be true (or false).
>
> In the second case, I hypothesized that there are some legal SGML
> documents that a human using a strict structuring editor cannot write
> from scratch (from the grammar's start symbol), at least, not in any
> practical sense.
>
> I readily admit the analogy is a bit weak.

It's not weak, it's nonexistent: legal SGML documents are exactly
those that are produced by the grammar. That means legality is like
provability, not truth. Hence an SGML structure editor can produce
any legal SGML document, just as an algorithm can produce all
provable statements.

> 3. I would claim that generating one production of the SGML grammar

> per editing step is a totally unwieldy human interface, [...]

You might say that there are documents a human couldn't create
_in_any_practical_sense_, because they are too complex, but that's a
trivial observation and applies equally to any method of editing. I
agree that using the productions of a grammar directly is worse than
most other methods.
--
Pekka P. Pirinen (the new) Harlequin Limited
We use the facts we encounter the way a drunkard uses a
lamppost: more for support than illumination.
[adapted from A. E. Housman]

Pierre R. Mai

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Lars Marius Garshol <lar...@ifi.uio.no> writes:

> * Erik Naggum
> |
> | the incompleteness theorem states, as far as I have understood it,
> | that no system is able to prove all its true statements within
> | itself.
>

> This is almost exactly my understanding as well, with two additions:
>
> - there is a complexity limit for such systems beneath which you can
> prove all true statements
>
> - you can extend the system to a point where you can prove all true
> statements, but then the system will contain inconsistencies

This second sentence is trivially true, since you can prove any
statement (true or false) in an inconsistent system. It might be
better to put it this way (beware, this is sloppy English and
not formal logic, which you really need to understand Goedel's proof
and its implications. Doing it in English involves a certain amount
of hand-waving):

- Let G be the Goedel sentence for a system S (which in effect states,
that "G can not be proven in S"). Then there cannot be a proof in S
for G, because then G would be false, and a system where a false
statement can be proven is inconsistent (and would allow any other
statement to be proven). But if G cannot be proven in S, then G is
true, in which case S is incomplete, because G is a true statement,
which cannot be proven in S. Therefore, unless S is inconsistent, G
cannot be proven in S, which means S is incomplete. So you either
get an inconsistent system or an incomplete one.

- Given that you have proven the incompleteness of S w.r.t. to G, you
can extend S to either S1 or S2, by including G or "not G" as an
axiom, thereby removing this "instance" of incompleteness, but you
will get a new incompleteness via the same method used on S.
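
In standard notation (this schema is textbook material, not something from
Pierre's post), his G comes from the diagonal lemma applied to the negated
provability predicate:

```latex
% Diagonal (fixed-point) lemma, applied to \neg\mathrm{Prov}_S:
S \vdash\; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_S\!\left(\ulcorner G \urcorner\right)
```

One caveat on the hand-waving above: consistency of S alone rules out a
proof of G, but ruling out a proof of "not G" needs more. Goedel assumed
omega-consistency; Rosser's 1936 refinement weakened that requirement to
plain consistency.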

On the whole I agree with Erik, that Goedel's theorem has been
over-popularized to the point where only a very distorted version of
it has been received by the public (something like "complex systems
never work", or something along these lines). I already feel guilty
for posting this, since it will probably only add to the popularized
image.

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Jan Vroonhof

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Think of it this way: Suppose you were asked to write a *huge* Lisp
> program, but one of the rules was that you had to use an editor
> that *forced* your program to be "legal Lisp" (including "validly"
> compiling) after every single keystroke. E.g., imagine using Emacs
> and putting a hook in the keyboard input processing that ran the
> Lisp compiler after every keystroke[*] and if the compiler barfed
> it would *undo* the keystroke! Do you actually think you could
> do that? [I don't think *I* could!]

Given the following "keystrokes"

1. Replace any symbol by another.
2. Replace any symbol by the list containing that symbol.
3. Add any symbol to a list.
4. Replace a two element list by the second element.

you could come quite a way. At least the "validly compiling" issue is
non-existent if you are allowed to start with the empty list:

()
(list)
(list nil)
(quote nil)

You can now replace nil by your top-level form, and then apply 4. to
remove it at which point it will be compiled.

Or am I overlooking something here (apart from the fact that having a
verifying editor doesn't help much if you use it in this way).
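
Jan's four "keystrokes" can be sketched as pure rewrites on s-expressions.
EDIT-AT and the path convention (a list of child indices) are inventions
for illustration, not part of any editor:

```lisp
(defun edit-at (form path fn)
  "Return a copy of FORM with FN applied to the subform at PATH,
where PATH is a list of child indices (NIL means FORM itself)."
  (if (null path)
      (funcall fn form)
      (let ((i (first path)))
        (append (subseq form 0 i)
                (list (edit-at (nth i form) (rest path) fn))
                (subseq form (1+ i))))))

(defun replace-symbol (form path new)          ; keystroke 1
  (edit-at form path (constantly new)))

(defun wrap-in-list (form path)                ; keystroke 2
  (edit-at form path #'list))

(defun add-symbol (form path sym)              ; keystroke 3
  (edit-at form path
           (lambda (l) (append l (list sym)))))

(defun unwrap (form path)                      ; keystroke 4
  (edit-at form path #'second))

;; Reproducing Jan's derivation:
(add-symbol '() '() 'list)                 ; => (LIST)
(add-symbol '(list) '() 'nil)              ; => (LIST NIL)
(replace-symbol '(list nil) '(0) 'quote)   ; => (QUOTE NIL)
```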

Jan

Lieven Marchand

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
Lars Marius Garshol <lar...@ifi.uio.no> writes:

> * Erik Naggum
> |
> | the incompleteness theorem states, as far as I have understood it,
> | that no system is able to prove all its true statements within
> | itself.
>
> This is almost exactly my understanding as well, with two additions:
>
> - there is a complexity limit for such systems beneath which you can
> prove all true statements
>
> - you can extend the system to a point where you can prove all true
> statements, but then the system will contain inconsistencies
>

As soon as you have one inconsistency, all statements are both true and
false in your system. This makes such an extension useless.

> It is also worth noting that Gödel only proved this in a specific case
> and never generalized the theorem to cover all formal systems.
> However, as far as I know, no mathematicians at the time doubted that
> this was possible or considered it necessary.

Gödel used the Peano axioms that formalize simple integer arithmetic
(together with proof by induction) so any formal system that is
sufficiently strong to be able to produce these is covered by the
theorem.

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Lieven Marchand

unread,
Jul 27, 1999, 3:00:00 AM7/27/99
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Is SGML a "regular" syntax? And what exactly do you mean by "regular" here?
> Certainly not "regular expression". "Context-free" perhaps? LL(k)? LALR(k)?
> Or something more general still? (Hopefully no full context-sensitive?)

Even full context-sensitive is still decidable, e.g. by Earley's
algorithm. If you want to invoke Goedel you need to go to unrestricted
phrase level grammars (type 0 in the Chomsky hierarchy).

Vassil Nikolov

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to comp.la...@list.deja.com
Rob Warnock wrote: [1999-07-27 11:35 +0000]

[...]


> 4. Thus, depending on the primitive transformation steps the editor *does*
> permit you, there may well be valid SGML texts [that even pass the
> automated validator in the editor!] that no human could use that editor
> to write from scratch (from the SGML "start" symbol) in any practical sense.

[...]

There is another analogy here---to WYSIWYG editors. E.g. there are many
WYSIWYG HTML editors, and at any moment the underlying HTML is valid
(I believe; I don't have much experience with them). So here's yet another
way to see why WYSIWYG editors are bad, or at least not good enough.


Vassil Nikolov
Permanent forwarding e-mail: vnik...@poboxes.com
For more: http://www.poboxes.com/vnikolov
Abaci lignei --- programmatici ferrei.

Vassil Nikolov

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to comp.la...@list.deja.com
(Resending, seems this didn't make it the first time.)

Kent M Pitman wrote: [1999-07-24 02:52 +0000]

> Vassil Nikolov <vnik...@poboxes.com> writes:
[proposing FRETURN to return from a function body, in response
to a proposal to allow for (RETURN-FROM DEFUN [result]) as a
substitute for (RETURN-FROM function-name [result])]
>
> I'm pretty ambivalent about the conceptual goodness of this device.
> A common reason to want it is that if you rename the function, you
> have to rename the RETURN-FROM. Some people might find that painful,
> but it's a protection against taking:
>
> (defun foo (x)
> (+ (if (zerop x) (return-from foo 7) x) 3))
>
> (defun bar (z)
> (+ (foo z) 2))
>
> and then (by sloppy editing) grabbing the body and dropping it into
>
> (defun bar (z)
> (+ ((lambda (x) (+ (if (zerop x) (return-from foo 7) x) 3)) z)
> 2))
>
> and just compiling it with no errors. This will flag a reminder that
> you've moved to a new context and you need to identify the FOO boundary.
> If you'd instead used the FRETURN idiom, you'd find 7, not 9, being
> returned from BAR when z is 0 once you substituted things, and you'd
> get no compilation warning about the fact that this was going to happen.

Definitely. But (RETURN-FROM DEFUN 7) would also suffer from the same problem.
My point (which I should have made more explicit) was that I'd rather have
FRETURN than returning from a block named DEFUN, leaving aside the (more
important) issue of whether such a thing is really a good idea to have.

By the way, I believe that FRETURN has another advantage over
(RETURN-FROM DEFUN [result]) in that the very same construct
works for DEFMACRO and for local definitions (in FLET etc.),^1
while with the other approach we would need to introduce
another reserved block name to have (RETURN-FROM DEFMACRO ...),
not to mention that it's not clear what this approach would look like
for local definitions.
__________
^1 if anyone happens to be interested in an extended version of
what I posted, supporting those, please send me e-mail

> In fairness, RETURN (vs RETURN-FROM) has this same problem. It's very

Of course; FRETURN is for DEFUN what RETURN is for DO.

And, just like RETURN, it is more obvious with FRETURN than with
(RETURN-FROM DEFUN ...) that there are no promises of warnings
if the context changes.

> easy to substitute something that has a (RETURN ...) into a new context
> that has an extra NIL block "helpfully" inserted and not get any warning
> about the fact that it's a different NIL block. So it's not like the
> language totally avoids things that cause problems of the kind I'm citing;
> I just think it's well to learn from these problems and ask hard questions
> about whether you want to institutionalize more of them...

In fact, it's not for me to answer since I tend not to use RETURN-FROM
with the function name to return a value from a function (if it gets
so deeply nested that one would become necessary, I'd prefer to split
it into several smaller definitions and/or rethink the design).
Therefore I have not really encountered any problems with
(RETURN-FROM function-name ...); I just saw a way whereby a specific
proposal could be improved (IMO).
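
For reference, the FRETURN idea can be sketched along the lines Vassil
proposed upthread. DEFUN* is a hypothetical name for the wrapping macro; a
production version would also have to hoist docstrings and declarations out
of the MACROLET body:

```lisp
(defmacro defun* (name lambda-list &body body)
  "Like DEFUN, but FRETURN returns from the function regardless of its name."
  (let ((block-name (gensym "FUNCTION")))
    `(defun ,name ,lambda-list
       (block ,block-name
         (macrolet ((freturn (&optional value)
                      ;; ,',block-name splices the gensym into the
                      ;; local macro's own expansion
                      `(return-from ,',block-name ,value)))
           ,@body)))))

;; Renaming FOO requires no edits to the FRETURN form:
(defun* foo (x)
  (when (zerop x) (freturn 7))
  (+ x 3))
;; (foo 0) => 7, (foo 1) => 4
```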

CsO

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
Erik Naggum wrote...

> however, Gödel's Incompleteness theorem is perhaps the single most
> mis-popularized theorem from mathematics there is, probably because
> people still believe in the pre-Gödel idea that truth is contained by
> a system, such as and especially religions.


Yep.
I recall being amused that J.R. Lucas had hijacked Gödel as an argument
against strong AI.
Having said that, G.E.B. the E.G.B. is probably the single richest source
of the mis-popularization.
A math head friend of mine reckons you either understand it in the
form Gödel presented it or not at all.


William Deakin

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
I'm glad I've caught up with this thread today as I think this is one of the
wisest posts I have every seen.

Thank you,

:-) will


Rolf-Thomas Happe

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
In article <7nm683$j3h$1...@news4.svr.pol.co.uk> "CsO" writes:
[On G"odel's first incompleteness theorem]

A math head friend of mine reckons you either understand it in the
form Gödel presented it or not at all.

The proof isn't very difficult to follow, but its cleverness prevents
(or renders difficult) a deeper understanding. (The cleverness I mean
is concentrated in the proof of the fixed point theorem.) I believe
that the work of G. J. Chaitin on incompleteness Rob Warnock referred
to was motivated partially by his being unsatisfied with that
excessive cleverness. (See Chaitin's home page
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ for more information.)

rthappe


Tim Bradshaw

unread,
Jul 28, 1999, 3:00:00 AM7/28/99
to
* Jan Vroonhof wrote:

> 1. Replace any symbol by another.
> 2. Replace any symbol by the list containing that symbol.
> 3. add any symbol to a list
> 4. Replace a two element list by the second element.

But these don't preserve compilability in Lisp:

(defun foo ())
-> (defun foo (x)) by 3
-> (defun foo (((((x)))))) by 2 iterated

You need a cleverer (and larger) set of rules.

--tim

Lars Marius Garshol

Jul 28, 1999, 3:00:00 AM

* Lars Marius Garshol

|
| - you can extend the system to a point where you can prove all true
| statements, but then the system will contain inconsistencies

* Lieven Marchand


|
| As soon as you have one inconsistency, all statements are both true and
| false in your system. This makes such an extension useless.

I know. The reason I included it was that I seemed to remember that
you could prove all true statements without having to use the
inconsistencies. But maybe that was rubbish. At least it sounds like
it now that I write it down. :)

* Lars Marius Garshol


|
| It is also worth noting that Gödel only proved this in a specific
| case and never generalized the theorem to cover all formal systems.
| However, as far as I know, no mathematicians at the time doubted
| that this was possible or considered it necessary.

* Lieven Marchand


|
| Gödel used the Peano axioms that formalize simple integer arithmetic
| (together with proof by induction) so any formal system that is
| sufficiently strong to be able to produce these is covered by the
| theorem.

I suppose that is why nobody considered the generalization necessary.

--Lars M.

William Deakin

Jul 29, 1999, 3:00:00 AM
Rob Warnock wrote:

> Interesting, but I think your analogy is slightly off. Better might be
> to equate the legal steps in the editor to theorems (or maybe even
> better, rules of inference) in the proof system. Then you'd get a
> statement like this:
>

> While it is certainly possible to create an editor that guarantees
> that an initially valid SGML document remains valid after every
> subsequent allowed editing step, Godel's Proof suggests that there
> will always be some valid SGML documents which it is impossible to
> create using such an editor.

For my two pence worth I believe the Godel stuff is a load of horse
feathers. There is nothing like applying an idea in advanced group theory
to muddy the programming waters.

I would think about it this way: you take the set of all valid SGML
structures and then map these into a second set by applying all allowed
editing transformations. If the resulting set also consists of valid SGML,
the set of SGML structures is closed under these transformations, and you
have an editor that only generates valid SGML.

You of course would then have to write the bloody thing, hoping you don't
have any bugs ;-)

The major problem with this is then: what makes up all valid SGML
structures? And what editing transformations do you allow?

Defining what you mean by valid SGML seems a difficult, but not
insurmountable task (since I think there is a handy dandy international
standard out there somewhere). I would expect this would result in a large
finite set of basic structures.

Describing all editing transformations seems a much more fraught task. It
then depends on what the 'allowed transformations' are as to whether
you map from the set of valid SGML back to the set of valid SGML or not.
Is the set of allowed transformations finite?

Finally, what this has got to do with Godel or anything I don't know, and
IMHO the original poster is a pretentious arse. Godel's theorem
and SGML for goodness sake!

:-| will

ps: I apologise to Tim Bradshaw. I have just read his postings again and
realise that this essentially reiterates the points he makes. But he is
too polite to include the abuse :-)

pps: My spell check wants me to replace Godel with Godless. Could this be
significant? ROCK!


William Deakin

Jul 29, 1999, 3:00:00 AM
Lieven Marchand wrote:

> Even full context-sensitive is still decidable, e.g. by Earley's algorithm.

I have never heard of this before, and by not doing so, show my gross ignorance.

I would like to understand more about natural language processing and would be
very grateful if you could point me in the direction of any stuff, particularly
with regard to your reference about context-sensitive meaning.

Thank you for any help you can give me in this matter,

:-) will


Tim Bradshaw

Jul 29, 1999, 3:00:00 AM
William Deakin <wi...@pindar.com> writes:

>
> Defining what you mean by valid SGML seems a difficult, but not
> insurmountable task (since I think there is a handy dandy international
> standard out there somewhere). I would expect this would result in a large
> finite set of basic structures.

The crucial thing is that SGML documents are described by grammars
(DTDs) for which there is no halting problem when parsing. Note that
this is *not* true for Lisp, because of macros, which can cause there
to be a halting problem when parsing. Of course this is just the same
halting problem that a Lisp *compiler* has, so it can probably just be
ignored.

(I believe that the template language for C++ is also
turing-equivalent, though probably in an uninteresting way. It's
amazing what systems are -- vi is for instance!)

--tim


William Deakin

Jul 29, 1999, 3:00:00 AM
Tim Bradshaw wrote:

> William Deakin <wi...@pindar.com> writes:
>
> > I would expect this would result in a large finite set of basic structures.

> The crucial thing is that SGML documents are described by grammars (DTDs) for
> which there is no halting problem when parsing.

O.K. As usual I've missed the point. All I was trying to suggest was that you
enumerate these basic SGML structures into a large (possibly infinite) set of
basic structures that you could then map into a new set using the editor
transformations, and then decide whether this set was contained in the
original set of all valid SGML. You know more maths than me! Is this just me
talking through my trousers?

The problem that I now see with this approach is that as SGML is specified by a
grammar there is no way to generate the base set of valid basic structures. OK
then. Is there no way that the editor transformations could be applied to the
grammar rules themselves? And to look at whether the resulting grammar is still
valid?

Better be careful though or this will start being
new-age-hippy-arse-Godel-type-navel-gazing.

> Note that this is *not* true for Lisp, because of macros, which can cause
> there to be a halting problem when parsing. Of course this is just the same
> halting problem that a Lisp *compiler* has, so it can probably just be
> ignored.

I think I asked a question some time ago about whether you could write a LALR
grammar (a la C) for lisp. Erik told me that I was an arse and that this was a
stupid question. And quite right too.

> (I believe that the template language for C++ is also turing-equivalent,
> though probably in an uninteresting way. It's amazing what systems are -- vi
> is for instance!)

Again my ignorance! What is turing-equivalence? Is this the stuff about how you
stop a machine when it reads a punched tape? Moves left, moves right and rewrites
the tape.

Cheers :-) will

Tim Bradshaw

Jul 29, 1999, 3:00:00 AM
William Deakin <wi...@pindar.com> writes:


> Again my ignorance! What is a turing-equivalent? this is the stuff about how you
> stop a machine when it reads a punch tape? Moves left, moves right and rewrites
> the tape.
>

I think I was playing fast & loose with terminology. What I was
trying to talk about was the question (way back on this thread) of
whether you could write an editor which preserved `compilability' of
programs. In order to do that you have to have a notion of what
compilability means. That basically comes down to asking `given this
string, is it syntactically legal for my programming language'. For a
language like C, you can write a program which (a) always answers yes
or no to this question and (b) always terminates.

For a language like Lisp, you can not do that, because in order to
answer that question you need to expand macros, and it is possible to
perform arbitrary computation in macros. In particular you cannot
tell if a given macro expansion will even terminate -- there is a
`halting problem' for macro expansion.
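This can be made concrete with a toy model. The sketch below is Python rather than Lisp, and `defmacro`, `macroexpand` and the `fuel` cut-off are invented names for illustration, not any real API: macros are arbitrary functions that rewrite forms, so an expander cannot decide termination in general and can only give up after a fixed step budget.

```python
# Toy model of macro expansion: a "form" is a nested tuple whose head may
# name a macro, i.e. an arbitrary Python function that rewrites the form.
MACROS = {}

def defmacro(name):
    def register(fn):
        MACROS[name] = fn
        return fn
    return register

def macroexpand(form, fuel=1000):
    """Fully expand `form`. `fuel` bounds the number of rewrites, because
    in general there is no way to know whether expansion terminates."""
    while isinstance(form, tuple) and form and form[0] in MACROS:
        if fuel == 0:
            raise RuntimeError("expansion did not terminate within budget")
        fuel -= 1
        form = MACROS[form[0]](form)
    if isinstance(form, tuple):
        return tuple(macroexpand(f, fuel) for f in form)
    return form

@defmacro("when")
def expand_when(form):      # (when test body) -> (if test body nil)
    _, test, body = form
    return ("if", test, body, "nil")

@defmacro("loop-forever")
def expand_loop(form):      # expands into itself: expansion never terminates
    return form
```

Here `macroexpand(("when", "t", "x"))` yields `("if", "t", "x", "nil")`, while expanding `("loop-forever",)` exhausts the budget; no expander can tell in advance which case it faces.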

The computations that you can perform in macro expansions are the same
that you can perform in Lisp (of course, that's the beauty of them),
and those computations are equivalent in power to those performed by a
universal turing machine, which is one of those things with tape
&c. It's a theorem that you can not tell, using a universal turing
machine, if a general program for another universal turing machine
will halt (this is closely related to Goedel's theorem, in fact it
really is Goedel's theorem, although it was proved independently (by
Turing)). No computational device is known more powerful than a
Universal TM, so (as far as anyone knows), there is no way of telling
if an arbitrary UTM program will halt.

A language which lets you describe computations which are equivalent
to what a UTM can do is often called `turing-equivalent'. I believe
that the C++ template system is turing-equivalent in this sense, and
the Lisp macro system is certainly so. Many (all?) versions of
FORTRAN are interestingly *not* turing-equivalent, because they don't
allow recursion or dynamic memory allocation, so there are only a
finite number of states the program can be in. The fact that this is
so tells you something about how practically useful the notion is.

Another formalism of equivalent power to a UTM is the lambda calculus.
Interestingly this formalism gave rise to an obscure and little-used
family of programming languages, characterised mostly by their extreme
slowness, unsuitability for any practical task, and the requirement
for special hardware to run them with any degree of efficiency.

--tim

(This description is fairly buggy, please forgive many minor
infelicities.)


Lieven Marchand

Jul 29, 1999, 3:00:00 AM
William Deakin <wi...@pindar.com> writes:

A nice book on the mathematical theory of this stuff is "Introduction
to Formal Languages" by György E. Révész. For more practical stuff you
can start with the FAQ for the comp.compilers newsgroup. Context
sensitive languages have never been very important for programming
languages since parsing them is fairly inefficient. So it paid to
parse constructs like

Function -> BEGIN FunctionName <Body> END FunctionName.

which could instead be handled with the context-free rule

Function -> BEGIN Identifier1 <Body> END Identifier2

followed by a check that the Identifiers are equal. Anyways, the true
challenge in parsing engines lies in reporting useful error messages
back to the user. The parser for Ada used by the GNAT system is one of
the finer examples of this. They have refined their parser to detect a
lot of the more common mistakes so that they can give an accurate
error message.
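The trick described above — parse with a context-free rule, then check the context-sensitive constraint separately — can be sketched like this. It is a Python illustration with a regular expression standing in for the grammar rule; `parse_function` and the toy syntax are made up, not from any real compiler:

```python
import re

# Recognizer for:  Function -> BEGIN Identifier1 <Body> END Identifier2
# The context-sensitive requirement (Identifier1 == Identifier2) is not
# expressed in the "grammar" itself; it is checked after the parse.
FUNC = re.compile(r"^BEGIN\s+(\w+)\s+(.*?)\s+END\s+(\w+)$", re.S)

def parse_function(text):
    m = FUNC.match(text.strip())
    if m is None:
        raise SyntaxError("not a function definition")
    name1, body, name2 = m.groups()
    if name1 != name2:
        # An accurate, user-level message, per the point about error reporting.
        raise SyntaxError(f"END {name2} does not match BEGIN {name1}")
    return name1, body
```

With this, `parse_function("BEGIN foo x := 1 END foo")` yields `("foo", "x := 1")`, while a mismatched END name is reported by name rather than as a generic parse failure.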

William Deakin

Jul 30, 1999, 3:00:00 AM
Tom Ivar Helbekkmo wrote:

> Popularity is the hallmark of mediocrity. --Niles Crane, "Frasier"

and he was right. Cheers,

:-) will


Francois-Rene Rideau

Jul 31, 1999, 3:00:00 AM
>: Tim Bradshaw <t...@tfeb.org> on comp.lang.lisp

> Another formalism of equivalent power to a UTM is the lambda calculus
I'm _sick_ of hearing such a meaningless statement
of "being of equivalent power to a UTM" repeated over and over again
as a bad excuse for bad programming language design.

I question the meaningfulness of your term "Turing-equivalence".
What definition of it do you use, if any?
Equivalent _up to what transformation_?
Do these transformations correspond to anything _remotely_ meaningful
*in presence of interactions with the external world (including users)*?

Oh yeah, if you only need to write a One Program,
that will answer the One Question,
but you must choose your machine before learning the One Question,
then I can see interest in considering some form of Turing-equivalence
(to be specified, still), as a meaningful concept: it tells you
what class of machines you may "optimally" choose, in some sense.
But then, I can do much better, and readily tell you
the One Answer to the One Question, which is 42.

If you need more than one program, if you already have a lot of questions,
and if more questions will come constantly, depending on what you do,
then Turing-equivalence should certainly not be a sufficient criterion
for your choice of programming system
(considering language and operating system separately is a vast crookery),
although it will most likely be a necessary criterion.

I agree I still need to write a formal paper on the matter. Meanwhile,
for a few hints on how to do better than so-called "Turing-equivalence",
read this informal paper of mine:
Metaprogramming and Free Availability of Sources
http://www.tunes.org/~fare/articles/ll99/index.en.html

I don't think Alan Turing would have appreciated his name being associated
to the loads of non-sense hidden behind the words "Turing-equivalence".
But when you commit suicide, you cannot defend yourself anymore.
Damn British bigots who killed Turing!

[ "Faré" | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]
Some people will argue that since there's no evidence either way whether
the smurf fboinks or not, it's ok to firmly believe that indeed it does.
Such people are insane, often as the result of a life-long indoctrination.

Tim Bradshaw

Jul 31, 1999, 3:00:00 AM
* Francois-Rene Rideau wrote:
>> : Tim Bradshaw <t...@tfeb.org> on comp.lang.lisp
>> Another formalism of equivalent power to a UTM is the lambda calculus
> I'm _sick_ of hearing such a meaningless statement
> of "being of equivalent power to a UTM" repeated over and over again
> as a bad excuse for bad programming language design.

I think you may be getting confused. My article was in a thread
asking a rather obscure and largely theoretical question about
structure editors. It wasn't about programming language design.

You may also perhaps have missed the fact that this last little aside
to which you seem to have taken such offence was a joke.

> I question the meaningfulness of your term "Turing-equivalence".
> What definition of it do you use, if any?

I think the definition in Boolos & Jeffrey `Computability and Logic'
is as good as any. If I recall, they give quite a nice exposition of
the various equivalent notions there (perhaps not talking about lambda
calculus though).

I don't see any purpose in responding to the rest of your article,
I'm afraid.

--tim

Quentin Deakin

Jul 31, 1999, 3:00:00 AM
Dear Francois-Rene Rideau

I find the following statement offensive.

M. Francois-Rene Rideau wrote


> But when you commit suicide, you cannot defend yourself anymore.
> Damn British bigots who killed Turing!

Could you expand on this a little? Surely this statement about British
bigots (as opposed to bigots) is just another form of bigotry. Also, I am
not aware that Britain has (or had at the time of Turing's death) a monopoly on
bigotry.

I do know the history of Alan Turing's tragic and utterly pointless
death. I frequent the pubs to which I am told Turing used to go, and would
hope that if he had lived to this day he would be happy to live in the
tolerant and dramatically changed place that Manchester has become.

I would also hope that cll could be a place of tolerance. Let us leave it at
that, eh?

Best Regards,

William Deakin

Stig Hemmer

Jul 31, 1999, 3:00:00 AM
Francois-Rene Rideau <I+far...@tunes.NO.org.SPAM> writes:
> I question the meaningfulness of your term "Turing-equivalence".

The term is meaningful, in the sense that it has a well-defined
meaning, but it isn't _important_. (To a computer person, that is,
mathematicians have other priorities)

> What definition of it do you use, if any?

Turing-equivalent == being able to solve [exactly] the same problems
as a Turing Machine.

A Turing Machine has infinite memory, which means that no physical
machine can be Turing-equivalent. So, this is a purely theoretical
concept.
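For concreteness, here is a minimal sketch of the idealized machine under discussion (Python; `run_tm` and the rule format are invented for illustration). The tape is a dict indexed by integers, so it extends on demand in both directions — "infinite" only up to physical memory, which is exactly the idealization at issue:

```python
def run_tm(rules, tape_str, state="S", blank="_", fuel=10_000):
    """Run a one-tape Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, "L" or "R", next_state); state "HALT" stops it.
    `fuel` is a step budget, since halting is undecidable in general."""
    tape = dict(enumerate(tape_str))
    pos = 0
    while state != "HALT":
        if fuel == 0:
            raise RuntimeError("step budget exhausted; machine may not halt")
        fuel -= 1
        write, move, state = rules[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    low, high = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(low, high + 1)).strip(blank)

# A machine that complements a binary string, halting on the first blank.
FLIP = {
    ("S", "0"): ("1", "R", "S"),
    ("S", "1"): ("0", "R", "S"),
    ("S", "_"): ("_", "R", "HALT"),
}
```

For example, `run_tm(FLIP, "0110")` returns `"1001"`.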

However, one often talks about a theoretical system which is the same
as some actual system, except that you remove some limit(s) from it.

These systems can be, and often are, Turing-equivalent.

People using sloppy language then call the original physical system
Turing-equivalent, e.g. by saying that the editor vi is
Turing-equivalent. [Though one can of course think of "the editor vi"
as an idealized concept independent of any current implementation of
it]

> Equivalent _up to what transformation_?
?

> Do these transformations correspond to anything _remotely_ meaningful
> *in presence of interactions with the external world (including users)*?

I believe it can be made meaningful(well-defined), though trying to
give an exact definition would be beyond the scope of this article.

It is, however, not very interesting.

The problem isn't the external world really. It is rather that
Turing-equivalence say nothing about
- ease of programming. A big issue in the real world.
- execution speed. Another big issue in some contexts.

Stig Hemmer,
Jack of a Few Trades.

Vassil Nikolov

Jul 31, 1999, 3:00:00 AM
to comp.la...@list.deja.com
Francois-Rene Rideau wrote: [1999-07-31 12:56 +0200]

[...]


> *in presence of interactions with the external world (including users)*?

I am somewhat curious why you mention interactions. One essential
features of algorithmic machines such as a Turing machine is that
there is no interaction with the external world as far as the
execution of the algorithm is concerned (supplying the initial data
(the initial contents of the tape) and collecting the result (the
final contents of the tape) is outside the scope of theory of
algorithms). Therefore I don't see any meaning in relating
Turing-equivalence to interactions with the external world.

[...]


> I don't think Alan Turing would have appreciated his name being associated
> to the loads of non-sense hidden behind the words "Turing-equivalence".
> But when you commit suicide, you cannot defend yourself anymore.

Nor can one state whether one actually appreciates one's name being
associated with this or that.


Vassil Nikolov
Permanent forwarding e-mail: vnik...@poboxes.com
For more: http://www.poboxes.com/vnikolov
Abaci lignei --- programmatici ferrei.


Todd Nathan

Jul 31, 1999, 3:00:00 AM
What are some good quality sources of reading on Turing and his
history, and the impact he has had on computational mathematics?
Thanks for your time...

\t

Francois-Rene Rideau

Jul 31, 1999, 3:00:00 AM
Dear "Quentin Deakin" <Aniso...@tesco.net>,

> I find the following statement offensive.
Too bad for you. Maybe read a USENET-reading HOWTO. Or practice Zen.

>> Damn British bigots who killed Turing!
> Could you expand on this a little?

Typo, I guess. I meant "british bigots", but some rule was triggered
that made me type "British" instead of "british", hence the confusion.

> I am not aware that Britain has (or at the time of Turings death)
> a monopoly on bigotry.

Mind the difference between "british bigots" and "bigot British people".
In the first case, "british" here is the adjective and "bigot" is the noun;
in the second case, the roles are swapped.
Of course, in French, the order of nouns and adjectives is swapped.
Mind that I could hardly admire Turing and call all the British "bigots".

> I do know the history of Alan Turing tragic and utterly utterly pointless
> death. I frequent the pubs in which I am told Turing used to go, and would
> hope that if he had lived to this day he would be happy to live in the
> tolerant and dramatically changed place that Manchester has become.

At least three great people I admire have suffered from being gay in
intolerant cultures: Frédéric Bastiat, Piotr Tchaïkovski, Alan Turing;
the latter being the one who suffered the most for sexual discrimination.
Turing also suffered from stupid military "secret",
whereby the friendly public is not allowed to know
what the enemy spies are sure to have known for quite a long time.

> I would also hope that cll could be place of tolerance.

"La tolérance? il y a des maisons pour ça" ["Tolerance? There are houses for that."] -- Paul Claudel

Best regards,

[ "Faré" | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]

"Reality is that which, when you stop believing in it, doesn't go away".
-- Philip K. Dick

I+far...@tunes.no.org.spam

Aug 1, 1999, 3:00:00 AM
>: Stig Hemmer <st...@pvv.ntnu.no> on comp.lang.lisp

> Turing-equivalent == being able to solve [exactly] the same problems
> as a Turing Machine.

A completely pointless definition,
since it forgets the most important point about equivalence,
which is "up to what transformations?".
If you accept no transformation at all, then trivially,
only Turing Machines are equivalent to Turing Machines.
If you allow "any" transformation, then as trivially,
everything is Turing-equivalent, including your neighbour's mother's socks.

> A Turning Machine has infinite memory, which means that no physical
> machine can be Turing-equivalent. So, this is a purely theoretical
> concept.

A "concept" has infinitely precise extent, which means that no physical
phenomenon can be a concept. So yours is a purely theoretical statement.

>> Do these transformations correspond to anything _remotely_ meaningful
>> *in presence of interactions with the external world (including users)*?

> I believe it can be made meaningful (well-defined), though trying to
> give an exact definition would be beyond the scope of this article.

Trying to give an exact definition would be _quite_
within the scope of the current discussion.
Oh, certainly, you could contort some meaningful definition
(and there is not a One True One) of Turing-equivalence among pure systems
to kind of work on systems with external interactions,
but you would end up with a completely irrelevant contorted concept.

> It is, however, not very interesting.

You miss the whole point.

> The problem isn't the external world really. It is rather that
> Turing-equivalence say nothing about
> - ease of programming. A big issue in the real world.
> - execution speed. Another big issue in some contexts.

This "real" world of yours is but the "external" world,
to the computing system. And speed can very well be formalized
within an interaction framework, too. So that
1) The problem IS about interactions with the external world
2) Since (badly taught) concepts of Turing-equivalence are _conspicuously_
not applicable to systems with I/O,
any claim that they do not provide data
about programming systems with I/O shows deep inadequacy
not of the many concepts of Turing-equivalence in general,
but rather of the bogus understanding of people about these concepts.

[ "Faré" | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]

The degree of civilization in a society can be judged by entering its prisons.
-- F. Dostoyevski

Francois-Rene Rideau

Aug 1, 1999, 3:00:00 AM
>: Tim Bradshaw <t...@tfeb.org> on comp.lang.lisp

>>> Another formalism of equivalent power to a UTM is the lambda calculus

>>> Interestingly this formalism gave rise to an obscure and little-used
>>> family of programming languages, characterised mostly by their extreme
>>> slowness, unsuitability for any practical task, and the requirement
>>> for special hardware to run them with any degree of efficiency.

>> I'm _sick_ of hearing such a meaningless statement
>> of "being of equivalent power to a UTM" repeated over and over again
>> as a bad excuse for bad programming language design.

> I think you may be getting confused. My article was in a thread
> asking a rather obscure and largely theoretical question about
> structure editors. It wasn't about programming language design.

But as your full quote shows, you specifically used "Turing-equivalence"
as a concept meant to be relevant to language design,
and as your full post shows (not repeated here),
as a concept meant to be relevant to the "expressive power"
of programming languages.

> You may also perhaps have missed the fact that this last little aside
> to which you seem to have taken such offence was a joke.

I haven't missed the intended light tone of the remark;
I haven't missed either the implied acceptance of the widely replicated
but profoundly erroneous truism, that Turing-equivalence is
1) applicable _at all_ to tell anything about programming systems _with I/O_
2) the best criterion that Computer Science can say about such systems
(with the implication that we must resort to superstition to do better)

"To argue that gaps in knowledge which will confront the seeker must be filled,
not by patient inquiry, but by intuition or revelation, is simply to give
ignorance a gratuitous and preposterous dignity...." -- H. L. Mencken, 1930

>> I question the meaningfulness of your term "Turing-equivalence".
>> What definition of it do you use, if any?

> I think the definition in Boolos & Jeffrey `Computability and Logic'
> is as good as any.

That is, _not good at all_, considering programming of tasks _with I/O_.

> If I recall, they give quite a nice exposition of
> the various equivalent notions there (perhaps not talking about lambda
> calculus though).

Exposition that you seemingly didn't quite grok.
Equivalence is _always_ "up to" some transformations.
If you don't understand what these transformations are about,
when and where they do apply or not,
then you don't understand what the equivalence is about.

[ "Faré" | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]

The limit between philosophy and politics
is when you have to choose your friends.
-- adapted from Karl Schmidt

I+far...@tunes.no.org.spam

Aug 1, 1999, 3:00:00 AM
>: Vassil Nikolov <vnik...@poboxes.com> in comp.lang.lisp

>> *in presence of interactions with the external world (including users)*?

> I am somewhat curious why you mention interactions. One essential
> features of algorithmic machines such as a Turing machine is that
> there is no interaction with the external world as far as the
> execution of the algorithm is concerned (supplying the initial data
> (the initial contents of the tape) and collecting the result (the
> final contents of the tape) is outside the scope of theory of
> algorithms). Therefore I don't see any meaning in relating
> Turing-equivalence to interactions with the external world.

I'm quite glad to see someone remember
what Turing machines and Turing equivalence are all about.

One of my points is _precisely_ that claims that
"being Turing-equivalent is not enough for a programming language",
or that "(Theoretical) Computer Science does not teach us anything
about language design" are utter nonsense;
for Turing-equivalence conspicuously does NOT apply
to programming languages with any I/O,
and it is certainly not the end-all of what theoretical computer science
has to say about language design.

And another point was that people use the term "Turing-equivalence"
without understanding at all what it was about (never mind the fact that
even in common use within Algorithmic Information Theory
there are _many_ _different_ concepts of "Universal" machines).

Optimae Salutationes (?),

[ "Faré" | VN: Đặng-Vũ Bân | Join the TUNES project! http://www.tunes.org/ ]
[ FR: François-René Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics | Project for a Free Reflective Computing System ]

There is no excuse for a programming language without clean formal semantics.
For even if your language is not designed for formal reasoning by computers,
there will still be humans who'll have to reason about programs.

Stig Hemmer

Aug 1, 1999, 3:00:00 AM
I+far...@tunes.NO.org.SPAM (I+far...@tunes.NO.org.SPAM) writes:
> > Turing-equivalent == being able to solve [exactly] the same problems
> > as a Turing Machine.
> A completely pointless definition,
> since it forgets the most important point about equivalence,
> which is "up to what transformations?".
>
> If you accept no transformation at all, then trivially,
> only Turing Machines are equivalent to Turing Machines.
> If you allow "any" transformation, then as trivially,
> everything is Turing-equivalent, including your neighbour's mother's socks.

[I'm not a mathematician, though I occasionally play one on Usenet]

Are you talking about transformation of the problems to be solved here?

Or are you talking about equivalence relations in general?

In the first case: Turing machines, as I have seen them defined, use
character strings as input and output. Computers use character
strings as input and output. No transformations needed.

In the latter case: The last math book I read which discussed
equivalence and equivalence classes in more general terms, the word
"transformation" was never used, nor any similar word. [In that context]

So, what are you talking about?

> A "concept" has infinitely precise extent, which means that no physical
> phenomenon can be a concept. So yours is a purely theoretical statement.

In addition to not being a mathematician, I'm also not a philosopher.
Which, I guess, should have made me shut up, but I was never any good
at that. Sorry.

> This "real" world of yours is but the "external" world,

No. The "real world" is the world outside academia. The "real world"
is a world where Computer Science means little, and Software
Engineering or even Programming is what is important.

The "external world" is a conc... er... _phrase_ that is used in
Computer Science. (And other places)

So, my "real world" statement was a statement about importance and
priorities, not about "Turing-equivalence" as such.

> You miss the whole point.

These things happen...I guess I was too busy trying to make a point of
my own.

Christopher B. Browne

Aug 1, 1999, 3:00:00 AM
On 01 Aug 1999 01:13:41 +0200, I+far...@tunes.NO.org.SPAM
<I+far...@tunes.NO.org.SPAM> posted:

Turing Equivalence *is* important, just in a fairly small context.

It represents the condition of proving that a computer language is at
least of some minimal power to compute things.

There are some languages, usually configuration languages, that are of
tremendously weak "expressive power," where it is fair to say that
they are *fundamentally* flawed, unable to compute interesting things.

There are some machines that are Finite State Machines, not as "smart"
as a Turing Machine.

Distinguishing between these things is useful enough, when that sort
of situation comes up.

"Turing Equivalence" is, of course, only a faint compliment to a
computer language. It says that it is expressive enough to compute
everything that can be computed, providing the two essentials:
- The ability to make Decisions, and
- The ability to read/store Data.

From there, languages vary *vastly* in the quality of the abstractions
they provide for Decision Making (e.g. - control structures) and for
representing data (e.g. - data structures).

The main point of speaking of Turing Equivalence tends to be to
establish that despite the fact that two languages may have somewhat
different abstractions, they still have the essential "Turingness"
underneath that means that, however inconveniently, they may be made
to do the same things.

A really practical upshot of Turing Equivalence is that it means that
people can implement diverse sorts of computer architectures, without
needing to worry that they can't accomplish the same things.

Were it not for "T.E.", we might see something like:
"Oops. We used a 68030. It is different from a 80486, which means
that we'll never be able to run a database management system on
it..."
[I can only make lame claims as to what wouldn't be possible;
computers *are* essentially general purpose machines, a result of
Turing Equivalence...]
--
"I doubt this language difference would confuse anybody unless you were
providing instructions on the insertion of a caffeine enema."
-- On alt.coffee
cbbr...@hex.net- <http://www.hex.net/~cbbrowne/lsf.html>

Tim Bradshaw

Aug 1, 1999, 3:00:00 AM8/1/99
to
* Francois-Rene Rideau wrote:
>> I think you may be getting confused. My article was in a thread
>> asking a rather obscure and largely theoretical question about
>> structure editors. It wasn't about programming language design.
> But as your full quote shows, you specifically used "Turing-equivalence"
> as a concept meant to be relevant to language design,
> and as your full post shows (not repeated here),
> as a concept meant to be relevant to the "expressive power"
> of programming languages.

I can only quote the article here. I said.

Many (all?) versions of FORTRAN are interestingly *not*
turing-equivalent, because they don't allow recursion or dynamic
memory allocation, so there are only a finite number of states the
program can be in. The fact that this is so tells you something about
how practically useful the notion is.

Perhaps you missed that last sentence? After reading it, what do you
think my position is on the relevance of UTM-equivalence to language
design?

I have no more to say on this.

--tim


Christopher R. Barry

Aug 1, 1999, 3:00:00 AM8/1/99
to
Tim Bradshaw <t...@tfeb.org> writes:

> I can only quote the article here. I said.
>
> Many (all?) versions of FORTRAN are interestingly *not*
> turing-equivalent, because they don't allow recursion or dynamic
> memory allocation, so there are only a finite number of states the
> program can be in. The fact that this is so tells you something about
> how practically useful the notion is.

Any computer without infinite disk, memory and processor registers can
only be in a finite number of states. The original FORTRAN (all
versions?) did not have stack or heap allocation, so in a
non-theoretic sense recursion was impossible (no stack), as was
creating a data structure that could outlive the procedure that
created it (no heap). But in a theoretic sense you could (very
painfully) use FORTRAN as a kind of pseudo-microprocessor, with a
writable array (AFAIK the array was the original FORTRAN's only
data structure) allocated at startup as a kind of pseudo-memory
address space. You could then theoretically implement Common Lisp this
way, because you could implement stack and heap allocation within that
address space using the "FORTRAN microprocessor."

Or something like that.
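The single-array trick can be sketched like so (Python standing in for FORTRAN; the layout -- a stack growing down from the top of the array, a cons-cell heap bump-allocated from the bottom -- and all names are just one illustrative choice):

```python
# One preallocated array serves as the whole address space, with a stack
# growing from one end and a cons-cell heap growing from the other.

MEM = [0] * 1000          # the single "FORTRAN array"
sp = len(MEM)             # stack pointer (grows downward)
hp = 0                    # heap pointer (grows upward)

def push(v):
    global sp
    sp -= 1
    if sp < hp:
        raise MemoryError("stack ran into heap")
    MEM[sp] = v

def pop():
    global sp
    v = MEM[sp]
    sp += 1
    return v

def cons(car_val, cdr_val):
    """Bump-allocate a two-cell cons; return its 'address' in MEM."""
    global hp
    if hp + 2 > sp:
        raise MemoryError("heap ran into stack")
    addr = hp
    MEM[addr], MEM[addr + 1] = car_val, cdr_val
    hp += 2
    return addr

car = lambda addr: MEM[addr]
cdr = lambda addr: MEM[addr + 1]

# Build the list (1 2 3), with -1 playing the role of NIL:
lst = cons(1, cons(2, cons(3, -1)))
print(car(lst), car(cdr(lst)))  # -> 1 2
```

With pointers reduced to array indices like this, recursion and heap-allocated structures become ordinary array bookkeeping, which is exactly why a stackless, heapless language can still (painfully) host them.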

Christopher

Lars Marius Garshol

Aug 1, 1999, 3:00:00 AM8/1/99
to

* Stig Hemmer

|
| Turing-equivalent == being able to solve [exactly] the same problems
| as a Turing Machine.
|
| A Turing Machine has infinite memory, which means that no physical
| machine can be Turing-equivalent. So, this is a purely theoretical
| concept.

Not really. Common Lisp as a language is also theoretical, so if you
assume that there are no memory limits it can implement a Turing
machine. Similarly for CPUs if you view them in the abstract.

| However, one often talks about a theoretical system which is the same
| as some actual system, except that you remove some limit(s) from it.
|
| These systems can be, and often are, Turing-equivalent.

Bingo. There are no memory limits in the ANSI standard or R5RS.

--Lars M.

Lieven Marchand

Aug 1, 1999, 3:00:00 AM8/1/99
to
"Todd Nathan" <todd.nospamski.@palomablanca.net> writes:

His biography by Andrew Hodges: Alan Turing the Enigma

Try to find the 1992 edition. He added some stuff that had only then
become available.

Pierre R. Mai

Aug 1, 1999, 3:00:00 AM8/1/99
to
Lieven Marchand <m...@bewoner.dma.be> writes:

> "Todd Nathan" <todd.nospamski.@palomablanca.net> writes:
>
> > What are some good quality sources of reading on Turing and his
> > history, and the impact he has had on computational mathematics?
> > Thanks for your time...
>
> His biography by Andrew Hodges: Alan Turing the Enigma
>
> Try to find the 1992 edition. He added some stuff that had only then
> become available.

One of my favorites is Herken:1988:TUTM, which includes among other
things the following articles (all in BibTeX format, with the book at
the beginning):

@Book{Herken:1988:TUTM,
editor = {Rolf Herken},
title = {The Universal Turing Machine -- A Half-Century Survey},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
address = {Berlin}
}

@InCollection{Hodges:1988:ATTM,
author = {Andrew Hodges},
title = {Alan Turing and the Turing Machine},
booktitle = {The Universal Turing Machine -- A Half-Century Survey},
crossref = {Herken:1988:TUTM},
pages = {3--16},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
editor = {Rolf Herken},
address = {Berlin}
}

@InCollection{Kleene:1988:TAC,
author = {Stephen C. Kleene},
title = {Turing's Analysis of Computability, and Major Applications of It},
booktitle = {The Universal Turing Machine -- A Half-Century Survey},
crossref = {Herken:1988:TUTM},
pages = {17--54},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
editor = {Rolf Herken},
address = {Berlin}
}

@InCollection{Gandy:1988:TCI,
author = {Robin Gandy},
title = {The Confluence of Ideas in 1936},
booktitle = {The Universal Turing Machine -- A Half-Century Survey},
crossref = {Herken:1988:TUTM},
pages = {17--54},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
editor = {Rolf Herken},
address = {Berlin}
}

@InCollection{Feferman:1988:TLO,
author = {Solomon Feferman},
title = {Turing in the Land of O(z)},
booktitle = {The Universal Turing Machine -- A Half-Century Survey},
crossref = {Herken:1988:TUTM},
pages = {17--54},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
editor = {Rolf Herken},
address = {Berlin}
}

@InCollection{Davis:1988:MLOMC,
author = {Martin Davis},
title = {Mathematical Logic and the Origin of Modern Computers},
booktitle = {The Universal Turing Machine -- A Half-Century Survey},
crossref = {Herken:1988:TUTM},
pages = {149--174},
publisher = {Kammerer {\&} Unverzagt},
year = 1988,
editor = {Rolf Herken},
address = {Berlin}
}

I think Feferman:1988:TLO was the article about Turing and the
lambda calculus. It's also interesting to see Turing's influence
on the development of finite state automata, and automata theory
as a whole, a topic I've recently had the opportunity to research
a bit. Sadly there aren't many written references of the early
developments, but once you've mined a bit, you really get a nice
overview of the different influences in the beginnings of what
later was to be called automata theory. An interesting and
influential collection in this field was

@Book{Shannon:1956:AS,
editor = {Shannon, Claude E. and McCarthy, John},
title = {Automata Studies},
publisher = {Princeton University Press},
address = {Princeton, N.J.},
year = 1956,
number = 34,
series = {Annals of Mathematics Studies},
annote = {Source: Minsky}
}

with articles by Kleene, Moore, Culbertson, Minsky, von Neumann and
others. Reconstructing Turing's influence on the developments
represented there is historically not trivial but interesting.

Herken's book was published by Springer outside of Germany (and
A/CH?), but is out of print in its original hardback edition. OTOH
there seems to be a (paperback?) second edition by Springer available
nowadays, see Springer's site (http://www.springer.de/, search for
Herken as author, and click on all the little buttons ;). Definitely
recommended...

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Vassil Nikolov

Aug 1, 1999, 3:00:00 AM8/1/99
to comp.la...@list.deja.com
Christopher R. Barry wrote: [1999-08-01 06:03 +0000]

Absolutely. It is just a curio, of course, but in my second year at
university (when we studied FORTRAN) I wrote (as my course work,
or is it `term project' in English) a very simple s-expression
processor in Fortran. It knew only about SETQ, CAR, CDR, CONS, QUOTE
and maybe one or two others but it could read and write lists and did
that in a doubly recursive way (i.e. recursed into the cars and the cdrs).
It was of no value to programming but it was _very_ instructive for
me. It was written in a FORTRAN dialect similar to FORTRAN 66, i.e.
no language support for explicit recursion, and the stack and the heap
were implemented by means of arrays along the lines described above.
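Without language-level recursion, the "doubly recursive" walk into the cars and the cdrs can be driven by an explicit stack in place of the call stack. A Python sketch of the general technique (illustrative names only; the original would have kept the stack itself in a FORTRAN array):

```python
# Recurse into both car and cdr without native recursion: an explicit
# work stack of ("visit"/"emit", item) pairs replaces the call stack.
# Conses are ordinary Python tuples (car, cdr) here, with None as NIL.

def write_sexp(cell):
    out, stack = [], [("visit", cell)]
    while stack:
        op, x = stack.pop()
        if op == "emit":
            out.append(x)
        elif x is None:
            out.append("()")
        elif isinstance(x, tuple):
            car, cdr = x
            # Pushed in reverse order, since the last push is popped first.
            stack.append(("emit", ")"))
            stack.append(("visit", cdr))
            stack.append(("emit", " . "))
            stack.append(("visit", car))
            stack.append(("emit", "("))
        else:
            out.append(str(x))
    return "".join(out)

# (1 . (2 . NIL)) printed in dotted-pair notation:
print(write_sexp((1, (2, None))))  # -> (1 . (2 . ()))
```

The same stack discipline works for reading as well as writing, which is presumably what made the exercise so instructive under FORTRAN 66's constraints.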

(In the unlikely case that someone is interested, the source is
definitely not available electronically, for the very simple reason
that I don't have a card reader at my disposal, though I still keep
the batch somewhere...)

William Deakin

Aug 2, 1999, 3:00:00 AM8/2/99
to
Francois-Rene Rideau wrote:

> > I find the following statement offensive.

What is wrong with saying that I found something offensive?

> Too bad for you. Maybe read a USENET-reading HOWTO. Or practice Zen.

And thank you for patronising me. Maybe you should read USENET-write-english HOWTO.
Although I do admit I should practice Zen :-)

> > I am not aware that Britain has (or had, at the time of Turing's death) a monopoly
> > on bigotry.
> Mind the difference between "british bigots" and "bigot British people".

OK. But mind the difference between british bigots and bigots.

> Of course, in french, the order of nouns and adjectives are swapped.

So? This is English and not French.

> Mind that I could hardly admire Turing and call all the British "bigots".

Fair enough.

> At least three great people I admire have suffered from being gay

There are a large number of people in this world who might disagree with your
statement that being gay was something they suffered from. Thank you for (maybe
inadvertently) insulting my friends. Please be more careful.

You then talk of

> Frédéric Bastiat,

Of whom I cannot speak.

> Piotr Tchaïkovski,

Whose untimely death is thought to have been self-inflicted, brought on by the
threat that his homosexuality would be exposed. But I'm not a music historian;
this was the theory doing the rounds at the time I studied music history, and it
may have been revised since.

> Alan Turing; the latter being the one who suffered the most for sexual
> discrimination.

Alan Turing suffered more than Tchaïkovski? I realise this is a moot point, but I
am aware of the extreme homophobia of Britain in the Fifties and Sixties and of
Russia at the end of the C19th.

> Turing also suffered from stupid military "secret", whereby the friendly public
> is not allowed to know
> what the enemy spies are sure to have known for quite a long time.

There is a long history in the UK of covering up such little 'peccadillos' for the
sake of not scaring people's mothers, particularly when national security is even
remotely at stake. White heterosexual male heroes are what Britain wants, and
mostly what it has got.

> > I would also hope that cll could be place of tolerance.
> "La tolérance? il y a des maisons pour ça" -- Paul Claudel

I must now apologise for how poor my french is. Does this mean 'Tolerance? That is
for your houses.'

Best Regards,

:-) will

ps: Annoying people doesn't seem the best way to get people to join the Tunes
project. Although I think it was Wilde who said something like 'The only thing
worse than being talked about, is not being talked about.'


Eric Marsden

Aug 2, 1999, 3:00:00 AM8/2/99
to
>>>>> "wd" == William Deakin <wi...@pindar.com> writes:

wd> I would also hope that cll could be place of tolerance.

fare> "La tolérance? il y a des maisons pour ça" -- Paul Claudel

wd> I must now apologise for how poor my french is. Does this mean
wd> 'Tolerance? That is for your houses.'

Brothels used to be called «maisons de tolérance» (Houses of
Tolerance) in France. I deduce that Faré considers USENET to be an
appropriate venue for verbal street brawls.

Paul Claudel was a French Catholic writer from the beginning of the
century; see <URL:http://www.kirjasto.sci.fi/pclaudel.htm>.

--
Eric Marsden
It's elephants all the way down

William Deakin

Aug 2, 1999, 3:00:00 AM8/2/99
to
Thank you very much.

:-) will


Christopher B. Browne

Aug 2, 1999, 3:00:00 AM8/2/99
to
On Mon, 02 Aug 1999 08:54:42 +0100, William Deakin <wi...@pindar.com> posted:

>> > I would also hope that cll could be place of tolerance.
>> "La tolérance? il y a des maisons pour ça" -- Paul Claudel
>
>I must now apologise for how poor my french is. Does this mean
>'Tolerance? That is for your houses.'

More like:
"Tolerance? There are houses for that."

There must be some sort of idiom or historical perspective to that,
because it doesn't directly make sense. I think "houses" doesn't mean
what we think of as meaning "houses."

If it bears any relationship to the "Tolerance" movements of the last
century, it's probably talking about public houses, e.g. - what we'd
call "bars" today.

>ps: Annoying people doesn't seem the best way to get people to join
>the Tunes project. Although I think it was Wilde who said something
>like 'The only thing worse than being talked about, is not being
>talked about.'

There was a pretty bad Monty Python sketch that headed off from there.

"Tunes" seems to be all about lots of talk, with not too much actual
code. There does indeed need to be some design work done before
constructing a new OS, but it looks like it's been "all talk" for
quite a while now.

Well, it looks like there now exists "Retro," a Forth-based
precursor to Tunes.

--
Don't panic.
-- The Hitchhiker's Guide to the Galaxy
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/oses.html>
