
Lisp syntax, what about resynchronization?


From: Tom Breton
Date: Jun 8, 1999

I agree with most of the comments on this thread about Lisp syntax.
But it occurs to me that one advantage that heterogeneous Algol-type
syntaxes have over Lisp is that when they get lost, they can detect
being lost and resynchronize (and thus produce more errors, but that's
helpful for debugging). In Lisp, one misplaced parenthesis can easily
put you into "Where the hell is this problem coming from?" mode.

It also occurred (*) to me that this could be remedied pretty easily
wrt Lisp syntax. You'd just need a resynchronization token or tokens,
to mean a) "push" the current depth + 1, b) "restore" the current
depth without popping, and c) "pop" the current depth off. These
positions would be read as if they contained however many left or right
parentheses would be needed to reach the level being restored to.

I don't know whether it would be possible to do that with a reader
macro instead of a real syntax change. Using "(#@", "#@", and "#@)"
as tokens, it could look something like this (grabbing the first elisp
defun I saw, kill-all-abbrevs from abbrev.el):

(defun kill-all-abbrevs ()
"Undefine all defined abbrevs."
(interactive)
(#@ ;;Save depth
let
((tables abbrev-table-name-list
#@ ;;Restore depth
(#@ ;;Save depth, nested,
while tables
#@ ;;Restore, from the second (nested) depth, not the first.
(clear-abbrev-table (symbol-value (car tables
#@ ;;Restore, again from second.
(setq tables (cdr tables
#@);;Forget depth. Now we have the first depth again.
#@)) ;;Forget depth, and close one more parenthesis to end the defun.
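To pin down the semantics, here is a sketch of a preprocessor that expands
the three tokens into plain parentheses. It's in Python purely for
illustration; the token handling is my own invention, and it punts on
strings for brevity:

```python
import re

def expand_resync(src):
    """Expand the proposed resynchronization tokens into plain parens.

    "(#@"  opens a paren and saves ("pushes") the resulting depth,
    "#@"   closes enough parens to restore the last saved depth,
    "#@)"  closes back to the last saved depth, closes that paren
           too, and forgets ("pops") the saved depth.
    ";" comments are stripped and strings are not special-cased,
    to keep the sketch short.
    """
    src = re.sub(r';[^\n]*', '', src)   # drop comments
    out, depth, saved = [], 0, []
    i = 0
    while i < len(src):
        if src.startswith('(#@', i):
            out.append('(')
            depth += 1
            saved.append(depth)
            i += 3
        elif src.startswith('#@)', i):
            target = saved.pop()
            out.append(')' * (depth - target + 1))
            depth = target - 1
            i += 3
        elif src.startswith('#@', i):
            target = saved[-1]
            out.append(')' * (depth - target))
            depth = target
            i += 2
        else:
            ch = src[i]
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
            out.append(ch)
            i += 1
    return ''.join(out)
```

Fed the marked-up kill-all-abbrevs above, it emits a balanced form
equivalent to the original defun.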


(*) Actually, this occurred to me way back when someone here was saying
that XML was "Lisp with brackets" (which I don't entirely buy, but Lisp
syntax for markup... yum). Where resynchronization in a programming language
just lets you debug more easily, in a markup language it can make all
the difference between a document that will render and one that won't.

--
Tom Breton, http://world.std.com/~tob
Ugh-free Spelling (no "gh") http://world.std.com/~tob/ugh-free.html

From: Kent M Pitman
Date: Jun 9, 1999

[Oops. Accidentally replied to this in e-mail. Copying it here to
the group, though in the process I have "revised and extended" my remarks.]

Tom Breton <t...@world.std.com> writes:

> I agree with most of the comments on this thread about Lisp syntax.
> But it occurs to me that one advantage that heterogeneous Algol-type
> syntaxes have over Lisp is that when they get lost, they can detect
> being lost and resynchronize (And thus produce more errors, but that's
> helpful for debugging). In Lisp, one misplaced parenthesis can easily
> put you into "Where the hell is this problem coming from?" mode.

As I recall, and I might be wrong because I didn't myself use it--I
just heard about it, the LMITI (LMI/MIT/TI) branch of the Lisp Machine
had some mode (I think invented/installed by Richard Stallman) that
watched out for misbalanced parens by looking for open parens that were
to the left of a containing paren. That seems to me almost entirely
adequate to the task. I don't recall anyone making much noise about
it though I think I may have seen it do its thing once and thought it
was kinda cool at first blush. It may be that in practice it was too
simplistic and it got in the way.
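A rough reconstruction of that heuristic, in Python rather than anything
that ran on the Lisp Machine, and ignoring strings and comments entirely:
in conventionally indented Lisp, an open paren on a later line that sits
to the left of the open paren containing it almost always means a close
paren is missing above.

```python
def suspicious_opens(source):
    """Flag open parens that appear on a later line but to the left
    of the open paren that contains them. A reconstruction of the
    heuristic described above, not the original code; strings and
    comments are not handled."""
    stack = []       # (line, column) of each currently open paren
    flagged = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for col, ch in enumerate(line):
            if ch == '(':
                # an open paren left of its container, on a later line
                if stack and lineno > stack[-1][0] and col < stack[-1][1]:
                    flagged.append((lineno, col))
                stack.append((lineno, col))
            elif ch == ')' and stack:
                stack.pop()
    return flagged
```

On a form with a missing close paren, the next top-level defun's open
paren gets flagged, which localizes the damage to the form above it.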

I've seen other compilers that keep track of the last place an open
paren in column 0 happened, so that when a compiler error happened,
especially a reader error, they could offer heuristic advice about
the paren balancing beyond what the mere count of parens would give.

Note also that Lisp used to (in Interlisp) have super-parens so you
could use [foo (bar baz)] to make sure you had firewalls for bracketing.
These were heavily (ab)used, with people writing [foo (bar baz],
though it drove many (including Emacs) nuts. The Maclisp community
(later Common Lisp) made a pretty conscious decision not to do it this way.

I do agree that it's a good property of a language that it should be
"sparse" so that errors are easily detected from non-errors. But
you can't have everything, and I think the benefit that comes for having
infix comes at a sizable disadvantage. Even LOOP, which is majorly
infix, is a routine source of indentation woes, and is much harder
to edit using normal motion commands.

From: Erik Naggum
Date: Jun 9, 1999

* Tom Breton <t...@world.std.com>

| I agree with most of the comments on this thread about Lisp syntax.
| But it occurs to me that one advantage that heterogeneous Algol-type
| syntaxes have over Lisp is that when they get lost, they can detect
| being lost and resynchronize (And thus produce more errors, but that's
| helpful for debugging). In Lisp, one misplaced parenthesis can easily
| put you into "Where the hell is this problem coming from?" mode.

that's why we have editors instead of compilers to help us find problems.
one very easy way to spot parenthesis errors is to let Emacs indent the
whole top-level form. if something moves, undo the indentation, and fix
whatever you fairly immediately see caused the problem. repeat until
nothing moves.

| (*) Actually, way back when someone here was saying that XML was "Lisp
| with brackets" (which I don't entirely buy, but Lisp syntax for
| markup... yum). Where resynchronization in a programming language
| just lets you debug more easily, in a markup language it can make all
| the difference between a document that will render and one that won't.

having been one of the leading SGML experts in the world before I finally
came to conclude it was a fundamentally braindamaged approach (but a good
solution once you had taken the wrong approach and stuck with it -- like
so many serious design errors in programming, or, indeed, politics, where
it is always harder to get back on track than to continue forward and to
knock down ever more hindrances -- it's like driving a tank: if you drift
off the road, any telephone poles you might knock down are only proof
positive of your mighty tank's ability to get where you want to go, and
the important psychological "corrector" that hindrances should have been
is purposefully ignored because you are too powerful), I could go into a
long and arduous debate over exactly how little resynchronization ability
SGML's "labeled parentheses" gives you, and how immensely hard it is to
backtrack the SGML parsing process. it's a pity that XML is actually a
little _worse_ in this regard.

one of the reasons I got into SGML was that it had beautiful, explicit
markers of the beginning and end of the syntactic structure. this I
sensed was great because I had had a lot of Lisp exposure before SGML.
however, Lisp's elegance loses all its beauty if you adorn parentheses
with labels, because you actually _lose_ synchronization ability when you
have to deal with conflated errors: <BAR>...<BAZ>...</BAR> may close BAZ
and BAR at the same time in the complex game of omitted end tags in SGML,
but in the fully explicit case, it may be a typo for </BAZ>, or may be a
missing </BAZ>. any attempt to resynchronize will get it right half the
time, which causes subsequent errors to crop up, and then you'd have to
go back and try the other possibility, but that means reshuffling your
entire tree structure. the same is true of empty elements that are
mistaken for containers. perhaps <BAZ> should have been <BAZ/>? that
means you get it right only a _third_ of the time in XML where you don't
even have the DTD to help you decide anything, anymore. 'tis wondrous!

this is the same problem as the stupid IF THEN ELSE problem created by
people without functioning brains. that is, is IF THEN IF THEN ELSE a
two-branch inner or outer IF? there's simply no way to tell, so it was
decided it should be inner, which means the program compiles wrong if the
compiler's idea is different from the programmer's idea. in Lisp, a
missing parenthesis would be found when indenting the code.

Lisp's simple, unlabeled, explicit-structure-marking syntax solves all of
these problems. your silly introduction of syntactic noise is another
example of how a powerful tool can be used to knock down telephone poles
instead of taking a hint that you should get back on track. imagine
giving the programmer the duty of filling in noise like that! it's the
mistakes of SGML all over again: _adding_ stuff to make it simpler.

I hate it when history repeats itself and nobody learned anything last
time, either: those who do not know Lisp are doomed to reimplement it.

#:Erik
--
@1999-07-22T00:37:33Z -- pi billion seconds since the turn of the century

From: Tom Breton
Date: Jun 9, 1999

Kent M Pitman <pit...@world.std.com> writes:

>
> I've seen other compilers that keep track of the last place an open
> paren in column 0 happened, so that when a compiler error happened,
> especially a reader error, they could offer heuristic advice about
> the paren balancing beyond what the mere count of parens would give.

That's certainly an approach. It needs to detect an error first, tho.

> Note also that Lisp used to (in Interlisp) have super-parens so you
> could use [foo (bar baz)] to make sure you had firewalls for bracketing.
> These were heavily (ab)used, with people writing [foo (bar baz],
> though it drove many (including Emacs) nuts. The Maclisp community
> (later Common Lisp) made a pretty conscious decision not to do it
> this way.

I'm sure abuse would be a problem with my proposal too, for
essentially the same reason.

> I do agree that it's a good property of a language that it should be
> "sparse" so that errors are easily detected from non-errors. But
> you can't have everything, and I think the benefit that comes for having
> infix comes at a sizable disadvantage. Even LOOP, which is majorly
> infix, is a routine source of indentation woes, and is much harder
> to edit using normal motion commands.

I'm not sure how infix notation got into this. If what I talked about
translates to infix notation, I don't see it.

I agree that LOOP's nonconformity is regrettable. I tend to disfavor
loop for that reason.

From: Tom Breton
Date: Jun 9, 1999

Erik Naggum <er...@naggum.no> writes:

> * Tom Breton <t...@world.std.com>
> | I agree with most of the comments on this thread about Lisp syntax.
> | But it occurs to me that one advantage that heterogeneous Algol-type
> | syntaxes have over Lisp is that when they get lost, they can detect
> | being lost and resynchronize (And thus produce more errors, but that's
> | helpful for debugging). In Lisp, one misplaced parenthesis can easily
> | put you into "Where the hell is this problem coming from?" mode.
>
> that's why we have editors instead of compilers help us find problems.
> one very easy way to spot parenthesis errors is to let Emacs indent the
> whole top-level form. if something moves, undo the indentation, and fix
> whatever you fairly immediately see caused the problem. repeat until
> nothing moves.

Which really doesn't address the problem, sorry. I've assumed from
the start that we were dealing with code that was automatically
indented. Inspecting the indentation does not necessarily reveal
errors. Unless you are focusing on which level you intend an
expression to be in, one level looks very like another.

> Lisp's simple, unlabeled, explicit-structure-marking syntax solves all of
> these problems.

No, and I explained why.

> your silly introduction of syntactic noise is another
> example of how a powerful tool can be used to knock down telephone poles
> instead of taking a hint that you should get back on track. imagine
> giving the programmer the duty of filling in noise like that!

I think you did not understand what was proposed if you say that.

From: thi
Date: Jun 9, 1999

Tom Breton <t...@world.std.com> writes:

> (defun kill-all-abbrevs ()
> "Undefine all defined abbrevs."
> (interactive)
> (#@ ;;Save depth
> let
> ((tables abbrev-table-name-list
> #@ ;;Restore depth
> (#@ ;;Save depth, nested,
> while tables
> #@ ;;Restore, from the second (nested) depth, not the first.
> (clear-abbrev-table (symbol-value (car tables
> #@ ;;Restore, again from second.
> (setq tables (cdr tables
> #@);;Forget depth. Now we have the first depth again.
> #@)) ;;Forget depth, and close one more parenthesis to end the defun.

looks like some snails got confused and wandered into the source.

thi

From: Martin Rodgers
Date: Jun 9, 1999

There is a message <m3so82qw...@world.std.com>
scrawled in the dust by Tom Breton in a flowery script, reading:

> (And thus produce more errors, but that's helpful for debugging).

I've usually found that just one error is enough, usually the first one.
There was a time when C compilers would tell me the real error in the
second error message, but that tended to be because of type inference and
the absence of a declaration.

Using languages with reflection, I find that the first error is the only
one I can safely trust, and this is the only one that I get. Every time I
think about recovery, it's a can of worms. So why bother? I just correct
the source code, and try again. It only takes a second or two.

BTW, I also think of XML as you do. XML->documents as Lisp->code. ;)
I'm currently playing with a little tool for turning XML into Lisp code
that, say, builds a tree of CLOS objects. The code could do anything, but
I love the idea of using CLOS to process the tree. Lisp is good at this!
--
Remove insect from address to email me | You can never browse enough
"Ahh, aren't they cute" -- Anne Diamond describing drowning dolphins

From: Fernando Mato Mira
Date: Jun 9, 1999

Tom Breton wrote:

> But it occurs to me that one advantage that heterogeneous Algol-type
> syntaxes have over Lisp is that when they get lost, they can detect
> being lost and resynchronize (And thus produce more errors, but that's
> helpful for debugging). In Lisp, one misplaced parenthesis can easily
> put you into "Where the hell is this problem coming from?" mode.

Yeah, right. Like when you miss a semicolon or stuff like
that in C(++).

"Where the hell" is actually worse in C(++) as I still
haven't crossed a compiler with the single brain cell
needed to tell me in exactly which file in an #include chain a problem
is actually happening!

Regards,

From: Fernando Mato Mira
Date: Jun 9, 1999

Erik Naggum wrote:

> having been one of the leading SGML experts in the world before I finally
> came to conclude it was a fundamentally braindamaged approach (but a good

And I know you don't like LaTeX (and obviously Knuth belongs to the pathetic
`reinvent school', but what choice do I have?).
What can we use then? Any `TexROLLisp' to offer? [It's a real question]

Regards,

From: Paolo Amoroso
Date: Jun 9, 1999

On Wed, 9 Jun 1999 05:42:49 GMT, Tom Breton <t...@world.std.com> wrote:

> Which really doesn't address the problem, sorry. I've assumed from
> the start that we were dealing with code that was automatically
> indented. Inspecting the indentation does not neccessarily reveal

For my--toy, presently--programs I evaluate every form just after editing
(writing or modifying) it. This provides an early occasion for testing, and
it reduces the chances that unbalanced parentheses remain undetected.

Question for experienced programmers: does this approach scale well to
large systems?


Paolo
--
Paolo Amoroso <amo...@mclink.it>

From: Erik Naggum
Date: Jun 9, 1999

* Tom Breton <t...@world.std.com>

| Which really doesn't address the problem, sorry.

it seems you're reading awfully fast, but slow down so you get the issues
that are being communicated, OK?

| I've assumed from the start that we were dealing with code that was
| automatically indented.

but at which level? there are two major schools, here: one, which SGML
adherents tend to belong to, says that at no point in time shall the
syntactic wellformedness of the total system be in jeopardy, which means
you cannot at any time perform a task in two steps that causes the whole
document (SGML) or form (Lisp) between these two steps to break the rules
of syntax. this school favors structure editors and completely automatic
indentation and also storing the document or code in a non-text form when
they think they can get away with it. the core principle of this school
is that structure is "a priori". the other major school, which you will
find intrinsic to the Emacs philosophy, is that writing highly structured
material is a cooperation between user and editing software. at no point
in time can there be a guarantee that the structure is complete, but you
can check for it, and you can cause it to become complete once you detect
where it is not. the core principle of this school is that structure is
"a posteriori". I generally tend to be an a posteriori kind of guy, and
I think a priori kinds of guys pretend to know the unknowable, and since
you don't listen very well, but assume you already know what I said
before you actually read it, that rhymes well with the a priori school.

I assume here that since you drag in the compiler, which is THE WRONG
PLACE to do this kind of checking and recovery and resynchronization, the
reasons for which I explained and you did not understand, you want some
form of cooperation between editor and the human, instead. that
cooperation is at its best when it is completed before the compilation
starts. less syntactic mess means better cooperation between user and
editor (Lisp). more syntactic mess means the compiler is the only tool
that could ever figure it out (C++).

| Inspecting the indentation does not neccessarily reveal errors.

I didn't say "inspect", Tom. what I said was, _when_ you are editing,
and you are inserting and deleting parentheses, you will naturally have
expectations as to what the indentation will be. that is, if we're still
assuming that programmers are humans. you give me the impression that
you argue in a world where they are not, which rhymes well with my own
great vision of the future, where computers program people, but this is
still some ways off, and until then, we have to deal with people typing
and seeing what they do. so instead of your silly interpretation of
inspecting the indentation, I _actually_ said we should watch the
_changes_ that Emacs makes to the indentation when we reindent code that
has been changed. you have obviously never done this, so let me explain
what it means: suppose you add a binding to a LET form, but you forget to
close the outermost parenthesis in what you added. reindent. watch how
completely unrelated lines suddenly move. this is such a fantastically
simple task most people have to be shown it to understand that it is NOT
a question of inspecting a _static_ indentation, but of watching Emacs
make _unexpected_ changes to indentation. the rule of thumb is: if
something you don't expect to move does move, you've made a mistake:
undo the indentation immediately and go fix it.

however, the interesting point here is that your suggestion does not
reveal errors in a slightly different, but in principle identical, way to
what you argue against in my counter-suggestion: the reason is computers
_still_ don't know the _intention_ of the human programmer, because what
we look at is a failure to communicate the intent properly in syntax. in
the case of SGML (or XML, if you want), that flaw is at the core of the
braindamaged design: you cannot communicate or even think of structure in
SGML without knowing it a priori _and_ with full knowledge of the intent
of the users of that structure. if you try to do otherwise, you will
fight structure (or make users fight it), and resort to a posteriori
means of re-establishing the syntactic structure, when the intent is
gone. which is what this silly "resynchronization" proposal is all about.

| Unless you are focussing on which level you intend an expression to be
| in, one level looks very like another.

no wonder you're seriously confused about this issue to begin with.

| > Lisp's simple, unlabeled, explicit-structure-marking syntax solves all
| > of these problems.
|
| No, and I explained why.

I see. no, you did not explain why, Tom. you cannot _both_ accuse me of
not getting your point _and_ tell me you already explained what I bring
up, which must of necessity be something other than your issues. this
only means you are not listening to anyone but yourself.

| I think you did not understand what was proposed if you say that.

sometimes, one person's brilliant idea is a brilliant person's laugh.

and in this case, my reason for leaving SGML behind was that the kind of
silly thing you proposed for Lisp was proposed for SGML about once a
month by various people who thought they had grokked SGML, but hadn't.
it finally dawned on me that the complexity of the syntax made this
almost _necessary_ for SGML, so it had to come up, and one of the least
productive solutions, XML, won the day. I was there, at the conference
table where the first thoughts that became XML surfaced. a few months
earlier, I had proposed the need for a special marker for empty elements
-- and then retracted that proposal because it led to new problems -- but
guess what survived in XML! and now we have the same stupid issues that
caused people to think long and hard and come up with XML come to Lisp
because John McCarthy made the acute observation that XML is basically
parentheses with labels on them, but some people didn't get the meaning
of that: drop the stupid labels in XML. it was _not_ an invitation to
add labels or (syntactic) markers to Lisp.

however, I don't expect anyone to understand this unless they actually
understand SGML, and very few people do, least of all the people who
think it's great, so I just wanted to put it in writing in case someone
gets the message.

structure only appears a priori evident to people who didn't build it.

From: Erik Naggum
Date: Jun 9, 1999

* Fernando Mato Mira <mato...@iname.com>

| And I know you don't like LaTeX (and obviously Knuth belongs to the pathetic
| `reinvent school', but what choice do I have?).
| What can we use then? Any `TexROLLisp' to offer? [It's a real question]

the realness of the question is evident, but I don't have time to answer
this fully, now. it's fairly involved, which is also why I haven't
gotten around to writing something about what attracted me to SGML and what
made me leave except in commentaries on comp.lang.lisp and comp.text.sgml.

what do I use myself? somebody else. a consistent, simple style in
plain text makes it possible for skilled typographers to make printed
text look very nice. authors generally think they know too much about
typography, or if they have the wherewithal to realize they don't, go to
extraordinary lengths to make life for typographers needlessly hard.
working _inside_ the publishing business also tells you which problems
SGML was hoped to solve and their magnitude, but also their reason: most
authors don't know jack about the _logistics_ of writing books, much less
publishing them. most authors write books like mad generals conduct
wars: without concern for how their troops shall get fuel and food and
ammo. but if you can't think in terms of logistics, at least have enough
respect for those who do that you help them by staying out of their way.
I found it wise to get out of the way, not only because it's a horribly
_practical_ industry: they just do _whatever_ it takes to get a book out
(SGML was like asking miners to use latex gloves so they wouldn't leave
finger prints on the ore), but also because SGML couldn't help anybody at
the level they actually needed help. SGML is a giant conflation of what
was once a noble division of labor, much worse than the incredibly stupid
stuff Microsoft thinks is publishing, because Microsoft thinks WYSIWYG is
going to make skilled typographers happy (hint: they aren't), but SGML
makes authors _aware_ of the structure that had hitherto been implicit,
and few authors, except highly skilled technical writers, have any idea
what their structure communicates until a _long_ way into writing it (if
we're lucky), or them (if we aren't).

if you want to consider my web pages in terms of their design, take a
look at this excellent explanation of what I didn't know I was doing
before visiting them:

http://www.webreference.com/dlab/9804/academic.html

I have been on both ends of the publishing business: I have published
many articles in magazines and newspapers -- never have the editors been
happier than when receiving plain text -- and I have helped write the
software to make a giant web of mad user input into books -- it is not a
pretty sight. so I decided to make my plain text a pretty sight and let
somebody else make my plain text into pretty formatted page layout.

hope this is a start at an answer, at least.

From: Barry Margolin
Date: Jun 9, 1999

In article <m3so82t...@world.std.com>,
Tom Breton <t...@world.std.com> wrote:
>Kent M Pitman <pit...@world.std.com> writes:
>
>>
>> I've seen other compilers that keep track of the last place an open
>> paren in column 0 happened, so that when a compiler error happened,
>> especially a reader error, they could offer heuristic advice about
>> the paren balancing beyond what the mere count of parens would give.
>
>That's certainly an approach. It needs to detect an error first, tho.

Usually it does. If you have an extra or missing parenthesis somewhere in
your file, you'll either get an EOF in the middle of an object or a close
parenthesis at top-level. You would have had to have made an even number
of complementary parenthesization mistakes to avoid them being detected.

Kent mentioned an editor mode that noticed an open parenthesis too far to
the left. I don't recall this specifically, but I think I remember that
ZMACS Lisp mode recognized open parens at the left margin specially in a
number of ways. So if you had code like:

(defXXX ...
... <not enough close parens>)

(defYYY ...
... <too many close parens>)

I think the M-x Find Unbalanced Parenthesis command could tell that each of
the definitions had unbalanced parens, even though they balanced each other
out.
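That per-definition check is easy to sketch once you adopt the column-0
convention. The following is an illustration in Python, not ZMACS code,
and it ignores strings and comments: treat every open paren in column 0
as the start of a new top-level form, and balance each form separately.

```python
def check_balance(source):
    """Report paren imbalance per top-level form, using the
    open-paren-in-column-0 convention to delimit forms.
    Returns (starting line, surplus) pairs: a positive surplus
    means missing close parens, a negative one means extras.
    A sketch only -- strings and comments are not handled."""
    problems = []
    start, depth = None, 0
    for lineno, line in enumerate(source.splitlines(), 1):
        if line.startswith('('):
            if depth > 0:
                # a new definition begins while the previous one is
                # still open: the previous form lacks `depth` closes
                problems.append((start, depth))
                depth = 0
            if depth == 0:
                start = lineno
        depth += line.count('(') - line.count(')')
        if depth < 0:
            # more closes than opens: extra parens in this form
            problems.append((start, depth))
            depth = 0
    if depth > 0:
        problems.append((start, depth))
    return problems
```

For a file like the one above it reports one (line, surplus) pair per
unbalanced definition, even when the surpluses cancel out across the file.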

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

From: Kent M Pitman
Date: Jun 9, 1999

Tom Breton <t...@world.std.com> writes:

> > I do agree that it's a good property of a language that it should be
> > "sparse" so that errors are easily detected from non-errors. But
> > you can't have everything, and I think the benefit that comes for having
> > infix comes at a sizable disadvantage. Even LOOP, which is majorly
> > infix, is a routine source of indentation woes, and is much harder
> > to edit using normal motion commands.
>
> I'm not sure how infix notation got into this. If what I talked about
> translates to infix notation, I don't see it.

Infix implies redundancy. For example, in Lisp one writes:

(if x y z)

One doesn't need "then" to know that y is a then-clause because it's
"obvious" that the test is a complete expression; the expression is
therefore concise at the expense of being parse-wise fragile. A single
character out of place can upset your ability to give good context.
Consider instead:

if x then y else z

This is a much less fragile thing. A typo like:

if xy then else z

can give you a bunch of really good information about the nature of
your problem that

(if xy z)

cannot. The reason is that the "then" and the "else" are textually heavily
redundant and add a great deal of additional "information" to the form
that can be used to grab some foothold in the presence of errors. Even
if you misspell a guideword, you have the fact working for you that you
expected a guide word. So in

if x ten y else z

it's fairly easy to tell that "ten" is a mis-spelled guide word because you
found an "else" before seeing a "then" and you had an unrecognized token
that was very much like a "then" in between. There is no equivalent of
this kind of extra contextual information in lisp.

So my comment was not about infix per se, but about the redundancy of
expression common in infix languages, which is typically absent in Lisp;
Lisp largely uses positional information instead of keyword information
to delineate program structure.
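To make the foothold concrete, here is a toy parser sketch (in Python
for illustration; no real compiler works exactly this way) that uses the
expected guide words both to detect an error and to resynchronize past it:

```python
def parse_if(tokens):
    """Toy parser for `if TEST then THEN else ELSE` (each part a single
    token), showing how redundant guide words let a parser both report
    an error and keep going. A sketch of the general idea, not any
    real compiler's recovery strategy."""
    errors = []
    pos = 0

    def expect(keyword):
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == keyword:
            pos += 1
        else:
            found = tokens[pos] if pos < len(tokens) else '<end of input>'
            errors.append(f"expected {keyword!r}, found {found!r}")
            pos += 1          # assume a typo for the guide word and go on

    def operand():
        nonlocal pos
        tok = tokens[pos] if pos < len(tokens) else None
        pos += 1
        return tok

    expect('if')
    test = operand()
    expect('then')
    then_branch = operand()
    expect('else')
    else_branch = operand()
    return (test, then_branch, else_branch), errors
```

Given `if x ten y else z` it still recovers the tree (x, y, z) and reports
that 'ten' appeared where 'then' was expected -- context that the bare
Lisp form (if xy z) cannot supply.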

From: Tom Breton
Date: Jun 9, 1999

Fernando Mato Mira <mato...@iname.com> writes:

> Tom Breton wrote:
>
> > But it occurs to me that one advantage that heterogeneous Algol-type
> > syntaxes have over Lisp is that when they get lost, they can detect
> > being lost and resynchronize (And thus produce more errors, but that's
> > helpful for debugging). In Lisp, one misplaced parenthesis can easily
> > put you into "Where the hell is this problem coming from?" mode.
>
> Yeah, right. Like when you miss a semicolon or stuff like
> that in C(++).

C++, for all its lack of virtue, IME detects syntactic errors more
easily. The semicolon errors you mentioned are notoriously easy
to find.

From: thi
Date: Jun 10, 1999

Erik Naggum <er...@naggum.no> writes:

> structure only appears a priori evident to people who didn't build it.

ie, evidence of structure appears as you build. you might as well keep
your eyes open at that time. abdicating responsibility to a compiler is
not only irresponsible (by definition), it's not as much fun. :->

`M-C-a M-C-q' !!

thi


From: Rob Warnock
Date: Jun 10, 1999

Erik Naggum <er...@naggum.no> wrote:
+---------------

| structure only appears a priori evident to people who didn't build it.
+---------------

I believe it was David Parnas who said something to the effect that
a well-crafted program will always *look* (a posteriori) like it was
written in one fell swoop out of whole cloth (that is, look like an
a priori design), but in fact it never is.

[Sorry I've forgotten the exact reference, but IIRC it was an article
about growing the documentation & the program together in the face of
changing requirements or something like that...]


-Rob

-----
Rob Warnock, 8L-855 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. FAX: 650-933-0511
Mountain View, CA 94043 PP-ASEL-IA

From: Stig E. Sandø
Date: Jun 12, 1999

On Wed, 9 Jun 1999 23:05:41 GMT, Tom Breton <t...@world.std.com> wrote:
>> Yeah, right. Like when you miss a semicolon or stuff like
>> that in C(++).
>
>C++, for all its lack of virtue, IME detects syntactic errors more
>easily. The semi-colon errors you mentioned are notoriously very easy
>to find.

But at the cost that parsers and front-ends are extremely difficult to
write. Sure, it allows companies to live off just writing front-ends for
c++ compilers, but that should imho be a signal that something is terribly
wrong. I doubt there will ever be companies specializing in making
front-ends to parse Lisp-syntax..

--
------------------------------------------------------------------
Stig Erik Sandø Institute of Art History, UiB Norway

From: Tom Breton
Date: Jun 12, 1999

st...@kunst.uib.no (Stig E. Sandø) writes:

> On Wed, 9 Jun 1999 23:05:41 GMT, Tom Breton <t...@world.std.com> wrote:
> >> Yeah, right. Like when you miss a semicolon or stuff like
> >> that in C(++).
> >
> >C++, for all its lack of virtue, IME detects syntactic errors more
> >easily. The semi-colon errors you mentioned are notoriously very easy
> >to find.
>
> But at the cost that parsers and front-ends are extremely difficult to
> write. Sure, it allows companies to live off just writing front-ends for
> c++ compilers, but that should imho be a signal that something is terribly
> wrong. I doubt there will ever be a companies specializing in making
> front-ends to parse Lisp-syntax..

I completely agree.

From: David Combs
Date: Jun 16, 1999

In article <31379308...@naggum.no>, Erik Naggum <er...@naggum.no> wrote:
>* Fernando Mato Mira <mato...@iname.com>
>| And I know you don't like LaTeX (and obviously Knuth belongs to the pathetic
>| `reinvent school', but what choice do I have?).
>| What can we use then? Any `TexROLLisp' to offer? [It's a real question]
>

I use the "commercial version" of cmu "Scribe".

Works fine for me, produces ascii, postscript, or a variety
of other things, mostly old, useless things.

The source is plain ascii, via a scribe-mode in emacs.

(rms originally used scribe for his emacs doc and *info*,
but when it went commercial, he couldn't (wouldn't) handle that,
of course...)

Neat thing about the postscript is they don't do kerning; the
only problem I have is my own name in caps, DAVID, where the AV looks
horrible. But what's good is that the postscript is readable:
whole lines are done as one string in parens.

Also does page numbers, so you can jump around on them in
ghostview.

Does bibliography via 10,000 different styles for different
(cs) journals. Index. Contents. Macros, actually pretty neat
macros.

Why didn't Knuth ever learn lisp? Maybe he did -- but clearly
Brian Reid (who did Scribe as his PhD thesis) sure put a lot of lisp
ideas into Scribe.

Such a cool "language".

---

I've got books on tex, latex, etc. Gawd Amighty, looks like
they TRIED to make it difficult!

If interested:

Scribe (now: Cygnet Publishing Technologies): 800-365-2946, 412-471-2070

I'd give you the email, but they (er, she) never reads it.

David

