
Eureka! Lexical bindings can be guaranteed!


Erann Gat

Mar 6, 2000

Paul Foley just sent me the following email:

> On Mon, 06 Mar 2000 11:18:41 -0800, Erann Gat wrote:
>
> > 2. Issues with dynamic vs. lexical bindings, including beginner's
> > confusion, and the fact that there is no portable way to write code that
> > is guaranteed to produce a lexical binding.
>
> Sure there is: create an uninterned symbol for your guaranteed-lexical
> variable. If you really think this is worthwhile, you could write
> code like
>
> (let ((#1=#:foo 42))
>   (lambda () (format t "~&;; The value of FOO is ~D" #1#)))
>
> or you could write a "lexical-let" macro that makes gensyms for all
> the bindings and code-walks the body doing the replacements.

And it then occurred to me that code walking isn't necessary -- you
can use SYMBOL-MACROLET to do the code walking for you. So here is
LLET, lexical-let, a macro that guarantees lexical bindings:

(defmacro llet (bindings &body body)
  (let* ((vars (mapcar 'first bindings))
         (vals (mapcar 'second bindings))
         (gensyms (mapcar (lambda (v) (gensym (symbol-name (first v))))
                          bindings)))
    `(let ,(mapcar 'list gensyms vals)
       (symbol-macrolet ,(mapcar 'list vars gensyms)
         ,@body))))
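
[Editor's note: to illustrate the expansion -- a sketch, not part of
the original post; the gensym name #:X123 is made up:]

(llet ((x 1))
  (lambda () x))

;; macroexpands to something like
;;
;; (let ((#:X123 1))
;;   (symbol-macrolet ((x #:X123))
;;     (lambda () x)))
;;
;; so every X in the body is rewritten to the uninterned #:X123, which
;; can only ever name a lexical binding.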

> Personally, I wouldn't bother unless I saw some evidence that this is
> actually a problem "in real life".

As I wrote to Paul, I work for NASA and I am trying to convince people
to use Lisp on spacecraft, which are multimillion-dollar assets. Managers
want to see safety guarantees, not just empirical evidence. Just because
something hasn't been a problem in the past doesn't mean it won't be a
problem in the future. And in the space biz, the first glitch could also
be your last.

Erann Gat
g...@jpl.nasa.gov

Erann Gat

Mar 6, 2000
In article <gat-060300...@milo.jpl.nasa.gov>, g...@jpl.nasa.gov
(Erann Gat) wrote:

> And it then occurred to me that code walking isn't necessary -- you
> can use SYMBOL-MACROLET to do the code walking for you. So here is
> LLET, lexical-let, a macro that guarantees lexical bindings:
>
> (defmacro llet (bindings &body body)
>   (let* ((vars (mapcar 'first bindings))
>          (vals (mapcar 'second bindings))
>          (gensyms (mapcar (lambda (v) (gensym (symbol-name (first v))))
>                           bindings)))
>     `(let ,(mapcar 'list gensyms vals)
>        (symbol-macrolet ,(mapcar 'list vars gensyms)
>          ,@body))))

Turns out this doesn't work. It's an error to use SYMBOL-MACROLET on
a symbol that has been declared SPECIAL. Damn! :-(

Well, it kind of works. It *does* guarantee lexical bindings, which is
the really important thing (IMO). But it would be nice to be able to do
the following as well:

(defvar *x* 1)
(defun foo () *x*)
(llet ( (*x* 2) ) (list *x* (foo))) --> (2 1)

But this generates an error.

? (llet ( (*x* 2) ) (list *x* (foo)))
> Error: While compiling an anonymous function :
> SPECIAL declaration applies to symbol macro *X*
> Type Command-. to abort.
See the Restarts… menu item for further choices.
1 >
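
[Editor's note: the restriction in isolation -- a minimal sketch; the
standard requires SYMBOL-MACROLET to signal PROGRAM-ERROR when the
symbol names a global variable:]

(defvar *y* 1)              ; proclaims *Y* special
(symbol-macrolet ((*y* 2))  ; signals an error of type PROGRAM-ERROR,
  *y*)                      ; because *Y* names a global variable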

Erann Gat
g...@jpl.nasa.gov

Andrew Cooke

Mar 7, 2000
In article <gat-060300...@milo.jpl.nasa.gov>,
g...@jpl.nasa.gov (Erann Gat) wrote:
> As I wrote to Paul, I work for NASA and I am trying to convince people
> to use Lisp on spacecraft, which are multimillion-dollar assets.
> Managers want to see safety guarantees, not just empirical evidence.
> Just because something hasn't been a problem in the past doesn't mean
> it won't be a problem in the future. And in the space biz, the first
> glitch could also be your last.

Hi,

This is a serious question, and not meant to trigger a flame-fest, so
maybe it's best answered and asked (sorry) by email. Here goes...

While I like Lisp a lot, it strikes me that a statically typed language
might be preferable in extreme reliability situations (I'm thinking of
ML, for example). Is that an issue in your experience, and how do you
defend Lisp against that criticism?

(In my very limited opinion I get the impression that static checking
has improved a lot recently (but it may be that this is decade old work
that has stalled) and that a statically typed language with flexibility
approaching that of Lisp might be possible - that's a separate issue
since you are dealing with existing languages, but it is the background
to my question).

Cheers,
Andrew



William Deakin

Mar 7, 2000
Andrew Cooke wrote:

> While I like Lisp a lot, it strikes me that a statically typed language
> might be preferable in extreme reliability situations (I'm thinking of
> ML, for example).

I'm puzzled by this. If I understand correctly you are saying that
statically typed languages are more reliable. First, in what way are
statically typed languages more reliable? Second, lisp *can* be typed and
for `extreme reliability situation' you may want to do this. (This touches
on Paul Graham's `lisp is really two languages' argument [1]).

Best Regards,

:) will

[1] As in 'Lisp is really two languages: a language for writing fast
programs and a language for writing programs fast.' p. 213 `ANSI Common
Lisp', P.Graham.


Andrew Cooke

Mar 7, 2000
In article <38C4EE83...@pindar.com>,

w.de...@pindar.com wrote:
> Andrew Cooke wrote:
> > While I like Lisp a lot, it strikes me that a statically typed
> > language might be preferable in extreme reliability situations (I'm
> > thinking of ML, for example).
>
> I'm puzzled by this. If I understand correctly you are saying that
> statically typed languages are more reliable. First, in what way are
> statically typed languages more reliable? Second, lisp *can* be typed
> and for `extreme reliability situation' you may want to do this. (This
> touches on Paul Graham's `lisp is really two languages' argument [1]).

OK...

1. You snipped all my defensive comments about not being an expert on
this.

2. I said *might* be preferable.

3. Part of safety might be that you are *forced* by the language to
statically type. I have no idea if it is easy or even possible to do
static checks on Lisp code that prove that it is statically typed.

4. Other languages might provide features that help you write
statically typed programs. Again, I have no idea whether it is easy or
even possible to have type inference in Lisp at compile time.

5. My not knowing or understanding something doesn't preclude me from
asking for information on it.

Tim Bradshaw

Mar 7, 2000
* Erann Gat wrote:

> As I wrote to Paul, I work for NASA and I am trying to convince people
> to use Lisp on spacecraft, which are multimillion-dollar assets. Managers
> want to see safety guarantees, not just empirical evidence. Just because
> something hasn't been a problem in the past doesn't mean it won't be a
> problem in the future. And in the space biz, the first glitch could also
> be your last.

I think that your LLET form gives this kind of guarantee. As you
pointed out in a followup, it doesn't work to try and do a
SYMBOL-MACROLET that would shadow a special variable. But the
standard makes it clear that an error is signalled in this case (not
just that `it is an error'). It doesn't say, but I assume this would
be a compile-time error (and this would be easy to check in an
implementation). So you can be pretty sure that if the code
compiles, that binding is lexical. Could you ask for more?

Incidentally, I'd be rather surprised if you can win the
dynamic-typing battle but then lose on special variables in
safety-critical lisp!

--tim


Tim Bradshaw

Mar 7, 2000
* William Deakin wrote:

> I'm puzzled by this. If I understand correctly you are saying that
> statically typed languages are more reliable. First, in what way are
> statically typed languages more reliable? Second, lisp *can* be typed and
> for `extreme reliability situation' you may want to do this. (This touches
> on Paul Graham's `lisp is really two languages' argument [1]).

Statically typed programs are considered more reliable because you (or
the compiler) can prove more properties about them, so you may get
more guarantees of the form `if this compiles, it will run' from the
system. You typically need compiler support to do this (or
code-walker support I suppose) -- just because you can say:

(defun foo (x)
  (declare (type integer x))
  (car x))

does not mean that the compiler will check that declaration for you
(and some won't). Of course, you could write a type-checker for CL
programs but that would be a fair lot of work.

Of course, like many things in CS, it is not completely clear what
this reliability translates to in practice. There are undoubtedly
`studies that show...' but they are likely based on statistically
insignificant samples. (Either for the pro-static or
anti-static-typing camps.)

Both camps will also happily point you at the Ariane disaster as an
indication that the other is wrong.

--tim

Pierre R. Mai

Mar 7, 2000
Tim Bradshaw <t...@cley.com> writes:

> Both camps will also happily point you at the Ariane disaster as an
> indication that the other is wrong.

I think this is becoming a fundamental law in software engineering and
programming language development: Everybody uses the Ariane disaster.
Maybe ESA should have trademarked "Ariane 5 Disaster" and "Ariane 5
Report", and they could have recouped the money lost, probably x times
over... ;)

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Will Deakin

Mar 7, 2000
Dear Andrew,

What I wrote was not intended as a flame. If you read it as such I
apologise :( Also, please do not think that I am an expert because I'm
not.

Andrew Cooke wrote:
> 1. You snipped all my defensive comments about not being an expert
>on this.

True. But you must have thought something along these lines otherwise
you wouldn't have posted ;)

> 3. Part of safety might be that you are *forced* by the language to
> statically type.

Why do you think this? This is the nub of what I was asking. As a
follow up to this: why are you concerned with static (as opposed to
dynamic) type checking?

>I have no idea if it is easy or even possible to do static checks on
>Lisp code that prove that it is statically typed.

But, if you can do this type of checking in C or pascal, say, why can't
you do this in lisp? Maybe the question is how hard it is to do
this.


>5. My not knowing or understanding something doesn't preculde me from
>asking for information on it.

No, that is true. If you thought that was what I was doing then there
is a problem with the way I expressed myself. If this is so, I am
sorry,

Best Regards,

:) will

Will Deakin

Mar 7, 2000
Thanks tim :)

Tim wrote:
> Statically typed programs are considered more reliable because you
> (or the compiler) can prove more properties about them, so you may
> get more guarantees...from the system.
and
> ...it is not completely clear what this reliability translates to in
> practice. There are undoubtedly `studies that show...' but they are
> likely based on statistically insignificant samples. (Either for the
> pro-static or anti-static-typing camps.)

This is what I was trying to say in my usual fuddled fashion. As far as
I know (which is not saying that much) the case for static vs dynamic
vs no type checking is moot. And as you say, these guarantees are in
the well maybe camp[1].

> Does not mean that the compiler will check that declaration for you
>(and some won't).

And yes, you got me here. I made the assumption that the compiler would
honour type declarations. Which, of course, it does not have to.
However, I would be unhappy if a compiler didn't at least try to do this.

>Of course, you could write a type-checker for CL programs but that
>would be a fair lot of work.

You foresaw my follow-up post. This internet telepathy is great ;)

Best Regards,

:) will

[1] As usual I look forward to being corrected on this ;)

Andrew Cooke

Mar 7, 2000
In article <8a32uj$m10$1...@nnrp1.deja.com>,

Will Deakin <w.de...@pindar.com> wrote:
> What I wrote was not intended as a flame. If you read it as such I
> apologise :( Also, please do not think that I am an expert because I'm
> not.

Sorry - I'm getting a bit frazzled by various conversations on the
internet.

Thanks for the info.

Andrew

Andrew Cooke

Mar 7, 2000
In article <8a32uj$m10$1...@nnrp1.deja.com>,
Will Deakin <w.de...@pindar.com> wrote:
> > 3. Part of safety might be that you are *forced* by the language to
> > statically type.
>
> Why do you think this? This is the nub of what I was asking. As a
> follow up to this: why are you concerned with static (as opposed to
> dynamic) type checking?

Because I program in Java and a (the one ;-) advantage over Lisp is
that it catches some mistakes that dynamic testing doesn't. I know that
one can have tests etc, but it seems that the earlier one catches errors
the better. The original post was about software which is effectively
run once (outside testing) and must work - I was thinking that the
extreme emphasis on perfect code must favour anything that detects
errors.

> >I have no idea if it is easy or even possible to do static checks on
> >Lisp code that prove that it is statically typed.
>
> But, if you can do this type of checking in C or pascal, say, why
> can't you do this in lisp? Maybe the question is how hard it is to do
> this.

Because Lisp is much more dynamic (eg Macros) (there was a post on this
group today about a package to allow manipulation of program fragments
in ML - I haven't looked, but it would show the difference between doing
something like that in a statically typed language and in Lisp).

Andrew Cooke

Mar 7, 2000
In article <8a3426$mjn$1...@nnrp1.deja.com>,

Will Deakin <w.de...@pindar.com> wrote:
> > Statically typed programs are considered more reliable because you
> > (or the compiler) can prove more properties about them, so you may
> > get more guarantees...from the system.
[...]

> This is what I was trying to say in my usual fuddled fashion. As far
> as I know (which is not saying that much) the case for static vs
> dynamic vs no type checking is moot. And as you say, these guarantees
> are in the well maybe camp[1].

My experience in ML is limited, so the following could be simply my lack
of experience in the language, but...

Not only does it allow for more automated checking, but you are
programming in an environment where you are constantly reminded of what
is what. My programming moved from my usual slap it together and it'll
work style to writing something approaching a logical argument in code.

Obviously, this mainly means I can't code for toffee (it was also my
first meeting with functional programming), but I could see how it would
bring code closer to a specification.

Hmmm. Maybe I'll give it another try :-)

Andrew

PS If anyone is interested, I gather that OCaml is the cutting edge of
ML these days. I used SMLNJ which seems to be the "standard".

Tim Bradshaw

Mar 7, 2000
* Will Deakin wrote:
>> 3. Part of safety might be that you are *forced* by the language to
>> statically type.

> Why do you think this? This is the nub of what I was asking. As a
> follow up to this: why are you concerned with static (as opposed to
> dynamic) type checking?

Because static type checking catches (some) errors at compile time,
whereas dynamic checking will catch them only at runtime. If there's
no reasonable way of dealing with the runtime error (or if the people
writing the code didn't bother to deal with it), and the system is
safety-critical or just inaccessible (on Mars, say), then a runtime
type error may be a very bad thing to get.

(Incidentally, in this context, `type' often means much more than
`representational type' -- it can be very useful to know that
something is an (integer 0 8) say).

> But, if you can do this type of checking in C or pascal, say, why can't
> you do this in lisp? Maybe the question is how hard it is to do
> this.

You can do it, but you need compiler support, or a code-walker and the
ability to write your own type-checker at least.

--tim

Erik Naggum

Mar 7, 2000
* Andrew Cooke <and...@andrewcooke.free-online.co.uk>

| This is a serious question, and not meant to trigger a flame-fest, so
| maybe it's best answered and asked (sorry) by email. Here goes...

it's a contentious issue because there's no relevant research to support
the static-typing crowd, yet they have an "intuitive edge" that is very
hard both to defend formally and to attack informally. it's like arguing
against adults needing milk. (we don't. really.)

| While I like Lisp a lot, it strikes me that a statically typed language
| might be preferable in extreme reliability situations (I'm thinking of
| ML, for example). Is that an issue in your experience, and how do you
| defend Lisp against that criticism?

in my current setting, which is somewhat more down to earth than NASA's
lofty projects despite dealing with financial news distribution, handling
errors _gracefully_ has virtually defined the success of the project,
which is expanding and providing new services at increasing speed. the
reason explicitly cited has been that the old system that mine replaced
died completely and randomly when a component died, while mine stops in a
state that is usually continuable right away and never fatal. running a
production system under Emacs under screen may seem rather odd to the old
batch-oriented school, but what it means is that I can connect to the
Emacs process controlling Allegro CL and examine the system state, talk
to background debugging streams, and fix whatever goes wrong if the
system hiccups in any way, which it has so far done only with really
weird input from the outside world.

to obtain this level of gracefulness in one of the usual statically typed
languages would have required _massive_ amounts of code, simply because
you can't combine the dynamism with static typing without recompiling,
since _only_ the compiler is privy to the type information.

conversely, if you need all the type information hanging around at
run-time, anyway, why not make _full_ use of it? sadly, the static
typing crowd believes this is inefficient because they never had to make
it perform optimally, using their static typing as proof that it is so
much easier to do it compile-time, but that only proves they never spent
the effort to make it fast at run-time.

incidentally, my main gripe with static typing is when it is explicit.
implicit static typing (like type inference) has a bunch of specific
advantages over both dynamic and explicitly typed languages, but in
general falls short in terms of dynamism on one hand and the programmer's
ability to predict what the system is doing on the other.



| (In my very limited opinion I get the impression that static checking has
| improved a lot recently (but it may be that this is decade old work that
| has stalled) and that a statically typed language with flexibility
| approaching that of Lisp might be possible - that's a separate issue
| since you are dealing with existing languages, but it is the background to
| my question).

I think it's important to differentiate not on the basis of what the
compiler can do with the type information, but what something other than
the compiler can do with the type information. this is largely, but not
fully, orthogonal to whether the language is statically or dynamically
typed. debuggers for most languages maintain type information (mainly in
order to access the memory they use, of course), but you can't use it if
you aren't the debugger. in some development environments, you get help
calling functions and using variables, but what's the big difference
between a lookup system only available in some program after compilation
and manual pages in external documentation? the dynamism of _current_
information is missing in all of these systems.

I don't think it's possible to make a statically typed language as
dynamic as Common Lisp environments are simply because the work involved
in making a statically typed language able to handle the dynamism will
involve _very_ intelligent recompilation strategies, mounds of debugging
information that people will want to discard, and introspection will have
to be added to these languages in environment-specific ways.

#:Erik

David Hanley

Mar 7, 2000

Tim Bradshaw wrote:

> * Will Deakin wrote:
> >> 3. Part of safety might be that you are *forced* by the language to
> >> statically type.
>
> > Why do you think this? This is the nub of what I was asking. As a
> > follow up to this: why are you concerned with static (as opposed to
> > dynamic) type checking?
>
> Because static type checking catches (some) errors at compile time,
> whereas dynamic checking will catch them only at runtime.

More to the point: in ML all possible program paths are type checked.
If testing cannot be guaranteed to cover all possible program paths,
this may be an important point.

OTOH, I seem to have very few problems with types in lisp. But this
does not mean no one will, or that it might not be super-critical for some
applications (such as spacecraft programming).

It seems possible (to me) to write some lisp macros to produce a typed
version of the language, eg:

(def-typed-fun square ((x float)) (* x x))

(def-typed-var (*max-users* fixnum) 100)

The second case would require creating a setf function for setting
*max-users* to ensure any setf was type-checked. We might even want
to make the *real* variable name scoped so it could not be set
accidentally somehow. This also leads to issues if normal lisp code
calls some of the type-checked functions. Wrappers would
have to be used in both directions.
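
[Editor's note: a rough sketch of the first form, using run-time
CHECK-TYPE rather than true static checking; DEF-TYPED-FUN is David's
hypothetical name, not a standard operator:]

(defmacro def-typed-fun (name typed-args &body body)
  ;; TYPED-ARGS is a list of (var type) pairs, e.g. ((x float)).
  `(defun ,name ,(mapcar #'first typed-args)
     ,@(mapcar (lambda (arg)
                 `(check-type ,(first arg) ,(second arg)))
               typed-args)
     ,@body))

;; (def-typed-fun square ((x float)) (* x x))
;; (square 2.0)  => 4.0
;; (square "no") => signals a TYPE-ERROR at run time, not compile time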

dave


Erann Gat

Mar 7, 2000
In article <8a2naf$e4v$1...@nnrp1.deja.com>, Andrew Cooke
<and...@andrewcooke.free-online.co.uk> wrote:

> While I like Lisp a lot, it strikes me that a statically typed language
> might be preferable in extreme reliability situations (I'm thinking of
> ML, for example). Is that an issue in your experience, and how do you
> defend Lisp against that criticism?

No one is seriously pushing for ML in space as far as I know, so the
issue hasn't come up. But if someone did, the most effective argument
against it would be that there are no commercial vendors for ML and
Haskell, and it's even harder to find ML and Haskell programmers than
Lisp programmers.

There are two technical arguments that I use against people who make
the static-typing argument for C++.

The first is that static typing cannot eliminate all type errors because
the problem is undecidable. (Run-time math errors are type errors.)
Since you can't eliminate all type errors, you need a mechanism for
handling them at run time regardless of whether you are doing static
typing or not. Since you need this mechanism anyway, the incremental
benefit of having compile time checks is fairly small. In other words,
having static typing is worse than not having static typing because it
provides the *illusion* of safety but not the reality.

The second argument is that it's not hard to add static type checking
to Lisp if it is determined that the benefit is worthwhile.

Erann Gat
g...@jpl.nasa.gov

Erik Naggum

Mar 7, 2000
* David Hanley <d...@ncgr.org>
| It seems possible (to me) to write some lisp macros to produce a typed
| version of the language...

before you reinvent the wheel, please investigate the THE special form
and how it behaves in interpreted and compiled code under various
optimization settings. you might be _very_ surprised.

few Common Lisp programmers seem to know about the THE form, however.
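
[Editor's note: a small example of THE; whether the assertion is
checked depends on the implementation and the OPTIMIZE settings:]

(defun add1 (x)
  (declare (optimize (safety 3)))  ; high safety: many compilers check THE
  (+ (the integer x) 1))

;; (add1 41)   => 42
;; (add1 "no") => a TYPE-ERROR in checking implementations; if checks
;;                are compiled away (e.g. at safety 0), the consequences
;;                of a false THE assertion are undefined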

#:Erik

Ray Blaak

Mar 7, 2000
g...@jpl.nasa.gov (Erann Gat) writes:
> The first is that static typing cannot eliminate all type errors because
> the problem is undecidable. (Run-time math errors are type errors.)
> Since you can't eliminate all type errors, you need a mechanism for
> handling them at run time regardless of whether you are doing static
> typing or not. Since you need this mechanism anyway, the incremental
> benefit of having compile time checks is fairly small. In other words,
> having static typing is worse than not having static typing because it
> provides the *illusion* of safety but not the reality.

Having had significant experience with both static and dynamic typing, the
primary benefit of static typing, in my not so humble opinion, is the checking
of interface conformance.

That is, during building a system out of components/modules/subsystems
or whatever, when piecing together the parts, having things not compile
unless things "fit" is a major, major help. I.e. things like not being able
to call a function unless you supply all and only the required parameters,
and of the right type.

Static type checking simply prevents whole classes of errors right away. The
compiler is performing certain classes of error checks immediately as opposed
to the user finding them during testing or out in the field.

Static typing does not prevent all errors by any means, and you still need
mechanisms to handle run-time failures. That does not mean, however, that the
benefits are insignificant. Just think of it as an additional tool one can use
to prevent errors.

That many statically typed languages ditch the type information during
run-time is not an argument against static typing. There is nothing to prevent
a language system from keeping type information around to be used by whoever
needs it.

That a dynamically typed system is often fundamentally more powerful than a
statically typed system is not an argument against static typing. Both styles
can co-exist, as in Common Lisp.

The point about static typing is that the programmer is specifying more
precisely how an abstraction can be used. The language environment then
verifies this usage. For many abstractions, the point is to *restrict* the
usage to a precise set, and not be flexible about it.

My approach is to do a statically typed style by default, and then to change
to a dynamic style only if the problem domain calls for it.

E.g. This is preferable (apologies for any syntax errors):

(defun (doit (integer to-me))
  (process to-me))

over this:

(defun (doit to-me)
  (cond ((integerp to-me) (process to-me))
        (t (error "unexpected type"))))

that is, until it is shown to be necessary to handle multiple type
possibilities. The former is a more precise contract of what the function is
supposed to do. A client cannot even *express* (this kind of) incorrect usage,
since the static typing prevents it.

Of course, both are vastly preferable to this:

(defun (doit to-me) ;; had better be an integer, by God!
  (process to-me))
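
[Editor's note: in legal CL syntax (Ray flagged his as loose), the
runtime-checked middle version might read:]

(defun doit (to-me)
  (check-type to-me integer)  ; signals a correctable TYPE-ERROR otherwise
  (process to-me))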

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@infomatch.com The Rhythm has my soul.

David Hanley

Mar 7, 2000

Erik Naggum wrote:

> * David Hanley <d...@ncgr.org>
> | It seems possible (to me) to write some lisp macros to produce a typed
> | version of the language...
>
> before you reinvent the wheel, please investigate the THE special form
> and how it behaves in interpreted and compiled code under various
> optimization settings. you might be _very_ surprised.

I do know about the "the" form, but:

1) I don't believe it's doing static checking. A compile error will
not be signaled even if the types could be proved at compile time to
be mismatched.
2) The standard explicitly states that the results are undefined if
the "real" types do not match the "the" type.

http://www.xanalys.com/software_tools/reference/HyperSpec/Body/speope_the.html

dave


Tunc Simsek

Mar 7, 2000
I think that global declarations are also possible, which may
allow type-checking in the sense that you're looking for:

(DECLAIM (FTYPE (FUNCTION ({formal-types}*) return-type)
                function-name))


e.g.
----
(declaim (ftype (function (integer) t)
                doit))

I hope that helps,

Tunc
On Wed, 8 Mar 2000, Robert Monfera wrote:

> Ray Blaak wrote:
>
> > That is, during building a system out of components/modules/subsystems
> > or whatever, when piecing together the parts, having things not compile
> > unless things "fit" is a major, major help.
>
> I agree in that interfaces are the critical parts for type checking,
> because the developer of the module is responsible for balancing
> performance and robustness inside a module, but he can never take it
> for granted how the module will be used. This is unfortunately true for
> run-time errors too, and I think that error checking at the interface
> level and static typing are orthogonal.
>
> > E.g. This is preferable (apologies for any syntax errors):
> >
> > (defun (doit (integer to-me))
> >   (process to-me))
>
> What's wrong with
>
> (defmethod doit ((integer to-me))
>   (process to-me))
>
> It does what you expect, run-time.
>
> > over this:
> >
> > (defun (doit to-me)
> >   (cond ((integerp to-me) (process to-me))
> >         (t (error "unexpected type"))))
>
> You could use ASSERT or even better, CHECK-TYPE. They're quite a bit
> nicer.
>
> Robert

Erik Naggum

Mar 7, 2000
* Robert Monfera <mon...@fisec.com>
| How can you tell?

huh? through the high number of people who reinvent it in various ways,
and the low number of people who point out that they are reinventing
existing functionality whenever this happens. isn't this quite obvious?

#:Erik

Robert Monfera

Mar 8, 2000

Erik Naggum wrote:

> few Common Lisp programmers seem to know about the THE form, however.

How can you tell?

(knows-p (the seasoned programmer) (the form 'the-form)) -> T

Robert

Robert Monfera

Mar 8, 2000

Robert Monfera wrote:

> (defmethod doit ((integer to-me))
>   (process to-me))

Sorry for the typo, I just took code reuse literally :-)

(defmethod doit ((to-me integer))
  (process to-me))

Paul Foley

Mar 8, 2000
On Wed, 08 Mar 2000 07:48:32 -0500, Robert Monfera wrote:

>> E.g. This is preferable (apologies for any syntax errors):
>>
>> (defun (doit (integer to-me))
>>   (process to-me))

> What's wrong with
>
> (defmethod doit ((integer to-me))
>   (process to-me))

> It does what you expect, run-time.

But not at compile time, which is the point. However, you can do

(declaim (ftype (function (integer) t) doit))

after which some implementations (e.g. CMUCL) will issue compile-time
warnings if they can tell the argument isn't an integer.

--
Succumb to natural tendencies. Be hateful and boring.

(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))

Robert Monfera

Mar 8, 2000

Tunc Simsek wrote:
>
> I think that global declarations are also possible, which may
> allow type-checking in the sense that you're looking for:
>
> (DECLAIM (FTYPE (FUNCTION ({formal-types}*) return-type)
>                 function-name))

This is different from run-time dispatch; for example, the compiler may
completely ignore it. Even if an implementation makes use of this
declaration, it's mostly used for optimizations. Paul Foley just
hinted that CMUCL compares such declarations with inferred types, but
it's neither guaranteed by the standard nor a widespread feature
among implementations.

Robert

Tom Breton

Mar 8, 2000
Andrew Cooke <and...@andrewcooke.free-online.co.uk> writes:

> In article <gat-060300...@milo.jpl.nasa.gov>,
> g...@jpl.nasa.gov (Erann Gat) wrote:
> > As I wrote to Paul, I work for NASA and I am trying to convince people
> > to use Lisp on spacecraft, which are multimillion-dollar assets.
> > Managers want to see safety guarantees, not just empirical evidence.
> > Just because something hasn't been a problem in the past doesn't mean
> > it won't be a problem in the future. And in the space biz, the first
> > glitch could also be your last.
>
> Hi,
>
> This is a serious question, and not meant to trigger a flame-fest, so
> maybe it's best answered and asked (sorry) by email. Here goes...
>
> While I like Lisp a lot, it strikes me that a statically typed language
> might be preferable in extreme reliability situations (I'm thinking of
> ML, for example). Is that an issue in your experience, and how do you
> defend Lisp against that criticism?

Lisp is easy to type statically, and the fact that it doesn't have to
always be typed has significant advantages.

There's a legitimate issue in there, in that typing isn't mandatory in
Lisp.

The one thing I would add to Lisp's type system is abstract types,
meaning types that are defined by supporting certain operations. You
can kind of kludge them in Lisp with:

(satisfies
 (compute-applicable-methods generic-function args...))

but that's unnecessarily complex and indirect.
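
[Editor's note: SATISFIES only takes the name of a one-argument
predicate, so the kludge needs a named function; FROB and FROBBABLE-P
are invented for this sketch:]

(defgeneric frob (thing))  ; the operation defining the abstract type

(defun frobbable-p (object)
  ;; true when some FROB method applies to OBJECT
  (not (null (compute-applicable-methods #'frob (list object)))))

(deftype frobbable () '(satisfies frobbable-p))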

--
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Rethink some Lisp features, http://world.std.com/~tob/rethink-lisp/index.html

Will Deakin

Mar 8, 2000
Tim <t...@cley.com> wrote:
> Because static type checking catches (some) errors at compile time,
> whereas dynamic checking will catch them only at runtime. If there's
> no reasonable way of dealing with the runtime error (or if the people
> writing the code didn't bother to deal with it), and the system is
> safety-critical or just inaccessible (on Mars, say), then a runtime
> type error may be a very bad thing to get.

I agree with all of this. It's just that I'm not convinced that static
typing (or dynamic typing) is a silver bullet. But what I was thinking
about at this point is probably straying into `worse is better'
territory. Having a language (and possibly an OS) that enables you to
deal with runtime errors if you want, rather than going into core-dump
kernel-panic or whatever, is probably a win. Putting it another way, I
would prefer BNFL to use lisp and UNIX for their control software,
rather than C++/MFC under windows 98, say ;)

> (Incidentally, in this context, `type' often means much more than
> `representational type' -- it can be very useful to know that
> something is an (integer 0 8) say).

Which, if this is honoured, is a real bonus.

Cheers,

:) will

Arthur Lemmens

Mar 8, 2000

Andrew Cooke wrote:

> it seems that the earlier one catches errors the better.

This isn't always true.

One advantage of run-time errors (compared to compile-time errors) is that
there's a lot more context available to show you where you went wrong. You
don't just get a message from the compiler telling you in abstract terms why
your program is impossible. Instead, you get a fully worked-out example,
presented on a silver platter by your debugger.

For most typos and simple thinkos this may not be very important; but for
real bugs, it can be invaluable. I've spent hours staring at type mismatch
error messages from ML compilers only to find bugs that would've taken me
a few minutes to find with a decent debugger.

Arthur Lemmens

Pierre R. Mai

Mar 8, 2000
Robert Monfera <mon...@fisec.com> writes:

Although it's neither standard nor common, if one cared about static
typing, one could quite reasonably write a separate static type-checker
for CL that made use of such global (and local) declarations, as well
as a lot of type-inference information on the built-in functions. In
principle, this shouldn't be much harder than writing a type-checker
for one of the modern polymorphic functional languages, modulo:

- List/Cons types: We probably need better type specifiers to note in
a statically recognizable way that lists contain only elements of
type X. Without this, you can't do much type inference on list
operations (like first, etc.), and so the need for local type
declarations increases.

- Once dynamism sets in, users will have to make guarantees via local
  declarations somewhere, or type-checking will break down:

  (let ((x (read stream)))
    ;; X will propagate as type T and cause all subsequent type
    ;; inference to fail => yield T. So you'll have to narrow the
    ;; type at some point, so as not to degenerate into a mass of
    ;; erroneous warnings/errors.
    (+ x 1)      ; Will raise error, since T is not a subtype of NUMBER
    (cons x nil) ; Will raise no error, but TI will yield a type of
                 ; (cons t null) which will probably get you into
                 ; trouble later on...
    (etypecase x
      (number (+ x 1))        ; These work, and produce mildly
      (string (cons x nil)))) ; meaningful inferred types...

- All the hairier features of CL reading and evaluation will mean that
you'll probably have to take a compiler front-end embedded in the CL
implementation of your choice as a front-end to your type-checker.

- Probably quite a number of things I've missed.

All in all you could probably do worse than starting with CMUCL's type
stuff (probably in the guise of SBCL), and modify its behaviour in a
number of places:

- Turn around the type-checking logic in a number of places, so that
simple intersection of types doesn't suffice: Arguments must be
subtypes of the declared type.

Currently the following will compile without warnings in CMUCL:

(declaim (ftype (function ((or cons number) (or cons number))
                          number)
                myfun3))
(defun myfun3 (x y)
  (+ x y))

(Although with high speed settings it will produce efficiency
notes). This should raise a warning/type-error.

- Probably improve some of the type inference and type recognition
  code in some places, and a number of other things...

You might also want to take a look at Henry G. Baker's Paper "The
Nimble Type Inferencer for Common Lisp-84", which is available at
ftp://ftp.netcom.com/pub/hb/hbaker/TInference.html

All in all, I'd say if some student is looking for something to do
with both theoretical and practical contents, and a connection to CL,
then implementing something like this should make for a nice project,
possibly worth a nice degree of some sort...

Tim Bradshaw

Mar 8, 2000
* Will Deakin wrote:

> I agree with all of this. Its just that I'm not convinced that static
> typing (or dynamic typing) is a silver bullet

In case it wasn't clear from my posts in this thread: neither do I.

>> (Incidentally, in this context, `type' often means much more than
>> `representational type' -- it can be very useful to know that
>> something is an (integer 0 8) say).

> Which, if this is honoured, is a real bonus,

CMUCL can make use of info like this. For instance if you have two
suitably small fixnums a,b then it can prove that a + b is a fixnum
too. Likewise for things like SQRT (I think).
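
[Editor's note: the kind of declaration involved, as a sketch; the
inference claim is Tim's, the exact bounds here are made up:]

(defun small-sum (a b)
  (declare (type (integer 0 8) a b))
  ;; a CMUCL-style compiler can infer (+ a b) : (integer 0 16), which
  ;; fits comfortably in a fixnum, so no overflow handling is needed
  (+ a b))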

--tim

Tim Bradshaw

Mar 8, 2000
* Arthur Lemmens wrote:

> One advantage of run-time errors (compared to compile-time errors) is that
> there's a lot more context available to show you where you went wrong. You
> don't just get a message from the compiler telling you in abstract terms why
> your program is impossible. Instead, you get a fully worked-out example,
> presented on a silver platter by your debugger.

But there are situations where run-time errors are just too expensive
to have.

Ariane looks like a place where they'd decided this, but probably they
were wrong. (The argument put forward seems to be that having error
handlers in there (which Ada has, of course) would have made the code
slower so they'd miss their real-time budget. I think this is
implausible, but I don't know.)

But things like nuclear power plant control systems are definitely cases
where you might worry about this -- *certainly* you don't want to end
up sitting in the debugger in some time-critical part of the code.

There, now I've mentioned nuclear power *and* spacecraft in one
message about type checking! I believe this is sufficient to invoke
the C.L.L version of Godwin's law. Actually I shall also just
randomly say that Lisp is dead thus ensuring that I invoke it.

--tim

Fernando D. Mato Mira

Mar 8, 2000
"Pierre R. Mai" wrote:

> - List/Cons types: We probably need better type specifiers to note in
> a statically recognizable way that lists contain only elements of
> type X. Without this, you can't do much type inference on list
> operations (like first, etc.), and so the need for local type
> declarations increases.

I was thinking about this the other day. Once the cons specifier
can take optional car and cdr type arguments, the specific list types
become:

(deftype list (&optional (typ nil typ-p))
  (if typ-p
      `(or (cons ,typ) null)
      `(or cons null)))

(deftype proper-list (typ) `(or null (cons ,typ (proper-list ,typ))))


Note that Series already understands specifiers like (list fixnum).

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1 email: matomira AT acm DOT org
CH-2007 Neuchatel tel: +41 (32) 720-5157
Switzerland FAX: +41 (32) 720-5720

www.csem.ch www.vrai.com ligwww.epfl.ch/matomira.html


Fernando D. Mato Mira

Mar 8, 2000
"Fernando D. Mato Mira" wrote:

> (deftype list (&optional (typ nil typ-p))
>   (if typ-p
>       `(or (cons ,typ) null)
>       `(or cons null)))

I forgot to recurse:

(deftype list (&optional (typ nil typ-p))
  (if typ-p
      `(or null (cons ,typ (or (not list) (list ,typ))))
      `(or cons null)))

Andras Simon

Mar 8, 2000
Arthur Lemmens <lem...@simplex.nl> writes:


> One advantage of run-time errors (compared to compile-time errors) is that
> there's a lot more context available to show you where you went wrong. You
> don't just get a message from the compiler telling you in abstract terms why
> your program is impossible. Instead, you get a fully worked-out example,
> presented on a silver platter by your debugger.
>
> For most typos and simple thinkos this may not be very important; but for
> real bugs, it can be invaluable. I've spent hours staring at type mismatch
> error messages from ML compilers only to find bugs that would've taken me
> a few minutes to find with a decent debugger.

I think that tracking down such bugs in ML would be much easier if
compilers showed the type derivation that led to the mismatch.
(Maybe some do; the ones I've seen don't.)

Andras


Pierre R. Mai

Mar 8, 2000
"Fernando D. Mato Mira" <mato...@iname.com> writes:

> "Fernando D. Mato Mira" wrote:
>
> > (deftype list (&optional (typ nil typ-p))
> >   (if typ-p
> >       `(or (cons ,typ) null)
> >       `(or cons null)))
>
> I forgot to recurse:
>
> (deftype list (&optional (typ nil typ-p))
>   (if typ-p
>       `(or null (cons ,typ (or (not list) (list ,typ))))
>       `(or cons null)))

And there's the rub: In CL cons is a compound type specifier that can
take car and cdr type specs. BUT you aren't allowed to define
recursive type specifiers with deftype. See postings a couple of
years ago about this issue.

Since I agree that introducing recursive type specifiers into CL is a
bit problematic (deftype is like defmacro, but without the result
being evaluated, so you'd have to do lazy expansion), introducing an
atomic list type specifier might be the way to go.
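
[Editor's note: a common workaround is to fall back on SATISFIES with
an ordinary recursive predicate; LIST-OF-INTEGER-P is a name invented
here:]

(defun list-of-integer-p (x)
  (or (null x)
      (and (consp x)
           (integerp (car x))
           (list-of-integer-p (cdr x)))))

(deftype list-of-integer () '(satisfies list-of-integer-p))

;; (typep '(1 2 3) 'list-of-integer)  => T
;; (typep '(1 "2") 'list-of-integer)  => NIL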

Axel Schairer

Mar 8, 2000
Andrew Cooke <and...@andrewcooke.free-online.co.uk> writes:
> While I like Lisp a lot, it strikes me that a statically typed language
> might be preferable in extreme reliability situations (I'm thinking of
> ML, for example). Is that an issue in your experience, and how do you
> defend Lisp against that criticism?

As has been said in other posts, one fundamental thing about static
type-checking is that it is always a tradeoff. You cannot
automatically accept all `correct' programs and reject all
in-`correct' ones, and at the same time give a sensible,
uncontroversial meaning to `correct'. As an example, division by zero
is usually not considered a type error by static typing supporters
(rather it's considered a run-time exception), and quite some ML code
I've seen compiles with warnings about non-exhaustive case analyses.

Static type-checking is relative to your type-system and your
definition of type-correctness. It can give you useful information at
compile time. At the same time the restrictions of your type system
can prevent you from doing pretty sensible things, like dynamically
changing and extending a complex running piece of software in ML:
there's the static typing slogan `if it compiles it's correct'; by
modus tollens this becomes something like `it won't compile unless the
compiler thinks it's correct'.

So what are the advantages (+) and disadvantages (-) of compile-time
type-checking (in a broad sense) as compared to runtime checks? In
addition to the ones that have been mentioned in other followups I am
aware of the following.

+ You don't have to test a counterexample before you know that you got
something wrong

+ You don't have to invent the counterexample in the first place

- There are properties you cannot check at compile-time unless you are
prepared to allow interactive theorem proving as part of the
compilation process (e.g. `the resulting tree is balanced', or some
such)

+ You might not be prepared to check these properties at runtime
either (for efficiency reasons, say)

+ Some important properties are not meaningfully expressible as
runtime checks (`this function terminates')

- It is difficult to design a type system for lisp that allows at
least a significant amount of `normal' programs to be checked
automatically

Does this make sense? Any comments?

Cheers,
Axel

Fernando D. Mato Mira

Mar 8, 2000
"Pierre R. Mai" wrote:

> And there's the rub: In CL cons is a compound type specifier that can
> take car and cdr type specs.

Gee. That's "new". I missed that one.

> BUT you aren't allowed to define
> recursive type specifiers with deftype.

Yeah. I know that, but then I also write stuff like (declare
(without-full-continuations)) ;->

> See postings a couple of
> years ago about this issue.
>
> Since I agree that introducing recursive type specifiers into CL is a
> bit problematic (deftype is like defmacro, but without the result
> being evaluated, so you'd have to do lazy expansion), introducing an
> atomic list type specifier might be the way to go.

If I like Lisp, it's because I want one language that can do all the
nice things any other can, and more [And even some not so nice, like
volatile char *reg, what!]

Ray Blaak

Mar 8, 2000
Tim Bradshaw <t...@cley.com> writes:
> But there are situations where run-time errors are just too expensive
> to have.
>
> Ariane looks like a place where they'd decided this, but probably they
> were wrong. (The argument put forward seems to be that having error
> handlers in there (which Ada has, of course) would have made the code
> slower so they'd miss their real-time budget. I think this is
> implausible, but I don't know.)

Actually, the problem with Ariane, as I understand it, was that they took code
for the Ariane 4, reasoned (incorrectly) that it could run without changes or
testing on Ariane 5, and simply used it.

A situation that was impossible for Ariane 4 happened on Ariane 5,
inappropriate error handler invoked, and the rocket blew up.

The problem had nothing to do with the language used, or error handling
mechanisms per se. The problem was software being used in an
environment it was not designed for.

Tim Bradshaw

Mar 8, 2000
* Ray Blaak wrote:

> A situation that was impossible for Ariane 4 happened on Ariane 5,
> inappropriate error handler invoked, and the rocket blew up.

I think the problem was that there wasn't an overflow handler
reasonably close to where the overflow happened -- it just trapped out
to some default handler which then dumped debugging output down the
wire and broke other stuff. They'd reasoned that `this can't happen'
for Ariane 4 and so left out the sanity checks.

> The problem had nothing to do with the language used, or error handling
> mechanisms per se. The problem was software being used in an
> environment it was not designed for.

I agree with that of course. But if they had written the thing more
defensively (`even though this can never overflow I'll deal with the
case that it might', just in case someone reuses my code in some
different context without checking) they might have survived even so,
although I have heard the claim that this extra protection might have
meant they'd miss real-time deadlines.

But yes, of course it was an organisational foulup more than anything.

Ariane is like inconsistency in formal systems, you can use it to prove
whatever you like!

--tim

Harald Hanche-Olsen

Mar 8, 2000
+ Ray Blaak <bl...@infomatch.com>:

| Actually, the problem with Ariane, as I understand it, was that they
| took code for the Ariane 4, reasoned (incorrectly) that it could run
| without changes or testing on Ariane 5, and simply used it.
|
| A situation that was impossible for Ariane 4 happened on Ariane 5,
| inappropriate error handler invoked, and the rocket blew up.

It is not clear to me from what I have read if it was really
impossible on Ariane 4, or merely very unlikely. Apparently, part of
the problem was the greater acceleration of the Ariane 5. A
conversion from a 64 bit floating point number to a 16 bit signed
integer overflowed, the overflow was not trapped, and the resulting
error code was interpreted as flight data.

<http://sspg1.bnsc.rl.ac.uk/Share/ISTP/ariane5rep.htm>

| The problem had nothing to do with the language used, or error handling
| mechanisms per se. The problem was software being used in an
| environment it was not designed for.

"Lack of attention to the strict preconditions below, especially the
last term in each, was the direct cause of the destruction of the
Ariane 5 and its payload [...]"

<http://www.cs.wits.ac.za/~bob/ariane5.htm>

(Risks Digest is a good source of information on this kind of thing.)
--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- "There arises from a bad and unapt formation of words
a wonderful obstruction to the mind." - Francis Bacon

Larry Elmore

Mar 8, 2000
"Harald Hanche-Olsen" <han...@math.ntnu.no> wrote in message
news:pco1z5l...@math.ntnu.no...

> + Ray Blaak <bl...@infomatch.com>:
>
> | Actually, the problem with Ariane, as I understand it, was that they
> | took code for the Ariane 4, reasoned (incorrectly) that it could run
> | without changes or testing on Ariane 5, and simply used it.
> |
> | A situation that was impossible for Ariane 4 happened on Ariane 5,
> | inappropriate error handler invoked, and the rocket blew up.
>
> It is not clear to me from what I have read if it was really
> impossible on Ariane 4, or merely very unlikely. Apparently, part of
> the problem was the greater acceleration of the Ariane 5. A
> conversion from a 64 bit floating point number to a 16 bit signed
> integer overflowed, the overflow was not trapped, and the resulting
> error code was interpreted as flight data.
>
> <http://sspg1.bnsc.rl.ac.uk/Share/ISTP/ariane5rep.htm>

This has been discussed endlessly on comp.lang.ada several times since the
accident. With the Ariane 4, this particular condition could not possibly
occur except in the event of some sort of hardware failure, in which case
error handling was pointless and a computer shutdown (passing control to the
backup (the French usage of the term seems to be confusingly different from
the American)) was considered to be the best action to take. If the backup
did the same thing, it really didn't matter with the Ariane 4 because that
meant something was catastrophically and irretrievably wrong with the
rocket.

What I don't understand is why the Inertial Reference Systems (that choked
on the out-of-range data) were even operating at all after launch. They
served no purpose then, but it was considered a "don't care" situation. That
was one of the two stupid design decisions that caused the accident. The
other one was taking that subsystem from Ariane 4 and just sticking it in
Ariane 5 without any kind of review.

> | The problem had nothing to do with the language used, or error handling
> | mechanisms per se. The problem was software being used in an
> | environment it was not designed for.
>
> "Lack of attention to the strict preconditions below, especially the
> last term in each, was the direct cause of the destruction of the
> Ariane 5 and its payload [...]"
>
> <http://www.cs.wits.ac.za/~bob/ariane5.htm>

Yes, the programmers produced _exactly_ what was specified for the Ariane 4.
It's not their fault (nor Ada's) that the specs were somewhat stupid for
Ariane 4 and totally stupid for the Ariane 5 which came along years later.
From things I've read, it's all too typical of the
apparently incredibly bureaucratic ESA. It does seem that NASA is also
increasingly suffering from the chronic bureaucratic bloat that eventually
seems to afflict almost every governmental agency.

Larry


Larry Elmore

Mar 8, 2000
"Andras Simon" <asi...@math.bme.hu> wrote in message
news:vcdsny1...@csusza.math.bme.hu...

Yes, this seems to be a compiler problem, not a language problem. The early
Ada compilers were wretched in this regard (as well as many others) which
helped cast a pall over the language from which it still hasn't really
recovered. As far as static vs. dynamic typing goes, I can't really add
anything useful since my experience with Lisp and dynamic typing goes back
only 4 months (I'm now working my way through Norvig's book, which certainly
lives up to its recommendations here), but except for trivial programs, I
greatly prefer strong static typing (Ada 95) over weak static typing (C, and
to a lesser extent C++). That is, _if_ the compiler gives good warnings and
error messages. If it doesn't, you're not much better off. I found that when
I used Ada, I rarely ever _used_ a debugger, whereas with C/C++, it's almost
essential to have a good one at hand.

Forth (which I really liked and used a lot back when I had a Z-80 CP/M
machine, but is too low level where it's standardized and incredibly
fragmented where it's not (much worse than Lisp ever was, IMHO)) has no type
checking at all and what would be considered primitive debugging tools in
other languages. Using small word definitions and bottom-up interactive
programming made sophisticated debuggers superfluous. This seems to apply to
Lisp, too.

One question I have that really relates to this newsgroup is: does using
lots of smaller definitions have a negative performance impact on Lisp? I
know it does with other languages, but after learning and using Forth
(second language I learned after Basic), I came to have a real abhorrence of
long function definitions that stretch over several pages. Is there any
consensus of opinion on this question?

Larry

Christopher R. Barry

Mar 8, 2000
pm...@acm.org (Pierre R. Mai) writes:

> And there's the rub: In CL cons is a compound type specifier that can
> take car and cdr type specs. BUT you aren't allowed to define
> recursive type specifiers with deftype. See postings a couple of
> years ago about this issue.

Discussion on this issue occurred more recently than a couple of years
ago. I remember Lispworks being the only Lisp able to make use of CONS
compound type specifiers.

Christopher

Lieven Marchand

Mar 8, 2000
pm...@acm.org (Pierre R. Mai) writes:

> Although it's neither standard nor common, if one cared about static
> typing, one could quite reasonably write a separate static type-checker
> for CL that made use of such global (and local) declarations, as well
> as a lot of type-inference information on the built-in functions.

This has been done for Scheme by some people at Rice. They call it
soft typing. Basically, the type inference engine tries to prove the
correctness of as much of the program as it can and it inserts code to
check at run time where it can't. IIRC, you can get a front end for
DrScheme that will indicate where checks were inserted so you can get
to a fully statically checked program by inserting your own checks.

One of the papers describing this is at
http://www.cs.rice.edu/CS/PLT/Publications/toplas97-wc.ps.gz

I don't see any large problems in doing the same for CL, although
encoding the effects of all standard functions would be a large job.

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Joe Marshall

unread,
Mar 8, 2000, 3:00:00 AM3/8/00
to
"Larry Elmore" <ljel...@montana.campuscw.net> writes:

>
> One question I have that really relates to this newsgroup is: does using
> lots of smaller definitions have a negative performance impact on Lisp?
>

Quick answer: Not really. Don't worry about it.

Long answer: It might, depending upon how often the function runs,
how much `overhead' is involved in the function calling sequence, how
much `overhead' is involved in inlining it (by introducing more
variables into the function's caller, you might cause the compiler
to `spill' registers), what sort of machine you are using, how much
cache you have, etc. etc.

Just write things as if function calls were free. Then, once your
program is working correctly, profile it and see if there are any
bottlenecks that could be eliminated by avoiding the function call.
It is usually a simple matter to inline a function either by requesting
that the compiler do it, or by changing the function into a macro.
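
For example, requesting inlining can look like this (a minimal sketch;
the function names are made up):

(declaim (inline vector-norm))   ; ask for inlining at call sites

(defun vector-norm (x y)
  (sqrt (+ (* x x) (* y y))))

;; Callers compiled after this point may get the body expanded in
;; place; an implementation is free to ignore the request.
(defun unit-p (x y)
  (< (abs (- (vector-norm x y) 1.0)) 1e-6))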

Tom Breton

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
pm...@acm.org (Pierre R. Mai) writes:

>
> Although it's neither standard nor common, if one cared about static
> typing, one could quite reasonably write a separate static type-checker
> for CL that made use of such global (and local) declarations, as well
> as a lot of type-inference information on the built-in functions. In
> principle, this shouldn't be much harder than writing a type-checker
> for one of the modern polymorphic functional languages, modulo:
>
> - List/Cons types: We probably need better type specifiers to note in
> a statically recognizable way that lists contain only elements of
> type X. Without this, you can't do much type inference on list
> operations (like first, etc.), and so the need for local type
> declarations increases.

Funny, I was thinking that too last week.

The first problem is that the `list' type is misnamed. It actually
means something like `directed-graph', or `list*' with no parameters.
The actual list type, by which I mean the type that always matches the
return value of the function `list', needs to be further down the
hierarchy, overlapping with cons and containing null.

(list type) can be defined recursively in terms of cons. Allowing it
to take min-length, max-length parameters is similar and a bit uglier,
but sometimes desirable.
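
For instance, here is a sketch of what one can do today: since DEFTYPE
itself can't recurse, the recursion has to live in an ordinary
predicate (names made up):

(defun list-of-integers-p (x)
  (if (null x)
      t
      (and (consp x)
           (integerp (car x))
           (list-of-integers-p (cdr x)))))

(deftype list-of-integers ()
  '(satisfies list-of-integers-p))

Of course a SATISFIES type is opaque to any type inference, which is
exactly why a standardized parameterized list type would be better.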

IMO having it readily available would encourage typing so much that it
should be standardized. Along those lines, `list*', `tree', and
`tree*' are also nice.

[Good stuff snipped]

Marco Antoniotti

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to

"Larry Elmore" <ljel...@montana.campuscw.net> writes:

...

> One question I have that really relates to this newsgroup is: does using
> lots of smaller definitions have a negative performance impact on Lisp? I
> know it does with other languages, but after learning and using Forth
> (second language I learned after Basic), I came to have a real abhorrence of
> long function definitions that stretch over several pages. Is there any
> consensus of opinion on this question?

Long functions break the "first get it right, then get it fast"
principle. They also hamper the "maintainability mantra". Short and
cryptic names do the same.

AMD just announced a Gigahertz chip. Need I say more? :)

Cheers


--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa

Jon S Anthony

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Tim Bradshaw wrote:
>
> I agree with that of course. But if they had written the thing more
> defensively (`even though this can never overflow I'll deal with the
> case that it might', just in case someone reuses my code in some
> different context without checking) they might have survived even so,
> although I have heard the claim that this extra protection might have
> meant they'd miss real-time deadlines.

The real-time issue vis-a-vis the required processor for the original
design is just a part of the overall context. The right way to look
at this is to view it in light of practice in any "real engineering"
discipline. Suppose you design a component, say an aircraft wing
spar, for a particular application _context_ (these sorts of stresses,
attaching points, attaching means, etc.) and you show that in such
context a spar made of aluminum of cross section S _will_ be strong
enough for a reasonable envelope for the context (perhaps 1.5X the
expected stress). You then build and provide this component with
documentation stating the full context of use parameters.

Now some bonehead(s) come along and take this spar and place it in a
context where the stresses will be 5X the specified context of use.

Would you then say the designers and builders of the spar should have
been more "defensive" and made the thing out of steel with 3S cross
section, etc. (thereby making it completely unuseable for its intended
context) just in case such an idiot as this decided to use it in a
completely inappropriate application?


/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Jon S Anthony

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Larry Elmore wrote:

> One question I have that really relates to this newsgroup is: does
> using lots of smaller definitions have a negative performance impact
> on Lisp?

IME the answer is a definite no. First, for any low level performance
concern such as this the real determiner is how many times you are
going to be calling each function, not that you are going to call
several of them. Second, certainly industrial quality Lisp systems
have rather better function call performance than stuff like C and
Ada. Third, you can always inline those functions where you think the
extra call overhead might actually make a difference (better to
actually check this with the (typically good) profilers provided
first). Fourth, if push comes to shove, you could use macros to
"compile away" these definitions while maintaining the power of their
abstraction.

In the large industrial applications here, I actually made various
performance comparisons between Common Lisp, C, Ada95, Java, and C++
on various pieces. While this is anecdotal evidence, for us Common
Lisp was easily on par with C, Ada and C++. Java (and we have tried
several implementations, JIT compilers, etc.) was and is dismal: well
over an order of magnitude worse in both speed and memory use. At one
point IBM had a "HP" compiler for 1.1 that produced completely native
output for Windoze, and this output _was_ competitive. But it would
take 12+ hours to compile on a stand-alone 200MHz Pentium with 128MB
RAM. Needless to say, this was an overall loser.

Of course, the expressivity and sophisticated application construction
capacity of Common Lisp blows all the others _completely_ away.
They're not even close. Java fares somewhat better in this regard,
but again it is not even remotely close to CL.

Paolo Amoroso

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
On Wed, 8 Mar 2000 13:29:35 -0700, "Larry Elmore"
<ljel...@montana.campuscw.net> wrote:

> One question I have that really relates to this newsgroup is: does using
> lots of smaller definitions have a negative performance impact on Lisp? I

Here is a section title of a document by Chris Riesbeck: "Use Very Many,
Very Short, Well-Named Functions". I don't have the URL of his Web site
handy, sorry.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Pierre R. Mai

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Paolo Amoroso <amo...@mclink.it> writes:

> On Wed, 8 Mar 2000 13:29:35 -0700, "Larry Elmore"
> <ljel...@montana.campuscw.net> wrote:
>
> > One question I have that really relates to this newsgroup is: does using
> > lots of smaller definitions have a negative performance impact on Lisp? I
>
> Here is a section title of a document by Chris Riesbeck: "Use Very Many,
> Very Short, Well-Named Functions". I don't have the URL of his Web site
> handy, sorry.

Use

http://www.cs.nwu.edu/academics/courses/c25/readings/lisp-style.html

which contains the section title. Ain't www.google.com grand? :)

Reini Urban

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Larry Elmore wrote:
>From things I've read, it's all something all too typical with the
>apparently incredibly bureaucratic ESA. It does seem that NASA is also
>increasingly suffering from the chronic bureaucratic bloat that eventually
>seems to afflict almost every governmental agency.

not only governmental agencies. every larger (software) company as well.
not to forget the increasing influence of our favorite marketing folks.
--
Reini Urban, rur...@x-ray.at http://www.x-ray.at/

Erik Naggum

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
* Jon S Anthony <j...@synquiry.com>

| Third, you can always inline those functions where you think the extra
| call overhead might actually make a difference (better to actually check
| this with the (typically good) profilers provided first).

inlining user functions is frequently a very hazardous business, and some
implementations do not heed inline declarations for user functions.

| Fourth, if push comes to shove, you could use macros to "compile away"
| these definitions while maintaining the power of their abstraction.

compiler macros provide the best of both worlds, and can be quite the
tool to optimize code beyond belief without being forced into macro land.

#:Erik

Coby Beck

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to

Erik Naggum <er...@naggum.no> wrote in message news:31616251...@naggum.no...

| * Jon S Anthony <j...@synquiry.com>
| | Third, you can always inline those functions where you think the extra
| | call overhead might actually make a difference (better to actually check
| | this with the (typically good) profilers provided first).
|
| inlining user functions is frequently a very hazardous business, and some
| implementations do not heed inline declarations for user functions.
|
| #:Erik

What are the hazards of inlining functions?

Coby


Jon S Anthony

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Erik Naggum wrote:
>
> * Jon S Anthony <j...@synquiry.com>
> | Third, you can always inline those functions where you think the
> | extra call overhead might actually make a difference (better to
> | actually check this with the (typically good) profilers provided
> | first).
>
> inlining user functions is frequently a very hazardous business,
> and some implementations do not heed inline declarations for user
> functions.

Both points are true, but presumably people understand the issues
here. Certainly for the first one the problem is basically the same
as inlining in any language, say, Ada. Of course, you could have more
system wide integrity checks helping you.


> | Fourth, if push comes to shove, you could use macros to "compile
> | away" these definitions while maintaining the power of their
> | abstraction.

>
> compiler macros provide the best of both worlds, and can be quite
> the tool to optimize code beyond belief without being force into
> macro land.

Certainly compiler macros can help "keep the entropy down" by only
expanding at compile time (not when interpreted). OTOH, in analogy to
inlining, this is only the intent and you may get expansion at other
times.

I don't find this issue of redefining macros (or inlined functions)
all that big a deal. Mostly this is because if I decide to change
one or more of these things I do it in a constrained setting and when
satisfied simply do a rebuild and reload in the development setting.
Then again, maybe I'm missing something that will bite me "out of the
blue"??

/Jon

Jon S Anthony

unread,
Mar 9, 2000, 3:00:00 AM3/9/00
to
Coby Beck wrote:
>
> What are the hazards of inlining functions?

You redefine such a thing and forget to recompile anything that used
the original.
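
A sketch of the trap (names made up):

(declaim (inline scale))
(defun scale (x) (* x 10))

(defun price (x) (scale x))   ; SCALE's body may get expanded here

;; Later: redefine SCALE but forget to recompile PRICE.
(defun scale (x) (* x 100))

;; (price 1) may still return 10 -- PRICE captured the old definition.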

Erik Naggum

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
* Coby Beck

| What are the hazards of inlining functions?

1 version disparity upon redefinition
2 reduced accuracy in reporting errors in the debugger
3 code bloat

#:Erik

Robert Monfera

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to

Erik Naggum wrote:
>
> * Coby Beck
> | What are the hazards of inlining functions?
>
> 1 version disparity upon redefinition

As long as the inter-dependency of named functions is known, it would be
possible to propagate the invalidation of old definitions and recompile
with the new ones. This may admittedly lead to an avalanche effect, and I
don't know how to trigger recompilation of closures without some
heavyweight administration, though.

> 2 reduced accuracy in reporting errors in the debugger

Inlining reduces debuggability, but reporting accuracy is already
reduced at high speed optimization settings, and inlining built-in
functions is one of the several existing and valid reasons.

> 3 code bloat

Yes, but isn't this effect the same for compiler macros if you use them
for the same purpose?

Motivations overlap behind the use of inlining and compiler macros, and
I'm trying to see if what you wrote about making concepts easy or hard
is also applicable here. Do you think that it's too easy to declare a
function inlined, and it's a good idea to make this non-trivial concept
a little harder by having to use compiler macros? (BTW I also like
compiler macros for reasons beyond inlining too.)

Robert

Tim Bradshaw

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
* Jon S Anthony wrote:

> The real-time issue vis-a-vis the required processor for the original
> design is just a part of the overall context. The right way to look
> at this is to view it in light of practice in any "real engineering"
> discipline. Suppose you design a component, say an aircraft wing
> spar, for a particular application _context_ (these sorts of stresses,
> attaching points, attaching means, etc.) and you show that in such
> context a spar made of aluminum of cross section S _will_ be strong
> enough for a reasonable envelope for the context (perhaps 1.5X the
> expected stress). You then build and provide this component with
> documentation stating the full context of use parameters.

But I think the answer is (I have mail about this somewhere) that the
exception handlers would not cause you to miss deadlines *unless they
were invoked*, because they have basically zero overhead. So there
was some twisty argument going on that you shouldn't put this
protection in there because you might miss the deadline if this thing
that could not happen did happen, whereas without it you lose the rocket
for sure if it does happen.

(I'm sure I'm exaggerating the argument).

--tim

Erik Naggum

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
* Robert Monfera <mon...@fisec.com>

| As long as the inter-dependency of named functions is known, it would be
| possible to propagate the invalidation of old definitions and recompile
| with the new ones. This may admittedly lead to an avalanche effect, and I
| don't know how to trigger recompilation of closures without some
| heavyweight administration, though.

the granularity that we get for free when redefining normal, non-inlined
functions should thus be eminently implementable, but there are no such
systems around. there aren't even any systems around that automatically
recompile functions which use macros that have changed, and that's a much
more acute problem.

in some circumstances, there's a need to upgrade a bunch of functions to
the next generation en bloc, preserving calls within the older generation
to avoid version skew, but this kind of version control is unavailable.

what makes you think something like what you propose would be available?

| Inlining reduces debuggability, but reporting accuracy is already
| reduced at high speed optimization settings, and inlining built-in
| functions is one of the several existing and valid reasons.

but users don't generally debug built-in functions.

| Yes, but isn't this effect the same for compiler macros if you use them
| for the same purpose?

no, and this is the crucial difference between compiler macros and
inlining. a compiler macro can decide to punt on the expansion, which
causes a normal function call. a compiler macro can also decide to
redirect the function call to some other function, or handle part of the
function call and punt on the rest. this means that you have to make an
informed choice about the expansion. you don't have that choice in an
inlined function's expansion.
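
for instance (a sketch only; ADD and PARTIAL-ADD are made-up names):

(defun add (x y) (+ x y))

(define-compiler-macro add (&whole form x y)
  (cond ((and (constantp x) (constantp y))
         (+ (eval x) (eval y)))       ; fold the whole call at compile time
        ((constantp x)
         `(partial-add ,(eval x) ,y)) ; redirect to some other function
        (t form)))                    ; punt: compile a normal call to ADD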

| Do you think that it's too easy to declare a function inlined, and it's a
| good idea to make this non-trivial concept a little harder by having to
| use compiler macros?

since it doesn't help to declare it inline in Allegro CL, I haven't
noticed the problem, and in any case, I agree with the decision not to
honor inline declarations. languages that require such features are
typically unable to provide meaningful alternatives -- something Common
Lisp actually does.

#:Erik

Raymond Toy

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
>>>>> "Tim" == Tim Bradshaw <t...@cley.com> writes:

Tim> * Will Deakin wrote:
[snip]
>>> (Incidentally, in this context, `type' often means much more than
>>> `representational type' -- it can be very useful to know that
>>> something is an (integer 0 8) say).

>> Which, if this is honoured, is a real bonus,

Tim> CMUCL can make use of info like this. For instance if you have two
Tim> suitably small fixnums a,b then it can prove that a + b is a fixnum
Tim> too. Likewise for things like SQRT (I think).

Certainly true for a + b. For SQRT, you need a fairly recent version
of CMUCL with certain *features* enabled. I think they're
PROPAGATE-FUN-TYPE and PROPAGATE-FLOAT-TYPE.

Ray

Jon S Anthony

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Tim Bradshaw wrote:
>
> * Jon S Anthony wrote:
>
> > The real-time issue vis-a-vis the required processor for the original
> > design is just a part of the overall context. The right way to look
> > at this is to view it in light of practice in any "real engineering"
> > discipline. Suppose you design a component, say an aircraft wing
> > spar, for a particular application _context_ (these sorts of stresses,
> > attaching points, attaching means, etc.) and you show that in such
> > context a spar made of aluminum of cross section S _will_ be strong
> > enough for a reasonable envelope for the context (perhaps 1.5X the
> > expected stress). You then build and provide this component with
> > documentation stating the full context of use parameters.
>
> But I think the answer is (I have mail about this somewhere) that
> the exception handlers would not cause you to miss deadlines *unless
> they were invoked*, because they have basically zero overhead. So

You missed the point completely. The above only sets the context.
These bits center on the point:

| Now some bonehead(s) come along and take this spar and place it in a
| context where the stresses will be 5X the specified context of use.
|
| Would you then say the designers and builders of the spar should have
| been more "defensive" and made the thing out of steel with 3S cross
| section, etc. (thereby making it completely unuseable for its intended
| context) just in case such an idiot as this decided to use it in a
| completely inappropriate application?

Jon S Anthony

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Robert Monfera wrote:
>
> Erik Naggum wrote:
> >
...
> > 3 code bloat

>
> Yes, but isn't this effect the same for compiler macros if you use
> them for the same purpose?

Depends. In the case of inlining you will typically (if the compiler
is rather smart it may do better than this) get all the code for the
function. In the case of macros (compiler or otherwise), you can take
into account various things about the call and/or calling context and
generate only the "exact" amount of actually required code.

Jon S Anthony

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Erik Naggum wrote:
>
...

> since it doesn't help to declare it inline in Allegro CL, I
> haven't noticed the problem, and in any case, I agree with the
> decision not to honor inline declarations. languages that require
> such features are typically unable to provide meaningful
> alternatives -- something Common Lisp actually does.

This is a very important point. In something like Ada inlining is a
big deal (for one thing you have no macro facility at all). In
something like C++ it is even more important since the macro facility
is so crippled and broken it is basically impossible to write
_correct_ macros for this sort of thing (so you can get a lot of
broken code due to someone's incorrectly placed emphasis on speed
without understanding what they are doing is wrong).

Tim Bradshaw

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
* Jon S Anthony wrote:

> You missed the point completely. The above only sets the context.
> These bits center on the point:

No, I didn't.

> |
> | Would you then say the designers and builders of the spar should have
> | been more "defensive" and made the thing out of steel with 3S cross
> | section, etc. (thereby making it completely unuseable for its intended
> | context) just in case such an idiot as this decided to use it in a
> | completely inappropriate application?

The point I was trying to make is that in the physical-world context
you can't use the big thick spar because it is too big, too heavy and
so on for the small application. However in the software context you
have at your command wonder materials which can be enormously stronger
(better error protection) while weighing no more (no runtime cost of
the error handler).

So I think the point is not a very good one, I'm afraid. The only
problem is that the wonder materials do cost a little bit more in
labour, although to some extent you also have access to wonder lathes
which can eliminate this cost (I don't think this was true for Ariane,
as they already had a wonder tool, but it's true for the vast number
of C/C++ programs which don't bounds-check arrays &c).

--tim

Pierre R. Mai

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Tim Bradshaw <t...@cley.com> writes:

> The point I was trying to make is that in the physical-world context
> you can't use the big thick spar because it is too big, too heavy and
> so on for the small application. However in the software context you
> have at your command wonder materials which can be enormously stronger
> (better error protection) while weighing no more (no runtime cost of
> the error handler).

But they do carry non-trivial additional cost, since the added code
will have to be designed, written, reviewed and tested to the same high
standards that all other code in such applications is. There are quite
good safety reasons to keep the amount of code in such applications as
low as possible. Also money does matter, because you don't have an
infinite budget, and therefore money you spend on unnecessary efforts
will restrict you in other (possibly also safety-critical) areas.

If you can prove that a given situation cannot physically arise, and
that any values out of the specified range are therefore indications
of hard- or software failures in the system, then passing control to a
backup system is a reasonable thing to do.

BTW the added check wouldn't have solved the main problem, which was
unchecked reuse and lowered standards (no system integration test with
simulated real data) in Ariane 5. The "missing" check was only
missing in hindsight, and it might have been any of a number of other
possible problems that could have gone undetected by the management
decisions on Ariane 5.

I still think that the Ariane 5 problem shouldn't really be taken out
of context to discuss general software engineering practices. Safety
critical software in embedded systems is developed under very
different constraints and therefore practices than commodity software
is. The quality-assurance process in Ariane 5 broke down because of
questionable management decisions, and under those circumstances
defects will come through the cracks. This isn't a specific problem
of software, the same could have happened to the physical part of
Ariane 5, had the same breakdown in process occurred there.

IMHO the one lesson one might reasonably draw from this is that management
still applies a double standard when it comes to software vs. the "real"
world: I don't think that anyone would have considered leaving out real
systems integration testing on the physical side of things.

Jon S Anthony

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Tim Bradshaw wrote:
>
> * Jon S Anthony wrote:
>
> > You missed the point completely. The above only sets the context.
> > These bits center on the point:
>
> No, I didn't.

It certainly appears as if you did and ...


>
> > |
> > | Would you then say the designers and builders of the spar should have
> > | been more "defensive" and made the thing out of steel with 3S cross
> > | section, etc. (thereby making it completely unuseable for its intended
> > | context) just in case such an idiot as this decided to use it in a
> > | completely inappropriate application?
>

> The point I was trying to make is that in the physical-world context
> you can't use the big thick spar because it is too big, too heavy and

still do. This stuff _is_ like the "physical world context" as it
_does_ have constraints on the total package size. For example, you
typically can't use some new wonder processor with a 1/2 Gig of RAM,
but rather some rather primitive thing (perhaps a decade or so old)
and with very limited amounts of memory and such. Why use this
"junk"? Because they are _hardened_ in various ways that the newer
stuff isn't. Constraints may literally be the amount of generated
code as it won't fit on the available ROM.

But that's not even the point. When you design and build something
for a given context and you clearly specify what this context of use
is (whether you think such design and construction is "good" or not is
completely _irrelevant_ to the point), then if someone _ignores_ this
approved context of use, the burden is _not_ on the builder of the
component, but rather on the user of it. That's the point.

Jon S Anthony

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
Pierre R. Mai wrote:
>
> is. The quality-assurance process in Ariane 5 broke down because of
> questionable management decisions, and under those circumstances
> defects will come through the cracks. This isn't a specific problem
> of software, the same could have happened to the physical part of
> Ariane 5, had the same breakdown in process occurred there.

Right. They are in fact _exactly_ the same, but as you point out

> IMHO the one lesson one might reasonably draw from this is that
> management still applies a double standard when it comes to
> software vs. the "real" world: I don't think that anyone would have
> considered leaving out real systems integration testing on the
> physical side of things.

many people don't get this.

As an aside: When many people do start getting this, the average
quality of the people producing software and the tools they use will
rise considerably, because there will be much higher expectations for
the overall quality and capability of software and you can't provide
this with the typical programmer and typical junk tools being used
today.

Tim Bradshaw

unread,
Mar 10, 2000, 3:00:00 AM3/10/00
to
* Pierre R Mai wrote:

> But they do carry non-trivial additional cost, since the added code
> will have to be designed, written, reviewed and tested to the same high
> standards that all other code in such applications is. There are quite
> good safety reasons to keep the amount of code in such applications as
> low as possible. Also money does matter, because you don't have an
> infinite budget, and therefore money you spend on unnecessary efforts
> will restrict you in other (possibly also safety-critical) areas.

Yes, this is what I meant by `labour costs'. I was just disputing the
claim that was made (not in this newsgroup) that having error handlers
in the code would have made it miss the realtime deadlines. Trying to
force this into the spar analogy, what I'm saying is that we have
wonder materials but they are sometimes expensive to make.

The reason I wanted to make that point (other than the usual
sad usenet need to win arguments (:-)) was that *Lisp is a wonder
material* and it is often claimed that this makes it too expensive or
that it doesn't fit or that the error handling causes it not to fit on
a floppy. And I hate that!

> If you can prove that a given situation cannot physically arise, and
> that any values out of the specified range are therefore indications
> of hard- or software failures in the system, passing control to a
> backup system is therefore a reasonable thing to do.

I am always wary of claims about software that say `if you can prove
x'. Proving things is incredibly difficult and incredibly vulnerable
(as here) to changes in the assumptions you made to get the proof.
Sometimes it's better to be careful I think.

> I still think that the Ariane 5 problem shouldn't really be taken out
> of context to discuss general software engineering practices.

I absolutely agree. Somewhere back in this thread I said that ariane
can be used to demonstrate anything you like, and there was meant to
be an implicit `mis' in front of `used'. Somehow I got sucked into
arguing about it anyway, but I will stop now.

There's a book called something like `software catastrophes' which has
various examples of software-engineering disasters, some of which may
be valid but others of which -- notably the `lisp caused this project
to fail' one -- are clearly not (the lisp one was also some classic
management foulup as far as I can gather).

One day I am going to write a book called "`software catastrophes'
catastrophes" which will be the metalevel of this book and describe
all the misuses of disasters to prove bogus points while missing the
obvious screwups. It will have the ariane thing in it, several lisp
ones of course, and the USS Yorktown disaster (bogusly used to say
rude things about NT).

(Shortly after this, I will write a metacircular version of this
meta-book describing its own bogusness).

--tim

Christopher R. Barry

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
Erik Naggum <er...@naggum.no> writes:

> compiler macros provide the best of both worlds, and can be quite the
> tool to optimize code beyond belief without being forced into macro land.

Does anyone have any good examples of using them? None of the Lisp
books out there give them any non-reference coverage at all. The
HyperSpec shows some short examples, but I'm curious how real
programmers use them in real programs.

Christopher

Christopher R. Barry

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
Tim Bradshaw <t...@cley.com> writes:

> ones of course, and the USS Yorktown disaster (bogusly used to say
> rude things about NT).

Then what's the real story?

Christopher

Tim Bradshaw

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
* Jon S Anthony wrote:
> still do. This stuff _is_ like the "physical world context" as it
> _does_ have constraints on the total package size. For example, you
> typically can't use some new wonder processor with a 1/2 Gig of RAM,
> but rather some rather primitive thing (perhaps a decade or so old)
> and with very limited amounts of memory and such. Why use this
> "junk"? Because they are _hardened_ in various ways that the newer
> stuff isn't. Constraints may literally be the amount of generated
> code as it won't fit on the available ROM.

I have never heard a claim that the Ariane code was up against memory
size limits. Do you have any evidence for that? Instead the report
on the disaster claims quite specifically that certain protection was
not put in in order to meet realtime budgets. And this is what turns
out to be a very dubious claim since the error protection need not
cost you time if it is not invoked. I feel reasonably confident that
if they were up against size limits the report would say this, as it's
rather a good reason for leaving out extra code.

Incidentally, please don't assume I'm not aware of the issues with
hardened processors. One of the first systems I was involved with was
very concerned with exactly these issues (if you watched significant
amounts of footage of the gulf war you have probably seen these
devices in fact).

> But that's not even the point. When you design and build something
> for a given context and you clearly specify what this context of use
> is (whether you think such design and construction is "good" or not is
> completely _irrelevant_ to the point), then if someone _ignores_ this
> approved context of use, the burden is _not_ on the builder of the
> component, but rather on the user of it. That's the point.

Right. BUT THAT IS NOT WHAT I WAS ARGUING ABOUT. I was arguing about
the specific claim made in the second paragraph of section 2.2 of the
report on the disaster. And that is *all*.

Sigh.

--tim

Tim Bradshaw

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to

The real story is that it is incredibly stupid to design a warship
with no redundancy. NT would not have been my choice for a system to
control the computers, but they should have ensured that the failure
of the computer system did not make the ship unmaneuverable.

--tim

Lieven Marchand

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
cba...@2xtreme.net (Christopher R. Barry) writes:

> Does anyone have any good examples of using them?

Allegro's regular expression package has defined a compiler macro for
MATCH-REGEXP that will call COMPILE-REGEXP when the regular expression
is a constant string.
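
A rough sketch of how such a compiler macro could be written (this is
not Allegro's actual source; MATCH-REGEXP and COMPILE-REGEXP are
Allegro's functions):

(define-compiler-macro match-regexp (&whole form pattern string &rest args)
  (if (stringp pattern)
      ;; Constant pattern: compile it once, at load time.
      `(match-regexp (load-time-value (compile-regexp ,pattern))
                     ,string ,@args)
      form))   ; non-constant pattern: punt to the ordinary function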

--
Lieven Marchand <m...@bewoner.dma.be>
If there are aliens, they play Go. -- Lasker

Joe Marshall

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
cba...@2xtreme.net (Christopher R. Barry) writes:

> Does anyone have any good examples of using [compiler macros]?

In a commercial project I am working on, we use compiler macros in a
few places. Here are some examples (variable names have been changed
to protect the innocent):

(defun calculate-foo (bar baz)
  (+ bar (* baz +foo-constant+)))

(define-compiler-macro calculate-foo (&whole form bar baz
                                             &environment env)
  (if (and (constantp bar env)
           (constantp baz env))
      (calculate-foo bar baz)
      form))

In this case, we noticed that the function calculate-foo was
frequently, but not always, used where the arguments were available at
compile time. The compiler macro checks to see if the arguments are
constant, and if so, computes the value at compile time and uses that.
If not, the form is left unchanged.

This one is hairier:

(defun mapfoo (function element-generator)
  (loop while (element-available? element-generator)
        do (funcall function (get-next-element element-generator))))

(define-compiler-macro mapfoo (&whole form function element-generator)
  (if (and
       ;; Look for '(FUNCTION ...)
       (consp function)
       (eq (car function) 'FUNCTION)
       (consp (cdr function))
       (null (cddr function)))
      (let ((pl (cadr function)))
        (if (and
             ;; Look for '(LAMBDA (...) ...)
             (consp pl)
             (eq (car pl) 'LAMBDA)
             (consp (cdr pl))
             (consp (cddr pl)))
            (let ((lambda-bound-variables (cadr pl))
                  (lambda-body (cddr pl)))
              (if (and (consp lambda-bound-variables)
                       (null (cdr lambda-bound-variables)))
                  (let ((generator-var (gensym))
                        (bound-variable (car lambda-bound-variables)))
                    `(let ((,generator-var ,element-generator))
                       (loop while (element-available? ,generator-var)
                             do (let ((,bound-variable (get-next-element ,generator-var)))
                                  ,@lambda-body))))
                  form))  ;; '(function (lambda (...) ...)) syntactically wrong.
            form))        ;; didn't find '(lambda ...)
      form))              ;; didn't find '(function ...)

In this case, we often found that MAPFOO was called with a literal
lambda expression like this:

(mapfoo #'(lambda (a-foo) (frob a-foo t nil)) (get-foo-generator))

The compiler-macro looks for this pattern, and if it finds it, turns
it into

(let ((#:G1234 (get-foo-generator)))
  (loop while (element-available? #:G1234)
        do (let ((a-foo (get-next-element #:G1234)))
             (frob a-foo t nil))))

So we can use the higher-order function mapfoo everywhere, but still
get the performance of the obvious loop function when we know what the
mapped function will be. This avoids the consing of a closure, the
multiple levels of function calls, and the indirect variable lookup.

> Erik Naggum <er...@naggum.no> writes:
>
> > compiler macros provide the best of both worlds, and can be quite the
> > tool to optimize code beyond belief without being forced into macro land.
>

Compiler macros are a useful tool. In this case I can write my code
in a more abstract style and not think about optimization. Then, when
I discover some of the bottlenecks, I can add a compiler macro to
optimize the critical sections without doing a single rewrite of the
existing code.

On the other hand, compiler-macros are macros, and they come with all
the variable capture problems of regular macros. They have one great
advantage of allowing you to punt and return the original form if
things get too hairy.

But in both cases I have outlined above, the compiler-macros are
acting as a `poor-man's inliner'. If inline declarations worked, I
could have simply declared the relevant functions as inline and not
bothered with the compiler-macros at all. Most of our uses of
compiler-macros are to make up for this deficiency.

There are uses for compiler-macros other than getting around the lack
of inlining: Suppose you are writing a function that has some
constraints on its arguments, e.g., the first argument should be a
compile-time constant number. You can write a compiler macro to check
that this is the case.
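
For example (a sketch with made-up names):

(defun make-foo-buffer (size)
  (make-array size :element-type '(unsigned-byte 8)))

(define-compiler-macro make-foo-buffer (&whole form size)
  (unless (constantp size)
    (warn "MAKE-FOO-BUFFER called with a non-constant size: ~S" size))
  form)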

--
~jrm

Christopher R. Barry

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
Lieven Marchand <m...@bewoner.dma.be> writes:

> cba...@2xtreme.net (Christopher R. Barry) writes:
>
> > Does anyone have any good examples of using them?
>
> Allegro's regular expression package has defined a compiler macro for
> MATCH-REGEXP that will call COMPILE-REGEXP when the regular expression
> is a constant string.

regexp.cl isn't included with a trial edition but I guess I get the
idea.

USER(9): (funcall (compiler-macro-function 'match-regexp)
                  '(match-regexp "foo" "barfoobaz") nil)
(MATCH-REGEXP (LOAD-TIME-VALUE (COMPILE-REGEXP "foo")) "barfoobaz")


Thank you,
Christopher

Paolo Amoroso

unread,
Mar 11, 2000, 3:00:00 AM3/11/00
to
On Sat, 11 Mar 2000 02:57:19 GMT, cba...@2xtreme.net (Christopher R. Barry)
wrote:

> Does anyone have any good examples of using them? None of the Lisp
> books out there give them any non-reference coverage at all. The
> HyperSpec shows some short examples, but I'm curious how real
> programmers use them in real programs.

The following recent paper deals with compiler macros:

"Increasing Readability and Efficiency in Common Lisp"
António Menezes Leitão
aml AT gia DOT ist DOT utl DOT pt
Proceedings of the European Lisp User Group Meeting '99

Abstract:
Common Lisp allows programmer intervention in the compilation process by
means of macros and, more appropriately, compiler macros. A compiler
macro describes code transformations to be done during the compilation
process. Unfortunately, compiler macros are difficult to write, read and
extend. This article presents a portable extension to Common Lisp's
compiler macros. The extension allows easy definition and overloading of
compiler macros and is based on two known techniques, namely pattern
matching and data-driven programming. Three different uses are presented:
(1) Code optimization, (2) Code deprecation, and (3) Code quality
assessment. The extension is being used extensively in the reengineering
of a large AI system, with good results.

As I write this I am offline and I don't know whether the paper is also
available at a Web site. If you need a copy of the proceedings you may
contact Franz's sales department. The proceedings include another paper by
the same author, titled "FoOBaR - A Prehistoric Survivor", which
illustrates the evolution of the large AI system mentioned in the abstract.


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Russell Wallace

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
Andrew Cooke wrote:
> Because I program in Java and a (the one? ;-) advantage over Lisp is
> that it catches some mistakes that dynamic testing doesn't. I know that
> one can have tests etc, but it seems that the earlier one catches errors
> the better. The original post was about software which is effectively
> run once (outside testing) and must work - I was thinking that the
> extreme emphasis on perfect code must favour anything that detects
> errors.

Having programmed in Scheme as well as in a bunch of statically typed
languages (including C, C++, Ada and Java), in my experience it actually
doesn't matter.

I find that something which will show up at compile time in an ST
language will show up immediately on even the most cursory of testing in
a DT language. Call a function with the wrong type and it'll usually
give an error the first time you run the code (whereas calling with
incorrect values may require extensive testing to find). The bugs that
make it past testing into production code, IME, are of a very different
nature, and usually result from misunderstood specifications or poor
design.

There is an advantage in *convenience*: in an ST language I can have a
bunch of errors found for me for the effort of typing 'make' rather than
having to run tests. OTOH, there's a cost in convenience of having to
write the type declarations (and of having to explicitly say so when you
really do want 'any type', e.g. for collection classes). On the third
hand, I usually need to figure out what the types should be anyway from
a design point of view. Six of one, half a dozen of the other.

I'm inclined to the opinion that this whole issue of typing errors is
worth, at a generous estimate, a tenth the amount of ink/electrons
that's been spilled over it :)

--
"To summarize the summary of the summary: people are a problem."
Russell Wallace
mailto:mano...@iol.ie

Michael Hudson

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
Russell Wallace <mano...@iol.ie> writes:

> Andrew Cooke wrote:
> > Because I program in Java and a (the one? ;-) advantage over Lisp is
> > that it catches some mistakes that dynamic testing doesn't. I know that
> > one can have tests etc, but it seems that the earlier one catches errors
> > the better. The original post was about software which is effectively
> > run once (outside testing) and must work - I was thinking that the
> > extreme emphasis on perfect code must favour anything that detects
> > errors.
>
> Having programmed in Scheme as well as in a bunch of statically typed
> languages (including C, C++, Ada and Java), in my experience it actually
> doesn't matter.

Ah, but have you programmed in a language that has a *proper* type
system, like Haskell or ML? For *certain classes of problem* I find
the thinking I have to do to work out the types of things leads to
insights into the problem I probably wouldn't otherwise have had.

I had a minor epiphany in this direction just the other day, so I hope
you'll forgive the evangelism.

Cheers,
Michael

--
very few people approach me in real life and insist on proving they are
drooling idiots. -- Erik Naggum, comp.lang.lisp

Andrew Cooke

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
In article <38CCF7...@iol.ie>,

mano...@iol.ie wrote:
> There is an advantage in *convenience*: in an ST language I can have a
> bunch of errors found for me for the effort of typing 'make' rather
> than having to run tests. OTOH, there's a cost in convenience of having
> to write the type declarations (and of having to explicitly say so when
> you really do want 'any type', e.g. for collection classes). On the
> third hand, I usually need to figure out what the types should be
> anyway from a design point of view. Six of one, half a dozen of the
> other.

It's not just running tests, but also writing them and making sure they
check all the things that a compiler would check for a static-typed
language - you would have tests anyway, but you might well need more
without static types.

(But I agree with some of what you've said and other posts on this
thread show that some static type checking is possible - it varies
significantly with the implementation).

Andrew


Sent via Deja.com http://www.deja.com/
Before you buy.

William Deakin

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
Michael Hudson wrote:

> I had a minor epiphany in this direction just the other day, so I hope
> you'll forgive the evangelism.

You are forgiven: as with all fad-gadgets, it'll wear off in the fullness
of time.

;) will


Jon S Anthony

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
Tim Bradshaw wrote:
>
> * Jon S Anthony wrote:
> > still do. This stuff _is_ like the "physical world context" as it
> > _does_ have constraints on the total package size. For example, you
> > typically can't use some new wonder processor with a 1/2 Gig of RAM,
> > but rather some rather primitive thing (perhaps a decade or so old)
> > and with very limited amounts of memory and such. Why use this
> > "junk"? Because they are _hardened_ in various ways that the newer
> > stuff isn't. Constraints may literally be the amount of generated
> > code as it won't fit on the available ROM.
>
> I have never heard a claim that the Ariane code was up against memory
> size limits. Do you have any evidence for that?

No one here has made that claim and nothing in the above paragraph
says otherwise.


> Incidentally, please don't assume I'm not aware of the issues with

I have not made that assumption.


> > But that's not even the point. When you design and build something
> > for a given context and you clearly specify what this context of use
> > is (whether you think such design and construction is "good" or not is
> > completely _irrelevant_ to the point), then if someone _ignores_ this
> > approved context of use, the burden is _not_ on the builder of the
> > component, but rather on the user of it. That's the point.
>
> Right. BUT THAT IS NOT WHAT I WAS ARGUING ABOUT. I was arguing
> about the specific claim made in the second paragraph of section 2.2
> of the report on the disaster. And that is *all*.

Then we have simply been talking past one another. After all, my
message that you replied to was _only_ about this point and that is
all. Go back and look. That is why I kept remarking that you were
missing the point. If you didn't want to discuss this point, then
there was no point in replying to that message in the first place.
Criminey!

> Sigh.

Really.

Ray Blaak

unread,
Mar 13, 2000, 3:00:00 AM3/13/00
to
cbbr...@knuth.brownes.org (Christopher Browne) writes:
> Static typing may provide *some* of the testing implicitly; the real
> point is that type errors are only a subset of the total list of possible
> errors that may be made in writing a program.

The errors it tends to find are the common ones that are completely
avoidable (admittedly much more prevalent in the C family of languages
than in Lisp).

> It's not that "static typing is downright bad, and never finds errors
> for you."
>
> It is rather the case that if you construct the "unit tests" *that should
> be built* to examine boundary cases and the likes, this will indirectly
> provide much of the testing of types that you suggest need to be implicit
> in the type system.

Yes and no. Unit testing is fundamentally important even with static typing. A
good test would indeed have boundary cases to catch the typing errors. But now
you have to do the tedious effort of writing those cases. It is better if those
cases can't even be expressed in the first place. For example, how many kinds
of invalid values can you pass to a function? A string, an atom, a list, a
vector, a float, an integer,... If your function was restricted to integers,
say, then you could simply concentrate on the invalid integer values.

> If there are good unit tests, this diminishes the importance of static
> type checking. Not to zero, but possibly to the point of not being
> dramatically useful.

Static typing is dramatically useful in integration tests. A unit test will not
exhaustively exercise other units. Static typing allows units to fit together
without the piles of dumb stupid interface errors, allowing you to concentrate
on the smart stupid logic errors :-).

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@infomatch.com The Rhythm has my soul.

Christopher Browne

unread,
Mar 14, 2000, 3:00:00 AM3/14/00
to
Centuries ago, Nostradamus foresaw a time when Andrew Cooke would say:

>In article <38CCF7...@iol.ie>,
> mano...@iol.ie wrote:
>> There is an advantage in *convenience*: in an ST language I can have a
>> bunch of errors found for me for the effort of typing 'make' rather
>> than having to run tests. OTOH, there's a cost in convenience of having
>> to write the type declarations (and of having to explicitly say so when
>> you really do want 'any type', e.g. for collection classes). On the
>> third hand, I usually need to figure out what the types should be
>> anyway from a design point of view. Six of one, half a dozen of the
>> other.
>
>It's not just running tests, but also writing them and making sure they
>check all the things that a compiler would check for a static-typed
>language - you would have tests anyway, but you might well need more
>without static types.

Static typing may provide *some* of the testing implicitly; the real
point is that type errors are only a subset of the total list of possible
errors that may be made in writing a program.

It's not that "static typing is downright bad, and never finds errors
for you."

It is rather the case that if you construct the "unit tests" *that should
be built* to examine boundary cases and the likes, this will indirectly
provide much of the testing of types that you suggest need to be implicit
in the type system.

If there are good unit tests, this diminishes the importance of static
type checking. Not to zero, but possibly to the point of not being
dramatically useful.

--
"Bother," said Pooh, as he deleted his root directory.
cbbr...@hex.net - - <http://www.ntlug.org/~cbbrowne/lsf.html>

Erik Naggum

unread,
Mar 14, 2000, 3:00:00 AM3/14/00
to
* Ray Blaak <bl...@infomatch.com>

| For example, how many kinds of invalid values can you pass to a function?
| A string, an atom, a list, a vector, a float, an integer,... If your
| function was restricted to integers, say, then you could simply
| concentrate on the invalid integer values.

(check-type <argument> <restricted-type>) takes care of this one for you,
or you can use assert or throw your own errors if you really have to. I
don't see the problem. writing safe code isn't hard in Common Lisp.
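
for instance (a trivial sketch):

(defun area (radius)
  (check-type radius (real 0))   ; signals a correctable type-error otherwise
  (* pi radius radius))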

| Static typing is dramatically useful in integration tests. A unit test
| will not exhaustively exercise other units. Static typing allows units
| to fit together without the piles of dumb stupid interface errors,
| allowing you to concentrate on the smart stupid logic errors :-).

when the compiler stores away the type information, this may be true.
when the programmers have to keep them in sync manually, it is false.

static typing is like a religion: it has no value outside the community
of believers. inside the community of believers, however, it is quite
impossible to envision a world where static types do not exist, and they
think in terms that restrict their concept of "type" to that which fits
the static typing religion.

to break out of the static typing faith, you have to realize that there
is nothing conceptually different between an object of type t that holds
a value you either know or don't know how to deal with, and an object of
a very narrow type that holds a value you either know or don't know how
to deal with. the issue is really programming pragmatics. static typing
buys you exactly nothing over dynamic typing when push comes to shove.
yes, it does buy you something _superficially_, but look beneath it, and
you find that _nothing_ has actually been gained. the bugs you found are
fixed, and the ones you didn't find aren't fixed. the mistakes you made
that escaped the testing may differ very slightly in expression, but they
are still there. the mistakes you did find may also differ slightly in
expression, but they are still gone. what did you gain by believing in
static typing? pain and suffering and a grumpy compiler. what did you
gain by rejecting this belief and understanding that neither humans nor
the real world fits the static typing model? freedom of expression! of
course, it comes with a responsibility, but so did the static typing,
only the dynamic typing responsibility is not to abuse freedom, while the
static typing responsibility is not to abuse the power of restriction.

personally, I think this is _actually_ a personality issue. either you
want to impose control on your environment and believe in static typing,
or you want to understand your environment and embrace whatever it is.

#:Erik

Marc Battyani

unread,
Mar 14, 2000, 3:00:00 AM3/14/00
to
It's not a specific response to Erik's post, just some info on the subject.

For the static typing believers (of which I'm not one), here is a new
language that is supposed to be a "Typed Lisp", "even faster than C". "As
a summary: Pliant is typed-Lisp + C++ + reflective-C in one language"

They got rid of the GC, which is rather unusual for a Lisp...

You can look at the rest here: http://pliant.cams.ehess.fr/pliant/

Marc Battyani

Ray Blaak

unread,
Mar 14, 2000, 3:00:00 AM3/14/00
to
Erik Naggum <er...@naggum.no> writes:
> * Ray Blaak <bl...@infomatch.com>

> | Static typing allows units
> | to fit together without the piles of dumb stupid interface errors,
> | allowing you to concentrate on the smart stupid logic errors :-).
>
> when the compiler stores away the type information, this may be true.
> when the programmers have to keep them in sync manually, it is false.

How so? If there is an error, the programmer has to manually fix things
anyway, regardless of whether or not it was automatically discovered, or
whether or not the compiler stores type information. Maybe I misunderstood
what you mean by "keep them in sync".

> static typing is like a religion [...]

The point is that having tool support to detect the kinds of errors that
static typing can discover is helpful. In my experience these errors are very
common, easy to detect, and easy to fix. It is only a useful tool, no
fanatical zeal required.

I freely admit, however, that with Lisp systems at least, the consequences of
no static typing are not as important as with other languages, mainly due to
the built-in abilities of the language. One just doesn't corrupt things as
easily.

> the issue is really programming pragmatics. static typing
> buys you exactly nothing over dynamic typing when push comes to shove.
> yes, it does buy you something _superficially_, but look beneath it, and
> you find that _nothing_ has actually been gained. the bugs you found are
> fixed, and the ones you didn't find aren't fixed. the mistakes you made
> that escaped the testing may differ very slightly in expression, but they
> are still there. the mistakes you did find may also differ slightly in
> expression, but they are still gone.

I disagree with the "slightly". Pragmatically speaking, I have found the
number of errors prevented to be significant. One still has real bugs of
course; it's just the silly ones I don't have to worry about.

> what did you gain by believing in static typing? pain and suffering and a
> grumpy compiler. what did you gain by rejecting this belief and
> understanding that neither humans nor the real world fits the static
> typing model? freedom of expression!

I encounter this attitude of "fighting the compiler" often, and I don't
understand it. A programmer describes an abstraction to a computer. The
computer then adheres to it, and points out violations that do not conform to
it. If one finds oneself bumping against a restriction, that means either
that the abstraction is not general enough or that there is an error of
usage that should be fixed. The programmer has complete freedom to change
things. One
does not fight the compiler per se; rather, one discovers the correctness of
their design.

It's not about "believing" in static typing. It's about being able to describe
an abstraction to an appropriate level of detail. Any tool support that can
aid in verifying it is only a win.

> of course, it comes with a responsibility, but so did the static typing,
> only the dynamic typing responsibility is not to abuse freedom, while the
> static typing responsibility is not to abuse the power of restriction.

I don't agree with this dichotomy. To me there is only freedom. Dynamic vs
static typing is actually a false distinction. They are really different
points along a continuum of "typing". One makes abstractions as general or as
constrained as necessary, balancing flexibility, robustness and correctness.

The responsibility is simply to do things "right" :-).

> personally, I think this is _actually_ a personality issue. either you
> want to impose control on your environment and believe in static typing,
> or you want to understand your environment and embrace whatever it is.

I think that one can and should do both. The art is in achieving the
appropriate balance.

Christopher Browne

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Centuries ago, Nostradamus foresaw a time when Ray Blaak would say:

>cbbr...@knuth.brownes.org (Christopher Browne) writes:
>> Static typing may provide *some* of the testing implicitly; the real
>> point is that type errors are only a subset of the total list of possible
>> errors that may be made in writing a program.
>
>The errors it tends to find are the common ones that are completely avoidable
>(admittedly much more prevalent in the C family of languages than in
>Lisp).
>
>> It's not that "static typing is downright bad, and never finds errors
>> for you."
>>
>> It is rather the case that if you construct the "unit tests" *that should
>> be built* to examine boundary cases and the likes, this will indirectly
>> provide much of the testing of types that you suggest need to be implicit
>> in the type system.
>
>Yes and no. Unit testing is fundamentally important even with static typing. A
>good test would indeed have boundary cases to catch the typing errors. But now
>you have to do the tedious effort of writing those cases. It is better if those
>cases can't even be expressed in the first place. For example, how many kinds
>of invalid values can you pass to a function? A string, an atom, a list, a
>vector, a float, an integer,... If your function was restricted to integers,
>say, then you could simply concentrate on the invalid integer values.

This assumes that the function is necessarily supposed to be
restricted to integers.

The "Lisp assumption" is that this is *not* the case.

Alternatively, if this *should* be the case, then it may make sense to
define the function as a CLOS method, and attach typing information to
the method, at which point you get similar guarantees to those
provided by static type checking.

But at any rate, by the time you've thrown boundary case testing at
the code, it is relatively unimportant what kinds of values are
invalid.

If each function has been tested to verify that it accepts reasonable
input, and produces reasonable output, I'm not terribly worried about
how it copes with unreasonable input/output. After all, *all* of the
functions are producing reasonable output, right?

>> If there are good unit tests, this diminishes the importance of static
>> type checking. Not to zero, but possibly to the point of not being
>> dramatically useful.
>
>Static typing is dramatically useful in integration tests. A unit
>test will not exhaustively exercise other units. Static typing allows
>units to fit together without the piles of dumb stupid interface
>errors, allowing you to concentrate on the smart stupid logic errors
> :-).

I have yet to write CLOS code, but it looks to me as if CLOS (or
"tiny-CLOS" schemes), combined with packaging, is how Lisp-like
systems can attack the "interface control" problem...

The point here is that CLOS does "type-based" dispatching which
grapples with at least part of the "dumb stupid" interface problems.
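
Something like this sketch shows the flavour (DRAW-TO and the POINT
type are invented here purely for illustration):

(defstruct point x y)

(defgeneric draw-to (thing))

(defmethod draw-to ((p point))
  (format t "~&drawing to (~A, ~A)" (point-x p) (point-y p)))

;; (draw-to (make-point :x 1 :y 2)) ; finds the method above
;; (draw-to 3/4)                    ; signals NO-APPLICABLE-METHOD

A "dumb stupid" call with the wrong kind of object fails loudly at the
interface instead of silently marching on.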

And the other point that you appear quite obstinate in refusing to
recognize is that as soon as one pushes even *faintly* realistic test
data through a set of functions, this *quickly* shakes out much of
those "piles of dumb stupid interface errors."
--
Q: If toast always lands butter-side down, and cats always land on
their feet, what happens if you strap toast on the back of a cat and
drop it?
A: it spins, suspended horizontally, a few inches from the ground.
cbbr...@ntlug.org- <http://www.ntlug.org/~cbbrowne/lsf.html>

Gareth McCaughan

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Ray Blaak wrote:

> The point is that having tool support to detect the kinds of errors that
> static typing can discover is helpful. In my experience these errors are very
> common, easy to detect, and easy to fix. It is only a useful tool, no
> fanatical zeal required.

Hmm. My experience is that most of the type errors I make when
programming in C or C++[1] are ones that don't really have
counterparts in Lisp. I mistype a declaration, or accidentally
refer to |foo| where I mean |*foo| or vice versa, or use a signed
integral type when I want an unsigned one; in Lisp I don't write
declarations until later, when I'm thinking hard about types of
objects, and confusing an object with its address is an issue
that just doesn't arise, and integers are *real* integers :-).

What sort of type errors do you make that get caught in languages
with static typing?

> I encounter this attitude of "fighting the compiler" often, and I
> don't understand it. A programmer describes an abstraction to a
> computer. The computer then adheres to it, and points out violations
> that do not conform to it. If one finds that they are bumping
> against a restriction, that means either that the abstraction is not
> general enough or that there is an error of usage that should be
> fixed. The programmer has complete freedom to change things. One
> does not fight the compiler per se; rather, one discovers the
> correctness of their design.

If all languages were elegantly and consistently designed,
that might be true. But when there are gratuitous restrictions
or inconsistencies or complications in the language design,
it seems perfectly reasonable to me to speak of "fighting
the compiler" (or, maybe, "fighting the language").

Having to write

#define FOO(x) do { ... } while (0)

in C is "fighting the compiler". So is having to make up names
for intermediate objects if you want to set up (at compile
time) a nested structure where the nesting is done via pointers.
So is having to allocate and free temporary objects explicitly
on account of there being no GC.
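
For contrast, a Lisp macro body is just forms, so the do/while trick
above has no Lisp counterpart; a sketch, with FROB and TWIDDLE as
made-up helpers:

(defmacro foo (x)
  `(progn          ; PROGN groups the forms; no statement trickery needed
     (frob ,x)
     (twiddle ,x)))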

All these things are just consequences of the language being
how it is, yes. That doesn't mean that the restrictions and
annoyances aren't real. The programmer *doesn't* have complete
freedom to change things; s/he can't change the language.
I don't think that "it's unpleasant to express in C" indicates
a flaw in the design of a program. It might indicate a flaw
in the design of C.

I'm not saying that a design should take no notice of the
language it's likely to get implemented in, of course. I'm
saying that often the best design still lumbers you with some
annoyances in implementation that could have been alleviated
if the language had been slightly different, and that these
count as "fighting the compiler".


[1] I haven't done anything non-trivial in languages like ML and
Haskell that are statically typed but have "better" type
systems than C's.

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Ray Blaak

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
cbbr...@news.hex.net (Christopher Browne) writes:
> Centuries ago, Nostradamus foresaw a time when Ray Blaak would say:
> If each function has been tested to verify that it accepts reasonable
> input, and produces reasonable output, I'm not terribly worried about
> how it copes with unreasonable input/output. After all, *all* of the
> functions are producing reasonable output, right?

Testing with bad inputs is a fundamental part of testing. This is definitely a
religious belief on my part. If you don't agree, we'll simply leave it at
that.

> And the other point that you appear quite obstinate in refusing to
> recognize is that as soon as one pushes even *faintly* realistic test
> data through a set of functions, this *quickly* shakes out much of
> those "piles of dumb stupid interface errors."

Actually I agree with this. My point was to be able to avoid doing the work of
writing certain kinds of (tedious) tests in the first place. That is, let the
computer effectively do it for me.

Also, consider that unit tests often use "stub" versions of other units, since
they might not be available yet, be mutually dependent, etc. In that case
exercising your unit can test it adequately, but you can still miss bad calls
to the other units due to differences in behaviour between the stubbed
versions and the real versions. Some sort of interface specification ability
(static typing, CLOS style descriptions, whatever) can help reduce these
problems.
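
Even without static types a stub can assert its interface; a sketch,
where FETCH-RECORD is a made-up unit being stubbed for testing:

(defun fetch-record (id)
  (check-type id integer)      ; assert the interface even in the stub
  (list :id id :name "stub"))  ; canned data; the real behaviour is faked

That way a bad call from the unit under test fails during unit testing
instead of surviving until integration.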

Ray Blaak

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Gareth McCaughan <Gareth.M...@pobox.com> writes:
> What sort of type errors do you make that get caught in languages
> with static typing?

Things like using an object in a context where it shouldn't, just because it
has the same representation. E.g. A record with two integer fields can be
considered a point or a rational number. If I was using a rational number in a
context where a point was expected, it is most likely an error. A reasonable
statically typed language would prevent such accidental (or lazy) uses. If I
really intended such behaviour, I would be forced to be explicit about it, and
provide conversion routines. E.g.

(let ((p (make-point 1 2))     ; Assume representation as a list of length 2
      (r (make-rational 3 4))) ; Ditto
  (draw-to p)
  (draw-to r)                  ; Shouldn't be allowed.
  (draw-to (point-of r)))      ; Ok, now I am being explicit

This example is only to illustrate type mismatches due to identical
representations. Forget for the moment that real Lisp structs and rational
numeric types exist.

Another common situation is getting parameter values right. E.g.

(defun make-person (name age) ...)

(let ((p1 (make-person "Joe" 5))    ; Ok.
      (p2 (make-person 15 "John"))) ; Not ok.
  ...)

Note that I don't consider C to be any sort of reasonable statically typed
language. C++ can be all right if one disciplines oneself to stick to
classes as much as possible and avoid the macros and low-level features. The
primary example of a "traditional" statically typed language is Ada, with its
rich type specification ability. [Most Ada programmers are serious static
typing freaks. If people think I am a zealot about this, they haven't met the
real ones yet. I am distinctly mellow by comparison.]

Haskell and ML I would like to learn more about since they seem fundamentally
more expressive.

> Ray Blaak wrote:
> > I encounter this attitude of "fighting the compiler" often, and I
> > don't understand it. [...] One does not fight the compiler per se; rather,
> > one discovers the correctness of their design.
>
> If all languages were elegantly and consistently designed,
> that might be true. But when there are gratuitous restrictions
> or inconsistencies or complications in the language design,
> it seems perfectly reasonable to me to speak of "fighting
> the compiler" (or, maybe, "fighting the language").

[...]


> I'm saying that often the best design still lumbers you with some annoyances
> in implementation that could have been alleviated if the language had been
> slightly different, and that these count as "fighting the compiler".

Fair enough. I was referring to bumping against restrictions of the
programmer's abstractions, not language features. Using a castrated language
is always frustrating.

Pierre R. Mai

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Ray Blaak <bl...@infomatch.com> writes:

> Gareth McCaughan <Gareth.M...@pobox.com> writes:
> > What sort of type errors do you make that get caught in languages
> > with static typing?
>
> Things like using an object in a context where it shouldn't, just
> because it has the same representation. E.g. A record with two
> integer fields can be considered a point or a rational number. If
> I was using a rational number in a context where a point was
> expected, it is most likely an error. A reasonable statically
> typed language would prevent such accidental (or lazy) uses. If I
> really intended such behaviour, I would be forced to be explicit
> about it, and provide conversion routines. E.g.

This issue is totally independent of the distinction between static
vs. dynamic typing. It's a case of strong vs. weak/no typing. Sadly
the two almost always get mixed up, which further diminishes the value
of "discussions" about dynamic vs. static typing.

> This example is only to illustrate type mismatches due to identical
> representations. Forget for the moment that real Lisp structs and rational
> numeric types exist.

If you forget for the moment all higher level type constructors found
in your favourite statically typed language, you'll get the same
results, with the one distinction that the supposed type-mismatch
which previously wasn't found at run-time, will now _not_ be found at
compile-time, i.e. there is no difference. This tells us that the
issue is about something different from dynamic vs. static typing.
See above.

> Another common situation is getting parameter values right. E.g.
>
> (defun make-person (name age) ...)
>
> (let ((p1 (make-person "Joe" 5))    ; Ok.
>       (p2 (make-person 15 "John"))) ; Not ok.
>   ...)

Now here the only distinction is between dynamic vs. static typing.

OTOH since a meaningful unit test will have to cover all executable
branches of code anyway, this kind of error will always be caught with
no additional testing effort, as long as the types of the arguments
and parameters that are mismatched are indeed different. Given the
prevalence of string and integer types, this will often not be the
case, in which case both static and dynamic typing will break down.
This is one of the reasons why I think that environmental support
(automagic display of lambda lists, online HyperSpec) is the better
way to tackle issues of this sort.

Things will get a little more complicated to test when the type of a
parameter depends on dynamic properties of the program:

(make-person (blabla <something dynamic>) 5)

OTOH in statically-typed languages the code will look like this:

(make-person (union-string (blabla ...)) 5)

I.e. blabla now has a declared union type, and you need to explicitly
invoke the union accessor that gets you the string, for this to
type-check correctly. BUT you'll still need to test, since the
accessor will fail (either loudly (=> strongly-typed), or silently (=>
C unions)) at run-time, if the return value isn't the string variant
expected. You simply can't do better than run-time checking here.

Some B&D languages will make you put in a case construct, but since
you'll only raise an error in the else case anyway, this doesn't get
you more safety, but gives you more headaches. B&D.
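
In CL terms the mandated case construct boils down to something like
this sketch (BLABLA and MAKE-PERSON as in the example above):

(let ((x (blabla)))
  (typecase x
    (string (make-person x 5))
    (t (error "expected a string, got ~S" x))))

The else branch can only signal an error, which ETYPECASE would have
done for free anyway.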

I believe that anyone who doesn't exercise each part[1] of a unit at
least once during testing deserves any bugs he gets. I also believe
that static-typing would be more useful if computer programs consisted
of total, non-discrete functions. Since they don't, it isn't. ;-)

Regs, Pierre.


Footnotes:
[1] Note that this is different from (and easier than) exercising
each possible path through a program/unit/function. The one
grows linearly, the other grows exponentially.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Gareth McCaughan

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Ray Blaak wrote:

[I asked him:]


>> What sort of type errors do you make that get caught in languages
>> with static typing?
>
> Things like using an object in a context where it shouldn't, just because it
> has the same representation. E.g. A record with two integer fields can be
> considered a point or a rational number.

Hmm. Using conses and lists for everything is sort of the Lisp
equivalent of using |void *| for everything in C and casting
every pointer every time you use it. If you do that, then you
*deserve* to lose. :-)

>>> This example is only to illustrate type mismatches due to identical
> representations. Forget for the moment that real Lisp structs and rational
> numeric types exist.

But they do, and they do away with this kind of problem pretty
completely, no? I mean, do you often define several kinds of
object with the same representation? (If so, why?)

> Another common situation is getting parameter values right. E.g.
>
> (defun make-person (name age) ...)
>
> (let ((p1 (make-person "Joe" 5)) ; Ok.
> (p2 (make-person 15 "John"))) ; Not ok.
> ...)

(defun make-person (&key name age) ...)

(let ((p1 (make-person :name "Joe" :age 5))
      (p2 (make-person :age 15 :name "John")))
  ...)

I suppose the point of all this is that static typing does
buy you something, but that in a well-designed dynamic language
the amount it buys you is very little unless you choose to
shoot yourself in the foot.

> Note that I don't consider C to be any sort of reasonable statically typed
> language.

I'm very pleased to hear it! It's amusing just how much more
weakly typed C is than CL. (Which is one reason why I don't
like to hear people say "strong typing" when they mean "static
typing". C is weakly typed and statically typed; CL is strongly
typed and dynamically typed. I am not, at all, suggesting that
you don't understand this.)

> C++ can be all right if one disciplines oneself to stick to
> classes as much as possible and avoid the macros and low-level features. The
> primary example of a "traditional" statically typed language is Ada, with its
> rich type specification ability. [Most Ada programmers are serious static
> typing freaks. If people think I am a zealot about this, they haven't met the
> real ones yet. I am distinctly mellow by comparison.]

I don't think you're a zealot about it, for what it's worth.

> Haskell and ML I would like to learn more about since they seem
> fundamentally more expressive.

I'm not sure I'd go that far, but they do combine strict static
typing with a good enough type inference system that you don't
have to decorate everything with types as you do in C. That's
certainly a good thing.

>> I'm saying that often the best design still lumbers you with some annoyances
>> in implementation that could have been alleviated if the language had been
>> slightly different, and that these count as "fighting the compiler".
>
> Fair enough. I was referring to bumping against restrictions of the
> programmer's abstractions, not language features. Using a castrated language
> is always frustrating.

Almost all languages are castrated to some degree, I think.
I've never yet used one where I don't sometimes think "aargh,
if only I could say X". Common Lisp can at least be extended
when that happens (though it isn't always wise).

Ray Blaak

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
Gareth McCaughan <Gareth.M...@pobox.com> writes:

> Ray Blaak wrote:
> > Things like using an object in a context where it shouldn't, just because
> > it has the same representation.
[...]

> > This example is only to illustrate type mismatches due to identical
> > representations. Forget for the moment that real Lisp structs and rational
> > numeric types exist.
>
> But they do, and they do away with this kind of problem pretty
> completely, no? I mean, do you often define several kinds of
> object with the same representation? (If so, why?)

If typing information is automatically part of the representation itself
(i.e. some sort of type tag), then the answer is no, I don't have different
objects with the same representation. Otherwise, yes, it's definitely possible.

Let me flip to Ada for a second:

type Seconds is range 0 .. 59;
type Minutes is range 0 .. 59;

These are both integer types, both with exactly the same representation, and
yet because they are different types, I cannot accidentally use minutes where
seconds is required, and vice versa. E.g.

declare
   m : Minutes;
   s : Seconds;
begin
   s := 10;
   m := s;          -- won't compile
   m := Minutes(s); -- ok, explicit conversion
end;
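
A rough CL analogue (only a sketch, and checked at run time rather than
at compile time) is to wrap the raw integer in one-field structs, so
that the two uses become distinct types:

(defstruct (minutes (:constructor minutes (value))) value)
(defstruct (seconds (:constructor seconds (value))) value)

(defun minutes-from-seconds (s)
  (check-type s seconds)       ; only the explicit conversion gets through
  (minutes (seconds-value s)))

;; (minutes-from-seconds (seconds 10)) ; ok
;; (minutes-from-seconds (minutes 10)) ; signals a TYPE-ERROR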

To me the representation of an object is in general a private thing to the
object, and a user of the object should only think in terms of the allowable
operations on the object.

> (let ((p1 (make-person :name "Joe" :age 5))
>       (p2 (make-person :age 15 :name "John")))
>   ...)

This is also an excellent feature that I like to have in languages. It prevents
errors, makes things clearer to the reader, etc. In Ada, for example, I could
say:

p2 := Make_Person (Age => 15, Name => "John");

> > Haskell and ML I would like to learn more about since they seem
> > fundamentally more expressive.
>
> I'm not sure I'd go that far [...]

Well, more expressive in terms of specifying types is what I meant, at least
compared to Ada and C++. How do they compare to CL in that regard?

Ray Blaak

unread,
Mar 15, 2000, 3:00:00 AM3/15/00
to
pm...@acm.org (Pierre R. Mai) writes:

> Ray Blaak <bl...@infomatch.com> writes:
> > Things like using an object in a context where it shouldn't, just
> > because it has the same representation.

> This issue is totally independent of the distinction between static
> vs. dynamic typing. It's a case of strong vs. weak/no typing. Sadly
> the two almost always get mixed up, which further diminishes the value
> of "discussions" about dynamic vs. static typing.

Well, I do indeed mean strong static typing vs strong dynamic typing.

Hmmm.

I think the issue here is that this example is not really appropriate to Lisp, where
objects of different types have different representations by definition. If I
used real Lisp types, my example becomes good old fashioned parameter type
mismatch:

(let ((p (make-point 1 2)) ; a struct
      (r 3/4))
  (draw-to p)  ; ok
  (draw-to r)) ; error

See my reply to Gareth for a realistic example in Ada of different types with
identical representations.

> > Another common situation is getting parameter values right. E.g.
>
> Now here the only distinction is between dynamic vs. static typing.
>
> OTOH since a meaningful unit test will have to cover all executable
> branches of code anyway, this kind of error will always be caught with
> no additional testing effort

Not if the operation being invoked is in another unit stubbed for testing
purposes, such that the stubbed version doesn't care about the parameter
type. Unit tests will cover the paths through the unit but can miss this case
due to the stub. Integration tests will bring the real units together, but
don't necessarily cover all paths. Some sort of static specification ability
will catch the bad call.

Now it all depends what you are doing, and in what language. For a smallish
system the testing of all representative paths with actual units is feasible
and will indeed catch the error. Large systems, on the other hand, need every
bit of automated error detection they can reasonably get.

> Things will get a little more complicated to test when the type of a
> parameter depends on dynamic properties of the program [...] in
> statically-typed languages [...] you'll still need to test

Indeed you will. Statically typed doesn't mean completely static. There is
almost always some dynamic aspect possible, and run-time checking cannot be
completely avoided, even in Ada. However, usually (especially with
object-oriented programming) you might not know the exact type, but can assume
some minimal abilities. E.g. in Java:

class Root
{
    public void Doit ()
    {...}
}

class Descendant extends Root
{...}

class Client
{
    void DoADescendant(Descendant d)
    {...}

    void DoARoot(Root obj)
    {
        obj.Doit();                      // I don't know the exact type, but I know I can do this.
        DoADescendant((Descendant) obj); // run-time check: ClassCastException if obj isn't one
    }
}

In DoARoot, I know I cannot be passed anything but an object descended from
Root. Thus I don't have to test cases such as trying with a string, an
integer, a whatzit. Clients cannot accidentally call DoARoot with such values.

Static typing is only an additional tool in a programmer's bag of useful
tricks. Maybe I should really be saying "machine verifiable specification
abilities".

Russell Wallace

unread,
Mar 16, 2000, 3:00:00 AM3/16/00
to
Michael Hudson wrote:
> Ah, but have you programmed in a language that has a *proper* type
> system, like Haskell or ML? For *certain classes of problem* I find
> the thinking I have to do to work out the types of things leads to
> insights into the problem I probably wouldn't otherwise have had.

Nope. I've looked at the docs and sample code for them and figured
they're not my style, though I understand for some people they work very
well.

Gareth McCaughan

unread,
Mar 16, 2000, 3:00:00 AM3/16/00
to
Ray Blaak wrote:

>>> This example is only to illustrate type mismatches due to identical
>>> representations. Forget for the moment that real Lisp structs and rational
>>> numeric types exist.
>>
>> But they do, and they do away with this kind of problem pretty
>> completely, no? I mean, do you often define several kinds of
>> object with the same representation? (If so, why?)
>
> If typing information is automatically part of the representation itself
> (i.e. some sort of type tag), then the answer is no, I don't have different
> objects with same representation. Otherwise, yes, its definitely possible.
>
> Let me flip to Ada for a second:
>
> type Seconds is range 0 .. 59;
> type Minutes is range 0 .. 59;
>
> These are both integer types, both with exactly the same representation, and
> yet because they are different types, I cannot accidentally use minutes where
> seconds is required, and vice versa. E.g.

OK, that's a good example. I like it.

A question, though: does the decrease in testing time that
comes from declaring all your counts of minutes and seconds
explicitly make up for the increase in coding time that
comes from, er, declaring all your counts of minutes and
seconds explicitly?

Actually, this particular example isn't so great because
the Right Answer is almost certainly not to have separate
types for minutes and seconds, but to have objects for
holding times, or time intervals, of which minutes and
seconds would just be special cases. But there are other
parallel examples where similar options aren't available.
(Distances and angles, maybe.)

These examples make me wonder whether there's anything to
be said for giving a programming language a notion of
*units*, so that (+ (minutes 5) (seconds 5)) is (seconds 305)
and (+ (metres 10) (square-metres 10)) is a type error.
It's probably just much, much too painful. :-)

(There are systems that let you do things like this.
For instance, the "Mathcad" package does.)
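
A toy sketch of how far plain CL will stretch (all names invented, and
only addition handled): store a value plus a unit symbol, normalise
minutes to seconds on construction, and refuse to add mismatched units.

(defstruct (quantity (:constructor %quantity (value unit))) value unit)

(defun seconds (n) (%quantity n 'seconds))
(defun minutes (n) (%quantity (* 60 n) 'seconds)) ; normalised to seconds
(defun metres (n) (%quantity n 'metres))
(defun square-metres (n) (%quantity n 'square-metres))

(defun q+ (a b)
  (unless (eq (quantity-unit a) (quantity-unit b))
    (error "unit mismatch: ~A + ~A" (quantity-unit a) (quantity-unit b)))
  (%quantity (+ (quantity-value a) (quantity-value b)) (quantity-unit a)))

;; (q+ (minutes 5) (seconds 5))        ; => 305 seconds
;; (q+ (metres 10) (square-metres 10)) ; => an error, as desired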

> To me the representation of an object is in general a private thing to the
> object, and a user of the object should only think in terms of the allowable
> operations on the object.

I agree. But I wonder how far you really want to take it.
Distinguish between a positive integer that describes the
length of a list and another that describes the length
of an array, perhaps? Or between indices into two different
arrays? Should two cons cells be of different types, if
the CAR of one is a symbol representing a kind of fruit
and the CAR of the other is a symbol representing a kind
of bread? That way, I think, lies madness...

>> (let ((p1 (make-person :name "Joe" :age 5))
>> (p2 (make-person :age 15 :name "John")))
>> ...)
>
> This is also an excellent feature that I like to have in
> languages. It prevents errors, makes things clearer to the reader,
> etc. In Ada, for example, I could say:
>
> p2 := Make_Person (Age => 15, Name => "John");

One of the things I really hate about C and C++ is that they
have no keyword arguments.

>
>>> Haskell and ML I would like to learn more about since they seem
>>> fundamentally more expressive.
>>
>> I'm not sure I'd go that far [...]
>
> Well, more expressive in terms of specifying types is what I meant, at least
> compared to Ada and C++. How do they compare to CL in that regard?

Well, obviously, *nothing* is more expressive than CL in
any regard whatever. :-)

In ML there are types like "list of integers" or "function
that, given an object of any type, returns a list of objects
of that type". CL's type system doesn't have those. On
the other hand, I don't think ML has the "list of objects
of any types" type.
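
The first of those can be approximated in CL with SATISFIES, though the
compiler won't reason about it the way ML's does; a sketch:

(defun list-of-integers-p (x)
  (and (listp x) (every #'integerp x)))

(deftype list-of-integers () '(satisfies list-of-integers-p))

;; (typep '(1 2 3) 'list-of-integers) ; => T
;; (typep '(1 :a) 'list-of-integers)  ; => NIL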

Ray Blaak

unread,
Mar 16, 2000, 3:00:00 AM3/16/00
to
Gareth McCaughan <Gareth.M...@pobox.com> writes:
> OK, that's a good example. I like it.
>
> A question, though: does the decrease in testing time that
> comes from declaring all your counts of minutes and seconds
> explicitly make up for the increase in coding time that
> comes from, er, declaring all your counts of minutes and
> seconds explicitly?

There is the up-front work of declaring the types and their allowable
operations. Once that is done, actually using the types is not really much more
work than using the default types.

The payoff is not so much a decrease in testing time. It is a substantial
decrease in development time in general. The Ada development cycle tends to be
compile->link->run with the occasional foray into the debugger. C++, by
comparison, is more like compile->link->whoops, unresolveds!->link->debug->
debug->debug->...->eventually run for real.

On the other hand, Ada is fundamentally a compiled language, with a mandatory
analysis phase. An interpreted version of Ada would be tedious to use indeed.

Ray Blaak wrote:
> > To me the representation of an object is in general a private thing to the
> > object, and a user of the object should only think in terms of the
> > allowable operations on the object.
>
> I agree. But I wonder how far you really want to take it.

In the Ada mindset one tends to take things pretty far. The guiding principle
is that abstractions that are not supposed to interfere with each other will
not, at least not implicitly. For abstractions that are supposed to deal with
each other, one constructively specifies what one is allowed to express.

But this is not an Ada newsgroup, and I think I have said enough on the matter.
