
closures


Vladimir Zolotykh

Jan 30, 2002, 9:22:48 AM
Consider the following in Common Lisp

(let ((y 7))
  (defun foo (x)
    (declare (special y)) ; [1]
    (list x y))
  )

Is it true to say that (foo N) always returns (N 7), ignoring
any value of y at the time of the call?

Also that declaration [1] (or any other possible declaration)
doesn't change that behavior, does it?

--
Vladimir Zolotykh gsm...@eurocom.od.ua

Nils Goesche

Jan 30, 2002, 9:56:55 AM
In article <3C5801B8...@eurocom.od.ua>, Vladimir Zolotykh wrote:
> Consider the following in Common Lisp
>
> (let ((y 7))
> (defun foo (x)
> (declare (special y)) ; [1]
> (list x y))
> )
>
> Is is true to say that (foo N) always returns (N 7) ignoring
> any value of y at time of the call ?
>
> Also that declaration [1] (or any other possible declaration)
> doesn't change that behavior, does it?

The HyperSpec says:

# A special declaration does not affect inner bindings of a var; the
# inner bindings implicitly shadow a special declaration and must be
# explicitly re-declared to be special.

If you leave out [1], (foo N) will always return (N 7), at least
in Common Lisp. But if you put in [1], you make sure you get
the dynamic binding of y at the time of the call, and an error
if no such binding exists. At least that's how I understand it,
and LispWorks, too, it seems :-)

CL-USER 17 > (let ((y 42)) (declare (special y)) (foo 3))
(3 42)

Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x42B32FC9

Vladimir Zolotykh

Jan 30, 2002, 10:09:47 AM
Nils Goesche wrote:
>
> > (let ((y 7))
> > (defun foo (x)
> > (declare (special y)) ; [1]
> > (list x y))
> > )
My mistake here, should be

(let ((y y))
  (declare (special y))
  (defun foo (x)
    (list x y))
  )

In that case everything works as you described.

--
Vladimir Zolotykh gsm...@eurocom.od.ua

Nils Goesche

Jan 30, 2002, 10:15:08 AM
In article <3C580CBB...@eurocom.od.ua>, Vladimir Zolotykh wrote:
> Nils Goesche wrote:
>>
>> > (let ((y 7))
>> > (defun foo (x)
>> > (declare (special y)) ; [1]
>> > (list x y))
>> > )
> My mistake here, should be
>
> (let ((y y))
^
Do you mean to take y here as initializer?

> (declare (special y))
> (defun foo (x)
> (list x y))
> )
>
> In that case all works as you described.

Regards,

Vladimir Zolotykh

Jan 30, 2002, 10:18:30 AM
Nils Goesche wrote:
>
> > (let ((y y))
> ^
> Do you mean to take y here as initializer?

Just a typo; it should be (let ((y 7)), of course.

--
Vladimir Zolotykh gsm...@eurocom.od.ua

Nils Goesche

Jan 30, 2002, 10:28:24 AM
In article <3C580EC6...@eurocom.od.ua>, Vladimir Zolotykh wrote:
> Nils Goesche wrote:
>>
>> > (let ((y y))
>> ^
>> Do you mean to take y here as initializer?
>
> Just typos, should be (let ((y 7)) of course.

Well, in that case, I'd say that

(let ((y 7))
  (declare (special y))
  (defun foo (x)
    (list x y)))

has exactly the same semantics as the version I commented on before,
with the special declaration inside `foo'; right?

Vladimir Zolotykh

Jan 30, 2002, 10:38:25 AM
Nils Goesche wrote:
>
> Well, in that case, I'd say that
>
> (let ((y 7))
> (declare (special y))
> (defun foo (x)
> (list x y)))
>
> has exactly the same semantics as the version I commented on before,
> with the special declaration inside `foo'; right?
Yes, the semantics are the same, but the 'syntax' was wrong in my original
version. CMUCL, for example, issued the note:

Note: Variable Y defined but never used.

--
Vladimir Zolotykh gsm...@eurocom.od.ua

Tim Bradshaw

Jan 30, 2002, 10:47:44 AM
* Nils Goesche wrote:

> has exactly the same semantics as the version I commented on before,
> with the special declaration inside `foo'; right?

Yes, I think so. In particular, if

(defvar *c*
  (let ((y 1))
    (declare (special y))
    #'(lambda () y)))

then

(let ((y 1)) (funcall *c*))

is an error

but

(let ((y 2)) (declare (special y)) (funcall *c*))

-> 2

--tim

Rene de Visser

Jan 30, 2002, 11:56:14 AM
I tried exactly what is below in two other lisps (Allegro and Corman) and
they had different behavior.

Answer: always 7.

(unless I did something drastically wrong).

I also tried it in LispWorks, which shows the other behavior.

What I have read so far in the CLHS does not seem to exclude
that the declaration defines y to be special inside the context of the let
(and not just within the defun),

i.e. that the effect of the declaration doesn't stop when it hits the defun.

On the contrary, I wonder what statement in the CLHS prevents the declare
from applying to the defining form, which in this case is the let statement.


"Nils Goesche" <car...@cartan.de> wrote in message
news:a391jn$16cmru$1...@ID-125440.news.dfncis.de...

Nils Goesche

Jan 30, 2002, 12:29:47 PM

Hm,

(let ((y 7))
  (declare (special y))
  (defun foo (x)
    (list x y)))

might well establish a dynamic binding for y while the defun
expression is evaluated; but that binding is removed when it
(the let expression) returns, I'd say. Now, if foo is called,
and there is no dynamic binding for y at the moment, you should
get an error because y in (list x y) is not in the lexical environment
of foo and thus automatically assumed special. In fact, your other
version

(let ((y 7))
  (defun foo (x)
    (declare (special y))
    (list x y)))

seems to be slightly different: Now, y seems to be 7 in the lexical
environment of foo, but, because of the declaration, the y in
(list x y) refers to the dynamic binding of y at the time of calling,
shadowing the lexical binding.

So, I think LispWorks is correct here. But I am curious what the
experts say :-)

Tim Bradshaw

Jan 30, 2002, 12:35:29 PM
* Rene de Visser wrote:
(talking about something like this:

(let ((y 1))
  (defun foo ()
    (declare (special y))
    y))

> What I have read so far in the CLHS does not seem to exclude
> that the declaration defines y to be special inside the context of the let
> (and not just within the defun).

No, it applies only to the DEFUN. From 3.3.4 the declaration is a
free declaration within DEFUN, which in turn is within the outer LET.
This matches the third para of 3.3.4. This declaration therefore
only applies to the body of the DEFUN form, which in this case is just
Y. Therefore, *that* Y is treated as a special variable, while the
outer binding by LET is lexical.

I think a system that returns 7 for this is in error, unless there is
a globally special Y, or some other visible special binding of Y at
the time FOO is called.
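
A quick way to check that at the REPL (just a sketch, assuming FOO is
defined exactly as above and Y has no global special declaration):

(foo)                                      ; should signal an unbound-variable error
(let ((y 2)) (foo))                        ; still an error: this Y is lexical
(let ((y 2)) (declare (special y)) (foo))  ; => 2, the dynamic Y visible at call time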

(unless I've misread something, I guess)

--tim

Tim Bradshaw

Jan 30, 2002, 12:43:50 PM
* Nils Goesche wrote:
> So, I think LispWorks is correct here. But I am curious what the
> experts say :-)

I think it is, too (I'm probably not an expert, but most of them seem
to have gone away for the time being). I think it's kind of easier to
see if you take the DEFUN away. One case is then equivalent to this
(omitting the BLOCK & so on).

(let ((y 1))
  (declare (special y)) ; bound declaration of Y
  (setf (symbol-function 'foo)
        #'(lambda (x)
            ;; the Y here is the same as the Y established by LET
            ;; above, and is a reference to a special binding
            (list x y))))

The other is

(let ((y 1)) ; lexical binding of Y
  (setf (symbol-function 'foo)
        #'(lambda (x)
            (declare (special y)) ; free declaration of Y.
            ;; The Y here is *not* the same as the Y established
            ;; above - it's a reference to a special binding
            (list x y))))


So although these look kind of different, I think the effect is
exactly the same - the reference to Y is a reference to a special
binding of it, which should exist at the time FOO is called.

--tim

Kalle Olavi Niemitalo

Jan 30, 2002, 12:44:36 PM
"Rene de Visser" <Rene_de...@hotmail.de> writes:

> In contrary I wonder what statement in the CLHS prevents the declare
> applying to the defining form which in
> this case is the let statement.

It is in 3.3.4 Declaration Scope.

# A declaration that appears at the head of a binding form and applies
# to a variable or function binding made by that form is called a bound
# declaration; such a declaration affects both the binding and any
# references within the scope of the declaration.

In the expression

(let ((y 7))
  (defun foo (x)
    (declare (special y))
    (list x y)))

the declaration applies to the name Y, but it is not in the head of
the LET form that binds the variable Y. Thus it is not a bound
declaration.

# Declarations that are not bound declarations are called free
# declarations.

The declaration of Y is a free declaration.

# A free declaration in a form F1 that applies to a binding for a name N
# established by some form F2 of which F1 is a subform affects only
# references to N within F1; it does not to apply to other references to
# N outside of F1, nor does it affect the manner in which the binding of
# N by F2 is established.

The free declaration of Y affects only the DEFUN form (F1) where it
appears; it does not affect the LET form (F2) that binds the variable.

See also the examples in 3.3.4.1.

Thomas F. Burdick

Jan 30, 2002, 2:00:15 PM
Nils Goesche <car...@cartan.de> writes:

> Hm,
>
> (let ((y 7))
> (declare (special y))
> (defun foo (x)
> (list x y)))
>
> might well establish a dynamic binding for y while the defun
> expression is evaluated; but that binding is removed when it
> (the let expression) returns, I'd say. Now, if foo is called,
> and there is no dynamic binding for y at the moment, you should
> get an error because y in (list x y) is not in the lexical environment
> of foo and thus automatically assumed special. In fact, your other
> version
>
> (let ((y 7))
> (defun foo (x)
> (declare (special y))
> (list x y)))
>
> seems to be slightly different:

It's very different. Both versions of FOO compute the same
result when given the same input, but the function of the LET is quite
different in the two cases.

> Now, y seems to be 7 in the lexical environment of foo, but, because
> of the declaration, the y in (list x y) refers to the dynamic
> binding of y at the time of calling, shadowing the lexical binding.

That's my interpretation, too. And that's the behavior of the
compiler I'm working on. Given:

(let ((y 7))
  (defun foo (x)
    (declare (special y))
    (list x y)))

(let ((y 7))
  (declare (special y))
  (defun bar (x)
    (list x y)))

FOO is defined in an environment with a lexical binding of Y to 7
(which it never uses), and BAR is defined in the null environment.
The difference can be seen by:

(let ((y 7))
  (defun baz (x)
    (list x                 ; lexical x
          y                 ; lexical y (closed-over)
          (locally (declare (special y))
            y))))           ; dynamic y

(let ((y 8))
  (declare (special y))
  (baz 6))
=> (6 7 8)

> So, I think LispWorks is correct here. But I am curious what the
> experts say :-)

Me, too. I think I understand what's supposed to be happening here,
but I'm kind of disturbed that ACL and Corman Lisp apparently do:

(let ((y 7))
  (defun qux (x)
    (declare (special y))
    (list x y)))

(qux 6) => (6 7)

I think I'll go re-read the appropriate sections of the spec today.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Pierre R. Mai

Jan 30, 2002, 3:24:24 PM
Nils Goesche <car...@cartan.de> writes:

> In article <3C580EC6...@eurocom.od.ua>, Vladimir Zolotykh wrote:
> > Nils Goesche wrote:
> >>
> >> > (let ((y y))
> >> ^
> >> Do you mean to take y here as initializer?
> >
> > Just typos, should be (let ((y 7)) of course.
>
> Well, in that case, I'd say that
>
> (let ((y 7))
> (declare (special y))
> (defun foo (x)
> (list x y)))
>
> has exactly the same semantics as the version I commented on before,
> with the special declaration inside `foo'; right?

Not very relevant to this particular case, but there is of course a
difference, in that in the latter case, the special declaration is a
bound declaration, which therefore affects the let binding
(i.e. results in the special y being bound), whereas in the former
case (replicated here for completeness):

> (let ((y 7))
> (defun foo (x)
> (declare (special y))

> (list x y)))

the special declaration is a free declaration, and hence doesn't
affect the let-bound variable...
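
One way to make the difference visible at the listener (just a sketch;
PEEK-Y is a made-up helper, and y is assumed not to be globally special):

(defun peek-y ()
  (declare (special y))
  y)                      ; reads whatever dynamic binding of y is current

(let ((y 7))
  (declare (special y))   ; bound declaration: the LET binds y dynamically
  (peek-y))               ; => 7

(let ((y 7))              ; no declaration: a purely lexical binding
  (peek-y))               ; signals an error, y has no dynamic binding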

Regs, Pierre.

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Thomas F. Burdick

Jan 30, 2002, 6:51:50 PM
t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> I think I'll go re-read the appropriate sections of the spec today.

Okay, I reread it, and, assuming the previous poster didn't type his
example in wrong, I think ACL and Corman Lisp are in error. If Franz
and Corman think this is a bug in their respective products, great
(for me :). If they think this is correct behavior, I'd be interested
in hearing their reasoning. The (mis)behavior is:

(let (var)
  #'(lambda (x)
      (declare (special var)) ; free declaration
      ...))

Inside the body of the LAMBDA, VAR should be a different variable than
the one introduced by the LET. Any references to VAR should be
lookups of the current dynamic binding, and should not close over the
VAR introduced by the LET.
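
A sketch of the behavior I'd expect (*PROBE* is a made-up name, and VAR is
assumed not to be globally special):

(defvar *probe*
  (let (var)                     ; lexical binding of VAR, initially NIL
    #'(lambda ()
        (declare (special var))  ; free declaration
        var)))

(funcall *probe*)                ; correct: an unbound-variable error; the
                                 ; misbehavior returns NIL, the closed-over VAR

(let ((var 42))
  (declare (special var))
  (funcall *probe*))             ; correct: 42, the current dynamic binding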

So, anyone at Franz or Corman care to comment?

Raffael Cavallaro

Jan 30, 2002, 11:38:51 PM
FWIW, this behaves correctly (i.e., as one would expect from the
Hyperspec) in MCL, and another poster indicated that it works properly
in LW as well.

Duane Rettig

Jan 31, 2002, 2:00:00 AM
t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > I think I'll go re-read the appropriate sections of the spec today.
>
> Okay, I reread it, and, assuming the previous poster didn't type his
> example in wrong, I think ACL and Corman Lisp are in error. If Franz
> and Corman think this is a bug in their respective products, great
> (for me :). If they think this is correct behavior, I'd be interested
> in hearing their reasoning. The (mis)behavior is:
>
> (let (var)
> #'(lambda (x)
> (declare (special var)) ; free declaration
> ...))
>
> Inside the body of the LAMBDA, VAR should be a different variable than
> the one introduced by the LET. Any references to VAR should be
> lookups of the current dynamic binding, and should not close over the
> VAR introduced by the LET.
>
> So, anyone at Franz or Corman care to comment?

Definitely a bug in Allegro CL's interpreter. Unfortunately, I have
to claim responsibility for inadvertently introducing the bug into
version 6.0 when I replaced the interpreter with an implementation
based on first class environments.

The compiler treats it correctly and always has.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Erann Gat

Jan 31, 2002, 3:16:47 AM
In article <4k7tz6...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:

> t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> >
> > > I think I'll go re-read the appropriate sections of the spec today.
> >
> > Okay, I reread it, and, assuming the previous poster didn't type his
> > example in wrong, I think ACL and Corman Lisp are in error. If Franz
> > and Corman think this is a bug in their respective products, great
> > (for me :). If they think this is correct behavior, I'd be interested
> > in hearing their reasoning. The (mis)behavior is:
> >
> > (let (var)
> > #'(lambda (x)
> > (declare (special var)) ; free declaration
> > ...))
> >
> > Inside the body of the LAMBDA, VAR should be a different variable than
> > the one introduced by the LET. Any references to VAR should be
> > lookups of the current dynamic binding, and should not close over the
> > VAR introduced by the LET.
> >
> > So, anyone at Franz or Corman care to comment?
>
> Definitely a bug in Allegro CL's interpreter.

Definitely a bug in the language design IMO. See:

http://groups.google.com/groups?q=g:thl1102610405d&hl=en&selm=gat-2902001653240001%40milo.jpl.nasa.gov

E.

Nils Goesche

Jan 31, 2002, 5:52:43 AM

Whether it's a bug in the design or not is debatable, at least, as
that very thread shows. Erik said all there is to say about it, IMHO.

Erann Gat

Jan 31, 2002, 12:35:40 PM
In article <a3b7lr$16u06k$1...@ID-125440.news.dfncis.de>, Nils Goesche
<car...@cartan.de> wrote:

> In article <gat-310102...@192.168.1.50>, Erann Gat wrote:
> > In article <4k7tz6...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:
> >
> >> Definitely a bug in Allegro CL's interpreter.
> >
> > Definitely a bug in the language design IMO. See:
> >
> > http://groups.google.com/groups?q=g:thl1102610405d&hl=en&selm=gat-2902001653240001%40milo.jpl.nasa.gov
>
> Whether it's a bug in the design or not is debatable, at least, as
> that very thread shows. Erik said all there is to say about it, IMHO.

Erik's view in a nutshell was:

[We should] focus our attention on stuff that actually affects
users of all categories much more than this trifling issue

The reason I raise this issue again now is that here we have evidence that
it *does* affect users of all categories, from beginners to implementors.
The fact that *two* implementors got this wrong, including the industry
leader, indicates that this really is a pervasive problem, and not just a
fluke.

E.

Thomas F. Burdick

Jan 31, 2002, 1:10:44 PM
Nils Goesche <car...@cartan.de> writes:

> In article <gat-310102...@192.168.1.50>, Erann Gat wrote:
> > In article <4k7tz6...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:
> >
> >> Definitely a bug in Allegro CL's interpreter.
> >
> > Definitely a bug in the language design IMO. See:
> >
> > http://groups.google.com/groups?q=g:thl1102610405d&hl=en&selm=gat-2902001653240001%40milo.jpl.nasa.gov
>
> Whether it's a bug in the design or not is debatable, at least, as
> that very thread shows. Erik said all there is to say about it, IMHO.

I agree. I think the big, wrong premise in that post is that dynamic
binding is confusing. I think the semantics of ANSI CL wrt specials
is actually rather intuitive. How on earth can I say that? Yesterday
I was talking to a couple of non-Lispers who learned Scheme a couple
years ago. I'd already explained to them what specials are, how you
make special variables, and the *name* convention. I asked what they
thought this form should do:

(let (var)
  #'(lambda () (declare (special var)) var))

and they both got it right: it closes over the environment in the let,
but within the body of the lambda, var refers to the dynamic value of
var. The first question they both asked was "why would you do that,
isn't that horrible style?". Well, yes. And that's the answer to
most specials confusion -- specials need to be explained to people
well. I think most people just kind of deal with them as they come
along, which is gonna be confusing. And now that Lispers know how
specials work, everyone will understand what's going on, unless
someone does something in horrible style[*], or trying to be confusing.
Languages can't protect against that.

I would like to be able to do something like:

(locally (declare (lexical *var*)) ...)

But oh well. It's not a big deal, really.

[*] The Ciel-80 compiler bitches quite nicely about this. Lexicals
named *like-specials* and specials without stars get you a message.
It's not even preachy about it, it just tells you:

In (LAMBDA (BORK) ...), SOME-VAR is special.

or:

In (LAMBDA (BORK) ...), *SOME-VAR* is not special.

If you already knew this, great, it's easy to ignore. If not, now you
know. It's really just an issue of managing complexity, which is a
good job for tools to help. In Squeak SmallTalk, if you name a local
variable like a global something (class, or whatever)

| SomeVar |
SomeVar := Bork new.

you'll get a nice little pop-up menu asking you what you thought
SomeVar was -- if you insist on being confusing, you can go ahead and
name it that way against convention. But you're not going to do it on
accident, which is the important thing. (and this is exactly where I
got the idea -- you can even set CL80-COMPILER:*STYLE-ERRORS* to T and
it'll raise continuable errors, even more like Squeak).

Thomas F. Burdick

Jan 31, 2002, 1:19:57 PM
Duane Rettig <du...@franz.com> writes:

> t...@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> >
> > > I think I'll go re-read the appropriate sections of the spec today.
> >
> > Okay, I reread it, and, assuming the previous poster didn't type his
> > example in wrong, I think ACL and Corman Lisp are in error. If Franz
> > and Corman think this is a bug in their respective products, great
> > (for me :). If they think this is correct behavior, I'd be interested
> > in hearing their reasoning. The (mis)behavior is:
> >
> > (let (var)
> > #'(lambda (x)
> > (declare (special var)) ; free declaration
> > ...))
> >
> > Inside the body of the LAMBDA, VAR should be a different variable than
> > the one introduced by the LET. Any references to VAR should be
> > lookups of the current dynamic binding, and should not close over the
> > VAR introduced by the LET.
> >
> > So, anyone at Franz or Corman care to comment?
>
> Definitely a bug in Allegro CL's interpreter. Unfortunately, I have
> to claim responsibility for inadvertently introducing the bug into
> version 6.0 when I replaced the interpreter with an implementation
> based on first class environments.
>
> The compiler treats it correctly and always has.

Oh good :). I wrote myself a custom inspector just so I could make
sure that all my various environment pointers point to the right
places, figuring that if I relied on *print-circle*, I'd get edge
cases with things pointing every which way, and not notice.

Nils Goesche

Jan 31, 2002, 1:42:18 PM

Not really. Somehow I doubt very much that the reason they got it wrong
is that Duane Rettig doesn't grok special variables... ;-)

Erann Gat

Jan 31, 2002, 2:18:34 PM
In article <a3c36a$17bso4$1...@ID-125440.news.dfncis.de>, Nils Goesche
<car...@cartan.de> wrote:

> In article <gat-310102...@192.168.1.50>, Erann Gat wrote:
> > In article <a3b7lr$16u06k$1...@ID-125440.news.dfncis.de>, Nils Goesche
> ><car...@cartan.de> wrote:
> >
> >> Whether it's a bug in the design or not is debatable, at least, as
> >> that very thread shows. Erik said all there is to say about it, IMHO.
> >
> > Erik's view in a nutshell was:
> >
> > [We should] focus our attention on stuff that actually affects
> > users of all categories much more than this trifling issue
> >
> > The reason I raise this issue again now is that here we have evidence that
> > it *does* affect users of all categories, from beginners to implementors.
> > The fact that *two* implementors got this wrong, including the industry
> > leader, indicates that this really is a pervasive problem, and not just a
> > fluke.
>
> Not really. Somehow I doubt very much that the reason they got it wrong
> is that Duane Rettig doesn't grok special variables... ;-)

Yes, of course Duane groks special variables. So does Roger. That is
precisely my point. They both got it wrong *despite* the fact that they
understand. The most reasonable way to account for that IMO is that there
is an underlying aspect of the language design that makes it very hard to
get this right even for smart people who understand. I call that a design
flaw.

E.

Erann Gat

Jan 31, 2002, 3:00:27 PM
In article <xcvaduu...@conquest.OCF.Berkeley.EDU>,

t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> I agree. I think the big, wrong premise in that post is that dynamic
> binding is confusing.

That's not what I said. What I said was:

Lisp does not make an adequate syntactic distinction
between [dynamic and lexical bindings].

That is very different from saying that dynamic binding is confusing.

> I think the semantics of ANSI CL wrt specials
> is actually rather intuitive. How on earth can I say that? Yesterday
> I was talking to a couple of non-Lispers who learned Scheme a couple
> years ago.

[snip]

Anecdotes like this only prove that some people get it right some of the
time, but I don't think there was ever any disagreement about that.
Perhaps even most people get it right most of the time. But it is
clear that not everyone gets it right all of the time, including, on
occasion, experts. And that's the problem. It's a problem precisely
because, as you say, dynamic binding *isn't* confusing, or at least it
shouldn't be. Dynamic binding is a very simple and elegant concept.
There's no reason why everyone shouldn't get it right all the time, even
the experts :-)

E.

Nils Goesche

Jan 31, 2002, 3:35:29 PM

But... I don't care if anything in CL is hard to implement for compiler
writers. IIRC, Pascal didn't have closures because its designer(s?)
thought they would be too hard to implement. Getting CLOS right is
probably a thousand times harder than getting special variables right
for the compiler writers, but I'd still like to keep it. What you
are saying sounds like the ``Worse is Better'' philosophy to me.

Kenny Tilton

Jan 31, 2002, 4:46:17 PM

Erann Gat wrote:
>
> In article <xcvaduu...@conquest.OCF.Berkeley.EDU>,
> t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:
>
> > I agree. I think the big, wrong premise in that post is that dynamic
> > binding is confusing.
>
> That's not what I said. What I said was:
>
> Lisp does not make an adequate syntactic distinction
> between [dynamic and lexical bindings].

I am reminded of the point in time when I figured out one of these
deliberately obscure special versus lexical variable puzzles and
thought, jeez, no wonder we put *barbed-wire* around specials.

kenny
clinisys

Duane Rettig

Jan 31, 2002, 5:00:01 PM
g...@jpl.nasa.gov (Erann Gat) writes:

While I consider it a great honor to be placed on this pedestal
(e.g. "if Duane made a mistake, it must be a design flaw"), please
do not keep me on such a pedestal. I could never live up to such
a responsibility. I make mistakes. All kinds of them. Some
understandable, some stupid. This was a stupid mistake. I didn't
implement free declarations in my environments implementation (I had
even put a comment in the code to that effect). And although the
interpreter is free to ignore declarations in general, it is not free
to ignore special declarations. My mistake was to ignore free
special declarations in the interpreter (now was that free/gratis,
or free as opposed to proprietary declarations? :-)

That said, I don't consider much whether or not it is a design flaw
how Common Lisp treats specials. The concern I have over any new
proposal is not how much better it might be, or even how hard it
would be to implement as a vendor. What concerns me is the many
customers we have whose code would break if such radical changes
were made to the spec. I don't remember seeing in the proposal
any indication of compatibility and what to look out for (though I
only skimmed it; there may be such treatment in the proposal, and I
apologize if I missed it). What I would want to see is some
kind of study of what kinds of code would become broken by your
proposal, and how code can be kept from having to be changed. Or,
as an alternative, a translator to change code of the older style
into the newer style (though that is much less acceptable than
not requiring any change to existing code).

Pierre R. Mai

Jan 31, 2002, 5:08:56 PM
g...@jpl.nasa.gov (Erann Gat) writes:

> The reason I raise this issue again now is that here we have evidence that
> it *does* affect users of all categories, from beginners to implementors.
> The fact that *two* implementors got this wrong, including the industry
> leader, indicates that this really is a pervasive problem, and not just a
> fluke.

Absent detailed arguments, I fail to see how "implementor X introduced
a bug in the handling of feature Y in release Z" is at all related to
the problems users might have with feature Y. I don't think this is a
compelling argument. Either feature Y is problematic on its own, or
it isn't. Difficulty in implementing said feature is only related to
the question "is having feature Y feasible", and not the question "is
having feature Y desirable". Confusing the two doesn't seem at all
useful to me. It is not as if Duane Rettig had indicated that he
introduced the bug because he didn't understand the semantics of
special variable declarations. Absent such indications, it seems to
me that it is far more likely that there's a far more mundane reason
for that bug being introduced.

We've had bugs in the handling of top-level closures in CMU CL, yet
I've never seen anyone claim that this indicates that closures are a
"pervasive(sic!) problem", and should therefore be "fixed". It is
IMHO sufficient to fix the buggy implementation, and off we go.

Things might be different, if implementors consciously disagreed in
their interpretation of the intended semantics. In that case the
standard would need to be fixed/augmented.

Yet it is IMHO fairly clear what the intended semantics of both lexical
closures, and special variables are. But, as you might know, there is
a gap between knowing what to implement, and implementing it correctly.

That's why we have language implementors, and don't all implement the
language ourselves.

Kaz Kylheku

Jan 31, 2002, 7:35:48 PM

They got it wrong because Duane made a simple human error, and that
error affected the treatment of a rarely travelled area of the language
that did not break any test case or other programs. So it was not caught
prior to shipping the software. Am I close?

Ed L Cashin

Jan 31, 2002, 8:08:24 PM
Nils Goesche <car...@cartan.de> writes:

..


> But... I don't care if anything in CL is hard to implement for compiler
> writers.

But Stroustrup says the same thing continually through _The C++
Programming Language_, and I believe it was a mistake, since compiler
implementers had such a hard time implementing all the features that
users had to wait and wait for standard features to make it into the
implementations.

> IIRC, Pascal didn't have closures because it's designer(s?)
> thought they would be too hard to implement.

Yes, both extremes lead to problems: ignoring implementation difficulty
and fretting too much.

--
--Ed L Cashin | PGP public key:
eca...@uga.edu | http://noserose.net/e/pgp/

Erik Naggum

Feb 1, 2002, 12:01:21 AM
* Erann Gat

| Erik's view in a nutshell was:
|
| [We should] focus our attention on stuff that actually affects
| users of all categories much more than this trifling issue
|
| The reason I raise this issue again now is that here we have evidence that
| it *does* affect users of all categories, from beginners to implementors.
| The fact that *two* implementors got this wrong, including the industry
| leader, indicates that this really is a pervasive problem, and not just a
| fluke.

A slightly more accurate nutshell summary would be just what I said:

removing special variables because they confuse a few people is a
typical "modern" reaction to the lack of diligence and effort that
"modern" users are no longer expected to expend in learning anything.
this is simply a beginner's issue. Common Lisp used to cater to
experienced programmers at the cost of having to learn it, like a
_skill_, something people of reasonable competence levels would _value_.

Most countries have laws that are fairly obscure and hence are neither
commonly followed nor broken. However, a universal legal principle is
"ignorantia legis neminem excusat" (ignorance of the law excuses no one).
It behooves the citizen of a country to know its laws. We find that more
or less formal standards of good practice exist in every field of
professional endeavor -- with the exception of programming computers.
For some unfathomable reason, computers should be optimized for those who
are completely clueless, who cannot accept the responsibility for their
own actions, who do nothing to rectify problems they run into, who are,
plain and simple, _incompetent_ at their job. Microsoft has clearly made
a huge contribution to this state of affairs in its incessant propaganda
that computers should be _so_ easy to use that actually learning to use
them well has become a disadvantage. However, failing to understand how
something works, be it the society in which you live or the computer you
use, can only lead to frustrations, complaints, and miserable experiences
with the software you use. So, too with programming languages.

I believe that there is nothing whatsoever to be gained by catering to
stupid and incompetent people. Others, such as Bill Gates, believe
otherwise, but in order to take advantage of stupidity and incompetence,
you have to make a choice: Make yourself manifestly unattractive to good
and smart people. If a society catered to its criminals, good and smart
people would leave that society, too. This is why I think it is worth
the pain to expose and get rid of those who have no desire to see their
wishes in a context of a community that would have to (help) fulfill them.

If there is a law we do not like, there are some exceptionally complex
_laws_ to follow to change it. It is not just in programming language
communities that "cost to implementors" is considered and valued higher
than some "perfect solution". If a society is made up of people who are
so ill equipped to deal with disagreement that they throw up their arms
and cry "design flaw" whenever their pet peeve comes up, they survive
solely on the momentum of whatever processes allowed that society to be
created and grow out of barbarism and "might is right" to begin with.
Sadly, many Western societies virtually surf on the waves made by those
who were smart enough to figure out the need for legal infrastructures
such as constitutions. It takes a considerable amount of intellectual
effort to see that freedom is achieved only within a society of just law,
because most people who have not quite grown up mentally are still quite
short-sightedly egoistic and think that their desires are the law. Some
think this regardless of the consequences for others. Those who hold the
community interests above their own, or, more precisely, who adjust their
own desires so that they do not require a different community than that
in which they live, are generally much more successful in achieving their
goals than those who spend their time wishing for a different community,
society, world, or universe.

If you stumble and fall, do you blame gravitation, momentum, friction,
or some other part of what we consider "laws of physics"? If you stumble
in your code and it misperforms, do you blame physics for making it
harder to type in correct programs, do you blame the hardware for not
doing what you want, do you blame the compiler for not understanding you,
do you blame the language specification for not being what you want? If
you get ill from a disease for which you were genetically predisposed, do
you blame your God, do you blame your parents, do you blame your doctors
for not fixing it, do you blame medicine as a whole for not being able to
give you the life you want? Or do you, in each of these cases, figure
out that there are many things you cannot change and that it is useless
to fight, that there are many things that just happen by coincidence and
which have no conscious will behind them, that even if things seem wrong
to you, they are in fact right for a lot of other people?

You are rich if you can purchase whatever you want, whenever you want it.
Corollarily, it depends more on what you want and when than on how much
money you have. The basic question is: Can you adapt to a society made
by others? Some are unable to do this and have to have their own will or
they threaten to destabilize the society that made their very "protest"
possible -- in a less civilized society, they would have been ostracized
or killed, but because of the freedom they enjoy to express their views,
they attack the very basis of that freedom. These people are generally
dissatisfied with some part of their society (and this will not stop --
they are basically fighting _against_ the "establishment" and it will
never be to their liking, because their role is a fighter, a role that
would vanish if their "demands" were met) and act like the proverbial
disgruntled postal workers instead of trying to find something more
valuable to waste their life on.

Are special variables in Common Lisp difficult to understand? No, but
let me rephrase the question to explain why: how much _education_ (as
opposed to mere _training_) do you need to figure out what variable
reference mechanisms exist and how they work and is this more than (1)
any other feature in the language (it is clearly not), (2) any other
features of other languages regarding non-trivial handling of bindings
(it is clearly not). What we have is a failure of some people to spend
any effort _at all_ to learn how to use special variables, and those seem
to think it is legitimate to fault the language for their lack of desire
to expend effort to learn it. So, the question is really, do we want to
optimize a _language_ for the uneducated, untrained, unskilled monkey who
could be replaced at a moment's notice or do we aim for the professional
or professional-to-be in both our education of beginners and in the design
of our language?

In yet other terms, do we recognize that knowing how anything works that
is not exactly like something else you already know takes special effort?
The first thing you learn is so simple and easy to internalize precisely
because _nothing_ is there to get into a conflict with, but if you know
one thing, like C's global variables, then your next thing, which will in
_some_ way be different, will take some effort to understand. Those who
do not want to expend this effort, _assume_ that all things are the same
and do not even want to recognize that they might be wrong when they run
into problems. This is flat out _stupid_ and betrays an _unconscious_
modus operandi, which I consider to be _fundamentally_ incompatible with
programming computers. Those who fail to think even when the suggestion
that they do so is presented by something as clear-cut as a failure of a
computer to do their bidding, should not be used as arguments to redesign a
language that works perfectly well for those who _have_ learned it. Such
is no more than nihilistic change-for-the-sake-of-change and serves nobody.

Common Lisp does _not_ have the luxury of being a first contact, so those
who discover it and want to learn it and use it productively have no more
choice about having to expend the extra, but necessary effort to learn
how it both exceeds and differs from other languages whose designers
never dared to challenge the same design issues than those who wish to
learn a physical skill that involved "unlearning" some bad habits, such
as learning proper pronunciation in a foreign language, driving a car,
becoming a master marksman, or even just typing faster on a computer.
However, if the first thing an otherwise eager student sees when he is in
this transition and re-learning period is some disgruntled ex-Lisper whine
that so and so feature is a design flaw, the likelihood that he will want
to expend that effort approaches zero faster than intelligent youngsters
get rid of their desire to learn mathematics when presented with idiotic
drills in arithmetic and a teacher who thinks real mathematics is "too
difficult".

What we should do instead of this stupid whining about how badly designed
the language is, is simply to show people what is on the other side of
learning it well. It is my personal opinion that those who gripe about
bad design are unable to adapt their personal will to _any_ design not
their own, and if my memory does not fail me, the same person who whines
about special variables yet again also whined about how "impossible" it
would be to use Common Lisp is real life applications, too, going all
George Michael on us and "losing his religion" in the process.

Learning the prerequisite skills to do the necessary work is _not_ an
option. Education to understand the theoretical underpinnings of one's
work is _not_ an option. Being nice to uneducated, untrained, unskilled
fools who effectively complain that the world is not to their lazy liking
_is_ an option, but forcing others to accommodate them because there are
lazy idiots out there who do not want to learn anything at all, is not.
Every other human endeavor requires expenditure of effort and some pain
in learning even to be bad at it, and much more to be good at it. There
is not a single exception, and computer programming most certainly is
not it just because it has an amazing ability to reward those who learn
its first, simple skills faster than any other discipline. However, one
must not _stop_ at the simple skills just because it begins to get harder
to become better and more advanced. If there is only one way to destroy
a discipline, it is to rob people of the opportunity to improve by giving
them too few challenges.

I am, incidentally, interested in and care deeply about _professional_
exchange of information, among and for the professional programmer or
professional-to-be. Lazy hobbyists who do not want to work at all,
disgruntled ex-professionals who still hang around instead of just doing
something else that is valuable to them, anti-social individuals who want
to destroy the basis of the profession, i.e., its formal standards and
informal standards of best practice, etc, should in my view get the hell
out of public fora designed for the exchange of information between those
who have at least some of the interests they do not share with the rest
of the community. Compulsive whiners are not welcome in any professional
setting and I fail to see how they are welcome in any personal company,
either.

As for the Common Lisp community's "resistance to change": If you have
only bad ideas, do not blame other people for the resistance to change,
be _intelligent_ and figure out what constitutes acceptable changes and
propose them. Improving Common Lisp is _hard_. That is the result of it
being good in the first place. It used to be damn easy to improve Perl
or C++, and any random idea got in, but even that is getting harder. It
would be foolish to argue that that which has precisely been _matured_ by
exposure to a barrage of ideas, good and bad, and have in the past been
more than willing to adopt the good ones, has somehow _stopped_ adopting
ideas, but it turns out that the education necessary to improve something
that is highly advanced takes so much longer to acquire that those who
have no such education, and who fail to understand what they are trying
to improve, will merely _repeat_ all the "good ideas" that have already
been rejected in the past. This, however, is precisely what marks the
transition from a hobbyist discipline to a professional discipline: When
the hobbyists can no longer offer up random ideas that are good enough to
merit changes, the language is so good on its own terms that only those
ideas made by experienced, mature professionals stand a chance of causing
a change. As examples: random syntactic stunts are foolish hobbyist
ideas, whether "marking" special variables or some new conditional macro
to supplant all others, but a new basic design for streams that maintains
compatibility and respect for other members of the professional community
by offering something superior in functionality and performance, is very
clearly a professional idea, complete with implementation. Another good
pair of examples: Useless whining about lowercase symbol names and a way
of doing this that violates the standard is a hobbyist's lack of ability
to grasp what he is facing, while using the substrate of the language to
offer extensive support for external-formats and a uniform internal
character set, complete with implementations with very good performance,
is clearly a professional design improvement. In my view, the Common
Lisp community is short on praise for real, professional changes, and
very long on whining about the resistance to adopting hobbyist stunts.

In summary, what we lack is a professional attitude towards loyal
opposition. The disloyal opposition offered by whining hobbyists who
want change because _they_ feel bad about something and cannot get over
it for personal reasons, causes a lot of grief and traffic in this
newsgroup. Those who have misgivings about something appear to be so
uneducated, untrained, and unskilled in public debate that they cannot
present an argument from loyal opposition, which would have respected
past decisions, accepted and existing designs, rejected suggestions, and
those who have made up the community and brought it where it is today,
but seem instead obsessed about the need to make that which they want to
"improve" look _bad_, those who designed it _braindamaged_, those who want
to adhere to procedures and to support the community _religious_, etc.
Instead of being able to see a series of steps from here to there, those
who are far more disgruntled than interested in improving anything, want
all of their changes in one giant step, crushing any and all opposition
and forcing a _departure_ and _forking_ instead of linear improvement.

The disloyal opposition can be summarized with "I don't like this, so
therefore it's bad", while the loyal opposition would argue "this might
be even better than what we already have".

/// 2002-02-01
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.

Will Deakin

Feb 1, 2002, 3:54:44 AM
On a particularly miserable, dark, wet and windy Friday morning it was
cheering to read this.

Thank you,

Will Deakin

Nils Goesche

Feb 1, 2002, 6:48:35 AM
In article <xo2u1t2...@uga.edu>, Ed L Cashin wrote:

> Nils Goesche <car...@cartan.de> writes:
>
>> But... I don't care if anything in CL is hard to implement for compiler
>> writers.
>
> But Stroustrup says the same thing continually through _The C++
> Programming Language_, and I believe it was a mistake, since compiler
> implementers had such a hard time implementing all the features that
> users had to wait and wait for standard features to make it into the
> implementations.

If those features were worth it, it would also be very well worth it
waiting for them. The /real/ problem with C++ is rather that they
make fundamental changes to the language over and over again, so
you never stop waiting, because when one compiler has finally finished
implementing some new set of features, some other has already halfway
implemented the next set of changes leading to a situation where
it is very hard to compile any C++ program without the very version
of the C++ compiler it was written for.

Tim Bradshaw

Jan 31, 2002, 3:58:22 PM
* Erann Gat wrote:

> Anecdotes like this only prove that some people get it right some of the
> time, but I don't think there was ever any disagreement about that.
> Perhaps even most people get it right it most of the time. But it is
> clear that not everyone gets it right all of the time, including, on
> occasion, experts. And that's the problem. It's a problem precisely
> because, as you say, dynamic binding *isn't* confusing, or at least it
> shouldn't be. Dynamic binding is a very simple and elegant concept.
> There's no reason why everyone shouldn't get it right all the time, even
> the experts :-)

But since some people get it right some of the time and no one gets
*anything* right all the time, unless someone actually studies the
problem in a principled way (as in statistics of some kind), then *all*
that happens is people quoting anecdotes at each other: you as much as
anyone. There are enough people writing code out there that you can
find examples of virtually any depravity or virtue.

While I fully accept that standard IT practice is to avoid any kind of
actual careful measurement of anything, preferring to make
billion-dollar decisions based on anecdotes, war stories, experience
from 30 years ago, badly taught university courses or, if pushed,
`measurements' so badly botched as to be useless both because of the negligible
scientific education of most computing people and because they aren't
actually interested in knowing the answer but in confirming their
prejudices, this doesn't mean that this is a good way to proceed.
Indeed, I think that it's pretty much our bounden duty as Lisp people
to try and drag the standard practice into the 19th century.

And Lisp is God's gift to this kind of study: source code is data!
We can write programs to analyse our programs and look for occurrences
of this kind of problem, if we wanted to. Then we could *know* how
bad a problem it is rather than quoting anecdotes at each other! Why
don't we do this? Well I can think of two reasons: (1) quoting
anecdotes is kind of fun, and it might be inconvenient to actually
know it is or is not a problem. (2) It's not enough of a problem that
anyone can be bothered.
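
For what it's worth, here is the sort of thing I mean, as a rough sketch
only (the function names are invented, and a real analysis would need a
proper code walker that tracks which declarations are bound and which are
free):

(defun report-special-declarations (pathname)
  ;; Read each top-level form in PATHNAME and report DEFUN and LAMBDA
  ;; forms whose bodies contain a SPECIAL declaration.  Assumes the file
  ;; can be READ as-is in the current package.
  (with-open-file (in pathname)
    (loop for form = (read in nil in)
          until (eq form in)
          do (walk-for-specials form))))

(defun walk-for-specials (form)
  (when (consp form)
    (when (and (member (car form) '(defun lambda))
               (find-if (lambda (sub)
                          (and (consp sub)
                               (eq (car sub) 'declare)
                               (assoc 'special (rest sub))))
                        (cdr form)))
      (format t "~&SPECIAL declaration inside: ~S~%" form))
    ;; Recurse over CAR and CDR so that dotted lists don't trip us up.
    (walk-for-specials (car form))
    (walk-for-specials (cdr form))))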

Finally here are some anecdotes, to keep the noise level good and
high.

I can not remember when I last wrote code that was confused about
special/lexical bindings. I'm sure it's happened to me, but I doubt
it has been within the last 8 years.

I recently worked on a reasonably large system which was really pretty
badly written by people who didn't understand CL very well. It had
many, many egregious problems. Special/lexical confusion was not one
of them.

So there's two random stories. I'm sure that there are plenty of
others showing the opposite. However what I'm really sure of is that
STORIES LIKE THIS ARE WORTHLESS.

--tim

Tim Bradshaw

Jan 31, 2002, 1:26:14 PM
* Erann Gat wrote:
> The reason I raise this issue again now is that here we have evidence that
> it *does* affect users of all categories, from beginners to implementors.
> The fact that *two* implementors got this wrong, including the industry
> leader, indicates that this really is a pervasive problem, and not just a
> fluke.

I guess Duane should speak for Allegro, but it looks to me rather as
if they got the declaration scoping issues tangled in the interpreter
rather than being confused about special binding.

--tim

Tim Bradshaw

Jan 31, 2002, 4:42:53 AM
* Erann Gat wrote:
> http://groups.google.com/groups?q=g:thl1102610405d&hl=en&selm=gat-2902001653240001%40milo.jpl.nasa.gov

I'm sure you could have found a better article than that; you've been
going on about this for at least 10 years.

Nicolas Neuss

Feb 1, 2002, 7:50:50 AM
Erik Naggum <er...@naggum.net> writes:

> ... [interesting article]

It takes me much more time to scan through c.l.l. when you are
posting, but I feel that this time is well invested. Thanks for the
interesting article. It's good to have you back.

Nicolas.

Kenny Tilton

Feb 1, 2002, 8:41:51 AM

Tim Bradshaw wrote:
> So there's two random stories. I'm sure that there are plenty of
> others showing the opposite. However what I'm really sure of is that
> STORIES LIKE THIS ARE WORTHLESS.

I have a friend who in his first (and last) programming class took a
week and the combined efforts of everyone in the computer lab to
discover he had typed (on a punchcard) "o" where "0" was intended.

Ten years later I worked with two other programmers on one Cobol, well,
listing (early XP?). One of the guys never spoke except to say, "That
'o' should be '0'".

An acquaintance at a bar once showed up with a C++ listing from his
first programming class saying he had spent three days trying to get his
fifty lines to compile, and had resorted to running a virus detector in
desperation. I told him I had never done C++, but shouldn't that
<class>:<funcname> have another colon?

You shoulda heard the scream.

I think every language should have stuff like this to thin the herd.

--

kenny tilton
clinisys, inc
---------------------------------------------------------------
"I don't think the heavy stuff is gonna come down for a while."
- Carl, Caddy Shack

Marc Spitzer

Feb 1, 2002, 10:31:13 AM
It is good to see your posts again; welcome back, Erik. And this one
was a most pleasant read.

marc

Erann Gat

Feb 1, 2002, 1:24:35 PM
In article <41yg6n...@beta.franz.com>, Duane Rettig <du...@franz.com> wrote:

> g...@jpl.nasa.gov (Erann Gat) writes:

> > Yes, of course Duane groks special variables. So does Roger. That is
> > precisely my point. They both got it wrong *despite* the fact that they
> > understand. The most reasonable way to account for that IMO is that there
> > is an underlying aspect of the language design that makes it very hard to
> > get this right even for smart people who understand. I call that a design
> > flaw.
>
> While I consider it a great honor to be placed on this pedestal
> (e.g. "if Duane made a mistake, it must be a design flaw"),

That's neither what I said, nor what I meant. A closer paraphrase would
be "If Duane and Roger both make the *same* mistake then that's an
indication that there might be a design flaw."

> And although the
> interpreter is free to ignore declarations in general, it is not free
> to ignore special declarations.

A seemingly random exception to an otherwise general rule without any
clear benefit over alternative designs that do not require the exception
-- another indication that this might be a design flaw.

> That said, I don't consider much whether or not it is a design flaw
> how Common Lisp treats specials. The concern I have over any new
> proposal is not how much better it might be, or even how hard it
> would be to implement as a vendor. What concerns me is the many
> customers we have whose code would break if such radical changes
> were made to the spec.

A fine evaluation criterion. Although I didn't say so explicitly, all
aspects of my proposal are 100% backwards-compatible with the existing
standard. (An indication of this is that I was able to implement most of
my proposal as a macro.)

E.

Erann Gat

Feb 1, 2002, 1:34:29 PM

I never said any different. Why do you feel the need to misrepresent my
position just so you can criticize it?

> I'm sure you could have found a better article than that, you've been
> going on about this for at least 10 years.

Even if that were true (the article in question is less than two years
old) I am at a loss to understand what you were hoping to accomplish by
saying that.

You know, Tim, you are beginning to make me understand why Erik sometimes
responded to people the way he did.

E.

Erann Gat

Feb 1, 2002, 1:51:14 PM
In article <ey37kpy...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

> * Erann Gat wrote:
>
> > Anecdotes like this only prove that some people get it right some of the
> > time, but I don't think there was ever any disagreement about that.
> > Perhaps even most people get it right it most of the time. But it is
> > clear that not everyone gets it right all of the time, including, on
> > occasion, experts. And that's the problem. It's a problem precisely
> > because, as you say, dynamic binding *isn't* confusing, or at least it
> > shouldn't be. Dynamic binding is a very simple and elegant concept.
> > There's no reason why everyone shouldn't get it right all the time, even
> > the experts :-)
>
> But since some people get it right some of the time, but no one gets
> *anything* right all the time, unless someone actually studies the
> problem in a principled way (as in statistics of some kind) then *all*
> that happens is people quoting anecdotes at each other: you as much as
> anyone. There are enough people writing code out there that you can
> find examples of virtually any depravity or virtue.

True. That is why the crucial point here is not that Duane made a
mistake, but that *both* Duane and Roger made the *same* mistake, and that
it is a mistake that was in some sense predicted by a theory. If we had
some data on the general frequency with which Duane and Roger introduce
bugs into their implementations this might even rise to the level of a
statistically significant result.

BTW, Tim, as one of the few people who has actually published
statistically significant results concerning Lisp I would really
appreciate getting a little bit of the benefit of the doubt from you that
I might actually understand some of these issues.

E.

Pierre R. Mai

unread,
Feb 1, 2002, 3:25:49 PM2/1/02
to
g...@jpl.nasa.gov (Erann Gat) writes:

> True. That is why the crucial point here is not that Duane made a
> mistake, but that *both* Duane and Roger made the *same* mistake, and that
> it is a mistake that was in some sense predicted by a theory. If we had

How do you know that they made _the same_ mistake? Whatever mistakes
they made resulted in similar behaviour for _one_ test-case. There
are many reasons they might have introduced that particular bug. For
all we know one of them might have been a typo.

Furthermore, for any argument at all to start, we'd have to find out
whether they made their "common" mistake _independently_. Case in
point:

Just today we had the case that CMU CL and LW have a similar (the
same?) bug in their handling of declarations in clauses of
handler-case that don't bind the condition. Do I start making
conjectures as to "statistically significant" correlation of bugs in
CMU CL and LW, without taking into account relevant context
information, like e.g. the strong possibility that this particular bug
might have one common source, based on the common reference
implementation, and hence is really just one bug?

And what is this nonsense about "the" mistake being predicted by a
theory? Which theory? It will obviously have to also predict that
CMU CL, LispWorks, ACL's compiler, ACL's old interpreter, CLISP, GCL,
... would not make that mistake (as seen in percentages). If you've
got a theory that can predict all of that, then I'm seriously
interested, and I bet most psychologists, too. If your "theory" is
only that "special binding is a non-trivial thing to implement, hence
people might make mistakes _implementing it_", or even worse, the more
general "theory" that "people make mistakes", please don't bother...

> some data on the general frequency with which Duane and Roger introduce
> bugs into their implementations this might even rise to the level of a
> statistically significant result.
>
> BTW, Tim, as one of the few people who has actually published
> statistically significant results concerning Lisp I would really
> appreciate getting a little bit of the benefit of the doubt from you that
> I might actually understand some of these issues.

You have squandered that benefit by lumping "implementor problems" and
"user problems" into one big heap, without giving any kind of
argument, not to speak of evidence, that the two are related, and if
so, in which way. This does not make your case any stronger, it just
suggests that you have failed making your case in a more convincing
way, and are now starting to clutch at straws...

Erann Gat

unread,
Feb 1, 2002, 4:19:15 PM2/1/02
to
In article <87665h9...@orion.bln.pmsf.de>, "Pierre R. Mai"
<pm...@acm.org> wrote:

> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > True. That is why the crucial point here is not that Duane made a
> > mistake, but that *both* Duane and Roger made the *same* mistake, and that
> > it is a mistake that was in some sense predicted by a theory. If we had
>
> How do you know that they made _the same_ mistake?

I don't. I should have said that they both made a mistake that manifested
itself in the same way.

> Furthermore, for any argument at all to start, we'd have to find out
> whether they made their "common" mistake _independently_. Case in
> point:
>
> Just today we had the case that CMU CL and LW have a similar (the
> same?) bug in their handling of declarations in clauses of
> handler-case that don't bind the condition. Do I start making
> conjections as to "statistically significant" correlation of bugs in
> CMU CL and LWW, without taking into account relevant context
> information, like e.g. the strong possibility that this particular bug
> might have one common source, based on the common reference
> implementation, and hence is really just one bug?

That's a valid point. I assumed that since these are both competing
commercial (which is to say non-open-source) implementations that neither
borrowed source code from the other.

> And what is this nonsense about "the" mistake being predicted by a
> theory? Which theory?

The theory that: Lisp does not make an adequate syntactic distinction
between two completely different ways of binding variables. That's an
exact quote from my original posting. Did you bother to read it?
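(A minimal sketch of what that means in practice; the names below are
invented, not taken from the original posting. The two LET forms are written
identically, yet a DEFVAR elsewhere in the system silently changes the kind
of binding the second one creates:)

(defun make-counter-a ()
  (let ((count 0))
    (lambda () (incf count)))) ; closes over a private lexical COUNT

(defvar count 0) ; added somewhere else, perhaps much later

(defun make-counter-b ()
  (let ((count 0)) ; same syntax, but now a *dynamic* binding
    (lambda () (incf count)))) ; once the LET exits, the closure reads and
                               ; writes the global COUNT instead

The usual defense is the earmuff naming convention (*count*) for special
variables, which is a convention rather than a syntactic distinction.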

> You have squandered that benefit by lumping "implementor problems" and
> "user problems" into one big heap,

My position is that there is a design flaw that causes problems for both
implementors and users. Why should I not lump user problems and
implementor problems together as evidence in support of such a theory?

> without giving any kind of argument,

Excuse me? What have I been doing here if not giving an argument?

> not to speak of evidence, that the two are related, and if
> so, in which way.

I have no evidence that they are related, only a theory that claims that
they are. The theory says that (declare (special ...)) is confusing to
both users and implementors. The evidence that it confuses users is the
myriad posts from confused users to comp.lang.lisp asking about it. The
(recent) evidence that it confuses implementors is that Duane said so.
(He "forgot" that special declarations are an exception -- the only
exception -- to the general rule that you can safely ignore
declarations.) There's some indirect evidence that it confused Roger, but
I haven't talked to him so the possibility that something else happened in
that case is still open.

> This does not make your case any stronger, it just
> suggests that you have failed making your case in a more convincing
> way, and are now starting to clutch at straws...

I lost interest in making my case some time ago. It's way past the point
of diminishing returns. It's clear that I'm not going to convince anyone,
and frankly I don't really care. At this point I am trying simply to get
people to stop misrepresenting my position.

E.

Tim Howe

unread,
Feb 2, 2002, 1:15:29 AM2/2/02
to
I found myself nearly cheering as I read this. Thanks for such a
pointed and insightful post.

--
vsync
http://quadium.net/
(cons (cons (car (cons 'c 'r)) (cdr (cons 'a 'o))) ; Orjner
(cons (cons (car (cons 'n 'c)) (cdr (cons nil 's))) nil))

Tim Bradshaw

unread,
Feb 2, 2002, 6:56:21 AM2/2/02
to
* Erann Gat wrote:

> True. That is why the crucial point here is not that Duane made a
> mistake, but that *both* Duane and Roger made the *same* mistake, and that
> it is a mistake that was in some sense predicted by a theory. If we had
> some data on the general frequency with which Duane and Roger introduce
> bugs into their implementations this might even rise to the level of a
> statistically significant result.

No they didn't make the same mistake, they made mistakes one of whose
symptoms happened to be the same. There's no evidence that the
mistakes were related at all.

> BTW, Tim, as one of the few people who has actually published
> statistically significant results concerning Lisp I would really
> appreciate getting a little bit of the benefit of the doubt from you that
> I might actually understand some of these issues.

I'm sorry, although I quite liked your survey (and contributed to it)
I don't think it was statistically interesting (neither was the java
one it was responding to). As I said, the state of the art is people
trading anecdotes, and I'm as happy as anyone to trade them (since I
definitely can't invest the years I'd need to produce meaningful
statistics).

--tim

Tim Bradshaw

unread,
Feb 2, 2002, 6:52:32 AM2/2/02
to
* Erann Gat wrote:
>>
>> I guess Duane should speak for Allegro, but it looks to me rather as
>> if they got the declaration scoping issues tangled in the intepreter
>> rather than being confused about special binding.

> I never said any different. Why do you feel the need to misrepresent my
> position just so you can criticize it?

I'm not trying to do that. What I'm saying is that it looks like the
issue (for ACL) was not to do with special declarations being
confusing but to do with the scoping of free and bound declarations.
So it's not actually related to specials at all, it just happens that
this is one of the bugs it introduces.

> You know, Tim, you are beginning to make me understand why Erik sometimes
> responded to people the way he did.

Yes, I remember who one of those people was, too.

Thomas F. Burdick

unread,
Feb 2, 2002, 1:41:47 PM2/2/02
to
Tim Bradshaw <t...@cley.com> writes:

> What I'm saying is that it looks like the issue (for ACL) was not to
> do with special declarations being confusing but to do with the
> scoping of free and bound declarations.

Since Roger Corman hasn't chimed in here on what's going on, it was
pointed out to me in e-mail that CLtL1 didn't make such a distinction.
So that may have something to do with what's going on here (with
Corman Lisp anyway)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Erann Gat

unread,
Feb 3, 2002, 3:00:24 AM2/3/02
to
In article <ey3sn8k...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

> * Erann Gat wrote:
>
> > True. That is why the crucial point here is not that Duane made a
> > mistake, but that *both* Duane and Roger made the *same* mistake, and that
> > it is a mistake that was in some sense predicted by a theory. If we had
> > some data on the general frequency with which Duane and Roger introduce
> > bugs into their implementations this might even rise to the level of a
> > statistically significant result.
>
> No they didn't make the same mistake, they made mistakes one of whose
> symptoms happened to be the same. There's no evidence that the
> mistakes were related at all.

That's right. As I said before, I misspoke. I should have said that they
both made mistakes that manifested themselves in the same way.

> > BTW, Tim, as one of the few people who has actually published
> > statistically significant results concerning Lisp I would really
> > appreciate getting a little bit of the benefit of the doubt from you that
> > I might actually understand some of these issues.
>
> I'm sorry, although I quite liked your survey (and contributed to it)
> I don't think it was statistically interesting

Statistically interesting is not the same thing as statistically
significant. Interesting is a matter of opinion. Significant has a
technical definition. A result is statistically significant if there is
only a small probability that it could have arisen by chance. Note that
I'm not saying that the matter at hand is a statistically significant
result, only that I understand the issues involved in distinguishing
statistically significant results from anecdotes.

E.

Erann Gat

unread,
Feb 3, 2002, 3:07:21 AM2/3/02
to
In article <ey3wuxw...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

> * Erann Gat wrote:
> >>
> >> I guess Duane should speak for Allegro, but it looks to me rather as
> >> if they got the declaration scoping issues tangled in the intepreter
> >> rather than being confused about special binding.
>
> > I never said any different. Why do you feel the need to misrepresent my
> > position just so you can criticize it?
>
> I'm not trying to do that. What I'm saying is that it looks like the
> issue (for ACL) was not to do with special declarations being
> confusing but to do with the scoping of free and bound declarations.
> So it's not actually related to specials at all, it just happens that
> this is one of the bugs it introduces.

I don't necessarily disagree with that. It may well be that the confusion
extends beyond specials to the scoping of all declarations (I don't
know). But it is in the scoping of special declarations where it seems to
cause the most recurring problems.
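(For concreteness, the bound/free distinction at issue, in a minimal sketch
with an invented variable name:)

;; Bound declaration: the SPECIAL declaration refers to the X bound by this
;; very LET, so the LET establishes a dynamic binding of X.
(let ((x 1))
  (declare (special x))
  x)

;; Free declaration: this LET still binds X lexically; the declaration only
;; affects the body of the LOCALLY, whose reference to X now means the
;; dynamic variable and not the lexical binding just above it.
(let ((x 1))
  (locally (declare (special x))
    x)) ; reads the dynamic X (unbound here), not the lexical 1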

E.

Brian P Templeton

unread,
Feb 3, 2002, 12:15:45 PM2/3/02
to
g...@jpl.nasa.gov (Erann Gat) writes:

[...]


> That's a valid point. I assumed that since these are both competing
> commercial (which is to say non-open-source) implementations that neither
> borrowed source code from the other.
>

I think that you mean that they are competing proprietary
implementations. Being `commercial software' is orthogonal to whether
the software is free software or not, just as `non-commercial
software' is a completely separate attribute from being proprietary
software.

[...]

Regards,
--
BPT <b...@tunes.org> /"\ ASCII Ribbon Campaign
backronym for Linux: \ / No HTML or RTF in mail
Linux Is Not Unix X No MS-Word in mail
Meme plague ;) ---------> / \ Respect Open Standards

Brian P Templeton

unread,
Feb 3, 2002, 12:15:50 PM2/3/02
to
Nils Goesche <car...@cartan.de> writes:

... making C++, in effect, a single-implementation language designed
by a committee. *Shudder*.

:D

> Regards,
> --
> Nils Goesche
> "Don't ask for whom the <CTRL-G> tolls."
>
> PGP key ID 0x42B32FC9

--

Tim Bradshaw

unread,
Feb 4, 2002, 10:56:22 AM2/4/02
to
I'm quoting several levels deep here not to be nasty but to avoid any
confusion as I've already made one mistake here...

g...@jpl.nasa.gov (Erann Gat) wrote in message news:<gat-030202...@192.168.1.50>...

(you wrote)


> > > BTW, Tim, as one of the few people who has actually published
> > > statistically significant results concerning Lisp I would really
> > > appreciate getting a little bit of the benefit of the doubt from you that
> > > I might actually understand some of these issues.
> >

(I followed up)

> > I'm sorry, although I quite liked your survey (and contributed to it)
> > I don't think it was statistically interesting

(you again)

>
> Statistically interesting is not the same thing as statistically
> significant. Interesting is a matter of opinion. Significant has a
> technical definition. A result is statistically significant if there is
> only a small probability that it could have arisen by chance. Note that
> I'm not saying that the matter at hand is a statistically significant
> result, only that I understand the issues involved in distinguishing
> statistically significant results from anecdotes.
>

Ok, my wording was bad, sorry. I think that the survey, while
*interesting*, is at the level of anecdote and is not statistically
significant. This is not intended to be dismissive of it. As I hope
I've made clear I think that there are almost no statistically
significant results in this kind of language comparison: we live in a
world of more-or-less convincing anecdotes (I find the study
convincing, personally).

--tim

Erann Gat

unread,
Feb 4, 2002, 2:50:25 PM2/4/02
to
In article <fbc0f5d1.02020...@posting.google.com>,
tfb+g...@tfeb.org (Tim Bradshaw) wrote:

> Ok, my wording was bad, sorry. I think that the survey, while
> *interesting* is at the level of anecdote and is not statistically
> significant.

Statistical significance is not a matter of opinion, it is a mathematical
measure of the probability that a certain result could have arisen by
chance. I haven't actually carried out the calculations but just by
inspection it is clear that the statistical significance of at least some
of the results in my paper is in fact overwhelming, almost certainly
>0.99, where >0.95 is generally considered sufficient for publication in a
scientific journal. (Statistical significance of >X means there is less
than a 1-X probability that the results could have arisen purely by
chance.)
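(A toy illustration of that arithmetic, not a number from the paper: suppose
an observed difference is as lopsided as 60 or more heads in 100 tosses of a
fair coin. The chance of that happening by luck is about 0.028, so such a
result is significant at the >0.95 level. The helpers below, with invented
names, just compute the tail probability exactly:)

(defun fact (n)
  (if (< n 2) 1 (* n (fact (1- n)))))

(defun choose (n k)
  (/ (fact n) (* (fact k) (fact (- n k)))))

(defun fair-coin-tail (n k)
  "Probability of K or more heads in N tosses of a fair coin."
  (/ (loop for i from k to n sum (choose n i))
     (expt 2 n)))

(float (fair-coin-tail 100 60)) ; => ~0.028, i.e. significant at >0.95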

Whether the results are reproducible or whether they are due to a factor
other than the language (like self-selection bias) is open to debate, but
the statistical significance of the results is not.

> This is not intended to be dismissive of it. As I hope
> I've made clear I think that there are almost no statistically
> significant results in this kind of language comparison: we live in a
> world of more-or-less convincing anecdotes

You're right that there are *almost* no statistically significant results
in this area. But you seem to be implying that we are forever doomed to
this state of affairs by the laws of physics or something, but you're
wrong about that. The only reason we don't have more statistically
significant results in this area is that they require a non-trivial amount
of work to produce, and seem to result in very little in the way of reward
for those who bother to do the work.

> (I find the study convincing, personally).

What did it convince you of?

E.

Tim Bradshaw

unread,
Feb 5, 2002, 5:53:46 AM2/5/02
to
g...@jpl.nasa.gov (Erann Gat) wrote in message news:<gat-040202...@eglaptop.jpl.nasa.gov>...

> Statistical significance is not a matter of opinion, it is a mathematical
> measure of the probability that a certain result could have arisen by
> chance. I haven't actually carried out the calculations but just by
> inspection it is clear that the statistical significance of at least some
> of the results in my paper is in fact overwhelming, almost certainly
> >0.99, where >0.95 is generally considered sufficient for publication in a
> scientific journal. (Statistical significance of >X means there is less
> than a 1-X probability that the results could have arisen purely by
> chance.)
>
> Whether the results are reproducible or whether they are due to a factor
> other than the language (like self-selection bias) is open to debate, but
> the statistical significance of the results is not.

I guess that's my disagreement. I think that if the underlying data
is very poor or ill-controlled that any statistical significance you
associate with it is fairly meaningless. It's like demonstrating that
I like the coffee I like. In particular I think you'd have a hard
time arguing convincingly the result measured anything about Lisp
rather than about the people who contributed results or the particular
problem. If I was to play devil's advocate, for instance I'd say that
Lisp people are better, because smart people like Lisp, but they don't
like it because of performance. So the results represent smart Lisp
programmers not anything about Lisp. Further I could argue that
hiring smart people is a serious problem because they are fractious
and hard to manage and spend all their time contributing to studies
run by other smart people... and not doing boring jobs.

I also think that if the `dimension' of the underlying data is
enormous then simple measurements of one or two parameters are often
not useful, because they reduce an enormous space to one or two
numbers. Of course, people like to do this because a couple of
numbers (or, even better a single bit or information like `x is better
than y') are easy to understand whereas thinking about a huge complex,
probably ill-understood, space is not.

I think everyone in CLL would agree that the traditional single-bit
of information about lisp - `lisp is slow' is meaningless. Expanding it
to have some possible meaning - `implementation x of lisp is slower
than implementation y of language z' - we get something which I think
is closer to meaning but still has enormous problems - you'd have to
consider all implementations of all possible algorithms to solve all
possible problems (which halt, so there's a whole other problem), and
it's pretty clear that this is very close to meaningless again. So we
can do a further reduction `for solving specified problem x, then
implementaton y of Lisp is slower/faster/bigger/smaller than
implementation z of language q'. This is getting somewhere close to a
measurement one could do. But how much does it tell you about the
underlying question of whether Lisp is any good or not? Not very much,
I think.

And finally, I think that if the nature of the space is very
ill-understood, that attempts to reduce its dimension or, really, to
get any kinds of measurements at all, are likely to result in metrics
which aren't metrics or measure something very uninteresting. I've
posted before about the LOC `metric' which I think is just such a
bogus measure, especially when used for things like bugs per KLOC.
I've also posted before about how the space of programs is very
strange compared to the kinds of spaces where measurements are
successfully done - it's almost-everywhere discontinuous rather than
almost-everywhere smooth and linear. People understand linear systems
reasonably well, nonlinear systems tend to cause aeroplanes to fall
out of the sky and bridges to collapse: what are we going to say about
systems which aren't even continuous?

> You're right that there are *almost* no statistically significant results
> in this area. But you seem to be implying that we are forever doomed to
> this state of affairs by the laws of physics or something, but you're
> wrong about that. The only reason we don't have more statistically
> significant results in this area is that they require a non-trivial amount
> of work to produce, and seem to result in very little in the way of reward
> for those who bother to do the work.

No, I'm not saying that. I am saying that we don't understand enough
about computing systems to make many meaningful measurements, so far.
I hope we will understand enough, because I'd like computing systems
to be as reliable as bridges. This is more than non-trivial work -
labour is not enough to solve this, but definitely less than a
statement that it's not possible.

>
> > (I find the study convincing, personally).
>
> What did it convince you of?

It reinforced my belief that I shouldn't bother learning Java, I
guess.

--tim

Erann Gat

unread,
Feb 5, 2002, 12:34:59 PM2/5/02
to
In article <fbc0f5d1.0202...@posting.google.com>,
tfb+g...@tfeb.org (Tim Bradshaw) wrote:

> g...@jpl.nasa.gov (Erann Gat) wrote in message
news:<gat-040202...@eglaptop.jpl.nasa.gov>...
>
> > Statistical significance is not a matter of opinion, it is a mathematical
> > measure of the probability that a certain result could have arisen by
> > chance. I haven't actually carried out the calculations but just by
> > inspection it is clear that the statistical significance of at least some
> > of the results in my paper is in fact overwhelming, almost certainly
> > >0.99, where >0.95 is generally considered sufficient for publication in a
> > scientific journal. (Statistical significance of >X means there is less
> > than a 1-X probability that the results could have arisen purely by
> > chance.)
> >
> > Whether the results are reproducible or whether they are due to a factor
> > other than the language (like self-selection bias) is open to debate, but
> > the statistical significance of the results is not.
>
> I guess that's my disagreement. I think that if the underlying data
> is very poor or ill-controlled that any statistical significance you
> associate with it is fairly meaningless.

You really ought to read up on statistical significance. (Look it up on
Google.) You're displaying embarrassing ignorance. Statistical
significance is not the same thing as "interesting" or "important" or
"conclusive."

> It's like demonstrating that I like the coffee I like.

It's very different. That you like the coffee that you like is a tautology.

> In particular I think you'd have a hard
> time arguing convincingly the result measured anything about Lisp
> rather than about the people who contributed results or the particular
> problem. If I was to play devil's advocate, for instance I'd say that
> Lisp people are better, because smart people like Lisp, but they don't
> like it because of performance. So the results represent smart Lisp
> programmers not anything about Lisp.

That's mostly true, and I even say so about the paper. But 1) if you can
reliably obtain better results by using Lisp (and that's a big if) then it
doesn't matter whether it's the language or the people drawn to that
language who are the cause of those results. And 2) I believe that the
runtime results actually *are* a convincing demonstration that the
prejudice that "Lisp is slow" is in fact wrong.

> Further I could argue that
> hiring smart people is a serious problem because they are fractious
> and hard to manage and spend all their time contributing to studies
> run by other smart people... and not doing boring jobs.

Yes, I believe that's true too. Which is why I don't claim that my paper
shows anything even remotely like that everyone ought to use Lisp.

> I also think that if the `dimension' of the underlying data is
> enormous then simple measurements of one or two parameters are often
> not useful, because they reduce an enormous space to one or two
> numbers. Of course, people like to do this because a couple of
> numbers (or, even better a single bit or information like `x is better
> than y') are easy to understand whereas thinking about a huge complex,
> probably ill-understood, space is not.

Another way of saying the same thing is that people do this because it's
the only way to get a handle on certain very complex problems. Just
because the problem is simplified from its true nature doesn't in and of
itself mean that the results are meaningless. We don't fully understand
the mechanisms by which many drugs operate, for example. That doesn't
stop us from assessing the efficacy of drugs simply by doing studies where
we give the drug to some people, a placebo to others, and ask them if they
are feeling better.

> I think everyone in CLL would agreee that the traditional single-bit
> of informaton about lisp - `lisp is slow' is meaningless.

No, it's not meaningless, it's just (mostly) wrong. It is possible for a
language to be inherently slow because it does not allow the programmer to
provide information that would allow the compiler to produce efficient
code. Python, for example, lacks declarations, making it extremely
difficult or impossible to produce fast implementations.
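(By declarations I mean the kind of thing sketched below; the function is
invented for illustration, not taken from the study. Type and optimization
declarations can give a Lisp compiler enough information to open-code the
arithmetic on unboxed floats:)

(defun dot-product (xs ys)
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length xs) sum)
      (incf sum (* (aref xs i) (aref ys i))))))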

[snip]

> People understand linear systems
> reasonably well, nonlinear systems tend to cause aeroplanes to fall
> out of the sky and bridges to collapse: what are we going to say about
> systems which aren't even continuous?

It sounds like you're saying, "I can't imagine what we are going to say
about these systems because the problem seems overwhelmingly complicated
to me (despite the fact that I haven't done my homework on how similarly
complex problems are in fact solved in other fields, because the fact is
that airplanes and bridges are pretty damned reliable) so anyone who
claims to be making progress on this problem must be wrong."

> No, I'm not saying that. I am saying that we don't understand enough
> about computing systems to make many meaningful measurements, so far.

We can make lots of meaningful measurements. Whether we can draw
meaningful conclusions from those measurements is a different question.

> I hope we will understand enough, because I'd like computing systems
> to be as reliable as bridges. This is more than non-trivial work -
> labour is not enough to solve

It might not be sufficient, but it is necessary.

> > > (I find the study convincing, personally).
> >
> > What did it convince you of?
>
> It reinforced my belief that I shouldn't bother learning Java, I
> guess.

That's unfortunate.

E.

Wade Humeniuk

unread,
Feb 5, 2002, 1:39:48 PM2/5/02
to

"Tim Bradshaw" <tfb+g...@tfeb.org> wrote in message
news:fbc0f5d1.0202...@posting.google.com...

> g...@jpl.nasa.gov (Erann Gat) wrote in message
news:<gat-040202...@eglaptop.jpl.nasa.gov>...
> I guess that's my disagreement. I think that if the underlying data
> is very poor or ill-controlled that any statistical significance you
> associate with it is fairly meaningless. It's like demonstrating that
> I like the coffee I like. In particular I think you'd have a hard
> time arguing convincingly the result measured anything about Lisp
> rather than about the people who contributed results or the particular
> problem. If I was to play devil's advocate, for instance I'd say that
> Lisp people are better, because smart people like Lisp, but they don't
> like it because of performance. So the results represent smart Lisp
> programmers not anything about Lisp. Further I could argue that
> hiring smart people is a serious problem because they are fractious
> and hard to manage and spend all their time contributing to studies
> run by other smart people... and not doing boring jobs.

I am really going to have to disagree that smart people are fractious. The
best example is Feynman's books where he talks about working with Bohr et al.
on the atomic bomb project. They were anything but fractious. Because they
were smart (I do not think anyone would dispute that they were) the
bandwidth of communication was small and they thought and considered before
they spoke. I am also going to submit that being smart, learning and doing
new things, takes a great deal of humbleness and hence open-mindedness.

Wade


Will Deakin

unread,
Feb 5, 2002, 2:34:01 PM2/5/02
to
Wade Humeniuk wrote:
> I am really going to have to disagree that smart people are fractious.
...and by replying to this I put myself in the fractious and dumb camp...
> They were anything but fractious. Because they were smart (I do not think
> anyone would dispute that they were) the bandwidth of communication was
> small and they thought and considered before they spoke. I am also going
> to submit that being smart, learning and doing new things, takes a great
> deal of humbleness and hence open-mindedness.
Yes, I agree that smart people *tend* not to be fractious per se -- particularly
with their peers and when they are doing what they want to do. But this is not
my complete experience. I also don't think this is what Tim says. IME smart
people tend to be fractious in the face of stupid, interfering, ill considered,
misguided, badly thought out, disorganised management.

To use your example of Richard Feynman -- I think there are a number of
enlightening stories he tells about dealing with the military bureaucracy.
Particularly the one involving holes in fences.

:)w


Thomas F. Burdick

unread,
Feb 5, 2002, 3:00:46 PM2/5/02
to
g...@jpl.nasa.gov (Erann Gat) writes:

> In article <fbc0f5d1.0202...@posting.google.com>,
> tfb+g...@tfeb.org (Tim Bradshaw) wrote:
>
> > g...@jpl.nasa.gov (Erann Gat) wrote in message
> news:<gat-040202...@eglaptop.jpl.nasa.gov>...
> >
> > > Statistical significance is not a matter of opinion, it is a mathematical
> > > measure of the probability that a certain result could have arisen by
> > > chance. I haven't actually carried out the calculations but just by
> > > inspection it is clear that the statistical significance of at least some
> > > of the results in my paper is in fact overwhelming, almost certainly
> > > >0.99, where >0.95 is generally considered sufficient for publication in a
> > > scientific journal. (Statistical significance of >X means there is less
> > > than a 1-X probability that the results could have arisen purely by
> > > chance.)
> > >
> > > Whether the results are reproducible or whether they are due to a factor
> > > other than the language (like self-selection bias) is open to debate, but
> > > the statistical significance of the results is not.

This is true.

> > I guess that's my disagreement. I think that if the underlying data
> > is very poor or ill-controlled that any statistical significance you
> > associate with it is fairly meaningless.

This is also true.

> You really ought to read up on statistical significance. (Look it up on
> Google.) You're displaying embarrassing ignorance. Statistical
> significance is not the same thing as "interesting" or "important" or
> "conclusive."

The problem is you keep spouting the phrase "statistical significance"
as though it were anything but a mathematical artifact. If there are any
flaws in the experiment, then all the mathematical conclusions you draw
from it are at least as flawed. Stats only remove information, they don't add
any; any deficiencies in the data are only exaggerated.

Bulent Murtezaoglu

unread,
Feb 5, 2002, 3:23:52 PM2/5/02
to
>>>>> "TFB" == Thomas F Burdick <t...@apocalypse.OCF.Berkeley.EDU> writes:
[...]
EG> You really ought to read up on statistical significance. (Look
EG> it up on Google.) You're displaying embarrassing ignorance.
EG> Statistical significance is not the same thing as "interesting"
EG> or "important" or "conclusive."

TFB> The problem is you keep spouting the phrase "statistical
TFB> significance" as though it were anything but a mathematical
TFB> artifact. [...]

This is how things get out of hand in CLL sometimes. EG clearly gave
the textbook meaning of statistical significance. Unless I missed
some postings, he keeps spouting it because his usage of the term does
not seem to be conveying the (standard) meaning he intended it to. I
would not call it a "mathematical artifact" unless he went on a "fishing
expedition", comparing lots of numbers and only reporting the ones that
came out favourably. If that were the process then, yes, he would get
figures that satisfy the statistical significance criterion, because he
basically would have done a search on the event space in a manner that
guaranteed his finding the rare event. That would have been a
mathematical artifact.

All he is saying is that what he measured is highly unlikely to have
been caused by chance. He is _not_ saying that it could not have been
caused by a myriad of other possible flaws in the experiment. Again,
maybe I missed a couple of postings (it is also likely that I'm too
cranky today), but I find it very hard to understand why the semantics
of statistical significance are being debated here when EG's usage is
clearly correct.

cheers,

BM

Will Deakin

unread,
Feb 5, 2002, 4:12:54 PM2/5/02
to
Erann Gat wrote:
> We don't fully understand the mechanisms by which many drugs operate, for
> example. That doesn't stop us from assessing the efficacy of drugs simply
> by doing studies where we give the drug to some people, a placebo to
> others, and ask them if they are feeling better.
I really don't think this is correct. One measure of the effectiveness of
drugs is to look at whether people are still alive in five years' time...

>It sounds like you're saying, "I can't imagine what we are going to say
>about these systems because the problem seems overwhelmingly complicated
>to me (despite the fact that I haven't done my homework on how similarly
>complex problems are in fact solved in other fields, because the fact is
>that airplanes and bridges are pretty damned reliable)

And it sounds to me that you're saying "I don't like what is being said so I'll
patronise Tim and turn this into a flame war..."

A lot of bridges have fallen down over the last thousand years and a large
number of planes have fallen out of the sky over the last one hundred or so
years. To stop this, people have spent billions of pounds in looking at what
causes this. I'm not convinced that this is true of software development[1].
There are boards of inquiry that painstakingly look at why crashes and
engineering failures happen. Where is the equivalent for software systems? To
use the medical example: is there any research that look at, say 1000 projects
implemented in java or lisp and whether there was an overrun? Or whether what
was delivered worked? Or where are the journals dedicated to publishing these
results? Maybe it is *you* that needs to do the homework.

>> No, I'm not saying that. I am saying that we don't understand enough
>> about computing systems to make many meaningful measurements, so far.
>
>We can make lots of meaningful measurements. Whether we can draw
>meaningful conclusions from those measurements is a different question.

Surely this is an oxymoron. How can you make a meaningful measurement from which
you cannot draw meaningful conclusions?

I really liked the java vs lisp work you did. It is disheartening that there is
not more systematic work out there based on much larger samples, looking at much
more realistic metrics for software development.

Will

[1] I hesitate to call this 'software engineering' for this has as much to do
with engineering as Godzilla has as a demonstrator of the fine art of ballroom
dancing.


Daniel Barlow

unread,
Feb 5, 2002, 5:36:06 PM2/5/02
to
"Will Deakin" <aniso...@hotmail.com> writes:

> There are boards of inquiry that painstakingly look at why crashes and
> engineering failures happen. Where is the equivalent for software systems? To
> use the medical example: is there any research that look at, say 1000 projects
> implemented in java or lisp and whether there was an overrun? Or whether what

~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Does anyone even do that for civil engineering projects? Even as a
completely uninformed member of the public I hear enough[*] about time
and budget overruns in the construction of buildings, bridges,
railways and roads that I'm left with the impression that their
industry is scarcely any better than ours when it comes to project
planning.


[*] Mostly, admittedly, from Private Eye. But, still ...

-dan

--

http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources

Thomas A. Russ

unread,
Feb 5, 2002, 6:48:13 PM2/5/02
to
Daniel Barlow <d...@telent.net> writes:
> "Will Deakin" <aniso...@hotmail.com> writes:
>
> > There are boards of inquiry that painstakingly look at why crashes and
> > engineering failures happen. Where is the equivalent for software systems? To
> > use the medical example: is there any research that look at, say 1000 projects
> > implemented in java or lisp and whether there was an overrun? Or whether what
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Does anyone even do that for civil engineering projects? Even as a
> completely uninformed member of the public I hear enough[*] about time
> and budget overruns in the construction of buildings, bridges,
> railways and roads that I'm left with the impression that their
> industry is scarcely any better than ours when it comes to project
> planning.

My impression is that Will was referring to "overrun", as in "buffer
overrun" leading to an application crash, not a project time or budge
overrun. I suspect that the real answer to this is that such inquiries
are normally only conducted for life- or safety-critical applications.

I think it is rather similar in mechanical or civil engineering and
medical fields. For example, in transportation, it seems that there is
generally a much higher level of scrutiny for mass transit. Aircraft
crashes are subject to intense investigation, bus crashes to a lower
level, and automobile crashes to the least amount. Furthermore, nobody
really does an in-depth analysis of exactly why a lawnmower engine
suddenly stalls. One typically "reboots" the lawnmower and continues
with the task.

Although frustrating, most software is not really used in critical
situations. When there is a problem, such as with telephone switching
software causing major outages, there usually is a more detailed
(although not always public) investigation into the causes.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Erann Gat

unread,
Feb 6, 2002, 12:55:05 AM2/6/02
to
In article <xcvwuxr...@apocalypse.OCF.Berkeley.EDU>,

t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> The problem is you keep spouting the phrase "statistical significance"
> as though it were anything but a mathematical artifact.

Ah, so that's the problem, is it? Thank you for clearing that up.

E.

Will Deakin

unread,
Feb 6, 2002, 4:26:56 AM2/6/02
to
Daniel Barlow wrote:

> Does anyone even do that for civil engineering projects?

More than this the people I know who studied civil engineering did
stuff like "how projects go wrong" and "what to do about this".

> Even as a completely uninformed member of the public I hear about

> ... overruns in the construction of buildings, bridges,

> railways and roads that I'm left with the impression that their
> industry is scarcely any better than ours when it comes to project
> planning.

Hmmm. Not sure about this. There are a lot of bridges and houses and
railways and roads out there. To add a couple of anecdotes to this: my
experience of having had several houses built and of having seen computer
modules added to live systems would indicate that the methods used in
planning and building a house are much better behaved.

However, I now think I am suffering "death by analogy..."

:)w

> [*] Mostly, admittedly, from Private Eye. But, still ...

Surely this makes you misinformed ;)


:)w

Tim Bradshaw

unread,
Feb 6, 2002, 4:33:15 AM2/6/02
to
g...@jpl.nasa.gov (Erann Gat) wrote in message news:<gat-050202...@192.168.1.50>...

> > >
> > I guess that's my disagreement. I think that if the underlying data
> > is very poor or ill-controlled that any statistical significance you
> > associate with it is fairly meaningless.
>
> You really ought to read up on statistical significance. (Look it up on
> Google.) You're displaying embarrassing ignorance. Statistical
> significance is not the same thing as "interesting" or "important" or
> "conclusive."
>

Did I say it was? I will put what I was trying to say very briefly:
yes, your survey may have statistically significant results. But I
do not think this makes those results in any way useful. In fact I
think that your survey is useless other than as anecdote.

And yes, I know about statistics, thanks, I'm a physicist, you don't
get far doing experiments in physics without understanding some stats.

I'm not going to respond further in this thread.

--tim

Will Deakin

unread,
Feb 6, 2002, 4:33:34 AM2/6/02
to
Thomas A. Russ wrote:

> I think it is rather similar in mechanical or civil engineering and
> medical fields. For example, in transportation, it seems that there is
> generally a much higher level of scrutiny for mass transit. Aircraft
> crashes are subject to intense investigation, bus crashes to a lower
> level, and automobile crashes to the least amount.

I would argue that this depends -- or it does here in the UK. Bus and
car crashes, if somebody is injured or killed, are investigated quite a
lot, particularly if they happen more than once on a stretch of road.
More than this, somebody then comes along and digs the road up and
tries to do something about the problem!

> Furthermore, nobody really does an in-depth analysis of exactly why
> a lawnmower engine suddenly stalls. One typically "reboots" the
> lawnmower and continues with the task.

That is if the lawnmower then does start -- or if this has only
happened a couple of times. Otherwise you need to contact your
lawnmower hardware vendor

:)w

Siegfried Gonzi

unread,
Feb 6, 2002, 5:11:08 AM2/6/02
to
Bulent Murtezaoglu wrote:

> All he is saying is what he measured is highly unlikely to have been
> caused by chance. He is _not_ saying that it could not have been
> caused by a myriad of other possible flaws in the experiment. Again
> maybe I missed a couple of postings, (it is also likely that I'm too
> cranky today) but I find it very hard to understand why semantics of
> statistical significance is being debated here when EG's usage is
> clearly correct.

The Bayesian approach is an interesting alternative (classical statistics
is what textbooks always emphasize, but there is no clear reason why the
Bayesian approach could not be applicable):

Who was Reverend Bayes:
http://www.bayesian.org/

The kernel of the Bayesian approach:
http://bidug.pnl.gov/presentations/PEP/sld038.htm


S. Gonzi

Siegfried Gonzi

unread,
Feb 6, 2002, 5:26:58 AM2/6/02
to
Will Deakin wrote:

> Daniel Barlow wrote:
>
> > Does anyone even do that for civil engineering projects?
>
> More than this the people I know who studied civil engineering did
> stuff like "how projects go wrong" and "what to do about this".

I attribute every honor to B. Meyer (Eiffel) for his attempt to
introduce some form of formal engineering into software programming and
design. Though I am not a fan of Eiffel, nor am I addicted to object-
oriented programming (I find the object approach comprehensible, but I
find it awkward and counter-intuitive to use:
circle.read(file,parameter)).

But Meyer was the first I know of (notice I am not a computer scientist
and naively know some names which only occur from time to time in
magazines or other circles) who recognized that something must be wrong
with the attitude: "Look, I am a C programmer, look I am a C programmer,
look again I am a C programmer and hacker". As long as there are not more
Meyers out there, I believe that programming and especially software
engineering will always remain an insidious hazard.


S. Gonzi

Erann Gat

unread,
Feb 6, 2002, 11:49:50 AM2/6/02
to
In article <fbc0f5d1.02020...@posting.google.com>,
tfb+g...@tfeb.org (Tim Bradshaw) wrote:

> g...@jpl.nasa.gov (Erann Gat) wrote in message
news:<gat-050202...@192.168.1.50>...
> > > >
> > > I guess that's my disagreement. I think that if the underlying data
> > > is very poor or ill-controlled that any statistical significance you
> > > associate with it is fairly meaningless.
> >
> > You really ought to read up on statistical significance. (Look it up on
> > Google.) You're displaying embarrassing ignorance. Statistical
> > significance is not the same thing as "interesting" or "important" or
> > "conclusive."
> >
>
> Did I say it was? I will put what I was trying to say very briefly:
> yes, your survey may have statistically significant results. But I
> do not think this makes those results in any way useful. In fact I
> think that your survey is useless other than as anecdote.

"Anecdote" in the context of a discussion on how to interpret data *means*
"not statistically significant." (Look up "anecdotal evidence" on
Google.) So you have just very succinctly contradicted yourself. You
also continue to talk about significance as if it were a matter of opinion
(my "survey" *may* have statistically signficant results). It isn't.

BTW, the point here is not to argue that my study was useful. It may in
fact be useless (that *is* a matter of opinion). Personally, I think its
utility, while small, is non-zero, which puts it head and shoulders above
the pack. The point is that you don't seem to understand the distinction
between "statistically significant", "anecdotal", and "useful" as these
terms are defined in the context of scientific discourse.

> And yes, I know about statistics, thanks, I'm a physicist, you don't
> get far doing experiments in physics without understanding some stats.

I believe that.

> I'm not going to respond further in this thread.

Of that I am skeptical.

E.

Tim Bradshaw

unread,
Feb 6, 2002, 1:42:54 PM2/6/02
to
Bulent Murtezaoglu <b...@acm.org> wrote in message news:<87eljzl...@nkapi.internal>...

>
> All he is saying is what he measured is highly unlikely to have been
> caused by chance. He is _not_ saying that it could not have been
> caused by a myriad of other possible flaws in the experiment. Again
> maybe I missed a couple of postings, (it is also likely that I'm too
> cranky today) but I find it very hard to understand why semantics of
> statistical significance is being debated here when EG's usage is
> clearly correct.

(having said I wouldn't post again...)

I haven't tried to dispute the meaning of statistical significance. I
made an error early on by saying `significant' when I meant it in an
informal way. I think I have now made another error by saying
`meaningless' when I meant *that* in an informal way too, which seems
to have caused yet more convulsions.

Let me therefore be clear. I do not dispute the statistical
significance of the results. I *do* question whether these results
have any useful real-world meaning or significance at all.

There, now I'm really done.

--tim

Thomas F. Burdick

unread,
Feb 6, 2002, 2:28:28 PM2/6/02
to
g...@jpl.nasa.gov (Erann Gat) writes:

> In article <fbc0f5d1.02020...@posting.google.com>,
> tfb+g...@tfeb.org (Tim Bradshaw) wrote:
>
> > g...@jpl.nasa.gov (Erann Gat) wrote in message
> news:<gat-050202...@192.168.1.50>...
> > > > >
> > > > I guess that's my disagreement. I think that if the underlying data
> > > > is very poor or ill-controlled that any statistical significance you
> > > > associate with it is fairly meaningless.
> > >
> > > You really ought to read up on statistical significance. (Look it up on
> > > Google.) You're displaying embarrassing ignorance. Statistical
> > > significance is not the same thing as "interesting" or "important" or
> > > "conclusive."
> > >
> >
> > Did I say it was? I will put what I was trying to say very briefly:
> > yes, your survey may have statistically significant results. But I
> > do not think this makes those results in any way useful. In fact I
> > think that your survey is useless other than as anecdote.
>
> "Anecdote" in the context of a discussion on how to interpret data *means*
> "not statistically significant."

Interesting, that's not what my dictionary says. Nor, more
importantly, have I seen the word used that way among scientists.

> (Look up "anecdotal evidence" on Google.) So you have just very
> succinctly contradicted yourself. You also continue to talk about
> significance as if it were a matter of opinion (my "survey" *may*
> have statistically signficant results). It isn't.

I read that as him giving you the benefit of the doubt that you found
significance in your data. Your paper doesn't give figures for this,
but if you claim you found significance, people will tend to grant you
the conceit -- "you may well have found it". "May" because I haven't
seen it with my own eyes.

> BTW, the point here is not to argue that my study was useful.

Failing that, what was the point?

Thomas F. Burdick

unread,
Feb 6, 2002, 2:30:51 PM2/6/02
to
Bulent Murtezaoglu <b...@acm.org> writes:

> I would not call it a "mathematical artifact" unless he went on a
> "fishing expedition" compared lots of numbers and only reported what
> he did.

Well, that's the term often used to describe this situation (at least
in biology). Perfectly good analysis of the carefully collected data
of a flawed experiment yields mathematical artifacts.

Erann Gat

unread,
Feb 6, 2002, 6:06:20 PM2/6/02
to
In article <xcv665a...@apocalypse.OCF.Berkeley.EDU>,

t...@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> > "Anecdote" in the context of a discussion on how to interpret data *means*
> > "not statistically significant."
>
> Interesting, that's not what my dictionairy says.

Anecdote has a colloquial definition that means something like "an amusing
story." The word is typically not used as a noun in scientific
discussions but rather as an adjective as in "anecdotal evidence." See
e.g. http://www.tnp.com/encyclopedia/glossary/32/.

> Nor, more
> importantly, have I seen the word used that way among scientists.

It's not something that most scientists spend a lot of their time talking
about in day-to-day life.

> I read that as him giving you the benefit of the doubt that you found
> significance in your data. Your paper doesn't give figures for this,
> but if you claim you found significance, people will tend to grant you
> the conceit -- "you may well have found it". "May" because I haven't
> seen it with my own eyes.

Look at the graphs. There are clearly more than two standard deviations
of difference in some of the results, and the differences in the sizes of
the standard deviations are enormous despite the fact that N is relatively
large (>10). So, no, I haven't carried through the calculations, but it's
pretty clear that the probability that these differences arose purely by
chance is very small, which is what "statistically significant" means.
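(The back-of-the-envelope check I have in mind, with invented numbers and an
invented function name, nothing here is taken from the paper: given two group
means, standard deviations and sizes, a rough z statistic falls out directly,
and anything much above 2 corresponds to p well under 0.05 under a normal
approximation.)

(defun rough-z (mean1 sd1 n1 mean2 sd2 n2)
  "Approximate z statistic for the difference of two sample means."
  (/ (abs (- mean1 mean2))
     (sqrt (+ (/ (* sd1 sd1) n1)
              (/ (* sd2 sd2) n2)))))

;; e.g. (rough-z 220.0 80.0 14 510.0 200.0 16) => ~5.3, far beyond the 1.96
;; needed for significance at the 0.95 level.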

> > BTW, the point here is not to argue that my study was useful.
>
> Failing that, what was the point?

I believe I explained that in the part of the paragraph that you deleted.
I suggest you go back and read it again and pay particular attention to
the sentence that begins "The point is..."

E.

Paul Tarvydas

unread,
Feb 6, 2002, 7:17:04 PM2/6/02
to
Daniel Barlow <d...@telent.net> wrote in
news:874rkvft...@noetbook.telent.net:

> "Will Deakin" <aniso...@hotmail.com> writes:
>
>> There are boards of inquiry that painstakingly look at why crashes and

...


> Does anyone even do that for civil engineering projects? Even as a

Absolutely! For example, Canadian Engineers (I am one) wear an "Iron
Ring" on their pinkies. The material for those rings comes from a bridge
that collapsed nearly 100 years ago and is supposed to remind them of
their responsibility. Engineers put their seal on drawings and sign them,
accepting responsibility for faults in the design. Engineers are also
required to take a course in Law, especially Tort Law[+], as part of their
Engineering education.

Bridges just don't collapse very often anymore, so you don't see inquiries
very often. Designing bridges has become a repeatable process. The
process of ensuring that Architects and Engineers involved in bridge design
are legally responsible and are, hence, very careful, is also well-
understood (Law, accreditation of professions). When something does
happen, there is certainly an inquiry, for example, when the Montreal
Olympic Stadium started to fall apart (I believe that it turned out to be
caused by political corruption, rather than Engineering).

The materials and processes used in civil engineering are well understood
(if not, then they are considered to be part of Scientific R&D and are not
used in the regular Engineering process). In school, an Engineer learns to
look up formulae in handbooks and to apply them. If you specify / buy
bridge-building materials from a supplier, the supplier provides you with
charts and guarantees on how these materials react over a range of
conditions. The Engineer ensures that the parameters and guarantees for
all materials and design methods overlap correctly within the range of
application.

It's really quite boring and predictable, compared to programming.

IMHO, software development isn't even at the bridge-falling-down-and-we-
really-care-to-know-why stage yet[*][$].

We do, however, live in a society where most professions have been
formalized to a very high degree and have more history behind them than
software development. Laypeople (and software professionals) actually
believe in Science (the Scientific process and the handing over of
scientifically-gathered data to accredited Engineers). They believe that
the whole world runs on Science and don't get the difference between what
goes on in Engineering and what goes on in software development[**].

So, they (we) expect similarly predictable results in both fields
(Engineering vs. software development). The general populace expects that
we can just pull components off of a shelf and plug them together like a
stereo in a predictable, low-cost way, and guarantee the results. They are
genuinely surprised when we can't do that or when they see a properly
thought-out quote for some software development work. [True, I over-
generalize, but there are precious few examples I can think of, for
producing predictable software results. Building mundane web-sites in HTML
comes to mind as being close to an Engineering-like, repeatable development
act].

Yeah, civil projects have cost overruns, but you don't often see the
Engineers and Architects being dragged through lawsuits because their
designs were inadequate (which is certainly possible, under the Engineering
Acts), because it doesn't happen very often anymore. Engineers are
expected to carry liability insurance. Last time I called an insurance
agent and mentioned "software", they literally hung up on me.
Tradespeople, in an honest operation, don't "hack" on blueprints - they
call up the Engineer and request changes, if new information has arisen, or
if the requirements have changed.

pt


[+] Tort Law, oversimplified - if it's your design and it hurts someone,
you are legally responsible, regardless of intermediate agreements such as
various indemnifications, incorporations, etc.

[*] We (software designers) haven't even developed what I consider to be a
fundamental requirement for something to reach Engineering'hood - a
concrete method to draw instances of designs [UML ain't it - it's geared
towards drawing generalizations about possible designs, instead of
describing to the tradesmen where the bolts should go and what size they
should be]. What field of accredited Engineering does NOT utilize drawings
and concrete mathematics to express detailed designs?

[**] I'm not sure that it is legal to refer to software development as
Engineering, at least in Canada.

[$] Did the software developers who wrote the Airbus code actually sign
their work and accept full legal responsibility? (I don't know the
answer).

Kenny Tilton

unread,
Feb 6, 2002, 9:03:55 PM2/6/02
to

Paul Tarvydas wrote:

[lots of good stuff]

>
> IMHO, software development isn't even at the bridge-falling-down-and-we-
> really-care-to-know-why stage yet[*][$].

True, no outcries a la Enron or the Challenger, but when I look at what
caring programmers go on about I hear mostly "how the hell can we make
this easier?", not so much "check out this cool algorithm".

It reminds me of rock-climbing. Non-climbers don't realize that most of
what is going on is, understandably, "how can I do this without getting
hurt?"

You might say the profession has been in "really-care" mode since
Assembly language got created to make things easier.

>
> We do, however, live in a society where most professions have been
> formalized to a very high degree and have more history behind them than
> software development.

That's the key, I think. We are still trying to get the tools
(languages) right, never mind higher-order design. Tools-wise, jeez,
look how long it took to figure out that automatic GC was a good idea,
even though Lisp was there as an existence proof for thirty years. Give
us another thousand years, we'll get this sorted out.

>
> [*] We (software designers) haven't even developed what I consider to be a
> fundamental requirement for something to reach Engineering'hood - a
> concrete method to draw instances of designs

I like it. And I doubt one could come up with such a thing. So forget
that thousand-year job quote, we're doomed.

:)

kenny
clinisys

Will Deakin

unread,
Feb 7, 2002, 4:19:00 AM2/7/02
to
> Erann Gat wrote:

>> Thomas F. Burdick wrote:
>>> "Anecdote" in the context of a discussion on how to interpret
data >>> *means* "not statistically significant.
>> Nor, more importantly, have I seen the word used that way among
>> scientists.
> It's not something that most scientists spend a lot of their time talking
> about in day to day life.
Well, I have experience of scientists who spend much of their time
swapping stories -- but I'm not sure these are anecdotes since I
cannot find a web reference to prove this.

;)w

Will Deakin

unread,
Feb 7, 2002, 5:13:21 AM2/7/02
to
Bulent Murtezaoglu wrote:
> This is how things get out of hand in CLL sometimes.
Absolutely!

> EG clearly gave the textbook meaning of statistical significance.

This is then, according to the textbook[1], an argument related to
"category errors". Clearly words have different meanings in different
contexts. Thus the arguments arise from different people making a
series of statements -- perfectly correct in their own context -- but
with different meaning in the different context.

> He is _not_ saying that it could not have been caused by a myriad of
> other possible flaws in the experiment.

Sure. I think this is the real argument. And it is the basis of my
ongoing concern with the "software development process."

> (it is also likely that I'm too cranky today)

Try drinking more coffee with steamed milk. And maybe have a nice
toasted teacake or maybe a buttered scone. I'm not sure it'll help --
but I always feel happier.

:)w
[1] That is -- the philosophy textbook ;)

John Bane

unread,
Feb 7, 2002, 3:42:45 PM2/7/02
to
Wade Humeniuk wrote:

> I am really going to have to disagree that smart people are fractious.

The problem is not so much smart people, it's having smart people be
supervised by less smart people. The former are often viewed as a
threat by the latter.

Your example is in line with this - all the participants knew they were
doing completely new science, and they all knew they were hand-picked as
the best available to solve the problem, so they had a common goal and
worked toward it as peers.

This can be particularly bad in programming - would the average manager
prefer to supervise N Lisp hackers or (L x N) Java hackers (where L is
the Lisp productivity increase)? The perverse incentive of getting paid
more to manage more people just makes it worse.

- Bob Bane
ba...@removeme.gst.com

Paul Tarvydas

unread,
Feb 10, 2002, 12:07:59 PM2/10/02
to
Kenny Tilton <kti...@nyc.rr.com> writes:

[Strange - I only saw this reply in GNUS. Outlook, NetScape, XNews(under
windows) insist that this reply doesn't exist]

> That's the key, I think. We are still trying to get the tools
> (languages) right, never mind higher-order design. Tools-wise, jeez,

Actually, I'd argue that we're in "epicycle" mode, just like pre-Copernican
astronomy (cosmology) was. We *know* that what we're doing doesn't solve
the problem, but we're sure that the very next epicycle we add is going to
magically fix all of the problems and give us an order-of-magnitude increase
in productivity or reliability (a fun read is "Sleepwalkers" by Arthur
Koestler - it's about the people and personalities (what a wild and weird
bunch they were!) involved in the shift from pre-Copernican to
post-Copernican astronomy; think "software" as you read it).

OOP is a great example of epicyclic thinking. It's touted to solve all
sorts of problems, but, IMHO it has added more complication than it has
solved.

Electronics went through a huge increase in productivity when the "bus" was
invented and accepted by the masses (for example, the S-100 bus). All of a
sudden, electronics became easy (and so boring, that I exited and jumped
onto the software bandwagon :-).

OOP (well, any common software technique) has certainly not achieved that
sort of boost in software development.

We should be more critical about our techniques and recognize that adding
more epicycles - i.e. more languages, more features - ain't "working".
We've done it for many decades and we've hit an asymptote. In essence, CL
is as good as it's gonna get, within a few percent (CL encompasses all known
paradigms and useful features like GC). In the last OOPSLA Proceedings,
there's a paper wherein someone recognized that Java doesn't have macros as
powerful as those of CL, so they added them to Java. What's the point,
other than to let Java get closer to the same asymptote that CL is at? My
experience is that Java is already too bloated to use productively - I have
three 3-inch books purporting to describe its classes, yet I can't ever find
what I want.

The problem is that "textual programming" is maxed out. If you want
order(s) of magnitude better productivity instead of a few more percent,
then you have to stop adding epicycles and start thinking from the beginning
again. It's scary, but it's possible.

> > [*] We (software designers) haven't even developed what I consider to be a
> > fundamental requirement for something to reach Engineering'hood - a
> > concrete method to draw instances of designs
>

> I like it. And I doubt one could come up with such a thing. So forget
> that thousand-year job quote, we're doomed.

Actually, there's more hope than that. Our small consulting company has
been using our own compilable diagramming notation for some 10 years now
(and shipping real projects on 8031's to large financial institutions, with
it for at least 5 years). In the true spirit of reuse (unlike the NIH
spirit of reuse displayed by the designers of OOP languages :-), we stepped
back (way, way back, to before call-return was invented) and asked ourselves
"what did the electronics guys get right, and why can't we duplicate this
process in software?" (an old example snapshot of our Visual Frameworks(r) /
Drawware(tm) can be seen at http://www.tscontrols.com/pdf/vf.pdf). If we're
doing it, someone else is doing it, too - the meme is in the air.

pt

Thomas F. Burdick

unread,
Feb 10, 2002, 3:56:01 PM2/10/02
to
"Paul Tarvydas" <tarv...@attcanada.ca> writes:

> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> [Strange - I only saw this reply in GNUS. Outlook, NetScape, XNews(under
> windows) insist that this reply doesn't exist]

Even stranger, your message didn't contain it in the References line.
Maybe Kenny canceled the article, and you found it in weird
race-condition land? I still don't understand why it wouldn't be
mentioned in the References line.

Paul Tarvydas

unread,
Feb 10, 2002, 5:15:33 PM2/10/02
to
Possibly because I didn't use GNUS to reply - I pasted the newsitem from
GNUS into another reader and replied from there, probably losing semantic
info in the process...
pt


"Thomas F. Burdick" <t...@hurricane.OCF.Berkeley.EDU> wrote in message
news:xcvr8nt...@hurricane.OCF.Berkeley.EDU...

Kenny Tilton

unread,
Feb 10, 2002, 8:19:24 PM2/10/02
to

"Thomas F. Burdick" wrote:
>
> "Paul Tarvydas" <tarv...@attcanada.ca> writes:
>
> > Kenny Tilton <kti...@nyc.rr.com> writes:
> >
> > [Strange - I only saw this reply in GNUS. Outlook, NetScape, XNews(under
> > windows) insist that this reply doesn't exist]
>
> Even stranger, your message didn't contain it in the References line.
> Maybe Kenny canceled the article,

Nope. btw, I saw Paul's reply in my first and did not realize it also
came here, so here's my reply:

Paul Tarvydas wrote:
>
> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> [Strange - I only saw this reply in GNUS. Outlook, NetScape, XNews(under
> windows) insist that this reply doesn't exist]
>

> >That's the key, I think. We are still trying to get the tools
> >(languages) right, never mind higher-order design. Tools-wise, jeez,
>
> Actually, I'd argue that we're in "epicycle" mode, just like pre-Copernican
> astronomy (cosmology) was.

Interesting.

But OOP, p-shift or not, was touted/welcomed overtly as a new paradigm.
Prolog certainly is a different paradigm, as is constraint-logic
programming. It may well be that folks are thinking too small when
trying to break paradigms, but I would not characterize the community as
trying to tweak programming.

OTOH, I agree that Java-Python-Perl-Ruby are merely tweakish. It is nice
seeing folks escape from C++, and these new languages, as you noted, bring
along some of the advantages of CL, but their adherents are no threat to
Ptolemy (?).

>
> Electronics went through a huge increase in productivity when the "bus" was
> invented and accepted by the masses (for example, the S-100 bus). All of a
> sudden, electronics became easy (and so boring, that I exited and jumped
> onto the software bandwagon :-).

That's a great insight. An electronics know-nothing, I often marvel that
while electronics seems to me it should be every bit as challenging as
programming, in fact it seems to be utterly straightforward in that the
products always Just Work.

>
> OOP (well, any common software technique) has certainly not achieved that

> sort of boost in software development ...snip... I have


> three 3-inch books purporting to describe its classes, yet I can't ever find
> what I want.

exactly. the reuse thing never happened.

>
> The problem is that "textual programming" is maxed out.

Ahh, the hidden agenda! :) Actually, after reading this I googled
"visual programming" to remind myself about Prograph. I gotta tell you,
looking at /those/ diagrams made me wonder if the text-visual contrast
was really a fundamental dichotomy; the diagrams /felt/ familiar when
examined closely. But I should not draw conclusions before living with
VP for a while.

My question is, does depicting logic visually really change anything? I
see all the same details in the VP diagram, so... well, try that Q from
this angle: Would writing NL text in the form of diagrammed sentences
(there's a concept: the Gettysburg Address all diagrammed) change the
expressive process in fundamental ways? I can see how it would eliminate
ambiguities. Do you have experience reports such as fewer bugs, greater
reuse, etc from your success with your tool? Does it /feel/ different?
Did you have to learn to think differently? Did you catch yourself
programming the VP in the text paradigm and scold yourself? :)

> (an old example snapshot of our Visual Frameworks(r) /
> Drawware(tm) can be seen at http://www.tscontrols.com/pdf/vf.pdf).

Just curious: Did you get "wires" from Steele's thesis in which he
developed a CP language? Or is that metaphor more common than I realize?
Do you all have CP backgrounds?

> If we're
> doing it, someone else is doing it, too - the meme is in the air.
>

Speak of the devil: check out my Cell project sometime. (No link yet. I
am too busy with a start-up of my own to take time to get SourceForge
working for me, so I just email out ZIPs to interested folk.) Cells are
a text-based dataflow hack which works by what (I think!) you call wires
(my code "links" Cells in a user-used relationship).

An interesting advantage of Cells is that they make classes reusable,
because while the slots are fixed, their values for any given instance
can come from any arbitrary HLL formula, much like a spreadsheet cell.
That makes it very likely that a reasonably appropriate class can serve
my immediate purposes when I hit some novel requirement.
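
Roughly, the spreadsheet-cell idea in plain CL (a minimal sketch of the
concept only -- not the actual Cells code, and the names here are made up):

(defstruct cell value formula)

(defun cell-read (c)
  (if (cell-formula c)
      (funcall (cell-formula c))   ; recompute from the cells the formula reads
      (cell-value c)))

;; AREA is "wired" to WIDTH and HEIGHT through its formula:
(let* ((width  (make-cell :value 3))
       (height (make-cell :value 4))
       (area   (make-cell :formula (lambda ()
                                     (* (cell-read width)
                                        (cell-read height))))))
  (setf (cell-value width) 5)
  (cell-read area))   ; => 20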

Fun stuff, eh?

--

kenny tilton
clinisys, inc
---------------------------------------------------------------
"We have a pond and a pool. The pond would be good for you."
- Ty to Carl, Caddy Shack

Rahul Jain

unread,
Feb 11, 2002, 4:30:26 AM2/11/02
to
Kenny Tilton <kti...@nyc.rr.com> writes:

> while electronics seems to me it should be every bit as challenging as
> programming, in fact it seems to be utterly straightforward in that the
> products always Just Work.

I assume you haven't used a VIA chipset-based motherboard recently...

--
-> -/- - Rahul Jain - -\- <-
-> -\- http://linux.rice.edu/~rahul -=- mailto:rj...@techie.com -/- <-
-> -/- "I never could get the hang of Thursdays." - HHGTTG by DNA -\- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
Version 11.423.999.221020101.23.50110101.042
(c)1996-2002, All rights reserved. Disclaimer available upon request.

Marc Battyani

unread,
Feb 11, 2002, 5:46:31 AM2/11/02
to

"Rahul Jain" <rj...@sid-1129.sid.rice.edu> wrote in message
news:87sn88l...@photino.sid.rice.edu...

> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> > while electronics seems to me it should be every bit as challenging as
> > programming, in fact it seems to be utterly straightforward in that the
> > products always Just Work.
>
> I assume you haven't used a VIA chipset-based motherboard recently...

It's because chipsets like this are written in VHDL. At last, software
quality and reliability are coming to hardware :)

Marc
(who mostly writes in Lisp and VHDL ...)

Brian P Templeton

unread,
Feb 11, 2002, 7:56:27 PM2/11/02
to
"Paul Tarvydas" <tarv...@attcanada.ca> writes:

Interesting. What do you mean by ``"textual programming" is maxed out''?
(That is, what is your definition of textual programming, and why is
it going to be inadequate for large future advancements?)

>> > [*] We (software designers) haven't even developed what I consider to be a
>> > fundamental requirement for something to reach Engineering'hood - a
>> > concrete method to draw instances of designs
>>
>> I like it. And I doubt one could come up with such a thing. So forget
>> that thousand-year job quote, we're doomed.
>
> Actually, there's more hope than that. Our small consulting company has
> been using our own compilable diagramming notation for some 10 years now
> (and shipping real projects on 8031's to large financial institutions, with
> it for at least 5 years). In the true spirit of reuse (unlike the NIH
> spirit of reuse displayed by the designers of OOP languages :-), we stepped
> back (way, way back, to before call-return was invented) and asked ourselves
> "what did the electronics guys get right, and why can't we duplicate this
> process in software?" (an old example snapshot of our Visual Frameworks(r) /
> Drawware(tm) can be seen at http://www.tscontrols.com/pdf/vf.pdf). If we're
> doing it, someone else is doing it, too - the meme is in the air.
>
> pt
>
>
>

--
BPT <b...@tunes.org> /"\ ASCII Ribbon Campaign
backronym for Linux: \ / No HTML or RTF in mail
Linux Is Not Unix X No MS-Word in mail
Meme plague ;) ---------> / \ Respect Open Standards

Paul Tarvydas

unread,
Feb 15, 2002, 12:27:52 PM2/15/02
to
"Brian P Templeton" <b...@tunes.org> wrote in message
news:87sn87r...@tunes.org...

> Interesting. What do you mean by ``"textual programming" is maxed out''?
> (That is, what is your definition of textual programming, and why is
> it going to be inadequate for large future advancements?)

I'll try to keep this short, as it is off-topic (but I will gladly continue,
if prodded and I receive no complaints, modulo my being out of town for the
next couple of weeks :-).

It is clear that these are my opinions, yes? I don't present them as facts.

Textual programming, by my definition, is the thing we know - compilable
code represented as strings of text, e.g. Lisp, Java, C, Smalltalk, Perl,
OCCAM, etc, etc.

The thing I called graphical programming (which should be called Visual
Programming, but that word has been corrupted) is the act of creating
compilable code using pictures of some sort.

As for the rest of the question - why I think that textual programming is
maxed out - well, it's just my firmly held belief / intuition. I base this
belief on observations and experiences I've had in 20 years of building
compilers mixed with 17 years of running a software consulting company along
with a schooling in electrical engineering and physics.

The best I can do in a small amount of time, is to give you a few of my
observations - I hope that you don't find them too boring :-)...

- A picture is worth a 1,000 words - it's true. If you don't believe this,
then you won't buy the rest of my arguments. Quit reading now. I find it
hard to convince people of this mostly because they've only ever seen
software as text and can't imagine it being expressed in a graphical
notation (and when they do see a graphical notation, it's one that doesn't
do justice to the idea, e.g. UML and Prograph). A simple observation: when
someone hands me a .h file for some data structure, I grok it much more
slowly than when they just draw the data structure on a white board (no
matter how ambiguous the drawing notation). If you agree with this
observation, then you should be able to infer/admit that graphical notation
is more desirable than textual notation, for at least a class of problems
(even if you don't believe that a graphical notation is *possible*).

- Observation - we really do *want* to draw diagrams of software, we just
can't seem to suss out how. Every software design shop is filled with
whiteboards covered in diagrams (albeit, in an ambiguous notation).

- Observation of successful engineering disciplines - just about everyone
uses detailed diagrams, except us (software developers). If we truly
believed in reuse, we would be looking at other disciplines and stealing
ideas and processes from them.

- Corollary - the main reason we *don't* draw diagrams of software is that
we can't figure out how to retrofit what we already know (textual
programming) into a diagrammatic form. We shun what we don't understand
(it's amazing how many times I've been told that our graphical process
cannot even *work*, after I've just finished demonstrating working compilers
and code!).

- Observation of the electronics field. When the bus was invented, it
looked like a pessimization. The bus was clearly less efficient than
discrete logic, yet it resulted in huge net gains in productivity. The
desktop computer became feasible only after the "bus meme" spread throughout
the industry. We're due for one of those aha's in software.

- The observation that you can't draw sensible pictures of
badly-encapsulated software, followed by the observation that most software
is badly encapsulated because of the implicit CALL / RETURN protocol of
subroutines and methods (yes, "CALL Considered Harmful" :-) [What is needed
is the full space and time encapsulation afforded by "processes" without the
cost of multi-tasking kernels, preemption, etc., so that one could
encapsulate right down to the level of a single statement.]

- I believe that we need to break complex software systems into
fully-encapsulated components, which can be tested in a stand-alone manner.

- I believe that we need to be able to describe static connections between
components. The Java listener model, Windows ActiveX registry stuff, Jini,
sockets and ports are ways of describing dynamic connections between
components. This is useful for only one class of problem - that of creating
an application which can be arbitrarily changed at run time. Most
programming problems I encounter don't have this requirement - at worst,
applications need a way to update software in the field. This class of
problems - software that is designed to fill specific, slowly changing,
requirements - can be solved without the overhead of dynamic connection
registries. I argue, then, that this class of problems *should* be solved
without dynamic connection registries and should be expressed in a way that
communicates the Architects' and Engineers' design intentions (including the
static connections). In fact, most software today is built this way - e.g.
a CALL is a static connection between two modules (it just happens to be a
crummy, non-visualizable connection, which leads to poor encapsulation :-).

- I believe that the next major gain in software design will be the ability
to "wire up" designs using components and connections between components.
The leap will come when we find a notation for showing and compiling
connections.

- So, if you buy my argument that we need to make and express explicit
connections between components, you are faced with "how" to do it. (1) You
can try to express the connections as text - the obvious choice being a "net
list", e.g. Part1, pin1 -> part15, pin4; Part2, pin3, part4, pin5 -> part30,
pin2, part31, pin3, etc, etc. I've tried it. I conclude that the result is
an unreadable mess for any significantly-sized software project. (2) You can
draw the connections on a diagram. An obvious choice is the electronics
schematic model - boxes with named pins, lines (wires) between pins. I've
tried it, it's very expressive (esp. when you add hierarchy or hyperlinks
between diagrams). This leads me to conclude that text can't do what needs
to be done and pictures can, hence, I conclude that "text is maxed out".

- I have a small sample space of real applications that we have shipped,
that supports my conclusions (but, due to the small sample space, doesn't
"prove" its efficacy). We have a graphical tool that we use to build and
compile projects made up of hierarchical parts and connections. Parts are
implemented graphically as state machines (annotated with textual code
snippets) and as blobs of textual code. We use it to program 8031-based
VISA card (POS - point of sale) terminals. A typical app has a spec which
is O(600 pages) and devolves to about 300 software components. This
compiles and fits into approx. 128k of ROM needing some 32-64k of RAM.
Management understands the pictures. Gantt charts contain at least 2 bars
for each part (implement, test). The fact that the application is chopped
up into small, encapsulated parts, means that the implementation of the
parts is easy to estimate and that mis-estimates average out - our ability
to estimate the development schedule is better than when we try to estimate
a textually-built project. Integration is *way* easier, since integration
happens in the graphical domain and, believe it or not, much of the
integration happens before code is written (the graphical tool checks
consistency of the graphical design, hence, irons out most of the
first-order integration bugs). My conclusion is that this graphical
approach gives us a huge advantage in delivering reliable software (we don't
deliver it sooner, we just deliver it better within the same time frame).

pt


Kaz Kylheku

unread,
Feb 15, 2002, 2:55:21 PM2/15/02
to
In article <sybb8.145$oO....@news1.bloor.is>, Paul Tarvydas wrote:
>- I have a small sample space of real applications that we have shipped,
>that supports my conclusions (but, due to the small sample space, doesn't
>"prove" its efficacy). We have a graphical tool that we use to build and
>compile projects made up of hierarchical parts and connections. Parts are
>implemented graphically as state machines (annotated with textual code
>snippets) and as blobs of textual code.

I'd expect much more from a graphical programming language than just
the expression of state machines. State machines are too low an
abstraction, and can just as well be rendered in symbolic representations,
such as transition tables, or even grammar transformation rules.
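
For instance, a transition table needs no pictures at all (a tiny sketch,
with made-up states and inputs):

(defparameter *transitions*
  ;; (state   input  next-state)
  '((idle     start  running)
    (running  pause  paused)
    (paused   start  running)
    (running  stop   idle)))

(defun next-state (state input)
  (third (find-if (lambda (row)
                    (and (eq (first row) state)
                         (eq (second row) input)))
                  *transitions*)))

;; (next-state 'idle 'start) => RUNNING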

One thing I would want is to be able to treat pieces of the graphical
program as objects that could be easily analyzed and synthesized.
I'd like to be able to draw a few bubbles connected by lines, and have
it mean absolutely anything I want it to mean.

The reason a picture can say so much is because of context; a picture
doesn't contain much, but acts as a trigger of symbols that are in
your head. A picture is a macro, and you are the expander function.

I wouldn't want graphics to be the sole representation of a program.
Like text, these should just be a printed representation. And of course
the inner representation would have to be free of the graphical details.
For example, if some form is represented as a bubble on the screen,
I wouldn't want the attributes of that bubble, such as its B-spline
control points or line thickness, to be attributes of that form.

That would be idiotic, like treating the Lisp parentheses as attributes
of a list object, something that a list *has*.

Parentheses or curve control points are aspects of the printed
representation that must disappear when the object is swallowed into
the bowels of the programming system; only the bare, relevant abstraction
of that object must remain.

And anyway, you are missing a big point here. Text representations are
produced by the human and read directly (modulo a simple mapping through
a character encoding) by the implementation of the programming language. This
makes them a true printed representation that is actually read by
the machine.

A graphics programming language will not be this, unless the machine
actually scans the graphical representation. In reality, the machine will
be working on some alternate representation, which is -- guess what -- text.

And probably it is the kind of text that is chock full of irrelevant crap
that has nothing to do with the semantics of the program, such as the
placement of graphical elements and their attributes.

>first-order integration bugs). My conclusion is that this graphical
>approach gives us a huge advantage in delivering reliable software (we don't
>deliver it sooner, we just deliver it better within the same time frame).

My question would be, what do you do when you need to fix a bug in an old
release of the software, and then merge the bugfix to the latest baseline?

For that matter, how do you review differences when doing a pre-commit
check?

If I had a copy of your graphical code, what tools would I need to
send you a patch?

Raymond Toy

unread,
Feb 15, 2002, 4:43:15 PM2/15/02
to
>>>>> "Paul" == Paul Tarvydas <tarv...@attcanada.ca> writes:

Paul> - A picture is worth a 1,000 words - it's true. If you don't believe this,
Paul> then you won't buy the rest of my arguments. Quit reading now. I find it
Paul> hard to convince people of this mostly because they've only ever seen
Paul> software as text and can't imagine it being expressed in a graphical
Paul> notation (and when they do see a graphical notation, it's one that doesn't
Paul> do justice to the idea, e.g. UML and Prograph). A simple observation: when
Paul> someone hands me a .h file for some data structure, I grok it much more
Paul> slowly than when they just draw the data structure on a white board (no
Paul> matter how ambiguous the drawing notation). If you agree with this
Paul> observation, then you should be able to infer/admit that graphical notation
Paul> is more desirable than textual notation, for at least a class of problems
Paul> (even if you don't believe that a graphical notation is *possible*).

Paul> - Observation - we really do *want* to draw diagrams of software, we just
Paul> can't seem to suss out how. Every software design shop is filled with
Paul> whiteboards covered in diagrams (albeit, in an ambiguous notation).

I've used a (commercial) tool at work that lets you drop a bunch of
boxes of various functions and you draw lines connecting them together
and voila, you have a system simulation. This makes great pictures to
show people about how the system works.

It is a nightmare if you want to do something out of the ordinary or
something not expected. And dragging boxes around is much slower than
typing, and connecting the lines properly between the boxes is very
error prone. And then I have to configure the boxes' "state" or
"options" in some way by using a dialog box. And what about boxes
that do exactly the same thing? I have to set them each to the
appropriate value, and hope I did the right thing.

And after I've run the simulation, what exactly did it do? I have to
go into every single box and see what options I entered.

And then what do I do when it doesn't quite work? I draw a whole
bunch of other boxes to get debugging info. When it works, I have to
delete them or the simulations take much longer. But then the bug
shows up again and I have to remember how I set up those boxes to get
the info I needed. Oh boy.

Consider how this would have worked in your favorite language and
debugger.

This just plain sucks.

Maybe when the tools get better, this will work much better, but I'm
not going to hold my breath.

I think there's a reason why pictographic languages aren't really used
much anymore[1], and perhaps extending pictographs to programming is a
bit of a stretch.


Paul> - Observation of successful engineering disciplines - just about everyone
Paul> uses detailed diagrams, except us (software developers). If we truly
Paul> believed in reuse, we would be looking at other disciplines and stealing
Paul> ideas and processes from them.

Paul> - Corollary - the main reason we *don't* draw diagrams of software is that
Paul> we can't figure out how to retrofit what we already know (textual
Paul> programming) into a diagrammatic form. We shun what we don't understand
Paul> (it's amazing how many times I've been told that our graphical process
Paul> cannot even *work*, after I've just finished demonstrating working compilers
Paul> and code!).

I have to point to ASIC designers. Didn't they used to draw little
(big?) schematics for the chips? I don't think they do that anymore.
It's all VHDL now. There must be a reason for that.

(I'm not an ASIC guy, so don't really know, except that the ASIC guys
I DO know use VHDL.)

Ray

Footnotes:
[1] Except Chinese and Japanese and a few others, but I think even there,
the pictographs are getting simpler and simpler over time.


lin8080

unread,
Feb 15, 2002, 3:17:33 PM2/15/02
to
Paul Tarvydas schrieb:

>
> "Brian P Templeton" <b...@tunes.org> wrote in message
> news:87sn87r...@tunes.org...
> > Interesting. What do you mean by ``"textual programming" is maxed out''?
> > (That is, what is your definition of textual programming, and why is
> > it going to be inadequate for large future advancements?)

> - I believe that the next major gain in software design will be the ability
> to "wire up" designs using components and connections between components.
> The leap will come when we find a notation for showing and compiling
> connections.

> - So, if you buy my argument that we need to make and express explicit
> connections between components, you are faced with "how" to do it.

^^^^^^^^^^^^^^^^^^^^^^^^^

> (1) You
> can try to express the connections as text - the obvious choice being a "net
> list", e.g. Part1, pin1 -> part15,

> (2) You can
> draw the connections on a diagram. An obvious choice is the electronics
> schematic model - boxes with named pins, lines (wires) between pins.

> ...

> pt

... reading this I heard my radio playing something. This box uses
frequencies to transport information. You know, it's cheap, it works,
and frequencies begin at 0, going up to ...
So, why talk about pictures to transport information? Translate them
into frequencies and transport them to wherever you want. For nearby
single users (components) you can set them between 20 and 20 000 kHz, for
office rooms you can use infrared, for far distances you can use light.
Think about the radio: these techniques are well known and there is one
big advantage: it is only *real-time*, but you can store it anyway and
translate it to digits ...

stefan

(setq ton01 279)


Alain Picard

unread,
Feb 15, 2002, 6:51:53 PM2/15/02
to

[Apologies to the group: This post reads like a random walk through my neural processes.
You've been warned. :-)]


"Paul Tarvydas" <tarv...@attcanada.ca> writes:

> It is clear that these are my opinions, yes? I don't present them as facts.

Understood.

> - A picture is worth a 1,000 words - it's true. If you don't believe this,
> then you won't buy the rest of my arguments. Quit reading now.

I cheated. I don't believe it, but kept reading. ;-)

I agree that this might be the crux of the issue, however. But if a picture
is worth a thousand words, how many pictures is one word sometimes worth?

Humans are distinguished by their ability for language. Extremely
subtle shades of meaning, (nuances) can be conveyed by the choice of
one word over one of its many close synonyms. Can a picture do that?
[I think "Yes", but only in a vague "artistic" sense].

The problem is computers have no sense of artistry. :-) So what you
have to convey to them is _exactly_ what has to happen at every step.
Certainly we do not have a "visual language" yet sufficiently expressive
to do this; the interesting question is whether such a language is even
possible, and whether it would offer any advantages over textual representation.

Consider something like DEFMACRO. DEFMACRO lets a human teach a computer
a great deal about a new textual element. The arbitrariness of the computation
which can be performed during the macroexpansion makes it difficult for me
to imagine how a visual language could have this power. In hardware, the
visual elements are pre-fab components, with known properties, and the visual
tool knows how to aggregate them, and deduce the properties of the aggregate.
But isn't that the defining difference between software and hardware; the
lack of pre-fab components? Or are you saying that software as a whole would
see a massive productivity boost from working only with pre-fabricated components?

I think such an endeavour would fail, and here's why. Hardware components have
known properties because they are bound by the laws of physics. Software components
are bound by the cognitive load a human can bear. The closest software comes to
this level of modularity (presently) are standards (e.g. TCP/IP protocol, ANSI CL
definition, Microsoft Foundation Classes). As you can imagine, these "standards"
vary considerably in their stability, ease of use, and usefulness. Currently,
I see the trend in software to *destroy* such standards, not embrace them.

> - Observation - we really do *want* to draw diagrams of software, we just
> can't seem to suss out how. Every software design shop is filled with
> whiteboards covered in diagrams (albeit, in an ambiguous notation).

No -- we want to *explain* our *ideas* to our fellow programmers. As soon
as we see that they've "got it", we rub out the board and return to our
keyboards, because it's SO much easier to talk to the computer that way than
to manipulate pictures. [It would truly take a successful strong AI to
understand one of _my_ diagrams! ;-)]

> - Observation of successful engineering disciplines - just about everyone
> uses detailed diagrams, except us (software developers).

Certainly. But who said software was an engineering discipline? I think
it's more like novel writing, inventing new food recipes, and gardening.
No, I'm not being facetious.

BTW, I find your exposition of your current successes at software design
fascinating (if not convincing for the general case). Can you tell us more
about all this?

--
It would be difficult to construe Larry Wall, in article
this as a feature. <1995May29....@netlabs.com>

Kaz Kylheku

unread,
Feb 15, 2002, 7:23:30 PM2/15/02
to
In article <86d6z64...@gondolin.local.net>, Alain Picard wrote:
>
>[Apologies to the group: This post reads like a random walk through my neural processes.
> You've been warned. :-)]
>
>
>"Paul Tarvydas" <tarv...@attcanada.ca> writes:
>
>> It is clear that these are my opinions, yes? I don't present them as facts.
>Understood.
>
>> - A picture is worth a 1,000 words - it's true. If you don't believe this,
>> then you won't buy the rest of my arguments. Quit reading now.
>
>I cheated. I don't believe it, but kept reading. ;-)
>
>I agree that this might be the crux of the issue, however. But if a picture
>is worth a thousand words, how many pictures is one word sometimes worth?
>
>Humans are distinguished by their ability for language. Extremely
>subtle shades of meaning, (nuances) can be conveyed by the choice of
>one word over one of its many close synonyms. Can a picture do that?
>[I think "Yes", but only in a vague "artistic" sense].

A picture can only trigger a vague connotative meaning. When it has
a rigid structure designed to convey a specific denotative meaning,
it's no longer an ordinary picture. It is writing.

>The problem is computers have no sense of artistry. :-) So what you
>have to convey to them is _exactly_ what has to happen at every step.
>Certainly we do not have a "visual language" yet sufficiently expressive
>to do this; the interesting question is whether such a language is even
>possible, and whether it would offer any advantages over textual representation.

Sure we have such a language. Two sticks with a connection make the letter A.
There, that's a picture. :)

Why do we indent code? To achieve a nice graphical layout for us to read.

lin8080

unread,
Feb 15, 2002, 7:20:32 PM2/15/02
to
Kaz Kylheku schrieb:

> If I had a copy of your graphical code, what tools would I need to
> send you a patch?

:)
You simply choose the light blue color out of the clickbox and merge the
rest of the screen to black. Now place this small light blue colored
drop, which represents your patch, at exactly horizontal-line 1020
and vertical-line 766. That's all. Sounds easy, hmmm ?
/:)

But thinking about it, there are ways to do it. Sure, the question is,
what is better ? And I tend to answer: Letters are better. Or could you
imagine how these words would look in colors, in a picture, as an image ?
Brrr.

stefan


lin8080

unread,
Feb 15, 2002, 8:44:27 PM2/15/02
to
Raymond Toy schrieb:

> It is a nightmare if you want to do something out of the ordinary or
> something not expected. And dragging boxes around is much slower than
> typing, and connecting the lines properly between the boxes is very
> error prone. And then I have to configure the boxes "state" or
> "options" in some way by using a dialog box. And what about boxes
> that do exactly the same thing? I have to set them each to the
> appropriate value, and hope I did the right thing.

> And after I've run the simulation, what exactly did it do? I have to
> go into every single box and see what I options I entered.

Hm. Think about Java and the class files. Your boxes are like classes.
And just as in Java you manage some classes together, you clip some icons
together. When there is an icon you miss, just write a new one, or
when you don't want the whole icon-function, you can override it, just
like a class in Java. In other words, you make the classes visible, draw
lines, drag & drop icons, and build your structure or hierarchy.

> And then what do I do when it doesn't quite work? I draw a whole
> bunch of other boxes to get debugging info. When it works, I have to
> delete them or the simulations take much longer. But then the bug
> shows up again and I have to remember how I set up those boxes to get
> the info I needed. Oh boy.

And what about this: you can see where data moves inside your program,
you can edit the datapaths to other boxes, you can draw red frames
around things when something goes wrong. For big programs, this may be a
good help; otherwise you have to keep all of that in your brain, and when
an error occurs, you guess around and try and look (which may take some
days).

> Consider how this would have worked in your favorite language and
> debugger.

Sounds horrible. Who will learn those tons of symbols ? And every
implementation uses its own variants. And when you get a popular one,
like in the browser's upper line, there is a long patent number on it.

...


> I have to point to ASIC designers. Didn't they used to draw little
> (big?) schematics for the chips? I don't think they do that anymore.
> It's all VHDL now. There must be a reason for that.

But on the other hand, when I take a look at graphical development
environments, I can see that the text window gets smaller and smaller. And
that has a reason too.

> Footnotes:
> [1] Except Chinese and Japanese and a few others, but I think even there,
> the pictographs are getting simpler and simpler over time.

And as far as I know, Egyptian culture took a jump forward at the time
they started using their pictographs as letters.

So, what about a mixture ?

The example "save file to disk" is a good one for a pictograph, you can
drag and drop it inside the code, where it is necessary and in a popup
window you can edit the parameters, maybe as click boxes or as text,
what your arrow-disk-icon should do (put in a dialogbox, to say where
the file should go, or save it to the current path, or give that file a
textbased name). Things like that you see often in big applications, so
why not spend them some automatism ?

Even in Lisp there are standard () forms; following that way, you give them
a name and add them to the next update. Now you can draw a small icon for
that and (have to) use a graphical system (with a shell-like boxed
window). This may speed up program development, but you can also add
such standards to a good editor and get the same. Clicking on the
toolbar's save icon in my editor is very easy and quick.

stefan


Paul Tarvydas

unread,
Feb 16, 2002, 1:42:27 AM2/16/02
to

"Alain Picard" <api...@optushome.com.au> wrote in message
news:86d6z64...@gondolin.local.net...

> I cheated. I don't believe it, but kept reading. ;-)

This is not a good time to admit to a Canadian that you cheated :-).

> Humans are distinguished by their ability for language. Extremely
> subtle shades of meaning, (nuances) can be conveyed by the choice of
> one word over one of its many close synonyms. Can a picture do that?
> [I think "Yes", but only in a vague "artistic" sense].
>
> The problem is computers have no sense of artistry. :-) So what you
> have to convey to them is _exactly_ what has to happen at every step.

Aren't you comparing apples to Mona Lisas? The "language" we use to program
computers is a far cry from the language that humans are capable of and use
daily in communication with other humans. We take our natural language and
dumb it down heavily before trying to use it to control a computer.

Likewise, I'm suggesting - only - that we can use dumbed-down pictures as a
syntax for programming computers.

I checked and, technically, I did use the words "compile" and "diagrams"
together in my previous posting, although I might not have emphasized this
aspect enough.

Every diagrammatic element we use, except comments, has a well-defined
semantics and can be compiled, by a compiler, into executable code. [I'm a
compiler-guy at heart - if it doesn't compile, it ain't worth using].

To illustrate that it is at least *possible* to compile a diagram to code,
here is a very rudimentary model of very rudimentary picture-compilation.
Imagine a flow-chart language consisting of boxes and arrows. Boxes contain
snippets of textual code. Boxes compile into labelled PROGN's that contain
the code snippets. Arrows compile into GOTO's. Can you imagine writing a
successful compiler for this language? (Hint: don't say "no" :-). Even
this brain-dead graphical notation has some advantage - it makes control
flow visible - if you draw spaghetti code, the diagram will expose your
folly. Structured programming went to great lengths to rid the world of
evil control-flow in the textual domain. If they'd used pictures instead of
text, they would have achieved the same result, without needing to
black-list the poor GOTO statement, which does have its uses sometimes :-).
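
To make that concrete, here is a minimal sketch of such a compiler in
Common Lisp (a toy illustration only, not our actual tool): boxes become
labelled sections of a TAGBODY -- CL's labelled-PROGN-plus-GOTO construct --
and arrows become GOs.

(defmacro compile-diagram (&rest boxes)
  ;; each box is (label (code-snippet ...) next-label-or-nil)
  `(tagbody
      ,@(loop for (label code next) in boxes
              append (list* label
                            (append code
                                    (when next `((go ,next))))))))

;; A two-box flow chart whose arrow loops back to START:
(let ((n 0))
  (compile-diagram
    (start ((incf n)) check)
    (check ((when (< n 3) (go start))) nil))
  n)   ; => 3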

I may have given a wrong impression - when I said that textual programming
is maxed out, I wasn't implying that we should stop using text altogether.
We should stop trying to solve *all* problems with textual solutions and we
should explore alternative syntaxes (such as diagrams). We should use text
when it makes sense and not use text when it doesn't make sense.

Examples of bad graphical programming notations:

1) UML - semantics are ambiguous, can't be compiled to code in all cases
(UML use-cases are a good tool to use when dealing with icky wetware
problems like trying to extract requirements from customers, as long as you
don't believe that you will be able to compile the result). UML also has
the fault of trying to graphicalize every textual programming doo-hicky
known to man. The evidence suggests that our collection of textual
doo-hickies doesn't solve the real problem - coming up with a cute
visualization for each of them isn't going to make the fundamental problem
go away.

2) ProGraph - tries to create graphical entities for each low-level
programming construct, like "+". We don't need that, it doesn't bring
anything new to the table - text already does "+" perfectly well. We need
to use the power of diagrams to express things that we find hard to express
in text (like architecture, complex control flow, complex data flow,
hierarchy, etc).

3) VisualWorks, JavaStudio, LabView - a thin veneer of graphics pasted onto
text. Can't compose new diagrams from existing diagrams (no hierarchy).

4) Patterns - almost a good idea. Focus on the wrong thing - diagrams
should be used for specification not generalization.

> Certainly we do not have a "visual language" yet sufficiently expressive
> to do this; the interesting question is whether such a language is even
> possible, and whether it would offer any advantages over textual representation.

I claim that it is possible and that I've been using such a notation for
about a decade to do real work (sufficiently real to put bread on the table
for a small consulting company). I don't claim that the visual language
concept is as well-developed as other computer languages. You gotta start
somewhere, though.

> Consider something like DEFMACRO. DEFMACRO lets a human teach a computer

As much as I love Lisp and DEFMACRO, I think that you are focusing on an
epicycle. If you truly wish to examine a new idea, don't bring your old
baggage along and expect it to work, in a way that you are familiar with, in
the new paradigm.

I like DEFMACRO because it lets me express architecture in the textual
domain without sacrificing efficiency. Paul Graham claims that DEFMACRO
gives Lisp a programmable syntax, yet his examples (in OnLisp) don't really
create a new syntax so much as use the old syntax (function call syntax) to
express "heavy" concepts like searching, non-determinism, etc. He folds
complicated constructs back onto the "old" syntax - everything looks more or
less like a function call, even if it expands into a hoary whack of code.
If he were truly inventing a new "syntax", he would have come up with
something like LOOP - a bunch of keywords, new semantics, etc.
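
For instance, Graham's AIF from On Lisp is exactly this kind of thing - it
reads like a call to a function named AIF, yet expands into a LET plus an IF:

(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

(aif (find-if #'evenp '(1 3 4 7))
     (* it 10)   ; IT is bound to the value of the test
     :none)      ; => 40

No new surface syntax, just a heavier meaning behind an ordinary-looking form.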

I claim that an appropriate Visual Language starts where DEFMACRO leaves
off. The appropriate set of boxes, arrows, hierarchy, hyperlinks and text
snippets, allow you to express architecture in a meaningful (and compilable)
way. If you can compose diagrams using diagrams, you don't really need
DEFMACRO (yet).

The Visual Language thing is in its infancy. McCarthy (probably) didn't
envision how DEFMACRO would work when he wrote the Lisp 1.5 book.

> to imagine how a visual language could have this power. In hardware, the
> visual elements are pre-fab components, with known properties, and the visual
> tool knows how to aggregate them, and deduce the properties of the aggregate.

Good wording. I was trying to make this point when I talked about
encapsulation.

> But isn't that the defining difference between software and hardware; the
> lack of pre-fab components? Or are you saying that software as a whole would
> see a massive productivity boost from working only with pre-fabricated
> components?

We started out fully believing that this graphical notation would be the way
to build software from pre-fab components. A decade later, I have yet to
see a single (successful) instance of pre-fab usage.

What I have found is that the graphical paradigm is so powerful that I've
stopped wanting pre-fab. I just want to *look* at old designs and rip them
off (the True use of Patterns). Our tool is fairly pathetic in its
cut&paste abilities, but that seems not to matter. If I can't cut/paste the
piece of ripped-off architecture I want, I simply re-draw it (it only takes
a few minutes to draw a few boxes, yet they express a whole lot about the
design).

> I think such an endeavour would fail, and here's why. Hardware components have
> known properties because they are bound by the laws of physics. Software components
> are bound by the cognitive load a human can bear. The closest software comes to

Yes. And that's the problem. We would do a whole lot better if we
standardized the "physics" of software components. The electronics industry
exploded when they "standardized" the pessimized-concept of a bus.

Let me suggest that we standardize the physics of communications between
software components. That the communications be in the form of events and
that there not be an implied CALL/RETURN protocol (you send an event, you
move on, not waiting for a reply). Nice and simple. Lets you build just
about anything (including CALL / RETURN protocols when you need them).
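
In miniature (a sketch of the idea only, with made-up names): each component
owns an input queue, and SEND just enqueues an event and returns - no reply
is implied.

(defstruct component name (inbox '()))

(defun send (component event)
  (push event (component-inbox component))   ; enqueue and move on
  (values))

(defun run-step (component handler)
  "Drain COMPONENT's inbox, handing each queued event to HANDLER."
  (let ((events (nreverse (component-inbox component))))
    (setf (component-inbox component) '())
    (dolist (e events) (funcall handler e))))

A CALL/RETURN protocol can then be layered on top by sending a reply event
back, when (and only when) you actually need one.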

> this level of modularity (presently) are standards (e.g. TCP/IP protocol, ANSI CL
> definition, Microsoft Foundation Classes). As you can imagine, these "standards"
> vary considerably in their stability, ease of use, and usefulness. Currently,
> I see the trend in software to *destroy* such standards, not embrace them.

The truly reusable software components we have today are O/S'es (Windows,
MacOS, Linux, etc). Too bad that they are all buggy...

> No -- we want to *explain* our *ideas* to our fellow programmers. As soon
> as we see that they've "got it", we rub out the board and return to our
> keyboards, because it's SO much easier to talk to the computer that way than

Naaah. If you could quickly enter a diagram from the whiteboard into the
computer and then compile it, you would. You just can't imagine doing this,
because you haven't encountered a graphical syntax that is properly
compilable and convenient.

Here's how we work: we (a group of 2 or more senior anal-retentive software
architecture types) sketch loose ideas on the whiteboard for about half a
day. After that, our ideas start becoming more concrete. We bring in a
laptop and hook it up to a projector pointed at the blank wall in our
conference room (we used to have a picture hanging in just the wrong place -
it was moved). Someone sits down at the laptop and starts transcribing the
whiteboard ideas into the inflexible, yet compilable, graphical notation.
Everyone else watches the evolution of the diagram projected on the wall and
chimes in with comments and details. We continue to refine the diagram -
the thing we call Architecture - for at least a week, usually two. After
about two weeks, we can't stand doing this any more, and, more importantly,
the first-cut hierarchical decomposition of the application is complete. We
can't imagine breaking any leaf component down any further - every component
looks simple enough to implement as a state machine or as a blob of code
(not necessarily a state machine).

The estimate for the implementation of every component is on the order of 3
days or less (if an estimate for one component is more than 3 days, we begin
to worry that we haven't decomposed the architecture enough and go back to
the drawing board for a bit). Most estimates are wrong - half are too
short, half are too long - the deltas tend to average out. The law of
averages works because there are so many components (100's to 1,000's).

> to manipulate pictures. [It would truly take a successful strong AI to
> understand one of _my_ diagrams! ;-)]

My family recently acquired our first dog. We took it to dog training
class. In the end, we found out that the class was actually a
human-training class - we discovered the limitations of our dog and we
discovered how to manipulate the dog's actions within its limited domain.

Likewise, if there existed a diagram notation for programming computers that
you *believed* in, you would learn to live within that notation's
limitations. Your diagrams would begin to take the shape of the final
limited notation.

You've already gone through this process at least once. You've learned to
express your ideas in the limited computer programming language called Lisp.
It is an extremely stunted subset of human natural language. It even
contains highly unnatural concepts like LOOP (I once learned just how
unnaturally I have learned to think when I tried to give a simple disk
backup procedure to a secretary as a written set of instructions on a piece
of paper; I used loop's and if's in the document expressed as simple English
(if this happens, do that, etc.) - she had no clue what I meant and did
something utterly bizarre and unexpected, yet totally logical).

> Certainly. But who said software was an engineering discipline? I think
> it's more like novel writing, inventing new food recipes, and gardening.
> No, I'm not being facetious.

Well, (like my tween daughter would say), yeaaaah!

Software ain't an engineering discipline, but it bloody well should be.

Novel writing is a heck of a lot more like engineering than software is!
Take a novel writing course some day - you'll be surprised at how well
organized the process is, much tighter than software development. I just
finished a song-writing course and now I know how to crank songs better than
I can crank software (1.5 years at song-writing, 20 years at
software-writing).

We're not going to make any great strides in software productivity until we
transform it into a boring engineering function. Some of you implicitly
know how to engineer software and are doing it, like Gretzky just grok'ed
hockey and Woods just groks golf. But, most of us don't know how to
engineer software and we flounder and produce bug-ridden garbage.

> BTW, I find your exposition of your current successes at software design
> fascinating (if not convincing for the general case). Can you tell us more
> about all this?

Thanks for the compliment! It would help if you continued to ask questions.
I'm off to another continent on business for a couple of weeks - if I get my
email hooked up, I'll have nothing better to do than to write humoungous
theses about these concepts. If I don't get my email hooked up, you won't
hear from me for a couple of weeks :-).

Thanks
pt

Rahul Jain

unread,
Feb 16, 2002, 3:46:28 AM2/16/02
to
"Paul Tarvydas" <tarv...@attcanada.ca> writes:

> 2) ProGraph - tries to create graphical entities for each low-level
> programming construct, like "+". We don't need that, it doesn't
> bring anything new to the table - text already does "+" perfectly
> well. We need to use the power of diagrams to express things that
> we find hard to express in text (like architecture, complex control
> flow, complex data flow, hierarchy, etc).

Prograph has graphical entities for control-flow elements, not for
function elements. There is only one function element graphical
object, in which you type the name of the function you want that
``instance'' to be. Then the input and output handles on the box
change to reflect the actual inputs and outputs of the function.

> I like DEFMACRO because it lets me express architecture in the
> textual domain without sacrificing efficiency. Paul Graham claims
> that DEFMACRO gives Lisp a programmable syntax, yet his examples (in
> OnLisp) don't really create a new syntax so much as use the old
> syntax (function call syntax) to express "heavy" concepts like
> searching, non-determinism, etc. He folds complicated constructs
> back onto the "old" syntax - everything looks more or less, like a
> function call, even if it expands into a hoary whack of code. If he
> were truly inventing a new "syntax", he would have come up with
> something like LOOP - a bunch of keywords, new semantics, etc.

There's a reason why I prefer SERIES over LOOP, and it's because of
this ``feature'' of LOOP. You can't extend LOOP using Lisp, since a
LOOP's body isn't Lisp. SERIES is simply an addition of some
abstractions to Lisp, and can be extended using defun, et al. Also,
this means that you don't need to re-invent stuff already in Lisp,
like IF, COND, etc.
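
A rough sketch of the difference (this assumes Richard Waters' SERIES
package is loaded; SCAN, CHOOSE-IF, MAPPING and COLLECT-SUM are
operators from that package, and the example itself is only
illustrative):

  ;; LOOP: a keyword-driven sub-language, not ordinary Lisp forms
  (loop for x in '(1 2 3 4 5)
        when (oddp x)
          sum (* x x))                    ; => 35

  ;; SERIES: plain macros and functions that compose like any other Lisp
  (collect-sum
   (mapping ((x (choose-if #'oddp (scan '(1 2 3 4 5)))))
     (* x x)))                            ; => 35

The second version is built from forms you can wrap and extend with
DEFUN and DEFMACRO like anything else.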

> I claim that an appropriate Visual Language starts where DEFMACRO
> leaves off. The appropriate set of boxes, arrows, hierarchy,
> hyperlinks and text snippets, allow you to express architecture in a
> meaningful (and compilable) way. If you can compose diagrams using
> diagrams, you don't really need DEFMACRO (yet).

The point of DEFMACRO, I think, is that it lets you abstract over the
exact syntax of your code, allowing the intended semantics to be more
clear. What this would mean to a visual language is something I can't
answer.
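
In the textual domain, a minimal sketch of that kind of abstraction
(WHILE is not standard Common Lisp; the name is just illustrative):

  (defmacro while (test &body body)
    "Evaluate BODY repeatedly as long as TEST is true."
    `(do () ((not ,test))
       ,@body))

  ;; (let ((i 0))
  ;;   (while (< i 3)
  ;;     (print i)
  ;;     (incf i)))                       ; prints 0, 1, 2

The DO boilerplate disappears and the intent reads directly, without
inventing a whole keyword sub-language the way LOOP does.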

Håkon Alstadheim

unread,
Feb 16, 2002, 10:15:41 AM2/16/02
to
Raymond Toy <t...@rtp.ericsson.se> writes:

> >>>>> "Paul" == Paul Tarvydas <tarv...@attcanada.ca> writes:
>
> Paul> - A picture is worth a 1,000 words - it's true. If you
> Paul> don't believe this,

[ snip lots of good points against one specific graphical programming
tool]

> This just plain sucks.
>
> Maybe when the tools get better, this will work much better, but I'm
> not going to hold my breath.
>
> I think there's a reason why pictographic languages aren't really used
> much anymore[1], and perhaps extending pictographs to programming is a
> bit of a stretch.

[more snipped]


Your problems seem to me to be with the schematic/graphical tool being
a *visual* *only* tool. If the model is amenable to programmatic changes
either through an API or directly working on the model, most problems
go away. Think AutoCad with auto-lisp, or think about how it would be
if the model was stored in a text file which could be grepped and
cut-pasted and massaged like you do with old style source-files.

Then you could have it both ways.


To get back on topic, the model would obviously need to be stored as
lisp expressions.
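
Something along these lines, say (a made-up sketch; the component, pin
and function names are purely illustrative):

  (defparameter *design*
    '(component pos-terminal
       (part card-reader (pins txd rxd))
       (part printer     (pins data strobe))
       (wire (card-reader txd) (printer data))))

  ;; Because the model is plain data, tools can walk or rewrite it:
  (defun wires-of (model)
    "Return the WIRE forms of a model shaped like *DESIGN*."
    (remove-if-not (lambda (form)
                     (and (consp form) (eq (car form) 'wire)))
                   (cddr model)))

Grep, diff and cut-and-paste then work on the saved file just as they
do on ordinary source files, and so does any Lisp program you care to
write against it.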

--
Håkon Alstadheim, hjemmepappa.

Greg Menke

unread,
Feb 16, 2002, 10:35:31 AM2/16/02
to

> The thing I called graphical programming (which should be called Visual
> Programming, but that word has been corrupted) is the act of creating
> compilable code using pictures of some sort.

Why is a graphical approach somehow more expressive than a textual
representation? It makes for pretty pictures and dramatic demos, but I
wonder if it actually captures much more information than your basic
well-considered requirements meeting with people who know what they're
doing and are interested in getting things done.


> - A picture is worth a 1,000 words - it's true. If you don't believe this,
> then you won't buy the rest of my arguments. Quit reading now. I find it
> hard to convince people of this mostly because they've only ever seen
> software as text and can't imagine it being expressed in a graphical
> notation (and when they do see a graphical notation, it's one that
> doesn't

Sometimes a graphical representation is better, sometimes it isn't.
It depends on what you're making pictures of and the capacity of the
analysis & programming teams. Sometimes a project would be lots better
off if the effort put into pictures were instead put into
understanding the low level details of requirements & design.

> - Observation - we really do *want* to draw diagrams of software, we just
> can't seem to suss out how. Every software design shop is filled with
> whiteboards covered in diagrams (albeit, in an ambiguous notation).

True, but every time I've ever been around such diagrams, it's been in
a situation where we're using the pictures to develop an understanding
of the problem. That doesn't mean the most effective representation
of something is automatically graphical. Lots of electronics
engineering involves developing and working out equations, but
sometimes schematics are important too.

> - Observation of successful engineering disciplines - just about everyone
> uses detailed diagrams, except us (software developers). If we truly
> believed in reuse, we would be looking at other disciplines and stealing
> ideas and processes from them.

If we truly believed in re-use, we wouldn't re-invent notation and
terminology every 15 years or so; we'd create fundamental notation and
technique that would make existing practice inadequate. Short of that
goal, I'm all for improvement & refinement of current practice, but
let's not mistake that for real progress.


> - Corollary - the main reason we *don't* draw diagrams of software is that
> we can't figure out how to retrofit what we already know (textual
> programming) into a diagrammatic form. We shun what we don't understand
> (it's amazing how many times I've been told that our graphical process
> cannot even *work*, after I've just finished demonstrating working compilers
> and code!).

I'm sure graphical representations of programs can be made to work
fine in many cases. So can textual representations. I just don't see
how a picture is so fundamentally better than code. Presumably
someone could make a stupidly designed & unmaintainable set of
pictures just as quickly as a stupidly designed & unmaintainable chunk
of programming.

> - Observation of the electronics field. When the bus was invented, it
> looked like a pessimization. The bus was clearly less efficient than
> discrete logic, yet it resulted in huge net gains in productivity. The
> desktop computer became feasible only after the "bus meme" spread throughout
> the industry. We're due for one of those aha's in software.

True- I'd love a software "ah ha". Yet I don't think redefining 15
year old notation and inventing a new generation of graphical code
generator tools will go very far towards that goal. All the UML
design tools I've seen so far are the same old thing from a decade
ago, just with fancier pictures and capacity for greater numbers of
gizmos. The term "Software Engineering" is pretty much nonsense until
fundamental equations can be expressed relating to how software works.
Until then we're just trying out variations of documentation & source
code. As soon as you give me the software equivalent of E=IR based on
fundamental principles of physics, which I can use as a basis for
putting software together, I'll buy it.

Are graphical design techniques useful? I think in some
circumstances, yes. Are they a liability and a distraction from
important issues? Sometimes, yes. Sometimes a picture is just plain
insufficient and a few words can resolve the whole thing more
effectively.


>
> - I believe that we need to break complex software systems into
> fully-encapsulated components, which can be tested in a stand-alone
> manner.

Sometimes such practice is possible, sometimes not. Stand-alone
testing is great, I've always done it where I can. Full regression
testing? Absolutely. It's a great tactical advantage when you can do
it. This has always been true.


> - I believe that the next major gain in software design will be the ability
> to "wire up" designs using components and connections between components.
> The leap will come when we find a notation for showing and compiling
> connections.

Connections are no less and no more a part of the problem than data
formats, algorithms and debugging. Encapsulation is a nice idea, and
having it is often better than not. Even so, it's no more a panacea
than UML.


>
> - So, if you buy my argument that we need to make and express explicit
> connections between components, you are faced with "how" to do it. (1) You
> can try to express the connections as text - the obvious choice being a "net
> list", e.g. Part1, pin1 -> part15, pin4; Part2, pin3, part4, pin5 -> part30,
> pin2, part31, pin3, etc, etc. I've tried it. I conclude that the result is
> an unreadable mess for any significantly-sized software project. (2) You can
> draw the connections on a diagram. An obvious choice is the electronics
> schematic model - boxes with named pins, lines (wires) between pins. I've
> tried it, it's very expressive (esp. when you add hierarchy or hyperlinks
> between diagrams). This leads me to conclude that text can't do what needs
> to be done and pictures can, hence, I conclude that "text is maxed out".

One problem is that electronics is very different from software, and
something as simple as a line on a schematic is a very small part of
the consequence of the electrical connection. Each of those lines has
issues like susceptibility to noise, current capacity, resistance,
inductance, capacitance, and length, which <must> be
considered. Frequently all of these issues are not important for a
given wire, but often many of them are. Then you have to bear in mind
what the circuits on the ends of the wire are doing with it, and also have
to understand what all the other functionally related & geographically
related wires & things are up to. All this stuff has to be worked out
on a piecemeal and collective basis both in design and
implementation. A schematic is a nice way to summarize a circuit in
many situations, but it's only part of the story.

The thing that concerns me about UML as "source code" is you tend to
bet the farm on the notation in a way you can't back out of. If the
notation becomes inadequate or more likely, the runtime system can't
hack it, then you're stuck.

Perhaps after software design has actually evolved into engineering
after another 30 or so years, we'll end up with notation and practice
that can really handle the problem. UML doesn't seem much more than a
rehash of the systems analysis language of 15 years ago. Please tell
me how it or something like it is the next quantum leap. PLEASE. I'm
getting this stuff crammed down my throat right now and I'd love to
see it as worth more than it actually looks like it is. So far I'm not
getting much more out of it than more & different notation for
relationships and "Use Cases".



> - I have a small sample space of real applications that we have shipped,
> that supports my conclusions (but, due to the small sample space, doesn't
> "prove" its efficacy). We have a graphical tool that we use to build and
> compile projects made up of hierarchical parts and connections. Parts are
> implemented graphically as state machines (annotated with textual code
> snippets) and as blobs of textual code. We use it to program 8031-based
> VISA card (POS - point of sale) terminals. A typical app has a spec which
> is O(600 pages) and devolves to about 300 software components. This
> compiles and fits into approx. 128k of ROM needing some 32-64k of RAM.
> Management understands the pictures. Gantt charts contain at least 2 bars
> for each part (implement, test). The fact that the application is chopped
> up into small, encapsulated parts, means that the implementation of the
> parts is easy to estimate and that mis-estimates average out - our ability
> to estimate the development schedule is better than when we try to estimate
> a textually-built project. Integration is *way* easier, since integration
> happens in the graphical domain and, believe it or not, much of the
> integration happens before code is written (the graphical tool checks
> consistency of the graphical design, hence, irons out most of the
> first-order integration bugs). My conclusion is that this graphical
> approach gives us a huge advantage in delivering reliable software (we don't
> deliver it sooner, we just deliver it better within the same time frame).

This seems like a reasonable approach for some situations. Why is it
always a better approach than something more conventional? I've
shipped apps with 3" of design & requirement docs as the culmination
of years of effort, which actually end up relevant & useful years
later- to the point that people have actually <used> them. Even then,
I think it's only good insofar as not having it sucks even worse. Much
of the documentation is related to the sometimes very tedious (and
incompletely understood/evolving) algorithms & processes- some of
which we have control of, other we don't. Plain old, nasty, beastly
procedural definitions of grunt calculation with all the unpleasant
special cases and screwball properties that make real systems such a
pain in the butt to maintain year after year. How do you find UML at
dealing with this aspect of software?

If graphical & encapsulation techniques work for you, that's wonderful -
but are you sure its a universal solvent for developing any system,
even those where the principals are unwilling/unable to come to grips
with the ill-defined and perhaps contradictory aspects of their
problems? I think your techniques will fall apart at approximately the
same time as the more "conventional" ones do, leaving you in the
expected position of making sense of nonsense, kludging the things you
must in order to not disrupt the things you can do in a robust way.

Gregm
