One question is whether a form has an implicit progn. This is a clue
that it has statement nature. If it does not have an implicit progn,
but only a single value-returning form, that is a clue that it is an
expression.
One question is whether a form should be used for its value or not. The
forms when and unless should clearly not be used for their value. They
also have implicit progns. There is no point in a progn unless you have
other forms satisfying another clue to statement nature: their value(s)
are discarded and ignored.
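To make the distinction concrete, here is a small sketch (the function
names are invented for the example):
  (defun maybe-load (path)
    ;; statement nature: implicit progn, the value of format is
    ;; discarded, and the whole form is used for effect
    (when (probe-file path)
      (format t "~&Loading ~A.~%" path)
      (load path)))
  (defun pick-limit (fast-p)
    ;; expression nature: each branch is a single value-returning
    ;; form, and the whole if is used for its value
    (if fast-p 10 100))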
The if form is both statement and expression, and it is neither, because
it does not quite have either nature. As an expression, it is not a good
idea to leave out the alternate, even if you want that value to be nil.
As a statement, adding explicit progns can be somewhat annoying.
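A sketch of both capacities (the names and counters are made up):
  (defvar *ok-count* 0)
  (defvar *fail-count* 0)
  ;; if as an expression: the alternate is spelled out even though it
  ;; is nil
  (defun label-of (x)
    (if (symbolp x)
        (symbol-name x)
        nil))
  ;; if as a statement: each branch needs its own explicit progn
  (defun report (okp stream)
    (if okp
        (progn
          (incf *ok-count*)
          (format stream "ok~%"))
        (progn
          (incf *fail-count*)
          (format stream "failed~%"))))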
The general conditional cond also has some of this dual nature. It has
implicit progn, but returns values. Leaving out the final branch is a
known source of annoying bugs, whether it be the unexamined fall-through
case or an unexpected nil return value. Still, cond is the smartest of
the conditional forms I have seen in any number of programming languages:
a long chain of "else if" looks like whoever designed that excuse for a
language forgot to design it for more than either-or style cases. Also,
the verbosity of the conditionals in inferior languages in this style
gets in the way of the task at hand, so cond is the definite winner.
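The shape this takes, with an explicit final branch so nothing falls
through silently (the function name is invented):
  (defun magnitude-class (n)
    (cond ((minusp n)  :negative)
          ((zerop n)   :zero)
          ((< n 1000)  :small)
          (t           :large)))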
The specialized conditionals case (with ccase and ecase) and typecase
(with ctypecase and etypecase) cover common comparison cases that are
easy to optimize and whose similarities would be totally obscured by a
cond. E.g., a test for (<= 0 x 99) may be significantly more expensive
than a test for the type (integer 0 99). A case "dispatching" on a set
of characters may be implemented as a very fast table lookup that would
otherwise be hard to do as efficiently.
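Two small sketches of what this buys you (the function names are made
up; how much an implementation actually optimizes is its own business):
  ;; a type test that is often cheaper than the equivalent
  ;; (and (integerp x) (<= 0 x 99))
  (defun small-index-p (x)
    (typecase x
      ((integer 0 99) t)
      (t nil)))
  ;; dispatching on a set of characters, which a compiler may turn
  ;; into a table lookup
  (defun token-class (c)
    (case c
      ((#\Space #\Tab #\Newline) :whitespace)
      ((#\( #\))                 :paren)
      (t                         :other)))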
The one-branch conditionals when and unless might seem to be redundant,
but they communicate something that neither if nor cond can communicate:
A promise that there will not be any other alternatives, or that if its
condition is not met, nothing needs to be done. If it is merely an
accident that there is one branch now, which might change in the future,
do not use when or unless, because you are lying to yourself. Both when
and unless communicate an expectation. I think when and unless should be
used such that the condition is expected to be true, in which case when
evaluates the body and unless does not. In other words, unless says that
when the condition is false, that is an exceptional thing, but when says
that when the condition is false, that is nothing special. That is, you
should expect both (when (not/null ...)) and (unless (not/null ...)).
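A sketch of the two expectations (the function names are invented):
  ;; the condition is expected to hold; when it does not, nothing
  ;; needs to be done
  (defun maybe-report (count)
    (when (plusp count)
      (format t "~&Processed ~D items.~%" count)))
  ;; the condition is expected to hold; when it does not, that is
  ;; the exceptional case
  (defun check-count (count)
    (unless (plusp count)
      (error "Expected a positive count, got ~S." count)))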
There are no serious coding standards that apply to the rich set of
Common Lisp conditionals. I would, however, appeal to those who seek to
use them all to their fullest not to use if as a statement, but always as an
expression with both branches specified. Even if the alternate branch is
nil, specify it explicitly. (This may be read to imply that I think it
was a mistake to make the alternate branch optional, but it was probably
made optional because if looks so much like programming languages that
had a statement-only if.) While other programming languages may have an
if statement, e.g., C, Common Lisp's if is much closer in meaning to C's
conditional expression, ?:, which is usually not abused to chain a whole
lot of them together as you lose track of which condition must hold to
reach each expression. In that case, use cond or one of the case forms
as an expression, meaning: without using the option to include statements
with the implicit progn in each body form, and ensure that you specify
the value when none of the conditions or cases are met, even if it is nil.
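For instance (the function names are invented):
  ;; cond as an expression: one value-returning form per branch, and
  ;; an explicit value when no condition holds, even though it is nil
  (defun temperature-label (degrees)
    (cond ((< degrees 0)  :freezing)
          ((< degrees 25) :mild)
          ((< degrees 40) :hot)
          (t              nil)))
  ;; case as an expression, with an explicit otherwise value
  (defun vowel-p (c)
    (case (char-downcase c)
      ((#\a #\e #\i #\o #\u) t)
      (otherwise nil)))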
For the conditional statement, I would likewise appeal to those who seek
to use the most of the language to use when and unless when there is only
one known branch, but not use them for their value. However, there may
be an implicit alternate branch even when using when or unless: They may
throw or return or signal an error. I favor using unless for this, as
explained above about unless being somewhat exceptional in nature. When
there is more than one branch in the conditional statement, use cond.
In the statement capacity, I favor _not_ terminating it with a final
alternate branch unless that branch includes some statements of its own,
so as not to confuse it with an expression. A final alternate branch
like (t nil) tells me the value of the whole cond is used for something
and thus I need to look for the return value for each condition. To make
this abundantly clear, it may be prudent in a complex form to use values
forms explicitly, even if returning only a single value, but this, too,
may decrease readability if it is not obviously seen as useful.
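A sketch of both capacities (the function names are invented):
  ;; cond in its statement capacity: no final alternate branch, since
  ;; the value is not used for anything
  (defun log-event (event stream)
    (cond ((eq event :start) (format stream "~&started~%"))
          ((eq event :stop)  (format stream "~&stopped~%"))))
  ;; cond in its expression capacity, with explicit values forms to
  ;; make it abundantly clear that the value is the point
  (defun event-code (event)
    (cond ((eq event :start) (values 100))
          ((eq event :stop)  (values 200))
          (t                 (values nil))))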
Whereas most programming languages have a single conditional statement
and, if you are lucky, maybe a conditional expression, Common Lisp once
again proves itself as the language that communicates with programmers
and captures their intent through the choice of which of several forms to
use. Using only one form for a slew of different purposes is the same
kind of impoverished literary style that you find in bad authors, but if
you come from a language that has only one conditional, embrace the set
of options you have in this new language. It may even be true that in
the natural language of the programmer, even be it English, the set of
conditional expressions fits the single if-then-else form of programming
languages (the writing styles of many programmers may indicate that they
would appreciate a limited vocabulary so as not to feel alienated), but
then using Common Lisp would be an opportunity to improve both styles.
///
--
Norway is now run by a priest from the fundamentalist Christian People's
Party, the fifth largest party representing one eighth of the electorate.
--
Carrying a Swiss Army pocket knife in Oslo, Norway, is a criminal offense.
On Wed, 21 Nov 2001 02:45:42 GMT, Erik Naggum <er...@naggum.net> said:
[...]
EN> Even if the alternate branch is
EN> nil, specify it explicitly.
Precisely---if you omit the `else' part, you are not really
returning a value (even though the language is willing to make up
for it), so perhaps you want to use WHEN (or UNLESS).
EN> (This may be read to imply
EN> that I think it was a mistake to make the alternate branch
EN> optional, but it was probably made optional because if
EN> looks so much like programming languages that had a
EN> statement-only if.)
Maybe existing programs also had to be accommodated which had been
written before the introduction of WHEN and UNLESS and which
omitted the `else' part?
EN> While other programming languages may
EN> have an if statement, e.g., C, Common Lisp's if is much
EN> closer in meaning to C's conditional expression, ?:, which
EN> is usually not abused to chain a whole lot of them together
Or, if we include rarely used languages, isn't Common Lisp's IF in
fact closest to Algol's *if*? (As C's ?: cannot have the `else'
part omitted. Though I admit my memory fails me as to what happens
in e.g. ``i := if b then j''---a compile-time error?) In fact, off
the top of my head I should think that Algol 60 was the first
language where *if* names a conditional _expression_, a significant
difference from a conditional statement viewed essentially as an
abstraction hiding a couple of goto's. (I am primarily talking
about the use of the _name_ `if'; otherwise, I don't know if Algol
or Lisp was the first to have a conditional expression, as they
were developed roughly at the same time, but Lisp had only COND in
the beginning.)
One couldn't say, though, that Algol's *if* was not `abused in
chains' (if I may borrow the expression). By the way, there are
programmers who (in languages with C-like syntax) do not write
else-if chains, but rather stuff like
if (FOO1) BAR1;
if (FOO2) BAR2;
etc. where FOO1 and FOO2 will not be true at the same time. (Not
that I like this too much.)
---Vassil.
> made me think that this is not accurate. Lisp's if returns a value. The
> ifs of other programming languages are generally _statements_ that do not
> return a value. If they have a conditional that can return a value, it
> is usually very different from the usual if statement for one significant
> reason: It has to have a value for both the consequent and the alternate,
> while the traditional if forms have an optional alternate.
Is the latter true? I haven't used SIMULA for 16 years so I can't really
remember if the SIMULA "expression IF" worked that way, but certainly
the perl "expression if" does not work that way:
ev@wallace:~ % perl -we 'print "yes\n" if (1);'
yes
(perl's _unless_, which can't be mentioned often enough since some people
think lisp is the only language with such an animal, can of course also
be used as a "expression unless").
> One question is whether a form should be used for its value or not. The
> forms when and unless should clearly not be used for their value.
I strongly disagree. I think when and unless can be perfectly well used
for their value. In general I think the Common Lisp programmer should
be free to use any form for its value whenever that value can be given
a meaningful interpretation.
--
(espen)
> Erik Naggum <er...@naggum.net> writes:
>
> > One question is whether a form should be used for its value or
> > not. The forms when and unless should clearly not be used for
> > their value.
>
> I strongly disagree. I think when and unless can be perfectly well
> used for their value. In general I think the Common Lisp programmer
> should be free to use any form for it's value whenever that value
> can be given a meaningful interpretation.
The point is not about legality or conformity, but about the
communication of intent. Lots of things in Common Lisp can be done in
many different ways and since the programs are written more for humans
than for computers, the programmer should choose the way that most
clearly communicates the intent (just like some people _do_ quote nil
and keywords).
If you use when or unless for value, you rely on the fact that if the
test fails they will return nil and you will use it as a meaningful
value. But this value does not appear anywhere and does not make this
explicit. Attention to detail at this level is what makes programs
beautiful and readable and approachable by non-experts in Common Lisp.
--
Janis Dzerins
Eat shit -- billions of flies can't be wrong.
> Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:
>
> > Erik Naggum <er...@naggum.net> writes:
> >
> > > One question is whether a form should be used for its value or
> > > not. The forms when and unless should clearly not be used for
> > > their value.
> >
> > I strongly disagree. I think when and unless can be perfectly well
> > used for their value. In general I think the Common Lisp programmer
> > should be free to use any form for it's value whenever that value
> > can be given a meaningful interpretation.
>
> The point is not about legality or conformity, but about the
> communication of intent. Lots of things in Common Lisp can be done in
> many different ways and since the programs are written more for humans
> than for computers, the programmer should choose the way that most
> clearly communicates the intent (just like some people _do_ quote nil
> and keywords).
But intent is the product of a set of style rules plus a set of actions
under those rules. We have no style rule for determining whether the
number 42 will be returned; that is done by using a construct from the
use of which it is ambiguous as to whether it returns 42 or not, plus
actually reading the code and determining what it will do. Yet such
code is not blocked from clearly communicating whether 42 is returned.
Nor is it the burden of every programmer to exhaust every possible cue
with an associative nuance, so that if two constructs compute the same
value, there must be a difference in nuance between the two. It is a
legitimate style decision to simply say "I use WHEN if there is no
alternative, and UNLESS if there is no consequent." It is also a
legitimate style decision to code otherwise. What matters more is to
know the convention than for us all to have the same convention,
because on the latter point we will not agree and we will war endlessly.
And certainly no one's style rules should keep him/her from seeing the
possibility that others use different rules. Style rules are merely a
cue to help manage the probability of various choices being made. If
one blindly infers that they assert actual meaning, one will find oneself
substantially and rightfully hampered in the ability to debug code.
> If you use when or unless for value, you rely on the fact that if the
> test fails they will return nil and you will use it as a meaningful
> value. But this value does not appear anywhere and does not make this
> explicit. Attention to detail at this level is what makes programs
> beautiful and readable and approachable by non-experts in Common Lisp.
I believe your case would be better made by asserting "I prefer the style
rule that says: ..." Because that leaves room for others to choose their
own path without argument. Asserting a style rule as if it were true,
or canonical, or right, or otherwise uniquely determined invites opposition
to what should be a common sense statement if presented properly. Style
rules assert a possible and useful axis of consistency from among many
that are available to freely suggest.
Personally, my preferred style rule says not to omit values when they
are possible to provide and when a value is required. So I might use
(IF x y) when an IF is not value-producing but (IF x y NIL) if the IF
is value producing. But that doesn't mean that when I use (WHEN x y)
I am saying there is no value. I choose between IF and WHEN on the
basis of whether I think there is a reasonable chance I might want to
add an alternative. If I do, I sometimes prefer IF, whether or not I
am actually supplying an alternative at the time. When I use WHEN,
I usually either mean "this supplies no value" or "this supplies a value
which is either a meaningful true value, or a subordinate false value".
(Think of the vowel schwa in English.) So, for example, I might
commonly use:
  (defun get-property (x y)
    (when (symbolp x)
      (get x y)))
In such a place, I intend this to be read "the value of GET-PROPERTY is
the result of (GET X Y) when X is a symbol, but it really has no properties
otherwise". This is computationally the same as, but connotationally
different than:
  (defun get-property (x y)
    (if (symbolp x)
        (get x y)
        nil))
which says "I mean to define that the value of the Y property of a
symbol X is stored on its plist when X is a symbol, and is NIL
otherwise." The latter gives, to me, a stronger sense that I have
thought through the consequences of yielding NIL and that I think this
is the right value to return, while the former gives the sense that I
am merely saying "there really is no property list for non-symbols,
and you'll have to assume a default". These are highly subtle
matters, but that's what style rules do: convey subtlety. And to
understand that subtlety, one does not consult a central table of
uniquely determined subtlety, one consults the programmer (or
programmer team, when programmers happen to be of pre-agreed or
pre-ordained like mind) and then the code.
And, personally, I would rather convey an abstract subtlety, irrelevant
to the computation per se, but very relevant to the programmer, such
as the one shown above ("non-symbols don't have plists" vs "it works for
the values of non-symbol properties to be assumed to be NIL", the former
being a statement about the symbol, the latter a statement about the use
of the symbol's properties) than a concrete cue such as "this will get
used for value" or "this will not get used for value", because it conveys
information about how I _think_ about the program, rather than merely
information about how I perform the rather mundane act of translating
my thoughts into code.
But I emphasize that this is merely a choice on my part, and that it
will not by any absolute metric be possible to show it better or worse
than your choice.
To understand a wise style choice, one must understand wisdom. Wisdom
is not a "what" question ("what do I do?", which almost precludes
choice) but a "why" question ("why would I do these various things?"
and then a willful selection among consequences in a space that is
most commonly not subject to linear ordering).
The rules are delightfully Scheme-y, with the
difference being that in Scheme these are not "style
rules". One has no recourse but to follow the rules,
as one cannot rely on an absent branch resolving
to #f, or even #t, or indeed any value that you can
trust.
Looks like the CL community is about to
rediscover the value (!) of the unspecified. :-)
--d
> (defun get-property (x y)
>   (when (symbolp x)
>     (get x y)))
>
> In such a place, I intend this to be read "the value of GET-PROPERTY is
> the result of (GET X Y) when X is a symbol, but it really has no properties
> otherwise".
good example, and quite representative for the kind of context in which
I use the value of when myself!
--
(espen)
Perl is not a language worth comparing anything to.
| (perl's _unless_, which can't be mentioned often enough since some people
| think lisp is the only language with such an animal, can of course also
| be used as a "expression unless").
Just being postfix does not make it an expression.
* Erik Naggum
> One question is whether a form should be used for its value or not. The
> forms when and unless should clearly not be used for their value.
* Espen Vestre
| I strongly disagree. I think when and unless can be perfectly well used
| for their value. In general I think the Common Lisp programmer should be
| free to use any form for it's value whenever that value can be given a
| meaningful interpretation.
Well, unlike what a Perl hacker will expect,
(setq foo (when bar zot))
actually does modify foo when bar is false. I think using the one-branch
conditionals when and unless for value is highly, highly misleading.
However, if they are used in a function body as the value-returning form
and it is defined to return nil when "nothing happens", I think it is
much more perspicuous to have an explicit (return nil) form under the
appropriate conditions than let when or unless fall through and return
nil by default, which can be quite invisible.
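A sketch of the contrast (the function names are invented; inside a
defun the explicit return is spelled return-from):
  ;; letting when fall through: the nil for "nothing happens" is
  ;; invisible
  (defun handler-of (key)
    (when (symbolp key)
      (get key 'handler)))
  ;; the explicit version: the "nothing happens" case returns nil
  ;; visibly, under its own condition
  (defun handler-of* (key)
    (unless (symbolp key)
      (return-from handler-of* nil))
    (get key 'handler))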
* Espen Vestre
| good example, and quite representative for the kind of context in which
| I use the value of when myself!
This is a sort of half-breed between a statement and an expression. I
hate examples, because people attach too much meaning to their specifics,
but I was trying to draw a line between statements and expressions, which
I honestly assumed would be understood as directly value-returning forms.
I assume from your responses that you would _not_ write
(setq foo (when bar zot))
but _would_ write
(setq foo (whatever))
in the presence of
(defun whatever ()
(when bar
zot))
the latter of which does communicate "enough" to be defensible. However,
I find it very, very strange that you think this is so representative of
using when as an expression that you strongly disagree that when and
unless should not be used for their value. To me, that says that you
want to use when in let bindings and for arguments to functions, etc,
which I think is just plain wrong. Please let me know if you want to use
when in the usual expression positions, and thus make a distinction
between whole function bodies inside when and unless and using them for
smaller expressions.
Scheme is all wrong. Making nil different from false is nuts, and
letting forms have an unspecified value is also just plain wrong.
It is important that style rules be breakable. Enforcing them is wrong.
> Well, unlike what a Perl hacker will expect,
>
> (setq foo (when bar zot))
>
> actually does modify foo when bar is false.
Huh? That's exactly what a perl hacker would expect; in fact, s/he'd
also expect foo = zot whenever bar is true, and foo = bar otherwise.
*That* might be surprising to a lisp programmer, though :)
<ot>
Both conditionals and loop constructs usually have return values
in Perl; the conditional itself is returned whenever there is no
successful alternative. Perl's for/foreach are the exceptions here,
not the rule.
</ot>
--
Joe Schaefer
I think the big mistake in all this is thinking code should be readable.
It should not be. Not by non-experts, experts, or even the author. It
should be understandable. Anyone reading too fast to realize what when
does will not understand the code anyway.
JF's premise in that astonishing coding standards bit is that code is
read more often than it is written. Nah, the developer spends hours and
hours on code which may never be read by anyone. Even in large teams,
when code fails the pager of the author goes off. A few years later the
whole system gets rewritten from scratch.
Me, I do not have to understand my code because if I have to look at it
I rewrite it anyway.
If I cannot use when for its value, why don't we just dump the whole
language? functional programming right out the window. We can make it an
error for the when conditional to evaluate to false.
And what's this about AND and OR returning the last evaluated form's
value? Talk about something not being explicit. Let's get that sorted
out immediately and return t or nil, period.
<g>
kenny
clinisys
* Joe Schaefer
| Huh? That's exactly what a perl hacker would expect; in fact, s/he'd
| also expect foo = zot whenever bar is true, and foo = bar otherwise.
| *That* might be surprising to a lisp programmer, though :)
Heh, not at all, because if bar is false, foo will equal bar in Common
Lisp, too, although it is a slightly unusual way to look at it. Since
there is but one false value in Common Lisp, and Perl has a whole range
of them, I suppose there is Perlish sense in your version of this.
What I had in mind, however, was that Espen Vestre's example using a
postfix if would cause the statement preceding it _not_ to be evaluated
if the condition was false.
"Should not" is different from "cannot". Grasp this and relax, please.
> I assume from your responses that you would _not_ write
>
> (setq foo (when bar zot))
I grepped through some thousand lines of code and found only one case
of that pattern (so maybe I actually intuitively do avoid it?)
But to me, 'when' has a very strong "silent" "otherwise nil" to it, so
I have absolutely no problems reading a statement like that. (And
since I read 'when' that way, '(if bar zot nil)' is somewhat "noisy"
to me).
I really don't see the big advantage of reserving 'when' and 'unless'
to side-effects-only cases.
--
(espen)
Since "when" (and "unless") must return a value, why
not return a meaningful value? A return value of nil
currently does not imply that the "when" failed.
If we returned the value of the _test_, then the
result is meaningful and usable.
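Read literally, that might look like the macro below (when/test is a
made-up name; and since Common Lisp has only the one false value, a
failed test's value is nil anyway, so this only differs from plain when
in a language with a richer set of false values):
  (defmacro when/test (test &body body)
    (let ((val (gensym "TEST")))
      `(let ((,val ,test))
         (if ,val
             (progn ,@body)
             ,val))))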
--d
> * Dorai Sitaram
> | Looks like the CL community is about to rediscover the value (!) of the
> | unspecified. :-)
>
> Scheme is all wrong. Making nil different from false is nuts, and
> letting forms have an unspecified value is also just plain wrong.
I agree with the unspecified value part (which seems really weird for
a language with such functional aspirations, too). I'm not so sure
about false, though. I don't like the pun that () is the same as
boolean false. I also don't like that the empty list is a symbol.
Don't get me wrong, it's easy enough to cope with, but it's
conceptually sloppy. I don't even see any good reason why NIL should
be false. It's a symbol; every other symbol is true. Of course, if
#f were a special false value, I don't know what BLOCK should return,
so that's maybe a practical argument against it. Or, maybe it should
just return #F. I hate the idea of someone doing something like:
(cons 'foo (dolist ...))
so () would be a bad choice. If NIL were just another symbol, that
would be a foolish choice. So the NIL that's returned now must be NIL
in the boolean sense.
So, I guess what I'm saying is that you ought to explain your
objection here (or point to a message where you've explained it
before, since I'm sure you have).
--
.-----------------------.
| No to Imperialist war |
| Wage class war!       |
'-----------------------'
Ahem!
* (when 3 4)
4
* (when nil 5)
NIL
Cheers
--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.
"Thomas F. Burdick" wrote:
> I don't even see any good reason why NIL should
> be false.
Understood, but in this case the proof is in the pudding for me. By
which I mean, programming with nil as false is so terrific it must be
Deeply Right.
isn't there a funny essay somewhere about the consequences of porting
something from Lisp to Scheme, specifically about the problem of having
to then differentiate between nil and false? I recall assoc figuring
prominently in the piece.
kenny
clinisys
> "Thomas F. Burdick" wrote:
> > I don't even see any good reason why NIL should
> > be false.
>
> Understood, but in this case the proof is in the pudding for me. By
> which I mean, programming with nil as false is so terrific it must be
> Deeply Right.
But have you tried programming in a CL-like dialect where (), nil, and
boolean false were three different objects? Otherwise, that would be
somewhat like saying that lexical scoping was a bad idea because you
had so much more fun in Maclisp or Interlisp than in Scheme.
> isn't there a funny essay somewhere about the consequences of porting
> something from Lisp to Scheme, specifically about the problem of having
> to then differentiate between nil and false? I recall assoc figuring
> prominently in the piece.
Oh boy, how I hate when code depends on the pun between false and ().
Bleah. Would it really kill people to put a couple (not (null ..))s
here and there, and make their intentions more explicit?
Dorai Sitaram wrote:
> A return value of nil
> currently does not imply that the "when" failed.
Just to be sure I understand you, are you making the point that when
WHEN returns nil i do not know if the conditional or the consequent
returned nil? ie, it could be (when nil t) or (when t nil)?
To save an iteration at the risk of answering a straw man: that is an
interesting observation. But it should not matter; side-effects aside,
every form is a black box, the only thing a client cares about is the
result of evaluating the form, not its internals.
kenny
clinisys
> "Thomas F. Burdick" wrote:
> > I don't even see any good reason why NIL should
> > be false.
As to the choice of the symbol, I somewhat agree. It's tradition, but
one I could have done without. On the other hand, in my 20+ year
career doing some fairly intensive Lisp, I've found it a minor
irritation once or twice, but basically have never found it got in the
way in any seriously material way ... so I guess I just don't buy any
argument that it matters.
As to the choice of the empty list being false, I also don't see a problem
there and actually see some virtue. I think all too much fuss has been
made over this.
> Understood, but in this case the proof is in the pudding for me. By
> which I mean, programming with nil as false is so terrific it must be
> Deeply Right.
In the MOO programming language, which (I think by design) has a very
Lispy feel, a bunch of objects are false, including 0 and "" and {}.
Errors are also false. They make the mistake of making their type OBJ
(sort of vaguely like our standard class) be false, and that causes
problems. But the other values being false don't cause any problem to
learn and it's quite handy to have these various degenerate cases all
count as false. MOO is a byte compiled language and not overly
obsessed with efficiency; there's probably something good about being
able to do a truth test in one instruction, so maybe it's not awful
that we have a single false value. But even so, I think that, if
anything, we lose out by not having more false values, not by failing
to jump on the Scheme bandwagon of "everything is bad but the
explicit".
> isn't there a funny essay somewhere about the consequences of porting
> something from Lisp to Scheme, specifically about the problem of having
> to then differentiate between nil and false? I recall assoc figuring
> prominently in the piece.
Well, people often do (cdr (assoc ...)) in LISP, and it's true that assoc
returns false when it fails, which you can't cdr. But you also can't cdr
the empty list in Scheme. So it would be a problem anyway.
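In Common Lisp, by contrast, the idiom is safe even when the key is
missing, because assoc returns nil and (cdr nil) is nil (lookup is just
a name for the sketch):
  (defun lookup (key alist)
    (cdr (assoc key alist)))
  (lookup 'b '((a . 1) (b . 2)))   ; => 2
  (lookup 'z '((a . 1) (b . 2)))   ; => NIL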
I agree '() being false being nil is confusing at first. I just
remembered what it was like when I was first learning Lisp. My feeling
was, OK, they're making this sh*t up as they go along. But now it feels
so natural I had no idea what you were going on about--until I
remembered my early experience.
Which takes me back to my pudding proof. If it was Truly Confused (like
some things I have worked with) it would never feel natural. But it
feels natural, so the problem up front must have been with me.
Scheme worked from first principles when they "improved" on this. That
is always dangerous. We are not as smart as we think. Who says we have
the first principles right? One of my design rules says good design
changes lead to less code; if I find myself adding code such as (not
(null ...)) all over the place I back out the change. Schemers should
have done the same when they saw the consequences of their change.
I believe it was David St. Hubbins (in collaboration with Nigel) who
said, It's a fine line between clever and stupid.
kenny
clinisys
Someone may need to go beyond Boolean logic. Any three-valued or
multi-valued logic implementer would prefer to interpret NIL as
undefined or something else different from false.
Everyone who deals with database programming knows that AND, OR, and
other logical connectives have a three-valued interpretation in SQL. I
consider this quite natural. Several lisp modules for relational
database access (SQL/ODBC and the like) make the database's NULL be
converted into/from Lisp's NIL. In this use, NIL would be considered as
an undefined value rather than the false value.
--
Sincerely,
Dmitri Ivanov
www.aha.ru/~divanov
This implies that the current return value is not meaningful, a premise
with which I simply disagree. Although it would be nice if when and
unless were defined that way, I think it should be possible to return a
value from these forms, just that it needs to be a judicious choice.
Omigod. You actually consider:
  (when (find long-sought-thing input-stuff)
    <stuff to do>)
hatefully inferior readability-wise to a double-negative? You can read
double-negatives?! Tell your programmers you just failed the Turing
Test.
Seriously, I think NIL as false is Deeply Correct. Consider FIND. Your
position is that (find x y) returns two possible values, x or nil. Too
literal. FIND either finds x or it does not.
kenny
clinisys
I find it conceptually very clean. I find it equally conceptually clean
that 0 is false in C. Neither do I have any problems with Perl's false
values. I do have a problem with Scheme's anal-retentive false, because
it is so goddamn hard to obtain a false.
| I don't even see any good reason why NIL should be false.
There has to be at least one false value. Making the empty list false is
simply a very good choice for a language that gives you lists as a close
to primitive type. Linked lists have to be terminated by _something_,
and that something might as well be the answer to the question: "Are
there any more elements?" as a matter of pragmatics.
| So, I guess what I'm saying is that you ought to explain your objection
| here (or point to a message where you've explained it before, since I'm
| sure you have).
I have lost track of which objection you might be referring to here, but
I hazard a guess it is that I think making nil different from false is
nuts. It is nuts from a pragmatic point of view. Conceptualizations
that go against the pragmatic are even more wrong than Scheme in general,
but it is precisely because Scheme's conceptualization is impractical that
they have chosen to make their language impractical but conceptually
pure. I think Scheme is an excellent example of how you go wrong when
you decide that "practicality" is a worthless axis to find a reasonable
position on, or not even consider at all. Common Lisp is a practical
language and its conceptualization is one of trying to figure out what
the most elegant practical expression would be, not how impractical the
most elegant expression would be. This is just something Scheme freaks
will never accept as a point of serious difference between Common Lisp
and Scheme. I look at Scheme from a Common Lisp viewpoint, of course,
and that is not very productive, but Dorai Sitaram brought up this Scheme
nonsense as a reasonable way to talk about Common Lisp features. I think
I put it sufficiently clear when I started my reply with "Scheme is all
wrong". Those who do not think so are of course free to talk about
Scheme all they want, but the sheer _insistency_ that Scheme freaks come
to present their "views" about Common Lisp is really annoying. At least
they have their own community, so it is not as if they are fragmenting
the _Common_ Lisp community, but they also think they have a Lisp, and
that is at least as annoying.
Yes. Adding (not (null ...)) to previously obvious code would make
everything nearly unreadable. Besides, the intention _is_ to return the
empty list to those who look for a list and false to those who want a
boolean value. This is an important property of Common Lisp. Scheme
does not have this property, and consequently Scheme systems have a
gazillion small functions that are anal-retentively type-specific. One
of the reasons I like Common Lisp is that it has none of this nonsense.
> * Thomas F. Burdick
> | I'm not so sure about false, though. I don't like the pun that () is the
> | same as boolean false. I also don't like that the empty list is a
> | symbol. Don't get me wrong, it's easy enough to cope with, but it's
> | conceptually sloppy.
>
> I find it conceptually very clean. I find it equally conceptually clean
> that 0 is false in C. Neither do I have any problems with Perl's false
> values. I do have a problem with Scheme's anal-retentive false, because
> it is so goddamn hard to obtain a false.
Bleah, I guess this is just a matter of taste, then. I personally
think that false should have a type of boolean, not integer nor
symbol. And, honestly, I don't see anything about #F as different
from () and NIL that would make it hard to get, per se. When I'm
testing for the end of a list, I write (cond ((null foo) ...)). When
I'm testing for not-the-end, I write (cond ((not (null foo)) ...))
rather than (cond (foo ...)), because I want to be explicit about my
intentions. But I guess that shows that there's a lot of style issues
mixed in here.
[ As for Perl, you really don't mind the *string* "0" being false?!?! ]
> | I don't even see any good reason why NIL should be false.
>
> There has to be at least one false value. Making the empty list false is
> simply a very good choice for a language that gives you lists as a close
> to primitive type. Linked lists have to be terminated by _something_,
> and that something might as well be the answer to the question: "Are
> there any more elements?" as a matter of pragmatics.
Actually, I meant NIL qua symbol, not the empty list. Although, if I
had my druthers, () wouldn't be false either. I just don't see what's
so hard about writing (not (null ...)).
> | So, I guess what I'm saying is that you ought to explain your objection
> | here (or point to a message where you've explained it before, since I'm
> | sure you have).
>
> I have lost track of which objection you might be referring to here, but
> I hazard a guess it is that I think making nil different from false is
> nuts.
That would be it.
> It is nuts from a pragmatic point of view. Conceptualizations
> that go against the pragmatic are even more wrong than Scheme in
> general, but it is precisely because Scheme's conceptulization is
> impractical that they have chosen to make their language
> impractical but conceptually pure.
While I won't argue with that characterization of Scheme, I don't
think that separating () and false is choosing purity over
practicality. I think it's easier to read code that checks for NIL
explicitly, and I don't think that checking with the NULL predicate is
a particular hindrance.
> "Thomas F. Burdick" wrote:
> > Oh boy, how I hate when code depends on the pun between false and ().
> > Bleah. Would it really kill people to put a couple (not (null ..))s
> > here and there, and make their intentions more explicit?
>
> Omigod. You actually consider:
>
> (when (find long-sought-thing input-stuff)
> <stuff to do>)
>
> hatefully inferior readability-wise to a double-negative? You can read
> double-negatives?! Tell your programmers you just failed the Turing
> Test.
But (not (null ...)) is only a double negative when you're punning on
false and ()! NULL checks for end-of-list-ness. NOT negates. Do you
consider (not (zerop ...)) to be a double negative? I certainly hope
you don't write (null (symbolp foo)) to negate the boolean value
returned by SYMBOLP.
But, yes, my first reaction to your code above is that FIND returns a
boolean. Really, it returns a list. So the body of your WHEN is
being run when FIND returns a non-empty list, not when it returns
true. Imagine recasting it numerically:
  (when (not (zerop (count long-sought-thing input-stuff)))
    (do-stuff))
if that had been:
  (when (count long-sought-thing input-stuff)
    (do-stuff))
you would assume that COUNT was a predicate.
> Seriously, I think NIL as false is Deeply Correct. Consider FIND. Your
> position is that (find x y) returns two possible values, x or nil. Too
> literal. FIND either finds x or it does not.
No, my position is that there's a conceptual difference between the
empty list (), and the boolean false NIL. And that it's kinda weird
that false is a symbol.
None of this causes me much of a headache, though, and it's usually a
difference of a half a second or less to figure out that someone meant
false, not ().
And, so there's no misunderstanding, I don't mean to rehash this in
any sort of Common-Lisp-is-broken-because... or The Next Standard
Should Do... sort of way. I just don't think it's nutty to
distinguish NIL, (), and false.
> * Erik Naggum
> > Well, unlike what a Perl hacker will expect,
> >
> > (setq foo (when bar zot))
> >
> > actually does modify foo when bar is false.
>
> * Joe Schaefer
> | Huh? That's exactly what a perl hacker would expect; in fact, s/he'd
> | also expect foo = zot whenever bar is true, and foo = bar otherwise.
> | *That* might be surprising to a lisp programmer, though :)
>
> Heh, not at all, because if bar is false, foo will equal bar in Common
> Lisp, too, although it is a slightly unusual way to look at it. Since
> there is but one false value in Common Lisp, and Perl has a whole range
> of them, I suppose there is Perlish sense in your version of this.
Of course you're right; but I probably should point out that this
also applies to negated conditionals and loops as well (where the
returned conditional is something true):
% perl -wle '$a="cond"; sub f { unless ($a) {"cons"} }; print f'
cond
% perl -wle 'sub f { ($a="cond", "cons") until $a }; print f'
cond
> What I had in mind, however, was that Espen Vestre's example using a
> postfix if would cause the statement preceding it _not_ to be
> evaluated if the condition was false.
I know what you meant- I've just never seen anyone attempt
to write a "postfix if" in lisp before :). Now if you could
tell me whether or not a postfixed conditional declaration is
"legal" Perl ...
(please don't answer that- it's a rhetorical question :)
--
Joe Schaefer
Nonono, you are just _wrong_. :)
| And, honestly, I don't see anything about #F as different from () and NIL
| that would make it hard to get, per se.
It would mean that functions need to know whether they should return an
empty list or a boolean. This is the reason that the pun is useful, just
as 0 being false in C is useful. I actually find a specific boolean type
to be incredibly painful -- to use it, you would always need multiple
values. It is not unlike asking real people "Do you know the time?".
"Yes" is not a good answer. Just the time will do nicely, thank you. If
you had me test for, say, find returning true before I could use its
return value, I would probably write macros to circumvent such nonsense.
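Such a macro might be nothing more than a binding form along these
lines (if-found and its calling convention are invented for the sketch):
  (defmacro if-found ((var form) then &optional else)
    ;; bind VAR to the value of FORM and branch on its generalized
    ;; truth, so the found object is at hand without retesting
    `(let ((,var ,form))
       (if ,var ,then ,else)))
  (defun after-colon (line)
    (if-found (pos (position #\: line))
      (subseq line (1+ pos))
      line))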
| [ As for Perl, you really don't mind the *string* "0" being false?!?! ]
That Perl does type conversions east and west so you never actually know
what type of object you are dealing with is sometimes convenient par
excellence, but most of the time, it is just plain nuts. However, it is
actually useful in a number of languages. SQL, for instance, gets around
some of its syntax problems with this flexibility.
| Actually, I meant NIL qua symbol, not the empty list. Although, if I had
| my druthers, () wouldn't be false either. I just don't see what's so
| hard about writing (not (null ...)).
Well, it looks like really bad language design. Like Scheme. Failing to
have a function that turns something into a boolean directly is bad. If
you want to see if foo is a non-empty list, (not (null foo)) looks dumb,
but if you are expecting a list, (consp foo) is the right choice, in
which case you are well advised to look at typecase instead of cond.
| While I won't argue with that characterization of Scheme, I don't think
| that seperating () and false is choosing purity over practicality.
Well, conflating "nothing" and "false" has long traditions in both logic
and programming languages. I think objections to it come only from a
misguided sense of purity without a sense of history or continuity.
| I think it's easier to read code that checks for NIL explicitly, and I
| don't think that checking with the NULL predicate is a particular
| hinderance.
Well, if you used typecase and tested for null and cons and other types,
I would probably support you, but (not (null ...)) says "Scheme freak!"
to me, and in a horrible font, too. :)
> I strongly disagree. I think when and unless can be perfectly well used
> for their value. In general I think the Common Lisp programmer should
> be free to use any form for it's value whenever that value can be given
> a meaningful interpretation.
Well, Erik was talking about _style_. I know I couldn't keep my
lunch down if I saw something like
(setf foo (when (valid? bar) (blah))) ; Using implicit NIL as valid return
But I'd have no problems with
  (setf foo (if (valid? bar)
                (blah)
                nil))
I only use WHEN and UNLESS in things like
  (unless (valid? bar)
    (error 'invalid-bar-error))
To each his own, I guess...
--
It would be difficult to construe this as a feature.
        -- Larry Wall, in article <1995May29....@netlabs.com>
> Jeez, I use when and unless for their values all the time. But then I am
> an expert in lisp, I know what they return. :) I guess I am also an
> expert because nil and '() look identical to me.
>
That's funny, NIL and '() look totally different to me.
The former says FALSE, the latter says EMPTY LIST.
I'm very careful about their use. Thus
  (let ((bar '())
        (baz nil))
    ;; Here I fully expect BAR to be used as a list,
    ;; and BAZ as an atom.
    ...)
What's more, I used NOT to make that distinction. It's only with
some years of practise (i.e. with _expertise_, to use your word)
that I've discovered that that distinction matters.
YMMV, of course.
That depends on how you use it. I prefer (/= 0 ...) to double predicates.
| But, yes, my first reaction to your code above is that FIND returns a
| boolean. Really, it returns a list.
But find _does_ return nil or the object you asked it to find. It does
_not_ return a list, because it _cannot_ return a cons unless you have
asked it to find a cons, which is a statement about the specific call,
not about the function in general. (Functions like this that return nil
or cons are member and assoc.) The function position also returns nil or
the position where it found something, not a cons, so its notion of true
is simply "non-nil".
I maintain that it would not make sense to make nil or () not be false
without making every other object also not be true. If punning on false
is bad, punning on true is also bad. If we have a pure boolean type, we
would need to rethink this whole return value business and consider pure
predicates, not just for things like null to see if a list is empty, but
it would also be wrong to give it a non-list. So this is not a question
just of (not (null ...)), but of (and (listp ...) (not (null ...))),
which is again what (consp ...) does more efficiently, so the whole (not
(null ...)) thing is kind of moot. It works today _because_ of the pun
on true and false. It would not make sense to use null on non-lists in a
strict boolean-type universe, because boolean would not be the only type
to be treated anal-retentively.
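For instance (first-or-default is just a name for the sketch):
  (defun first-or-default (list default)
    ;; consp answers "is this a non-empty list?" in one test, which is
    ;; what (and (listp list) (not (null list))) spells out by hand
    (if (consp list)
        (first list)
        default))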
| So the body of your WHEN is being run when FIND returns a non-empty list,
| not when it returns true.
I think you should look up what find does and returns, now...
| Imagine recasting it numerically:
|
| (when (not (zerop (count long-sought-thing input-stuff)))
|   (do-stuff))
|
| if that had been:
|
| (when (count long-sought-thing input-stuff)
|   (do-stuff))
|
| you would assume that COUNT was a predicate.
But this carefully elides the context in which nil = () = false makes
sense. In a language where you return pointers or indices into vectors,
0 = NULL = false makes sense. In such a language, you could not have a
function position that returned the position where it found something,
because 0 would be both a valid position and false.
| No, my position is that there's a conceptual difference between the empty
| list (), and the boolean false NIL. And that it's kinda weird that false
| is a symbol.
Since there _is_ no "false" concept as such, but rather "nothing" vs
"thing" that supports a _generalized false_, I think the conclusion only
follows because you think the type boolean is defined differently than it
is. In fact, the boolean type is defined as the symbols t and nil.
(Please look it up in the standard.)
| None of this causes me much of a headache, though, and it's usually a
| difference of a half a second or less to figure out that someone meant
| false, not ().
I think this betrays a conceptual confusion. If you realized that they
are in fact exactly the same thing, you would not need to think about it,
but since you think they are something they are not, it takes think time
to undo the confusion. It is much smarter to fix the confusion at its
core and realize that you cannot get what you want because what you
want is not within the tradition that produced Common Lisp.
Whew! You had me nervous there for a second.
"Thomas F. Burdick" wrote:
>
> But (not (null ...)) is only a double negative when you're punning on
> false and ()!
Touche! But this confirms you are composed of logic gates.
> Do you
> consider (not (zerop ...)) to be a double negative?
Zero is negative? :) Well I cannot understand even singly-negated tests.
The only time I use NOT is for:
  (if (not <test>)
      'no
      <50 lines of code>)
> I certainly hope
> you don't write (null (symbolp foo)) to negate the boolean value
> returned by SYMBOLP.
If I have to choose between NULL and NOT I indeed use NOT against
predicates and NULL against existence, but this only happens because I
have for some reason been cornered into coding one or
another of those. But (not (null <test>)) not only obfuscates, it adds
two keywords where none were needed.
> No, my position is that there's a conceptual difference between the
> empty list (), and the boolean false NIL.
Buddha taught that in the beginning was the void, and the void became
nothing and something, and the something became the multitude. So
nothing, false, empty are all the same while true comes in many forms.
After we come to agreement on this let's go over to comp.lang.c and get
everyone to agree on where to put {}s. :)
kenny
clinisys
> > (setq foo (when bar zot))
>
> I grepped through some thousand lines of code and found only one case
> of that pattern (so maybe I actually intuitively do avoid it?)
After sleeping on it, I think the actual reason for the fact that such
forms are rare even if you're comfortable with using WHEN for its
value, is that in contexts where you SETF a variable which has NIL as
one of its possible values, that variable is very frequently already
_initialized_ with NIL. And that means that (when bar (setf foo zot))
is the right form... In fact I begin to suspect that the underlying
reason for this style rule may be that in good code, WHEN will
"naturally" have its most dominant usage for side effects. And this
statistical observation has then been simplified to a style rule which
disturbs me because it tries to introduce a classification of lisp
forms into two classes, a classification which I consider rather
unlispish (and I haven't touched scheme for the last 13 years, in case
you start to wonder ;-))
"All lisp forms are equalp and their values are all valuable" :-)
--
(espen)
> Well, if you used typecase and tested for null and cons and other types,
> I would probably support you, but (not (null ...)) says "Scheme freak!"
> to me, and in a horrible font, too. :)
Well, it's not something I do often -- I'm actually very prone to using
etypecase. I just grepped through several hundred lines of code (and
a Lisp compiler, no less, so it makes unusually heavy use of lists),
and found about .6 instances per hundred lines of "(not (null ", which
is about what I expected. The vast majority of the time, consp or
typecase is the right thing to do, and it looks like I usually use
(not (null ...)) when I'm expecting the value to be nil. As in:
  (defun foo (list bar)
    (if (not (null list))
        (do-something #'foo list bar)
        (let ((qux ...))
          (do-the-rest-of-the-function))))
Maybe other people's coding style involves cases where this test would
come up more often? I don't know. Or, maybe it's because I learned
Scheme first. Ah, well, surviving habits from scheme, maybe, but not
freakiness.
> "Thomas F. Burdick" wrote:
> >
> > But (not (null ...)) is only a double negative when you're punning on
> > false and ()!
>
> Touche! But this confirms you are composed of logic gates.
Crud, I thought those were neurons in there.
[...]
> If I have to choose between NULL and NOT I indeed use NOT against
> predicates and NULL against existence, but this only happens because for
> some reason I have for some reason been cornered into coding one or
> another of those. But (not (null <test>)) not only obfuscates, it adds
> two keywords where none were needed.
Shoot, I don't think I ever use find (I couldn't find any instances in
a big chunk of code -- plenty of "member"s, though), and I was
mistaken about what it does (I thought it returned a list of the
matching items, oops). Which makes this exchange heavy in the
talking-past-eachother area. Absolutely, (not (null (predicate))) is
confusing.
(not (null (returns-a-list))) is what I meant to be talking about. Of
course, I'm more likely to check based on type, or to put (null ...)
higher in the cond, letting non-empty lists fall through for more
testing.
> > No, my position is that there's a conceptual difference between the
> > empty list (), and the boolean false NIL.
>
> Buddha taught that in the beginning was the void, and the void became
> nothing and something, and the something became the multitude. So
> nothing, false, empty are all the same while true comes in many forms.
Yeah, but he also taught that after I'm dead, I'm not gone.
> After we come to agreement on this let's go over to comp.lang.c and get
> everyone agree on where to put {}s. :)
They *still* haven't gotten that figured out? Maybe they could use
some help :-)
> Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:
>
> > I strongly disagree. I think when and unless can be perfectly well used
> > for their value. In general I think the Common Lisp programmer should
> > be free to use any form for it's value whenever that value can be given
> > a meaningful interpretation.
>
> Well, Erik was talking about _style_. I know I couldn't keep my
> lunch down if I saw something like
>
> (setf foo (when (valid? bar) (blah))) ; Using implicit NIL as valid return
Why? You immediately know what it does when you see "(when ".
I sometimes write
(let ((foo (when bar ....)))
...
instead of (if bar (progn ....) nil)
>
> But I'd have no problems with
>
> (setf foo (if (valid? bar)
>               (blah)
>               nil))
With IF, my eyes look for the else form [only to find NIL at the end
in this case.] That's slightly more work.
abe
--
<keke at mac com>
Alain Picard wrote:
>
> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> > Jeez, I use when and unless for their values all the time. But then I am
> > an expert in lisp, I know what they return. :) I guess I am also an
> > expert because nil and '() look identical to me.
> >
>
> That's funny, NIL and '() look totally different to me.
> The former says FALSE, the latter says EMPTY LIST.
What does the compiler say? No Lisp implementation differentiates
between NIL and '(). So, as I see it, my mental processes should not.
Don't forget, folks, we are engaged in hand-to-hand combat with
compilers. Acknowledge their states of mind or perish. I remember
reading about SmallTalk that it had no pointers. Chya. So tell me (mr
author) is copy deep or shallow? Oops.
It would be great if some 3GL let us forget about assembly language, but
in fact every time I want to whack something from a list I must choose
between delete and remove. Explain to bubba that delete and remove from
lists are different...no way, you have to talk about implementation,
cons cells, pointers.
Now to be fair, by saying NIL and '() you are over-specifying, latching
onto conceptual differences at a finer granularity than the compiler, so
you are on solid ground. But to me that attitude is dangerous. It relies
on a fiction. Better to acknowledge the mindset of the compiler, who in
the end decides the behavior of our so-called high-level code.
kenny
clinisys
> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> > "Thomas F. Burdick" wrote:
> > > I don't even see any good reason why NIL should
> > > be false.
>
> As to the choice of the symbol, I somewhat agree. It's tradition, but
> one I could have done without. On the other hand, in my 20+ year
> career doing some fairly intensive Lisp, I've found it a minor
> irritation once or twice, but basically have never found it got in the
> way in any seriously material way ... so I guess I just don't buy any
> argument that it matters.
It's not a serious material way, but it has caused irritation to me
that I can't use T as a variable. It's easily-get-roundable, of
course...
[(defun d2x/dt2 (x xdot y ydot time)
...)]
... but it has still once or twice gotten me to say "Aaargh" :-)
Cheers,
Christophe
--
Jesus College, Cambridge, CB5 8BL +44 1223 510 299
http://www-jcsu.jesus.cam.ac.uk/~csr21/ (defun pling-dollar
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)
> When I'm
> testing for the end of a list, I write (cond ((null foo) ...)).
Have you considered using function endp?
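For example (sum-list is a made-up name):
  (defun sum-list (list)
    ;; endp is the list-specific end test; unlike null, it is
    ;; specified to reject a tail that is not a list
    (do ((tail list (rest tail))
         (acc 0 (+ acc (first tail))))
        ((endp tail) acc)))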
--
Janis Dzerins
Eat shit -- billions of flies can't be wrong.
> OK, let's finish off this dead horse...
(Oh don't tempt me, a friend and I used to have this joke about dead
horses. See, there's this funny thing, where you keep beating them.
And, yeah, you've heard this before, but it's really funny. Anyways,
dead horse, yeah. So, I was beating it ... [you get the idea, I hope].)
> I agree '() being false being nil is confusing at first. I just
> remembered what it was like when I was first learning Lisp. My feeling
> was, OK, they're making this sh*t up as they go along. But now it feels
> so natural I had no idea what you were going on about--until I
> remembered my early experience.
>
> Which takes me back to my pudding proof. If it was Truly Confused (like
> some things I have worked with) it would never feel natural. But it
> feels natural, so the problem up front must have been with me.
I don't know that the problem is the inexperienced user. It certainly
feels natural to me anymore (despite the fact that I have
philosophical issues with it), but then, so do the wacky rules of
English and French grammar. And some of the "unintuitive" bits of
evolution science (I remember when they seemed weird, but it takes
effort). But our current conception of evolution could very well be
off, and I don't think it takes a genius to figure out that English is
a crazy (if practical) language.
Oy, I really didn't mean to draw this out into a long thread. I
*really* just wanted to refute the idea that it was nuts to separate
NIL and false. For all the arguments in that direction, the obvious
usability of Common Lisp means that it's also reasonable to conflate
them, despite any conceptual messiness that might (or might not)
entail. I kept responding because I felt like I needed to clarify
myself. And then, here we are with a long thread that should've been
3 messages long. (sigh). So, this is my last post to this thread,
and not just because I'm about to be away from home for a few days
:-). It should have died already, and I think it's my fault it
hasn't. On that note, here are some nice words from R. Kelly, having
nothing to do with Lisp, (), NIL, false, nor Scheme:
"I don't see nothing wrong
[...]
this is going on 'til the early morn'
and my word is bond."
But anyway, what was more interesting was to look at what I did with
NOT and NULL and conditionals. I have never thought about this until
I read through code, and I still don't understand why I write what I
do. For instance why this:
(when (not x) ...)
instead of
(unless x ...)
-- I use both, and they have different meanings to me (not to CL).
Not to mention (NOT (NULL ...)):
(when (not (null x)) ...)
(unless (null x) ...)
-- I have both these too, why? They have different connotations to
me, but I'm not sure why. Sometimes I can look back through CVS logs
and catch myself changing one into the other. And back again.
--tim
> In article <86itc3c...@gondolin.local.net>, Alain Picard <api...@optushome.com.au> wrote:
>
> > Espen Vestre <espen@*do-not-spam-me*.vestre.net> writes:
> >
> > > I strongly disagree. I think when and unless can be perfectly well used
> > > for their value. In general I think the Common Lisp programmer should
> > > be free to use any form for it's value whenever that value can be given
> > > a meaningful interpretation.
> >
> > Well, Erik was talking about _style_. I know I couldn't keep my
> > lunch down if I saw something like
> >
> > (setf foo (when (valid? bar) (blah))) ; Using implicit NIL as valid return
>
> Why? You immediately know what it does when you see "(when ".
>
> I sometimes write
> (let ((foo (when bar ....)))
> ...
>
> instead of (if bar (progn ....) nil)
You could use (and bar ....) instead. That would make the value more
explicit to my eyes. But it's not something I worry about, I can read
the when form just fine, I'd just probably write the and version
myself.
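For what it's worth, a minimal sketch of the three spellings, which are
equivalent as long as there is a single body form (COMPUTE-FOO is
hypothetical):
;; All three bind FOO to the value of (COMPUTE-FOO) when BAR is true,
;; and to NIL otherwise.
(let ((foo (when bar (compute-foo)))) ...)
(let ((foo (and bar (compute-foo)))) ...)
(let ((foo (if bar (compute-foo) nil))) ...)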
Regs, Pierre.
--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein
Is there no difference between (- x) and (1+ (lognot x)), either?
How about (first x) and (car x)? Or (ash x 3) and (* x 8)?
| Don't forget, folks, we are engaged in hand-to-hand combat with compilers.
I have never viewed compilers this way. Compilers are not the IRS.
| It would be great if some 3GL let us forget about assembly language, but
| in fact every time I want to whack something from a list I must choose
| between delete and remove. Explain to bubba that delete and remove from
| lists are different...no way, you have to talk about implementation, cons
| cells, pointers.
Huh? (let ((x '(1 2 3))) (delete 2 x) (equal x '(1 2 3))) => nil while
(let ((x '(1 2 3))) (remove 2 x) (equal x '(1 2 3))) => t.
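(To be strictly precise, destructively modifying a literal list is
undefined, so a hedged version of the same demonstration uses fresh
lists, and captures DELETE's return value as one normally would:)
(let ((x (list 1 2 3)))
  (remove 2 x)            ; returns a new list (1 3); X is untouched
  x)                      ; => (1 2 3)
(let ((x (list 1 2 3)))
  (setf x (delete 2 x))   ; DELETE may reuse X's conses; keep its result
  x)                      ; => (1 3)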
| Now to be fair, by saying NIL and '() you are over-specifying, latching
| onto conceptual differences at a finer granularity than the compiler, so
| you are on solid ground. But to me that attitude is dangerous. It relies
| on a fiction. Better to acknowledge the mindset of the compiler, who in
| the end decides the behavior of our so-called high-level code.
Some of us think the specification for the compiler decides the behavior
of the compiler. That some things which produce the same effect but look
different should be treated differently can be exemplified by "its" and
"it's" and by "there" and "their" and "they're", both of which for some
reason do not appear to be distinct to a number of American spellers.
> > Why? You immediately know what it does when you see "(when ".
> >
> > I sometimes write
> > (let ((foo (when bar ....)))
> > ...
> >
> > instead of (if bar (progn ....) nil)
>
> You could use (and bar ....) instead. That would make the value more
> explicit to my eyes.
Yes. I agree. I do use (and bar ...) too. I realized that I've been
inconsistent in this.
Thanks,
> It certainly feels natural to me anymore (despite the fact that I
> have philosophical issues with it), but then, so do the wacky rules
> of English and French grammar.
[You don't have to take this personally, Thomas. I'm starting from
your remark here because it was a good starting point, but the comments
are directed generally to anyone that "fits the bill"...]
But this is EXACTLY the point people are trying to make.
The Scheme community (and, I would say, the FP community in general)
wants to DEFINE what is natural and easy by a simplicity metric they
pulled out of a hat, as if the very meaning of natural/easy is
uniquely determined/predicted by that simplicity metric. It is not.
That's a possible theory of simplicity, but it is (a) not proven to in
fact make things simpler for most people and (b) not proven to be the
only way to make things simpler for most people.
The Scheme community then works to minimize the number of premises
they work from, as if minimizing this one axis in what is obviously a
multivariate system will yield a consequent minimization of the other
axes by magic. I don't buy that. First, I see language size and
isolated sentence size as inversely correlated. The bigger the
language, the less you have to say. That is, a language that has
all 64 Crayola crayon colors as primitive concepts is at a marked
advantage for describing a sunset over one that has only the primary
colors plus the modifier words "saturated", "bright", "dark" and
adverbials like "not" and "very". So it may please some teacher of
this Esperanto-like "simple" language to to know he "empowered" people
generally to describe ANYTHING with his tools, but it won't
necessarily please the reader of a fine novel to see the sunset
described as a "very, very, light, unsaturated mix of blue and red" if
what is meant is "lavender". The reason "lavender" works, even though
it takes a long time to learn all the subtle color names, is that your
brain wetware learns to make a primitive connection from this word to
this color and doesn't have to waste compute time simulating things.
Moreover, it can be compared in more "satisfying" ways to other colors
like "orchid" than can "very, very, light ..." be compared to just
"very light..." Sure, you can make all the claims you want about the
extra word "very" being missing giving the same sense, but that still
requires you to remember the *input* string and deal with it as a
formal algebraic string because the human brain is not a precise
enough processor to actually carry the consequence of such thought
exercises primitively. I know my girlfriend often orders a medium
coffee extra extra extra light from Dunkin Donuts but I've seen them
sometimes give her a medium coffee extra extra light, which suggests
both that people receiving such data are bad at holding even that
long a string in their head and that she's unable to hold the precise
notion of that concept in her head.
Now, second, the Scheme community has another very unwarranted premise
underlying a great deal of its teaching, and it overflows sometimes
into design: that's that if they can find a simpler way to express
things (using their simplicity metric: smaller language) that this
will automatically lead to something being easier to learn. This makes
unstated assumptions about how people think and how people learn. I
claim, without enumerated proof here, but with a belief that at least
I could dredge up proof if I had to because psychologists do research
these things all the time, that the human brain is not like an Intel
box, and that people do not operate on a RISC instruction set. I
think people have a highly parallel associative mechanism capable of
efficiently managing an enormous amount of special cases at the same
time. All naturally evolved human languages that I know of, not
counting the ones that were designed by professional logicians or
computer people, have tons of special cases, have context, etc. Of
course, human languages are all very different, each with their own
idiosyncrasies. But what this tells you is that it's ok to have some
variation here. Humans will cope.
I think the fact is that the reason to keep programming languages
simple is not for the sake of humans, but for the sake of code. So
yes, for the sake of code, not making the language too difficult will
make programming easier. But on that we have a 25-or-so year history
during which both Scheme and CL have existed, and in no cases have I
ever seen or heard of someone tearing out their hair and leaving the
CL community saying "I just can't build complex programs because this
NOT/NULL [or false/empty-list] thing is making it too hard to write
metaprograms". It just doesn't happen. So no one ever points to that
as the reason for splitting false/empty-list. They point instead into
the murky depths of the human brain, citing simplicity without
defining their simplicity metric, citing how hard it is for students
to learn (which it is, in the abstract, but mostly because students
are often ill-trained to learn). I sometimes would go so far as to
think some students of Scheme are indoctrinated by some teachers to
actually reject, almost as a political action, the possibility of
receiving certain kinds of learning because they recognize it as not
simple enough. Yet they have somehow managed to recognize and adopt
this whole anti-complexity complexity when it would be simpler to just
learn things than to apply political philosophy to everything you
learn. The only other place I've ever seen that kind of resistance is
in recent converts to this or that church, who are never sure if they
should try to directly understand a truth you're trying to offer them
because they're not sure it's offered to them in a form that it would
be good for them to receive it in.
But to me, the issue of what makes a language easy/hard to use is the
dissonance between the mental representation of the user and the
manifest structure of the program. I have some theories about that,
but I acknowledge this as a hard problem. I'm not trying to argue CL
is an utterly simple language to learn and use. But people do often
claim that Scheme is simple to learn and use, and also that it is
simpler to learn and use than CL, and I'm arguing against accepting
such claims without better proof. If someone doesn't bring either
serious case studies or serious psychological evidence of mental
models into play in the discussion, I think it's worth questioning all
these unstated simplicity metrics that are advanced instead.
On learning and understanding, I think this: Years ago, I taught
myself Portuguese in anticipation of a couple of weeks vacation I was
going to spend in Brasil, it didn't occur to me that it was going to
be pronounced differently than Spanish. When I got to Rio, I found
some people who could understand me and some who couldn't. It wasn't
smart people who understood me and stupid people who didn't. It
crossed those lines. The line I eventually drew was that people who
wanted to understand me (i.e., those who stubbornly resisted
entertaining the implausible notion that what was coming from my mouth
might not be "an attempt at language") had no trouble understanding
me. But those who didn't want to understand me, or didn't mind not
understanding me, looked at me with blank stares. I'm convinced those
people actually had allowed themselves to think "perhaps this isn't
speech but just some kind of useless babble", and the believe that
this "simpler" answer could be true allowed them to rest confident
that not understanding me was the simplest answer to their
conversational dilemma. I really think the same of Lisp. Some people
see the NOT/NULL thing or the false/empty-list thing and decide "it
probably means the language is incapable of expressing anything
plainly". And voila, they find proof of their claim in the fact that
they cannot express themselves. Or else they decide that it's just an
artifact of the language that can/must be worked with, and, wonder of
wonders, it never becomes a problem. But the dividing line is not
that language feature.
Languages CAN accommodate ambiguity. (A huge percentage of all words in
all languages have multiple definitions, based on context.) Languages
cannot accommodate a closed mind.
I think it is. Some people insist on being inexperienced despite their
experience. I think this is a stupid kind of stubbornness, but we find
it in a number of people.
| But our current conception of evolution could very well be off, and I
| don't think it takes a genius to figure out that English is a crazy (if
| practical) language.
But it takes a really smart person to accept it for what it is and not
nurture a desire to change the grammar. The same is true for medicine.
It is pretty easy for anyone who is studying anatomy to figure out ways
that the human body could be improved. Fortunately, the ethical
standards of the medical discipline tend to discourage such desires.
| I *really* just wanted to refute the idea that it was nuts to separate
| NIL and false.
It _is_ nuts. False is defined as nil. Just as true is defined as t.
| For all the arguments in that direction, the obvious usability of Common
| Lisp means that it's also reasonable to conflate them, despite any
| conceptual messiness that might (or might not) entail.
They are not conflated. There is no conceptual messiness. False is
defined as nil. True is defined as t. That is how it is. If you cannot
deal with this, it is only your problem. Augustine's prayer may apply:
God grant me serenity to accept the things I cannot change, courage to
change the things I can, and wisdom to know the difference.
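For the record, a few forms one can try at any listener (nothing
hypothetical here; all of this follows directly from the standard):
(eq 'nil '())        ; => T   -- the very same object
(eq 'nil nil)        ; => T
(not '())            ; => T   -- the empty list is false
(typep nil 'null)    ; => T
(typep nil 'list)    ; => T
(typep nil 'symbol)  ; => T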
> That some things which produce the same effect but look different
> should be treated differently can be exemplified by "its" and
> "it's" and by "there" and "their" and "they're", both of which for
> some reason do not appear to be distinct to a number of American
> spellers.
This is nothing new. There is even a paper written with exact rules of
what should be done. Here it is:
A Plan for the Improvement of English Spelling
by Mark Twain
For example, in Year 1 that useless letter "c" would be dropped
to be replased either by "k" or "s", and likewise "x" would no longer
be part of the alphabet. The only kase in which "c" would be retained
would be the "ch" formation, which will be dealt with later. Year 2
might reform "w" spelling, so that "which" and "one" would take the
same konsonant, wile Year 3 might well abolish "y" replasing it with
"i" and Iear 4 might fiks the "g/j" anomali wonse and for all.
Jenerally, then, the improvement would kontinue iear bai iear
with Iear 5 doing awai with useless double konsonants, and Iears 6-12
or so modifaiing vowlz and the rimeining voist and unvoist konsonants.
Bai Iear 15 or sou, it wud fainali bi posibl tu meik ius ov thi
ridandant letez "c", "y" and "x" -- bai now jast a memori in the maindz
ov ould doderez -- tu riplais "ch", "sh", and "th" rispektivli.
Fainali, xen, aafte sam 20 iers ov orxogrefkl riform, wi wud
hev a lojikl, kohirnt speling in ius xrewawt xe Ingliy-spiking werld.
> we have a 25-or-so year history
> during which both Scheme and CL have existed, and in no cases have I
> ever seen or heard of someone tearing out their hair and leaving the
> CL community saying "I just can't build complex programs because this
> NOT/NULL [or false/empty-list] thing is making it too hard to write
> metaprograms". It just doesn't happen. So no one ever points to that
> as the reason for splitting false/empty-list. They point instead into
> the murky depths of the human brain, citing simplicity without
> defining their simplicity metric
Here's what I don't get about CL confusing false and the empty list:
suppose you have a function (foo name) that returns a list of all the
classes being taken by a student. If the function returns NIL, does
that mean that the student is taking no classes at the moment, or does
it mean that the student doesn't exist?
It seems to me that *every* datatype deserves to potentially have a
"don't know" or "doesn't exist" value. Like NULL in SQL. An integer
field in a database can allow NULL, or can be declared to be NOT NULL
and this is an important distinction. In any complex program some code
needs to be written to cope with NULL values, and some code wants to
assume that there is always valid data.
It's bad to use a value of zero in an integer variable to mean NULL.
It's bad to use a value of -1 in an integer variable to mean NULL. It's
also bad, I think, to have a null pointer value in every pointer data
type. How often do C programs fail because some function gets passed a
null pointer that didn't expect it?
It seems to me that using an empty list to represent NULL is just as bad.
-- Bruce
> Here's what I don't get about CL confusing false and the empty list:
> suppose you have a function (foo name) that returns a list of all
> the classes being taken by a student. If the function returns NIL,
> does that mean that the student is taking no classes at the moment,
> or does it mean that the student doesn't exist?
That's what multiple values are for in CL, see the CLHS for GETHASH
for an example.
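A minimal sketch (hypothetical *COURSES* table): the second value of
GETHASH distinguishes "key present with a NIL value" from "key absent".
(defvar *courses* (make-hash-table :test #'equal))
(setf (gethash "jane" *courses*) '())   ; known student, no classes
(multiple-value-bind (classes presentp)
    (gethash "jane" *courses*)
  (cond ((not presentp) :no-such-student)
        ((null classes) :taking-no-classes)
        (t classes)))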
Edi
> Here's what I don't get about CL confusing false and the empty list:
> suppose you have a function (foo name) that returns a list of all
> the classes being taken by a student. If the function returns NIL,
> does that mean that the student is taking no classes at the moment,
> or does it mean that the student doesn't exist?
To me that would be (at least in principle) a poorly specified
function that performs two unrelated tasks: Checking whether a student
exists and listing a student's current classes. Those should be
separate functions, and calling the latter function on an unknown
student would be an error (exceptional situation). If for some reason
those two tasks must be smashed into a single function, good style
would be to return two values from that function. It would also be
possible to return a designated symbol like :no-such-student, although
I'd consider that an inferior option, stylewise.
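For instance, a minimal sketch of the two-value style (FIND-STUDENT and
STUDENT-CLASSES are hypothetical accessors):
(defun student-courses (name)
  "Return NAME's list of classes, and a second value telling whether
NAME names a known student at all."
  (let ((student (find-student name)))
    (if student
        (values (student-classes student) t)
        (values '() nil))))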
> It seems to me that *every* datatype deserves to potentially have a
> "don't know" or "doesn't exist" value.
I disagree.
--
Frode Vatvedt Fjeld
> Bruce Hoult <br...@hoult.org> writes:
>
> > Here's what I don't get about CL confusing false and the empty list:
> > suppose you have a function (foo name) that returns a list of all
> > the classes being taken by a student. If the function returns NIL,
> > does that mean that the student is taking no classes at the moment,
> > or does it mean that the student doesn't exist?
>
> To me that would be (at least in principle) a poorly specified
> function that performs two unrelated tasks: Checking whether a student
> exists and listing a student's current classes. Those should be
> separate functions, and calling the latter function on an unknown
> student would be an error (exceptional situation).
Quite possibly, though an exception is a pretty heavy-weight mechanism.
You might also want to delay the exception until the client code
actually tries to use the value inappropriately.
> If for some reason those two tasks must be smashed into a single
> function, good style would be to return two values from that
> function.
That also seems pretty heavy-weight given that you're already taking a
hit for using a dynamically-typed language in which every value has a
general enough representation to be a number, string, symbol or anything
else.
> It would also be possible to return a designated symbol like
> :no-such-student, although I'd consider that an inferior option,
> stylewise.
I think that's a reasonable thing to do, and also that "false" is a
pretty good designated symbol to use.
> > It seems to me that *every* datatype deserves to potentially have a
> > "don't know" or "doesn't exist" value.
>
> I disagree.
So you want to have pairs of variables nearly everywhere, with one being
a boolean saying whether or not the other one is valid? And slots to be
in pairs? And functions to return two values?
Seems very strange when you only need one bit to represent that second
value, and the language you are using has *already* made provision to
incorporate that sort of information in every value. Every value, that
is, except for the empty list.
-- Bruce
>> To me that would be (at least in principle) a poorly specified
>> function that performs two unrelated tasks: Checking whether a student
>> exists and listing a student's current classes. Those should be
>> separate functions, and calling the latter function on an unknown
>> student would be an error (exceptional situation).
>
> Quite possibly, though an exception is a pretty heavy-weight
> mechanism. You might also want to delay the exception until the
> client code actually tries to use the value inappropriately.
An error is sort of a heavy-weight thing regardless of the costs
associated with signaling conditions. If signaling an error is the
right thing to do, then not doing so because it is or might be
"heavy-weight" is to me a rather absurd notion.
>> If for some reason those two tasks must be smashed into a single
>> function, good style would be to return two values from that
>> function.
>
> That also seems pretty heavy-weight given that you're already taking
> a hit for using a dynamically-typed language in which every value
> has a general enough representation to be a number, string, symbol
> or anything else.
I don't consider returning multiple values to be prohibitively
expensive except possibly in situations where extreme optimization is
required, in which case the rules of "good style" change
entirely. What good is having a rich language if you can't use it?
> So you want to have pairs of variables nearly everywhere, with one
> being a boolean saying whether or not the other one is valid? And
> slots to be in pairs? And functions to return two values?
I think Common Lisp is excellent evidence that this works out very
well in practice.
--
Frode Vatvedt Fjeld
Erik Naggum wrote:
> Or (ash x 3) and (* x 8)?
good point. ok, there is no rational basis for me not using '().
> Huh? (let ((x '(1 2 3))) (delete 2 x) (equal x '(1 2 3))) => nil while
> (let ((x '(1 2 3))) (remove 2 x) (equal x '(1 2 3))) => t.
I meant the man on the street would scoff at the idea that removing
something from a list was different from deleting something from a list.
Reminds me of the time (1978) I tried to explain the Basic "x = x + 1"
to a Math teacher. He got pretty upset about that.
kenny
clinisys
Please stop thinking of it as a "confusion", and you may get it.
| suppose you have a function (foo name) that returns a list of all the
| classes being taken by a student. If the function returns NIL, does that
| mean that the student is taking no classes at the moment, or does it mean
| that the student doesn't exist?
Well, read the documentation of the function to find out. What else can
there possibly be to say about this? This is a non-existing problem.
Since you think SQL's NULL is so great, do you get a NULL if the student
whose classes you ask for does not exist? Why would you ask for the
classes of a non-existing student? Not to mention, _how_? Is a student
somehow magically addressable? In other words, what did you give that
function as an argument to identify the student?
| It seems to me that using an empty list to represent NULL is just as bad.
If you need to distinguish the empty list from NULL (nil) in the same
function, your design sucks because you are confused, not the language.
> In article <sfw8zcy...@shell01.TheWorld.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > we have a 25-or-so year history
> > during which both Scheme and CL have existed, and in no cases have I
> > ever seen or heard of someone tearing out their hair and leaving the
> > CL community saying "I just can't build complex programs because this
> > NOT/NULL [or false/empty-list] thing is making it too hard to write
> > metaprograms". It just doesn't happen. So no one ever points to that
> > as the reason for splitting false/empty-list. They point instead into
> > the murky depths of the human brain, citing simplicity without
> > defining their simplicity metric
>
> Here's what I don't get about CL confusing false and the empty list:
> suppose you have a function (foo name) that returns a list of all the
> classes being taken by a student. If the function returns NIL, does
> that mean that the student is taking no classes at the moment, or does
> it mean that the student doesn't exist?
>
> It seems to me that *every* datatype deserves to potentially have a
> "don't know" or "doesn't exist" value. Like NULL in SQL. An integer
> field in a database can allow NULL, or can be declared to be NOT NULL
> and this is an important distinction. In any complex program some code
> needs to be written to cope with NULL values, and some code wants to
> assume that there is always valid data.
>
> It's bad to use a value of zero in an integer variable to mean NULL.
> It's bad to use a value of -1 in an integer variable to mean NULL. It's
> also bad, I think, to have a null pointer value in every pointer data
> type. How often do C programs fail because some function gets passed a
> null pointer that didn't expect it?
>
> It seems to me that using an empty list to represent NULL is just as bad.
I've said I don't have an opposition to more degenerate values. But either
way, it's not a panacea. You're basically saying that for every type FOO
there should be a type (NULL-OF FOO) which is a subtype of FOO and is useful
because it's not one of the ordinary exemplars of FOO. Well, first of all,
that's already not the empty list. The empty list is not a non-list, it's
a list that's empty. So the (NULL-OF LIST) would not be NULL but would be
a special non-list list, as opposed to an empty list. Or, at least, I think
it should be. You could probably make the case that the only "valid" lists
had elements, but I don't like that breakdown. You could say it required
the (NULL-OF CONS) for this, and then there'd be discussion about whether
(NULL-OF CONS) could be car'd and cdr'd, since NIL can be, and maybe we'd say
yes, it is secretly a special cons with car and cdr of NIL. But that all
sounds messy. And what about programs that return type (OR FOO BAR)? Is
there a type (NULL-OF (OR FOO BAR)) that is distinct from type (NULL-OF FOO)
and type (NULL-OF BAR)? If so, it sounds like a mess to represent and
recognize. But ok, maybe. I didn't think this through utterly but I feel
the phrase "set of all sets not contained in any set" is going to enter this
discussion really soon now... I do know that every time there has been a
serious discussion of solving this, such a bubble really does creep in.
No matter what set someone is returning, there's always a desire to squeeze
in just one extra element ... and often it defeats the whole point of having
picked the size of data storage one has chosen. Just look at NUL-terminated
strings in C vs the ability to hold all 256 characters, or look at the
opposite outcome in the utterly stupid Base64 MIME encoding, which requires
65(!) characters to correctly represent it, so never really compacts up
quite right.
| No matter what set someone is returning, there's always a desire to squeeze
| in just one extra element ... and often it defeats the whole point of having
| picked the size of data storage one has chosen. Just [...] look at the
| opposite outcome in the utterly stupid Base64 MIME encoding, which requires
| 65(!) characters to correctly represent it, so never really compacts up
| quite right.
I don't quite understand the relevance of this example, since the
purpose of using 64 characters for the Base64 encoding is not, I
believe, compactness of representation, but simplicity of coding and
decoding. But it's true that the "=" padding character is not needed
for Base64 decoding: Just decode the input in groups of 4 characters
each producing 3 octets. Then, when all these groups are used up, you
will be left with 0, 2 or 3 characters which decode into 0, 1 or 2
octets respectively. So in this sense I can concur with your
characterization of Base64 as stupid. But compared to the use of NUL
to terminate C strings it seems to me a minor stupidity indeed.
--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Yes it works in practice - but does it work in theory?
> isn't there a funny essay somewhere about the consequences of porting
> something from Lisp to Scheme, specifically about the problem of having
> to then differentiate between nil and false? I recall assoc figuring
> prominently in the piece.
http://www.lisp.org/humor/large-programs.html
--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words
> Here's what I don't get about CL confusing false and the empty list:
> suppose you have a function (foo name) that returns a list of all the
> classes being taken by a student. If the function returns NIL, does
> that mean that the student is taking no classes at the moment, or does
> it mean that the student doesn't exist?
How do you know that the function is going to return at all, if you
call it with the name of a non-student? That's right, you will look
at the specification of that function. And that's exactly where your
answer will be found. If the author of the function considered it
useful to conflate "student doesn't exist, and therefore doesn't take
any courses" and "student exists, but still doesn't take any courses"
(which can be very useful in quite a number of occasions), then it
will return the empty list in either case, and the documentation will
state that. Note that it would do this, even if NIL were unequal to
(). It is not trying to say don't know in the one case, and no
courses in the other, it is saying no courses in both cases, and that
is _exactly_ what it wants to say.
If the author thinks that those cases need to be distinguished, he has
any number of possible ways of communicating the difference. For
example he can use multiple values, with the secondary value
indicating whether the student existed at all.
But mostly, if the difference between non-existing and
existing-but-course-less students is really relevant, I'd specify foo
to signal an error if passed the name of a non-student.
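A minimal sketch of that last option (again assuming a hypothetical
FIND-STUDENT lookup):
(defun student-courses (name)
  (let ((student (find-student name)))
    (unless student
      (error "~S does not name a student." name))
    (student-classes student)))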
> It seems to me that *every* datatype deserves to potentially have a
> "don't know" or "doesn't exist" value. Like NULL in SQL. An integer
[...]
> type. How often do C programs fail because some function gets passed a
> null pointer that didn't expect it?
This is a direct contradiction to your previous statement. Either
every datatype deserves its own "don't know" or "doesn't exist" value,
which a null pointer in C arguably is, because it _can't_ clash with
any valid pointer value, or we forbid "don't know" values, because we
fear that receivers might not anticipate them and fail to handle them
correctly. Making a distinction between NIL and () isn't going to
solve the problem of functions blowing up if they get NIL instead of
something they expected (like a person object), in fact it is going to
_worsen_ the situation, because all list processing functions know how
to deal with empty lists, but now that NIL != () they are _more_
likely to blow up if handed NIL, when they aren't going to expect it.
> It seems to me that using an empty list to represent NULL is just as bad.
Well, a lot of things seem to me to be the case, when I sit in my
armchair and muse about the state of the world. Lucky for you I don't
share all of them with c.l.l.
Unless someone can demonstrate real problems when programming within
CL that are caused by (eq 'NIL '()), I consider it a worse problem
that so much time is wasted on discussing that non-issue, than any
problems that (eq 'NIL '()) could imaginably create, if it were indeed
a problem at all, which I don't see.
I think such issues should be decided based on some form of real-life
data, and not on armchair musing.
I have yet to see one bug that was caused by (eq 'NIL '()) tripping up
some minimally-competent programmer (i.e. one that didn't just start a
CL implementation by accident, when he really was expecting Scheme, or
some other language), or forcing someone to write noticeably more
convoluted code because of it.
> Bruce Hoult <br...@hoult.org> writes:
...
> > type. How often do C programs fail because some function gets passed a
> > null pointer that didn't expect it?
>
> This is a direct contradiction to your previous statement. Either
> every datatype deserves its own "don't know" or "doesn't exist" value,
> which a null pointer in C arguably is, because it _can't_ clash with
> any valid pointer value, or we forbid "don't know" values, because we
> fear that receivers might not anticipate them and fail to handle them
> correctly.
Yes, indeed. A *huge* number of Java programming errors that I've seen
people have great difficulty tracking down are due to people never
being forced to deal with a separate null type. In effect, every type is
secretly (or null the-type-i-thought-i-was-getting) and the syntax encourages
people to not remember the null, so they are continually baffled by the fact
that such a "typesafe"(R) language has lied to them. I had forgotten
about this horrifyingly common syndrome at the office where I found myself
watching this play out day after day after day, but that is certainly enough
all by itself to make me think I'm glad we don't have null objects PER SE
for all of our types. That doesn't mean at all that I mind having empty
lists or empty strings when they are properly degenerate cases of a more
general class and all the operators work on them in the expected way.
> In article <2hlmgy2...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> <fro...@acm.org> wrote:
>
> > Bruce Hoult <br...@hoult.org> writes:
> >
> > > Here's what I don't get about CL confusing false and the empty list:
> > > suppose you have a function (foo name) that returns a list of all
> > > the classes being taken by a student. If the function returns NIL,
> > > does that mean that the student is taking no classes at the moment,
> > > or does it mean that the student doesn't exist?
> >
> > To me that would be (at least in principle) a poorly specified
> > function that performs two unrelated tasks: Checking whether a student
> > exists and listing a student's current classes. Those should be
> > separate functions, and calling the latter function on an unknown
> > student would be an error (exceptional situation).
>
> Quite possibly, though an exception is a pretty heavy-weight mechanism.
> You might also want to delay the exception until the client code
> actually tries to use the value inappropriately.
Why would that be an appropriate thing to do? This seems like the C
philosophy of doing error-checking: "Never report errors at the place
where they actually occurred (calling a function that is specified to
take the name of an existing student with something else), but delay
it as long as necessary to destroy all relevant context; make it
maximally inconvenient to check for error situations; and above all
else, rather crash maximally fast, or produce wrong results, than
throwing away one instruction on detecting errors".
If that is your attitude about error checking, then I don't think that
(eq 'nil '()) is something you need to fret about, for a looong time.
Furthermore why care about the cost of exceptions in such a situation?
Unless your program is buggy as hell (i.e. calls foo with non-student
names all of the time, in spite of foo's specification), that is. If
it is important to differentiate between non-students and students
with no courses, _you don't call foo with an unchecked name_. Period.
You write instead:
(defun bar (some-random-name)
  (cond
    ((student-p some-random-name)
     (let ((courses (foo some-random-name)))
       ;; Do something silly with COURSES here...
       ))
    (t
     ;; Do something differently silly here...
     )))
The only thing that a non-nil "NULL" value would give you here, is the
ability to write instead
(defun bar (some-random-name)
(let ((courses-or-student-status (foo name)))
(cond
((not courses-or-student-status)
;; Do something differently silly here...
)
(t
;; Do something silly with courses-or-student-status here, but
;; we now know it actually is a list, not some overloaded
;; other return value-type...
))))
And that's the point where I'm thanking the powers that were, that we
don't see this kind of code all over the place. FOO should never have
been specified like this in the first place, if there really is a
difference between non-students and students with no courses. What
would be a descriptive name for foo? STUDENT-COURSES-AND-STUDENT-P?
> > If for some reason those two tasks must be smashed into a single
> > function, good style would be to return two values from that
> > function.
>
> That also seems pretty heavy-weight given that you're already taking a
> hit for using a dynamically-typed language in which every value has a
> general enough representation to be a number, string, symbol or anything
> else.
Multiple values are cheap in serious implementations. If I were prone
to armchair musing, I might conjecture that distinguishing between
'NIL and '() would have higher costs in terms of register and cache
pressure (as well as tag "pressure"), than pervasive usage of
two-value functions.
> > It would also be possible to return a designated symbol like
> > :no-such-student, although I'd consider that an inferior option,
> > stylewise.
>
> I think that's a reasonable thing to do, and also that "false" is a
> pretty good designated symbol to use.
Well you can already do this. You just have to write
(eq courses-or-student-status #f)
instead of
(not courses-or-student-status)
Not that I would consider code that did this to be well written code.
It actively encourages the same kind of lossage you get in C, where we
have all of those "failure" indicating values, that you have to
manually check, and if you don't (who does), work very nicely to
increase the time between error occurrance and error detection.
Regardless of any (eq 'NIL '()) issue, I can't see any justifiable
reason (senseless obsession about speed isn't one), for a function
student-courses that returns anything but a list (or other collection
data structure). Either you define non-student names to be valid
arguments to that function, then you should just return the empty list
(collection, whatever) for non-students, or you say "student-courses
is actually not defined for non-students", in which case you signal an
error when called with a non-student name.
I'm even critical of a multiple-value version that _additionally_
indicates whether we're really talking about a student. But in cases
where this is justified, that additional value doesn't say "the
primary value is valid or not". The primary value is always valid,
and has the normal meaning. The secondary value just _adds_
additional information. Doing anything else just opens the door for
undetected error situations. We could then just as well be doing
errno in Common Lisp...
Doing anything else is just plain bad code in my book.
> > > It seems to me that *every* datatype deserves to potentially have a
> > > "don't know" or "doesn't exist" value.
> >
> > I disagree.
>
> So you want to have pairs of variables nearly everywhere, with one being
> a boolean saying whether or not the other one is valid? And slots to be
> in pairs? And functions to return two values?
I'm beginning to wonder if you have actually ever looked at real
Common Lisp code. How do you come up with such completely foolish
ideas? No one is suggesting such foolishness, because nothing of the
sort is necessary. If your code makes that kind of nonsense
necessary, by conflating validity information and data content in one
place all over the place, then it is bad code, for reasons that are
completely unrelated to the topic under discussion.
> I've said I don't have an opposition to more degenerate values. But
> either way, it's not a panacea. You're basically saying that for
> every type FOO there should be a type (NULL-OF FOO) which is a subtype
> of FOO and is useful because it's not one of the ordinary exemplars
> of FOO.
Close, but not quite. I think it's more about bindings (and
declarations of them as in, for example, CLOS method arguments) than
about objects. And CL as it is is very nearly what I'm talking about,
with the single exception of the list type.
- you should be able to have a binding that you know has a valid value
of the appropriate type
- you should be able to have a binding that might contain a valid value,
or might contain the NULL value
- it's handy if the NULL value tests as false in conditional expressions
- I don't really care whether there is a different (NULL-OF FOO) for
each type, but I'd suggest that it's fine to have them all be the same,
e.g. #F. More efficient to test for, too.
> Well, first of all, that's already not the empty list. The empty list
> is not a non-list, it's a list that's empty. So the (NULL-OF LIST)
> would not be NULL but would be a special non-list list, as opposed to
> an empty list. Or, at least, I think it should be.
I agree.
> You could probably make the case that the only "valid" lists
> had elements, but I don't like that breakdown.
Sometimes that's useful, but not often. The Dylan <list> class has two
subclasses, <pair> and <empty-list>. <empty-list> has a single
instance, #(). So you can specialize a method on <list> and get either
an empty or a non-empty list, or you can specialize on <pair> and get a
guaranteed non-empty list.
> You could say it required the (NULL-OF CONS) for this, and then there'd
> be discussion about whether (NULL-OF CONS) could be car'd and cdr'd,
> since NIL can be, and maybe we'd say yes, it is secretly a special cons
> with car and cdr of NIL. But that all sounds messy.
Very. I'd think it should throw an exception.
> And what about programs that return type (or foo bar). Is there a type
> (NULL-OF (OR FOO BAR)) that is distinct from type (NULL-OF FOO)
> and type (NULL-OF BAR)? If so, it sounds like a mess to represent and
> recognize.
I agree. Easiest thing is to make (NULL-OF (OR FOO BAR)) be the same
object as (NULL-OF FOO) and (NULL-OF BAR). I can't see any downside to
this, and NIL already plays this role in CL. That is to say, (NULL-OF
(OR FOO BAR)) can be NIL and the return type of a function that might
return a FOO or a BAR or NIL is (OR FOO BAR NULL).
This is all fine in CL already, *except* that the list type is (OR
CONS NULL) and (NULL-OF (OR CONS NULL)) is NIL, which is a valid list.
That's why it would be better if there was a distinct "false" value
which wasn't the same as an empty list.
> No matter what set someone is returning, there's always a desire
> to squeeze in just one extra element ... and often it defeats the
> whole point of having picked the size of data storage one has
> chosen.
Sure. That's another good reason that you should be able to specify
that some particular binding or slot CAN'T be NULL.
-- Bruce
The way you deal with these things for efficiency is to accept arguments
of any complex (or ...) type you want, but then you do something like
this:
(typecase <arg>
  (<type-1>
   (locally (declare (type <type-1> <arg>))
     ...))
  (<type-2>
   (locally (declare (type <type-2> <arg>))
     ...)))
This particular situation may actually be pre-optimized by your compiler
with appropriate locally forms and declarations inserted for you.
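For instance, a minimal concrete instance of that pattern (FROB is
hypothetical):
(defun frob (arg)
  (typecase arg
    (cons
     (locally (declare (type cons arg))
       (car arg)))      ; CAR can now be open-coded without a type check
    (null
     (locally (declare (type null arg))
       :empty))
    (string
     (locally (declare (type string arg))
       (length arg)))))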
| This is all fine in CL already, *except* that the list type is (OR
| CONS NULL) and (NULL-OF (OR CONS NULL)) is NIL, which is a valid list.
| That's why it would be better if there was a distinct "false" value
| which wasn't the same as an empty list.
"Better" in the absence of a context or a purpose renders the whole
statement completely meaningless. Most of the time, context-free
"better" simply means "better for me, regardless of consequences or what
other people need", and such statements should simply be ignored. I
would say they are arbitrary (which is even worse and more misleading
than if they were false) because of the absence of specific meaning.
I believe the only productive way to learn a new skill is to open one's
mind to the superior knowledge of those who already know it well and
really listen to their tales of what they went through to get where they
are today. If you come from somewhere else and have a different history
behind you, whatever you come to will look strange, but if you think what
you came from must always be more important than what you are going to,
and some people mysteriously believe this _unconditionally_, it will be
too hard for them to get into anything new, so they give up, and instead
go on and on about how wrong what they came to is. There are immigrants
in every culture who keep longing for their past and denouncing their new
living conditions for their entire life, but yet do not return. I do not
understand what is so special about what one accidentally met _first_
that makes everything one meets later on _productively_ judged by it.
> That's what multiple values are for in CL, see the CLHS for GETHASH
> for an example.
Or conditions, if that's the way that's most natural to handle it in
your application. (Maybe seeing an unknown student means that either
the input is bogus or there's an inconsistency in the database or
something)
--
-> -/- - Rahul Jain - -\- <-
-> -\- http://linux.rice.edu/~rahul -=- mailto:rahul...@usa.net -/- <-
-> -/- "I never could get the hang of Thursdays." - HHGTTG by DNA -\- <-
> Bruce Hoult <br...@hoult.org> writes:
>
> > It seems to me that *every* datatype deserves to potentially have a
> > "don't know" or "doesn't exist" value. Like NULL in SQL. An integer
>
> [...]
>
> > type. How often do C programs fail because some function gets passed a
> > null pointer that didn't expect it?
>
> This is a direct contradiction to your previous statement.
No, it's an illustration of the dangers of not having a proper
distinguished value for NULL.
> Either every datatype deserves its own "don't know" or "doesn't exist"
> value, which a null pointer in C arguably is, because it _can't_ clash
> with any valid pointer value, or we forbid "don't know" values, because
> we fear that receivers might not anticipate them and fail to handle them
> correctly.
There is a guarantee in C that no object has an address equal to the
null pointer, but there is no type that says "this pointer points to a
genuine object, and NOT to null". That's the problem. C++ tries to get
around this using references, but even they can be subverted with a bit
of hackery:
struct foo { int x; };
foo *a = 0;
foo &b = *a;
b.x = 13; // oops!
This can't happen in, say, Dylan because there are no null pointers. If
you have something like...
define class <foo> (<object>)
  slot x :: <integer>;
end;
let a :: <foo> = ...;
a.x := 13;
... then there is guaranteed to be no error in the assignment of 13 to
a.x.
If you actually *want* to have a null value then you do it explicitly:
let a :: type-union(singleton(#"null-value"), <foo>) = ...;
Conventionally the null value used is always #f -- the canonical "false"
-- and there is a standard function that makes it more convenient to
create these union types:
let a :: false-or(<foo>) = ....;
when (a)
a.x := 13
end
> Making a distinction between NIL and () isn't going to
> solve the problem of functions blowing up if they get NIL instead of
> something they expected (like a person object),
It has no effect in that case.
> in fact it is going to _worsen_ the situation, because all list
> processing functions know how to deal with empty lists, but now
> that NIL != () they are _more_ likely to blow up if handed NIL,
> when they aren't going to expect it.
No, that's an improvement, because such a situation is a programming
error, and you will now discover the error (and fix it) much sooner.
> > It seems to me that using an empty list to represent NULL is just as
> > bad.
>
> Well, a lot of things seem to me to be the case, when I sit in my
> armchair and muse about the state of the world. Lucky for you I don't
> share all of them with c.l.l.
I didn't start this thread.
Furthermore, I have a lot of experience in a language in which a false
value is not a list, so it's hardly armchair musing when I say that it's
a damn good idea to separate the two notions.
> I have yet to see one bug that was caused by (eq 'NIL '()) tripping up
> some minimally-competent programmer (i.e. one that didn't just start a
> CL implementation by accident, when he really was expecting Scheme, or
> some other language), or forcing someone to write noticeably more
> convoluted code because of it.
That's a great stock answer that can be used to justify any "feature" in
any language or indeed application program. "If you were an expert in
this program then you wouldn't have any problems. How do we know if
you're an expert? Because you don't have any problems, of course!"
Cool.
Don't worry. CL is totally perfect just as it is. No other language,
past, future or present ever had any idea better than the ideas already
present in CL. Where CL is more flexible than other languages it's
because such complexity is necessary. Where CL is more restrictive than
other languages it is because more generality is unnecessary.
I get it now.
-- Bruce
> * Bruce Hoult
> | - I don't really care whether there is a different (NULL-OF FOO) for
> | each type, but I'd suggest that it's fine to have them all be the same,
> | e.g. #F. More efficient to test for, too.
>
> The way you deal with these things for efficiency is to accept
> arguments of any complex (or ...) type you want, but then you do
> something like this:
>
> (typecase <arg>
>   (<type-1>
>    (locally (declare (type <type-1> <arg>))
>      ...))
>   (<type-2>
>    (locally (declare (type <type-2> <arg>))
>      ...)))
That's right, except that you can't in CL distinguish between the empty
list and false.
Everything else is fine.
> I believe the only productive way to learn a new skill is to open one's
> mind to the superior knowledge of those who already know it well and
> really listen to their tales of what they went through to get where
> they are today. If you come from somewhere else and have a different
> history behind you, whatever you come to will look strange, but if you
> think what you came from must always be more important than what you
> are going to, and some people mysteriously believe this
> _unconditionally_, it will be too hard for them to get into anything
> new, so they give up, and instead go on and on about how wrong what
> they came to is.
That's a pretty much completely useless argument. How are we then to
distinguish me starting with Dylan and taking a look at CL from Erik
Naggum starting with CL and taking a look at Dylan? Are we doomed to
always disagree? I hope not.
Furthermore, it's not even a *correct* representation of the situation.
I learned Lisp 1.5 long before I learned Dylan. Contrary to your "baby
duck syndrome" supposition, I saw that very many things in Dylan are
done far better than in Lisp 1.5, separation of the concepts of false
and empty list being just one of them.
And that's not even counting the various other journeys. My first
programming language was FORTRAN IV. I saw that Pascal was better, and
moved to that, and then later saw that Modula-2 was better and moved to
that. Then I learned C and saw that it was better than Pascal but worse
than Modula-2 and stayed with Modula-2. Then C++ came along and I saw
that it was better than Modula-2 and moved to it. Then Java came along
and I saw that it was worse than C++ + Boehm so I stayed with C++. Oh,
and I didn't mention the Data General machine I used in 1985 - 1986
which had only FORTRAN, COBOL and PL/1 available and so I chose to use
PL/1, that being the best of a bad lot. Or probably another dozen or
more languages along the way that I've learned, evaluated, and either
used or discarded.
-- Bruce
Bruce Hoult wrote:
>
> In article <32154810...@naggum.net>, Erik Naggum <er...@naggum.net>
> wrote:
.......
> > I believe the only productive way to learn a new skill is to open one's
> > mind to the superior knowledge of those who already know it well .....
>
> That's a pretty much completely useless argument. How are we then to
> distinguish me starting with Dylan and taking a look at CL from Erik
> Naggum starting with CL and taking a look at Dylan? Are we doomed to
> always disagree? I hope not.
You won't /always/ disagree because over time you get acquainted with
the different approach and your opinions will converge.
One of the things I sense about Lisp is that it has developed for a very
long time under the hands of folks who are insanely obsessed with doing
the Right Thing, and I think a large part of the determination of what
is the Right Thing comes from What Would A Reasonable Person Expect. And
so a Reasonable Person programs Lisp like a hot knife through warm
butter. But I digress.
A good example was my shift from MCL to ACL. My first reaction to ACL
was yechhh. But I figured it was just because I was used to MCL. Sure
enough, I now have no problems with ACL (and would probably have trouble
getting back into MCL).
So the point is, if you think the folks who created X gave it some
thought, take a few months to get into X before you dis it.
kenny
clinisys
I thought this for a short while designing Dylan, but I changed my
mind pretty quickly. You can see where this leads with 'null' in
Java; Java's 'null' is, in some sense, typed, but the problem is that
'null' doesn't obey any of the protocols of the type.
Dylan has proper support for type unions, and we all realized that
'type-union(<integer>, singleton(#f))' -- or 'false-or(<integer>)',
since there's a standard extension macro -- is preferable. It allows
the possibility of a "null" without all the headaches, and it means that
in the usual case you can generate tighter code.
> There is a guarantee in C that no object has an address equal to the
> null pointer, but there is no type that says "this pointer points to a
> genuine object, and NOT to null". That's the problem. C++ tries to get
That is a complaint about the weak type system that C has, which I can
sympathise with. If I wanted to program in a statically typed
language, I'd want a type system that is at least as powerful as that
most modern functional programming languages enjoy, with type-inference
thrown in.
> > in fact it is going to _worsen_ the situation, because all list
> > processing functions know how to deal with empty lists, but now
> > that NIL != () they are _more_ likely to blow up if handed NIL,
> > when they aren't going to expect it.
>
> No, that's an improvement, because such a situation is a programming
> error, and you will now discover the error (and fix it) much sooner.
It isn't a programming error. The callee is quite capable of handling
the empty list, and the caller has decided that it wants the NIL case
to be treated identically. The callee doesn't have to be written to
handle the NIL case.
What is the fix? Instead of
(defun foo (x)
  (bar (list-or-nil-returner x)))

(defun bar (list)
  (dolist (elem list) (print elem)))
you want to see either
(defun foo (x)
  (let ((result (list-or-nil-returner x)))
    (bar (if (valid-p result)   ;; Could be only RESULT if NIL != ()
             result
             '()))))

(defun bar (list)
  (dolist (elem list) (print elem)))
or
(defun foo (x)
  (bar (list-or-nil-returner x)))

(defun bar (list-or-nil)
  (let ((list (if (valid-p list-or-nil) list-or-nil '())))
    (dolist (elem list) (print elem))))
I think the first replacement is the better one, if bar can be
specified to just handle lists. But the fact of the matter is that
the original code is completely equivalent to that replacement, not
only in effect, but also in communicated intent.
> > > It seems to me that using an empty list to represent NULL is just as
> > > bad.
> >
> > Well, a lot of things seem to me to be the case, when I sit in my
> > armchair and muse about the state of the world. Lucky for you I don't
> > share all of them with c.l.l.
>
> I didn't start this thread.
So what?
> Furthermore, I have a lot of experience in a language in which a false
> value is not a list, so it's hardly armchair musing when I say that it's
> a damn good idea to separate the two notions.
But it is! The experience you have in Dylan counts as zero when
discussing the problems of Common Lisp. _They are different
languages_! Language features aren't orthogonal, they live in the
ecology of a whole language. Language feature A might be very
problematic in language 1, but pose no problems at all in language 2,
because it interacts very differently with other language features in
those languages, and with idiomatic programming in those languages.
So your experience with Dylan can support the claim "There is at least
one language where completely separating false and every other
datatype worked out great". Since I don't have enough experience with
Dylan, I can neither refute nor confirm that claim for Dylan, but I
have used a number of other programming languages that did this, and
didn't find any problems with this approach.
What your experience doesn't support is the claim "(eq NIL ()) is a
problem in Common Lisp", or the broader claim "Identifying the false
value and some value of another datatype is wrong or problematic in
any language".
What my experience with CL does support is that (eq NIL ()) is not a
problem I have ever encountered in serious use of Common Lisp:
> > I have yet to see one bug that was caused by (eq 'NIL '()) tripping up
> > some minimally-competent programmer (i.e. one that didn't just start a
> > CL implementation by accident, when he really was expecting Scheme, or
> some other language), or forcing someone to write noticeably more
> > convoluted code because of it.
>
> That's a great stock answer that can be used to justify any "feature" in
> any language or indeed application program. "If you were an expert in
> this program then you wouldn't have any problems. How do we know if
> you're an expert? Because you don't have any problems, of course!"
Who said anything about experts? I qualified my statement with
"minimally competent", because we sadly get lots of people over here who
program in CL without having any knowledge of the language, assuming
they can just carry over their knowledge of Scheme, or some other
language. I don't think the problems they present are to be taken
seriously.
Suppose I were to start programming in Dylan and stumbled across the
"problem" that Dylan seals its GFs by default, because I, not having
read anything about Dylan and blissfully assuming that this works
just like in CL, ran into large numbers of errors. Now tell me, is my
having problems indicative of Dylan having a problem, or is it rather
indicative of _me_ having a serious problem?
So we are not speaking about normal users having problems; we are
speaking about misguided individuals having problems.
> Don't worry. CL is totally perfect just as it is. No other language,
It isn't. There are a number of things that I find problematic. One
such problem is the fact that LOOP is specified not to allow the
intermixing of VARIABLE-CLAUSES and MAIN-CLAUSES:
<quote hyperspec>
loop [name-clause] {variable-clause}* {main-clause}*
</quote>
Since termination tests are part of the main-clauses, you can't write
(loop for x = (foo ...)
      while x
      for y = (do-something-with-non-null x)
      for z = (bar y)
      do
      ...)
This in itself causes one to write more convoluted code in such
situations, especially if one can't fold the binding of y and z into a
LET construct, because one wants to use them in further loop clauses.
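[For reference, when y and z are not needed in later iteration clauses,
the fold into a LET inside the body is straightforward; this sketch just
reuses the placeholder names from the example above:

(loop for x = (foo ...)
      while x
      do (let* ((y (do-something-with-non-null x))
                (z (bar y)))
           ...))
]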
One of the better possibilities is
(loop for x = (foo ...)
      for test = (unless x (loop-finish))
      for y = (do-something-with-non-null x)
      for z = (bar y)
      do
      ...)
[Thanks Kent for reminding me about this possibility]
As I think you'll agree, this is ugly and non-obvious.
Furthermore this problem is aggravated by the fact that nearly all
LOOP implementations _do_ support this kind of intermingling, but
often without specifying the exact semantics of such a thing. This
leads many people astray, blissfully assuming that it is well
specified portable code they are writing, until some new LOOP
implementation coughs at their code, hopefully producing warnings, but
possibly producing false iteration code instead.
And it isn't a problem that only seriously misguided people
encounter. Even people well versed in the language make that slip
from time to time, as can be seen by the fact that Kent posted an
erroneous example only recently. The same thing happened to me lots of
times, until I became paranoid enough to check every loop form I
stumble across for that mistake. This is clearly an indication
that something is amiss here.
How to solve that problem is another question, of course. Reopening
the standard is out of the question for some time to come, so some
other solution will have to be found.
> past, future or present ever had any idea better than the ideas already
> present in CL. Where CL is more flexible than other languages it's
> because such complexity is necessary. Where CL is more restrictive than
> other languages it is because more generality is unnecessary.
You seem to cling to the idea of a "perfect" language, with real
languages approaching that singularly defined state of perfection as a
limit.
That is IMHO completely wrong. Many, many decisions of languages have
to be judged subjectively by that language's community.
For example, Dylan is more static than CL in a number of areas, and
defaults to the static choice in a number of places (e.g. sealing GFs
by default). So tell me, which is the better language, which made the
better choices? I don't think this is a sensible question. I think
CL made the correct choices with regard to the CL community, and
Dylan did so with regard to the Dylan community. Neither is a priori
better than the other; they are different, because the members of
their communities value things differently, and likely also write
different programs.
That doesn't mean that it isn't possible to criticize a language, it
just means that you have to do it from the point of view of someone
writing non-trivial programs in it.
What would you think of a German, who came to England, and started to
criticize the English language based on the little he knows about it,
and the many great ideas that German qua language embodies? The fact
that you can't just directly transplant language features from German
to English, and that things that would have been problems in German
aren't in English (and vice versa), doesn't mean that one can't engage
in informed criticism of the language; it just means that one has to
inform oneself about the language, as used by the people, before one
can do so.
> I get it now.
I somehow doubt that very much...
Geez, could you quit carping about this and start _listening_ some day
soon? That distinction is made in the protocol described in the
documentation of the function you are using. It is never a problem
because Common Lisp programmers do not _want_ to use false and the empty
list at the same time any more than C programmers want to put null bytes
in their strings or put any objects at the very beginning of memory.
It is the Sapir-Whorf hypothesis all over again. Because the language
does not do it, its (smarter) programmers do not want to do it. You want
to do something like this because you have yet to internalize the rules
of the language. The more you think it is "wrong", the less likely you
are to grasp what the rules of the language _are_.
| That's a pretty much completely useless argument.
Yeah, I figured you would not get it. I think you are a waste of time
this time, too. You are one of those guys who make up their mind what
the world should be like, and then blame the world for not conforming.
It is _not_ Common Lisp's fault you do not like some of its features.
There is consequently nothing that _Common_Lisp_ can do to fix this.
Who is this prat "SWM" and did he design Dylan?
Sorry -- I didn't design Dylan. I meant "while Dylan was being
designed, I used to think this, too". I do not mean to take credit
for the design of very much of the Dylan language. Sorry.
> "Bruce Hoult" <br...@hoult.org> wrote in message
> news:bruce-9A9045....@news.paradise.net.nz...
> >
> > It seems to me that *every* datatype deserves to potentially have a
> > "don't know" or "doesn't exist" value.
>
> I thought this for a short while designing Dylan, but I changed my
> mind pretty quickly. You can see where this leads with 'null' in
> Java; Java's 'null' is, in some sense, typed, but the problem is that
> 'null' doesn't obey any of the protocols of the type.
You are mistaking my meaning. Or I wasn't clear enough, or something.
Null in Java is evil because there is no way to express the idea that
you have a variable that is known to *not* be null.
What I want instead is a "null" value that is not in fact a member of
*any* normal type, but which can be (OR ...)'d with any type to make a
new type-that-includes-null. The null value/type should not be a member
of any normal type, which means it should not be, for example, a valid
list.
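[A rough CL approximation of that wish, as a sketch only: pick a
dedicated marker symbol and use an EQL type to union it into any other
type. MAYBE, +NOTHING+, and LOOKUP are made-up names, and since the
marker is still a symbol this only approximates "not a member of any
normal type".

(deftype maybe (type)
  ;; Either TYPE or the marker; in the EQL type specifier the symbol
  ;; +NOTHING+ is the literal marker object, not a variable.
  `(or ,type (eql +nothing+)))

(defun lookup (key table)
  ;; Return the value stored under KEY, which may legitimately be NIL
  ;; or (), or the symbol +NOTHING+ when KEY is absent from TABLE.
  (multiple-value-bind (value found-p) (gethash key table)
    (if found-p value '+nothing+)))

A caller then checks (eq result '+nothing+) to tell "absent" apart from
any stored value, including NIL and the empty list.]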
> Dylan has proper support for type unions, and we all realized that
> 'type-union(<integer>, singleton(#f))' -- or 'false-or(<integer>)',
> since there's a standard extension macro -- is preferable. It allows
> the possibility of a "null" without all the headaches, and it means that
> in the usual case you can generate tighter code.
Yes, this is a beautiful thing, and highly practical.
The only downside is that Dylan doesn't then have a proper <boolean>
type. type-union(<boolean>, singleton(#f)) is the same as <boolean>,
which is bad. But it's better than the false value being a valid list.
-- Bruce
After a test for null, is that not precisely what you have?
| The null value/type should not be a member of any normal type, which means
| it should not be, for example, a valid list.
Could you _please_ start to explain _why_ these random things you keep
carping about being "better" and "should" are just that? All we get from
you are random _conclusions_ and nothing at all to support them if one
happens to disagree with those conclusions. That _must_ mean there is
absolutely nothing to them.
Get over your personal hangup and just accept the language for what it
is. _Nothing_ will _ever_ happen to the language just because you keep
having these problems. Nobody is interested in your lack of will to
accept the language while you are learning it. It is fantastically
annoying to have ignorant spluts keep arguing about things they do not
like about the language. What do you nutballs expect to happen? Is this
how you cope with _unchangeable_ things in general? I do not expect you
to accept any other people's decisions any more than that if* guy did,
either, but it only makes you look _really_ stupid to those who manage to
live with the language. What do you _want_? If you have no other mission
here than to get "sympathy" for your personal problem of accepting what
you cannot change, I think the whole thing really reeks. Get over it and
move on, or back to perfect Dylan or whatever the fuck you really need.
Erik Naggum wrote:
>
> Could you _please_ start to explain _why_ these random things you keep
> carping about being "better" and "should" are just that? ...
> Get over your personal hangup....Nobody is interested in your lack of
> will...It is fantastically annoying to have ignorant spluts keep arguing
> about things they do not
> like...What do you nutballs expect to happen?...it only makes you look
> _really_ stupid.....your personal problem of accepting what
> you cannot change, I think the whole thing really reeks. Get over it and
> move on, or back to perfect Dylan or whatever the fuck you really need.
What justifies this bilious spew? Aside from it being great fun to write
and read.
Here is something I have been practicing lately in professional email
and NG postings. After I write something, I go back over it and
eliminate everything in the second person from sensitive material.
Programmers are
sensitive; our asses are always on the line. When code breaks, the
author gets beeped. Sometimes fired.
So where I had originally written "in convert-date-to-inches you use
the metric conversion..." I go back and change that to
"...convert-date-to-inches uses a metric conversion. maybe we should use
english...".
The article to which I respond uses the second person 23 times in
fourteen sentences (counting the third person bit about ignorant spluts
as 5). Ouch. Now excuse me while I go look up "splut".
kenny
clinisys
Now your article looks much better, does it not? Context be damned! Let
us all only select the individual words we do not like and respond to each.
But to answer your stupid question: The text you elided in order to post
your venomous response, of course.
Learn to recognize a troll who is not after solving any problems, but
only seeks _sympathy_ for his problems. My guess is he only needs
someone to say they agree with him that it is bad design to "confuse" the
empty list and nil, and he will feel much better about himself. What
would actually _solve_ his problem, would, of course, be to _understand_
the issue, but sympathy-seekers are prevented from understanding because
there is no sympathy in understanding. Understanding yields joy, not
shared suffering -- that is the result of its opposite.
> In article <sfw8zcy...@shell01.TheWorld.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > we have a 25-or-so year history
> > during which both Scheme and CL have existed, and in no cases have I
> > ever seen or heard of someone tearing out their hair and leaving the
> > CL community saying "I just can't build complex programs because this
> > NOT/NULL [or false/empty-list] thing is making it too hard to write
> > metaprograms". It just doesn't happen. So no one ever points to that
> > as the reason for splitting false/empty-list. They point instead into
> > the murky depths of the human brain, citing simplicity without
> > defining their simplicity metric
>
> Here's what I don't get about CL confusing false and the empty list:
> suppose you have a function (foo name) that returns a list of all the
> classes being taken by a student. If the function returns NIL, does
> that mean that the student is taking no classes at the moment, or does
> it mean that the student doesn't exist?
Well, that's a lousy interface (to the function foo). I can see two
good ways to fix it:
1. FOO returns two values, the first being the list, the second
being a boolean, true if the student was found, as with GETHASH (see
the sketch after this list).
2. FOO returns the list, and signals an error if the student is not
found. If the caller is expecting there to be a high proportion
of unknown students, the cost of this situation can be reduced
somewhat by having the keyword arguments :error-p and
:error-value, as with READ.
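[A sketch of option 1 in the GETHASH style; *STUDENTS* and
STUDENT-CLASSES are made-up names:

(defvar *students* (make-hash-table :test #'equal)
  "Maps a student's name to his or her list of classes.")

(defun student-classes (name)
  ;; GETHASH already returns (values value found-p), so an empty class
  ;; list and an unknown student are kept apart by the second value.
  (gethash name *students*))

(setf (gethash "Alice" *students*) '())           ; enrolled, no classes
(setf (gethash "Bob" *students*) '(cs101 math2))  ; enrolled, two classes

(multiple-value-bind (classes found-p) (student-classes "Carol")
  (cond ((not found-p)  (format t "No such student~%"))
        ((null classes) (format t "Taking no classes at the moment~%"))
        (t (dolist (class classes) (print class)))))
]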
One thing that's very nice about having a Really Big language is that
(iff it's well-designed, as CL is) the language itself offers good
design examples. If you're having a problem expressing something,
first see if CL deals with similar situations, and see how it deals
with them. (AMOP is also quite good for this).
As I said before, the NIL-the-symbol/NIL-the-boolean/NIL-the-empty-list
thing causes occasional philosophical angst for me, but no practical
problems that good design couldn't solve.
> It seems to me that using an empty list to represent NULL is just as bad.
Er, I hope you meant "to represent false", because null-ness is a
property of lists (or sets, or ...), not booleans.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
This is a very good analogy. I started this strand of this thread,
not as a criticism of CL's design choice (which, in practice, works
just fine in CL, and has yet to cause me a practical problem), but
rather as a defense of another choice as reasonable. I still think a
reasonable language, similar to CL, but differing in its goals, could
make the other choice without pain.
French has some very cool things that are missing from English.
Borrowing cool things where you can works out pretty well, in practice
(cf. the English language). Trying to force the fundamental,
incompatible ideas of one language onto another just makes a mess (e.g.,
when people try to make English a Romance language), even if those
ideas were perfectly wonderful in the original language (cf. French).
Design choices cannot be made independently of one another, which
means there will never be a perfect language, any more than there will
be a perfect organism -- the best one can hope for is a perfect fit
for its environment/user-base.
I don't think your problem is with CL "confusing" false and the empty
list.
Suppose you have a function (takes-class-p name class), that checks
whether a student takes a specific class at the moment. If the
function returns NIL, does that mean that the student is not taking
that class at the moment, or does it mean that the student doesn't
exist?
I think your problem is with bad documentation.
Boris
--
bo...@uncommon-sense.net - <http://www.uncommon-sense.net/>
Laugh and the world thinks you're an idiot.
Thank you, Nicolas.
P.S.: Sorry if it was already mentioned in this thread. I couldn't
bear reading this never-ending story in its whole length again...
The ANSI group didn't have a web site, so this isn't enough
information to jog my memory. Maybe someone else will remember
another clue.
I'm sorry, I messed this up. The reference I had in mind was about
another "hot" topic, namely the single namespace vs. double namespace
issue. Of course, when adjusting my memory, I found the reference
immediately. It is your report (together with Gabriel)
http://world.std.com/~pitman/Papers/Technical-Issues.html
A pity that there is nothing similar for the nil/false issue.
Anyway, I'm sorry for my confusion.
Yours, Nicolas.
> And what about programs that return type (or foo bar). Is
> there a type (NULL-OF (OR FOO BAR)) that is distinct from type (NULL-OF FOO)
> and type (NULL-OF BAR)? If so, it sounds like a mess to represent and
> recognize. But ok, maybe. I didn't think this through utterly but I feel
> the phrase "set of all sets not contained in any set" is going to enter this
> discussion really soon now... I do know that every time there has been a
> serious discussion of solving this, such a bubble really does creep in.
> No matter what set someone is returning, there's always a desire to squeeze
> in just one extra element ... and often it defeats the whole point of having
> picked the size of data storage one has chosen.
I think it's at least some exponential blowup thing. If you have n
basic types, then I think you have 2^n possible types like (or ...),
and you want each of these to have a distinct NULL-OF, so you need 2^n
distinct values.
And we have satisfies as a type descriptor too (oops, lots and *lots*
of NULL-OFs now), and what about (null-of (or foo (member (null-of
foo)))), or even better (null-of (not (member (null-of foo))))?
--tim
Erik Naggum wrote:
>
> * Kenny Tilton
> | [...] this bilious spew? [...]
That's a great compression algorithm, is it not? A little lossy in that
it tosses all civil discourse and keeps only the fighting words, but
there's the benefit: we don't have to wade thru the tedious material
to get to the fireworks. Glad to see the 98% compression achieved
on my post.
>
> But to answer your stupid question: The text you elided in order to post
> your venomous response, of course.
>
Turnabout is fair play, but FWIW my intent was pacifistic and
compassionate. And as a faithful reader of such threads I was honestly
puzzled as to how we got from zero to sixty in one article. Most folks
earn their incinerations by pursuing the threads beyond reason,
something not evident in this case.
> Learn to recognize a troll...
Fine, but missing from the above digest is my implication that a master
of relaxation would take such common NG conduct in stride.
[aside: the 2nd-Person scoring was off last time; it was the use of
injunctions such as "Learn to..." that should count as five YOUs.]
It's a shame to see such incredible energy, devotion, and intellect
tilting at trolls, if*, and dictionary entries. I'd rather see a sit-in
staged at the most prestigious university not teaching Lisp. Students
would go crazy over a trash-talking, "bad boy" of computer science, John
McEnroe meets Sean Penn, high-intellect act.
You're the rock star of Common Lisp. Get your head out of c.l.l. and go
win us some converts.
kenny
clinisys
> You're the rock star of Common Lisp. Get your head out of c.l.l. and
> go win us some converts.
I don't understand why some people are obsessed by the idea that
Common Lisp needs converts. I even think that converts that have to be
won should better stay away from Common Lisp.
--
Janis Dzerins
Eat shit -- billions of flies can't be wrong.
Hrmf. Heretic. :)
--
Vebjorn Ljosa
Janis Dzerins wrote:
>
> Kenny Tilton <kti...@nyc.rr.com> writes:
>
> > You're the rock star of Common Lisp. Get your head out of c.l.l. and
> > go win us some converts.
>
> I don't understand why some people are obsessed by the idea that
> Common Lisp needs converts. I even think that converts that have to be
> won should better stay away from Common Lisp.
I agree, actually. The unmentioned alternative to a university sit-in
would be simply to do great things with CL and not worry about what
other people think.
If CL had as many users as Java, Python, or Perl I think we would all
benefit, but I do not see that ever happening. I /do/ see other
languages continuing to pick up on CL features such as GC,
introspection, anonymous functions, GFs, and macros, so that if CL finally
dies it will have already been reborn in other HLLs.
kenny
clinisys
> Janis Dzerins wrote:
> >
> > Kenny Tilton <kti...@nyc.rr.com> writes:
> >
> > > You're the rock star of Common Lisp. Get your head out of
> > > c.l.l. and go win us some converts.
> >
> > I don't understand why some people are obsessed by the idea that
> > Common Lisp needs converts. I even think that converts that have
> > to be won should better stay away from Common Lisp.
>
> I agree, actually. The unmentioned alternative to a university
> sit-in would be simply to do great things with CL and not worry
> about what other people think.
Oh, I see you're trying to be funny. But we don't have to dismiss
_all_ others. I just came up with a first criterion that people I
would listen to would have to satisfy: they must have read at least
one standard (be it a programming language or a community-accepted
protocol, like most internet protocols) and not complain about how
stupid one thing or another in it is (without having a better
solution).
> If CL had as many users as Java, Python, or Perl I think we would
> all benefit, but I do not see that ever happening. I /do/ see other
> languages continuing to pick up on CL features such as GC,
> introspection, anonymous functions, GFs, macros so that if CL
> finally dies it will have already been reborn in other HLLs.
And we especially don't need people talking about the death of Common
Lisp.
Yes, intent is all that counts.
| And as a faithful reader of such threads I was honestly puzzled as to how
| we got from zero to sixty in one article. Most folks earn their
| incinerations by pursuing the threads beyond reason, something not
| evident in this case.
Huh? Bruce Hoult has been carping on the same stupid "confusion" for
years. Nothing ever happens. No understanding. Just pauses between
reiterations and regurgitations of the same stale arguments.
| You're the rock star of Common Lisp. Get your head out of c.l.l. and go
| win us some converts.
Like most rock stars, I think I should go despair about my popularity.
///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.
Janis Dzerins wrote:
>
> Oh, I see you're trying to be funny.
No, implicit in my suggestion was the energy I saw being misdirected
at defending CL, in the hope, I gathered, of someday seeing greater
use. Given that goal, apparently sought by others, I recommended more
aggressive action, crucially outside c.l.l.
I am on the fence. I would love to see CL take over the world, but I am
not going to worry about it, I am just trying to do good work with CL.
And I definitely agree with the immediate point to which I responded:
too aggressive a netting might snare the wrong fish. CL would be wasted
on anyone who does not get it. But the only way to choose the few is to
call the many, perhaps with a bullhorn at some prestigious university.
>
> And we especially don't need people talking about the death of Common
> Lisp.
CL is too good to be threatened by things like NG articles, and it is
too good to die. For it to die, something better must come along, and
that is impossible, because CL could itself become that better thing,
and do it better than the newcomer could. Look at OO and CLOS. This is
why Arc may be a wrong turn.
kenny
clinisys
Erik Naggum wrote:
>
> Huh? Bruce Hoult has been carping on the same stupid "confusion" for
> years.
Oh, OK. Sometimes the existence of a certain amount of history is
apparent in these threads; this time I did not pick that up.
kenny
clinisys
> Get over your personal hangup and just accept the language for what it
> is. _Nothing_ will _ever_ happen to the language just because you keep
> having these problems. Nobody is interested in your lack of will to
> accept the language while you are learning it. It is fantastically
Can anybody explain to me why some people regard the Common Lisp
standard as gospel? What happened to the spirit of continuous
improvement that has led Lisp through the decades?
Andreas
--
"In my eyes it is never a crime to steal knowledge. It is a good
theft. The pirate of knowledge is a good pirate."
(Michel Serres)
> One of the things I sense about Lisp is that it has developed for a very
> long time under the hands of folks who are insanely obsessed with doing
> the Right Thing, and I think a large part of the determination of what
> is the Right Thing comes from What Would A Reasonable Person Expect. And
I hoped that it would be like this.
But I'm seeing a lot of "Common Lisp does it that way and you need to
swallow it", and a lot of people who seem to have forgotten that
Common Lisp is man-made (and even designed by a committee and full of
bad compromises and legacy crap).
We folks from the Dylan community see ourselves as part of the Lisp
community. That's not only because most of the Dylan designers were
also involved in the Common Lisp standardization process, but also
because the heritage shows in a lot of places, ranging from the object
system (modelled after CLOS), the optimization theory (type
annotations), and first-class functions, to the UI framework (CLIM
vs. DUIM), the general compiler theory, etc. We're interested in an
honest discussion of what the Right Thing is (and yes, Dylan has its
shortcomings too).
> It is _not_ Common Lisp's fault you do not like some of its features.
> There is consequently nothing that _Common_Lisp_ can do to fix this.
Yes, but Lisp can fix it.
It is not broken. It is a feature, not a mistake. If some people are
not happy with CL, then they can write a new Lisp in CL (as a new
shadowed package, or would people rather write it in C?). Nothing is
stopping them; create a language that has false.
Wade