
should languages be typed?


jh

Jul 8, 2006, 6:24:50 AM
Reading in [1] about type systems I came across the following chapter
(page 6):
----
Should languages be typed?
The issue of whether programming languages should have types (even with
weak checking) is still subject to some debate. There is little doubt,
though, that production code written in untyped languages can be
maintained only with great difficulty. From the point of view of
maintainability, even weakly checked unsafe languages are superior
to safe but untyped languages (e.g., C vs. LISP). Here are the
arguments that have been put forward in favor of typed languages, from
an engineering point of view: ...
----

I'm not versed enough to disagree with good arguments, but that sounds a
bit biased. C vs. Lisp and C is more maintainable? Could anyone give me
arguments that Luca Cardelli is wrong?


[1] http://citeseer.ist.psu.edu/cardelli97type.html

--

John Thingstad

Jul 8, 2006, 6:59:28 AM

Lisp is strongly typed.
Variables are pointers to objects.
It is objects that have type.
You can't access a variable before assigning it an object.
You cannot go from one type to another without an explicit conversion.
(This is a bit simplified)
The real difference then is that much of this checking happens at runtime.
The problem this can cause is that unrun code could still contain errors.
The same approach is used in Python.

The real problems happen in languages like Perl and PHP, which
can create variables on the fly, provide default values, and perform
automatic conversions.
This creates major maintenance issues.
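For contrast, a minimal Common Lisp sketch (hypothetical function; the
misspelling is the point) of why that particular problem doesn't arise there:

;; The second argument is misspelled in the body:
(defun total-price (price quantity)
  (* price quanttity))

;; Compiling this warns that QUANTTITY is an undefined variable, and
;; calling it signals an UNBOUND-VARIABLE error -- the mistake surfaces
;; at its source instead of being defaulted, converted, and propagated
;; through the rest of the code.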

Giving variables types poses problems of its own.
In particular, forcing all parts of the program to
specify a type seems to force premature optimisation.
This tends to make it much more expensive to change code, and
thus the algorithms you end up with tend to be slower.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

Pascal Bourguignon

Jul 8, 2006, 8:03:45 AM
jh <nos...@nospamdrugphish.ch> writes:

No, he's right. It's only that you infer that, because he writes
black vs. white (e.g., C vs. LISP), he means C=black and LISP=white.
It's the reverse:

C is weakly typed.
LISP is strongly typed.


In C:

% cat b.c
#include <stdio.h>
int main(int argc,char** argv){
printf("%s\n",argv[1]+42);
return(0);
}
% make -k b
make: `b' is up to date.
% ./b xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxx
% ./b xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx yyyyyyyyyyyyyyyyyyyy
yyyyyyyyy
% ./b
Segmentation fault

On the other hand, in LISP:

[203]> (+ "hello world" 42)

*** - +: "hello world" is not a number
The following restarts are available:
USE-VALUE :R1 You may input a value to be used instead.
ABORT :R2 ABORT
Break 1 [204]> :q
[205]> (compile nil (lambda () (+ "Hello" 42)))
WARNING :
Arithmetic operand "Hello" must evaluate to a number, not "Hello"
WARNING :
Run time error expected: +: "Hello" is not a number

#<COMPILED-FUNCTION NIL> ;
2 ;
2


--
__Pascal Bourguignon__ http://www.informatimago.com/

In a World without Walls and Fences,
who needs Windows and Gates?

jh

Jul 8, 2006, 8:39:32 AM

>
> No, he's right. It's only that you infer that, because he writes
> black vs. white (e.g., C vs. LISP), he means C=black and LISP=white.
> It's the reverse:
>
> C is weakly typed.
> LISP is strongly typed.
>
I know that in Lisp values have types and variables don't. What was
confusing me is that the author categorized Lisp as an 'untyped'
language. Maybe by 'untyped' he meant that it is not statically (strongly)
type checked and that most checks are deferred to runtime? ..

>
> In C:
>
> % cat b.c
> #include <stdio.h>
> int main(int argc,char** argv){
> printf("%s\n",argv[1]+42);
> return(0);
> }
> % make -k b
> make: `b' is up to date.
> % ./b xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> xxxxxxxxxxxxx
> % ./b xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx yyyyyyyyyyyyyyyyyyyy
> yyyyyyyyy
> % ./b
> Segmentation fault
>
>
>
> On the other hand, in LISP:
>
> [203]> (+ "hello world" 42)
>
> *** - +: "hello world" is not a number

..


> [205]> (compile nil (lambda () (+ "Hello" 42)))
> WARNING :
> Arithmetic operand "Hello" must evaluate to a number, not "Hello"
> WARNING :
> Run time error expected: +: "Hello" is not a number
>
> #<COMPILED-FUNCTION NIL> ;
> 2 ;
> 2
>

yep..thanks.

Pascal Costanza

Jul 8, 2006, 9:20:20 AM

Luca Cardelli is right when all you are relying on during maintenance is
the type system. When you are using other tools, static types become
less important. For example, some developers who use a test-first
approach (as in agile methodologies, like XP) claim that static type
checks don't add anything interesting, because sufficiently large test
suites already seem to account for the kinds of errors that static type
systems are supposed to catch. Since you should have good test suites
anyway, so that you can also deal with the kinds of errors that static
type systems don't easily cover, it becomes questionable whether being
redundant pays off in the end.
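A small sketch of that argument (hypothetical function and test; a bare
ASSERT stands in for whatever test framework you like):

(defun parse-price (line)
  ;; expects a string like "42" and returns an integer
  (parse-integer line))

;; The first test that exercises the call also exercises the types
;; flowing through it; passing a number instead of a string fails here,
;; at test time, much as a static checker would have complained:
(assert (= 42 (parse-price "42")))
;; (parse-price 42)  => error: 42 is not of type STRING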


Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

Ken Tilton

Jul 8, 2006, 10:56:03 AM

jh wrote:
>>No, he's right. It's only that you infer that, because he writes
>>black vs. white (e.g., C vs. LISP), he means C=black and LISP=white.
>>It's the reverse:
>>
>> C is weakly typed.
>> LISP is strongly typed.
>>
>
> I know that in Lisp values have types and variables don't. What was
> confusing me is that the author categorized Lisp as an 'untyped'
> language. Maybe by 'untyped' he meant that it is not statically (strongly)
> type checked and that most checks are deferred to runtime? ..
>
>

I think you misunderstood. C does not get high grades from Cardelli, but
C++ does. And he seems to get some things right (by "safe" he means
runtime-checked with recoverable exceptions):

"Most untyped languages are, by necessity, completely safe (e.g., LISP).
Otherwise, programming would be too frustrating in the absence of both
compile time and run time checks to protect against corruption. Assembly
languages belong to the unpleasant category of untyped unsafe languages.

"Table 1. Safety

"Should languages be safe? Lack of safety in a language design is
motivated by performance considerations (when not introduced by
mistake). The run time checks needed to achieve safety are sometimes
considered too expensive. Safety has a cost even in languages that do
extensive static analysis: tests such as array bounds checks cannot be,
in general, completely eliminated at compile time.

"Safety, however, is cost effective according to different measures.
Safety produces fail-stop behavior in case of execution errors, reducing
debugging time. Moreover, safety guarantees the integrity of run time
structures, and therefore enables garbage collection. In turn, garbage
collection considerably reduces code size and code development
time, at the price of some performance.

"Thus, the choice between a safe and unsafe language may be ultimately
related to a tradeoff between development time and execution time.
Although undeniable, the advantages of safety have not yet caused a
widespread adoption of safe languages. ..."

That was almost ten years ago. Now safe, dynamically typed languages are
pushing C++ and Java into the sea. TDD is the answer to maintainability
and large projects.

Cardelli was just presenting the conventional wisdom, so he would
write a different paper today.

kt

--
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
-- Smiling husband to scowling wife, New Yorker cartoon

Kent M Pitman

Jul 8, 2006, 1:06:31 PM
"John Thingstad" <john.th...@chello.no> writes:

> Lisp is strongly typed.
> Variables are pointers to objects.
> It is objects that have type.
> You can't access a variable before assigning it an object.
> You cannot go from one type to another without an explicit conversion.
> (This is a bit simplified)
> The real difference then is that much of this checking happens at runtime.
> The problem this can cause is that unrun code could still contain errors.

I haven't been following this discussion and can't tell at this point
where I'm peeking in whether you're arguing in favor or against any
particular thesis or if you're just making neutral descriptive
observations, but I'll ignore all of that and just use this as a place
to inject a few comments without regard to whether I'm agreeing or
disagreeing...

The flip side of "the unrun code could still contain errors" is that
when you do run it, there is often enough dynamic type information
around to correct it at runtime. Recall that most static programming
languages and systems (i.e., without invoking Greenspun's Tenth Rule
of Programming) cannot be corrected while they are running.

Lisp programs, by design, can be first started and then patched
later, while running, to correct bugs. And by patched, I don't mean
bypassing the compiler and depositing random binary stuff into a
machine location; I mean doing further programming at the user level
and then merging that programming with the running Lisp program in a
graceful way that is explainable within the language. This capability
means Lisp programs enjoy a power that many statically typed systems
don't. This shift in paradigm is not obvious to static programmers,
who often worry that launching the program is tantamount to freezing
it dead in its tracks... a sad irony if you think of the purpose of
programming as being to give "life" to ideas...
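A minimal sketch of that kind of user-level patch (hypothetical function
name; assuming ordinary, not-inlined calls):

;; A long-running program is using a buggy definition:
(defun discount (price)
  (* price 11/10))        ; bug: raises the price instead of lowering it

;; Without stopping the program, type (or LOAD) a corrected definition:
(defun discount (price)
  (* price 9/10))

;; Every later call through the name DISCOUNT -- from loops, callbacks,
;; whatever is still running -- now uses the fixed code; no relink, no restart.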

(C++ and its ilk are examples of languages I mean to target in the
above. There are some languages like Java that are statically typed
and still reflective, in such a way that you can still, under my
definitions, legitimately call them object-oriented. Java's problem
is that in order to achieve all of this, it sacrifices reasonable
usability, making everything uniformly hard ... in a way that is
perhaps a metaphorical analog to my recent riddle on another thread
about how to make an O(1) version of AREF using only LAMBDA. If
you're having trouble, look for the response from Joe Marshall, who
answered correctly. If you're having trouble figuring out why that
applies here, well, ... maybe it does, maybe it doesn't. I was just
thinking aloud. But it seemed apropos at the moment and so I
mentioned it. Take solace in the fact that I didn't make some obscure
allusion to George Bush or Bill Clinton to make my point...)

Incidentally, this is the notion of "true" object-orientedness that I
think is more critical. (I sometimes call it "identity-oriented",
since "object-oriented" has been co-opted for unprincipled marketing
purposes.) To me, the notion of "objects" is that you can handle them
and manipulate them by inspection/reflection. Since a lot of systems
that are called object-oriented do not have reflection and dynamic
type information left in them, they don't have the ability to play
Lego(R) with their pieces and are not, to my view, objects. To me,
the true test of object-oriented programming is how the object behaves
in a black-box inspection after it's created, not how it behaves
white-box when you can see its code.
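A minimal sketch of that black-box test (hypothetical class; CLASS-OF,
TYPE-OF, SLOT-VALUE and DESCRIBE are all standard):

(defclass account ()
  ((owner   :initarg :owner   :reader owner)
   (balance :initarg :balance :accessor balance)))

(defparameter *acct* (make-instance 'account :owner "jh" :balance 100))

;; Handed only the object, with no source code in sight, we can still
;; ask it what it is and what it holds:
(class-of *acct*)             ; => #<STANDARD-CLASS ACCOUNT>
(type-of *acct*)              ; => ACCOUNT
(slot-value *acct* 'balance)  ; => 100
(describe *acct*)             ; prints the slots and their values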

And, getting back to my notes on the original issue of typing, a cost
of static systems not enumerated here is that you can't CHOOSE to run
the system before you are sure it's right. So when someone frames it
in terms of keeping you from running an incomplete or buggy system,
think twice about whether it's obvious this should be a goal. You
might want to run a system because you only wanted the working parts
(e.g., for testing, or because there was a guard upstream on the data
that meant you knew that the code was unreachable for reasons
inaccessible to the compiler).
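A minimal sketch (hypothetical names) of choosing to run only the working
parts:

(defun report (kind data)
  (case kind
    (:summary (summarize data))      ; SUMMARIZE exists
    (:chart   (draw-chart data))))   ; DRAW-CHART isn't written yet

(defun summarize (data)
  (length data))

;; Compiling REPORT only warns that DRAW-CHART is undefined; the
;; :SUMMARY path can be exercised today:
(report :summary '(1 2 3))   ; => 3
;; (report :chart '(1 2 3)) would signal an error -- but only if reached.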

Sometimes, time to market is everything. Businesses often decide that
it's better to get a mostly working product out today than wait too
long to get a perfect system out. So if you have type systems that
claim to do a better and better job of detecting all possible bugs,
then is it a consequence that you will have business systems less able to
get to market on time? Or will programming languages, in the old
tradition of COBOL, start to have a "FINANCIAL TRADE-OFFS DIVISION"
that informs the compiler that certain bugs should not be looked for,
lest finding them hold up a release that simply must go out whether
those bugs are there or not?

> The same approach is used in Python.
>
> The real problems happen in languages like Perl and PHP, which
> can create variables on the fly, provide default values, and perform
> automatic conversions.
> This creates major maintenance issues.

Well, so does having to release a new program every time you want to
change a default.

My intent here is not to say that one or the other thing is right,
just that these are design trade-offs made in a very complicated
space. I like Lisp, but I refuse to defend Lisp by saying "Lisp code
is maintainable and code in language X is not." Anyone will see
through that in a minute. The right answer is "Lisp favors this kind
of issue at the expense of that kind of issue". You either like or
don't like what it favors, but if you try to claim it favors things
with no loss of anything, you're violating some important conservation
principle--the one that informs more mundane principles like
Newton's Third or the Three Laws of Thermodynamics--which effectively
says "you can't get something for nothing".

> Giving variables types poses problems of its own.
> In particular, forcing all parts of the program to
> specify a type seems to force premature optimisation.
> This tends to make it much more expensive to change code and
> thus the algorithms you end up with tend to be slower.

I think the claim that premature optimization is an issue is entirely
fair, and some things are lost in statically typed systems, but I would
not sum them up this way.

The claim that it's more (or less) expensive to change code seems to
me impossible to make in the absence of a context about the domain,
the expected kinds of changes, how much the changes are due to bugs
(incorrect design) and how much due to new features needed
(extensibility), what the nature of the interface boundaries (if any)
are, how well-understood and how changeable the market is, etc.

The issue of algorithms being faster or slower is again too prone to
effects of details to make a sweeping claim. Also, in a parallel system,
the notion of faster or slower may require elaboration. Are you talking
wallclock time or CPU time, for example? With massively parallel systems
like SETI@Home and its successors, for example, you can still consume lots
of resources without it taking a lot of wallclock time. I'd hesitate to
box an algorithm like that into the binary "fast/slow" paradigm, as if
there were no more subtle ways of viewing it.

Andreas Seltenreich

Jul 8, 2006, 1:44:06 PM
Kent M Pitman <pit...@nhplace.com> writes:

> Lisp programs, by design, can be first started and then patched
> later while running to correct bugs. And by patched, I don't mean
> bypassing the compiler and depositing random binary stuff into a
> machine location, I mean doing further programming at the user level

Well, on POSIX systems you can do that at linker-symbol granularity
using dlopen(3). I would surely call it a user-level feature. E.g.,
in PostgreSQL, you could recompile your stored procedure and issue a
LOAD command on the shared library, without even having to interrupt a
running transaction. BTDT.

regards,
andreas

John Thingstad

Jul 8, 2006, 2:37:28 PM
On Sat, 08 Jul 2006 19:06:31 +0200, Kent M Pitman <pit...@nhplace.com>
wrote:

>> The same approach is used in Python.
>>
>> The real problems happen in languages like Perl and PHP, which
>> can create variables on the fly, provide default values, and perform
>> automatic conversions.
>> This creates major maintenance issues.
>
> Well, so does having to release a new program every time you want to
> change a default.
> My intent here is not to say that one or the other thing is right,
> just that these are design trade-offs made in a very complicated
> space.

I think there is more to this than just taste.
If mistyping a variable creates a new one, gives it a default value,
and then casts it to the type required, this can cause major problems.
The problem does not necessarily show up where the bug is.
Instead it can propagate throughout the code and rear its
head anywhere. This makes debugging a nightmare.
If the code base is relatively small this might not be so much of a
problem. A PHP program typically consists of no more than a few hundred
lines of code, and each file in the interface is self-contained.
But the minute you try to scale this into an application language
you can get into trouble.

>> Giving variables types poses problems of its own.
>> In particular, forcing all parts of the program to
>> specify a type seems to force premature optimisation.
>> This tends to make it much more expensive to change code and
>> thus the algorithms you end up with tend to be slower.
>
> I think the claim that premature optimization is an issue is entirely
> fair, and some things are lost in statically typed systems, but I would
> not sum them up this way.
>

Agreed, this was a clumsy formulation.
A better way of phrasing what I meant would be:
This tends to make it more expensive to change code, and
thus the algorithms you end up with tend to be less elegant.
Elegance is a balance of readability, maintainability and speed.
When the procedure is known and good algorithms exist,
the difference is usually in C's favor, but when the
domain is less well understood it takes luck to come up with
a good algorithm in a reasonable amount of time.
Lisp makes it easier to experiment because functions are more generic
in nature and change is easy.

Bob Felts

Jul 8, 2006, 3:59:27 PM
jh <nos...@nospamdrugphish.ch> wrote:

> Reading in [1] about type systems I came across the following chapter
> (page 6):
> ----
> Should languages be typed?

[...]

Beats handwriting, especially if your father was a doctor and you
inherited his penmanship.

David Steuber

Jul 8, 2006, 7:08:07 PM
jh <nos...@nospamdrugphish.ch> writes:

> I'm not versed enough to disagree with good arguments but that sounds a
> bit biased. C vs. Lisp and C is more maintable? Anyone could give me
> arguments that Luca Cardelli is wrong?

In my experience, it is not the "type" of a language that affects code
maintainability. How the code is written is far more important,
especially in complex systems.

Why the code is written the way it is is much more important and is
the first bit of information to get lost even if the code is commented
(I rarely see comments and don't use them much myself).

--
The lithobraker. Zero distance stops at any speed.
This post uses 100% post consumer electrons and 100% virgin photons.

Damien Kick

Jul 9, 2006, 5:36:45 PM
jh wrote:
> Should languages be typed?

As opposed to being written longhand?

Kent M Pitman

Jul 10, 2006, 12:39:32 AM
"John Thingstad" <john.th...@chello.no> writes:

> If mistyping a variable creates a new one, gives it a default value,
> and then casts it to the type required, this can cause major
> problems.

Granted. But this feature of defaulting, autocasting, etc. is not a
necessary feature of dynamic languages. Common Lisp doesn't do it.

And there used to be horror stories of Interlisp doing things like this,
but those weren't limited to program-level corrections. It can be built
into higher level tools, whether those tools are implemented in static or
dynamic languages.

My remarks were largely only intended to address the question of what
is implied by or not implied by static typing or dynamic typing...

> >> Giving variables types poses problems of its own.
> >> In particular, forcing all parts of the program to
> >> specify a type seems to force premature optimisation.
> >> This tends to make it much more expensive to change code and
> >> thus the algorithms you end up with tend to be slower.
> >
> > I think the claim that premature optimization is an issue is entirely
> > fair, and some things are lost in statically typed systems, but I would
> > not sum them up this way.
>
> Agreed, this was a clumsy formulation.
> A better way of phrasing what I meant would be:
> This tends to make it more expensive to change code, and
> thus the algorithms you end up with tend to be less elegant.
> Elegance is a balance of readability, maintainability and speed.
> When the procedure is known and good algorithms exist,
> the difference is usually in C's favor, but when the
> domain is less well understood it takes luck to come up with
> a good algorithm in a reasonable amount of time.
> Lisp makes it easier to experiment because functions are more generic
> in nature and change is easy.

I think this is largely right. I often express this by saying that
static languages like C/C++/Java work well when the "data pipeline" is
not changing all the time. If you frequently change something at one
end of the pipeline and then have to laboriously thread the type
change through the whole pipeline, that can be painful. [Some static
type systems (I'm thinking ML/Haskell/etc.) use polymorphism to hide
the change, so that not as much text has to change--I'm not familiar
enough with how these languages are typically compiled to know whether
this manages to avoid recompilation.]
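A minimal sketch of the pipeline point in CL (hypothetical stages): since
the intermediate stages never declare the element type, a change at the
producing end doesn't have to be threaded through each of them:

(defun clean (records) (remove nil records))
(defun pick  (records) (mapcar #'first records))
(defun total (amounts) (reduce #'+ amounts))

(defun pipeline (records)
  (total (pick (clean records))))

;; Integers today, ratios (or some richer record whose accessor gets
;; swapped into PICK) tomorrow -- only the producer changes:
(pipeline '((1 a) (2 b) nil (3 c)))   ; => 6
(pipeline '((1/2 x) (3/2 y)))         ; => 2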

Kaz Kylheku

Jul 10, 2006, 11:08:11 AM
jh wrote:
> Reading in [1] about type systems I came across the following chapter
> (page 6):

[ snip ]

> [1] http://citeseer.ist.psu.edu/cardelli97type.html

The author of [1] is so inaccurate, vague and poorly informed that [1]
is not worthy of discussion.

Lisp (spelled "LISP" in the paper) apparently has no type system, we
learn.

We also learn that in spite of having no types, LISP is "completely
safe". Uh huh!

We learn that "dereferencing nil" is a software fault that cannot be
prevented by a type system. (What about making NIL the only domain
value in a special type?)
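For what it's worth, Common Lisp already has exactly that type (standard
stuff; NON-NIL-SYMBOL is just an illustrative name): NULL is the type whose
one and only member is NIL, so a signature can require or exclude it:

(typep nil 'null)    ; => T
(typep nil 'list)    ; => T   (NIL is also a list and a symbol)
(typep 'foo 'null)   ; => NIL

;; A type that rules the nil case out, so "dereferencing nil" can be
;; rejected as a type error up front:
(deftype non-nil-symbol () '(and symbol (not null)))

(defun name-of (key)
  (check-type key non-nil-symbol)
  (symbol-name key))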

Good grief, what a fucking idiot pretending to be a computer scientist.

William D Clinger

Jul 10, 2006, 7:08:58 PM
Speaking of a paper by Luca Cardelli, Kaz Kylheku wrote:
> The author of [1] is so inaccurate, vague and poorly informed that [1]
> is not worthy of discussion.

Well, at least Cardelli is aware that the programming
language research community uses the phrase "strong
typing" to abbreviate the combination of static typing
with type safety.

> Good grief, what a fucking idiot pretending to be a computer scientist.

A search for "Luca Cardelli" at Google Scholar may help
to clarify this, especially if combined with a search
for "Kaz Kylheku".

Will

Christopher C. Stacy

Jul 10, 2006, 7:55:49 PM
"William D Clinger" <cesu...@yahoo.com> writes:

> Speaking of a paper by Luca Cardelli, Kaz Kylheku wrote:
>> The author of [1] is so inaccurate, vague and poorly informed that [1]
>> is not worthy of discussion.
>
> Well, at least Cardelli is aware that the programming
> language research community uses the phrase "strong
> typing" to abbreviate the combination of static typing
> with type safety.

Certainly some people have used the phrase that way at times.

======================================================================
Strongly-typed programming language
From Wikipedia, the free encyclopedia

In computer science and computer programming, the term strong typing
is used to describe how programming languages handle datatypes. The
antonym is weak typing. However, these terms have been given such a
wide variety of meanings over the short history of computing that it
is often difficult to know what an individual writer means by using
them.

Programming language expert Benjamin C. Pierce, author of Types and
Programming Languages and Advanced Types and Programming Languages,
has said:

"I spent a few weeks... trying to sort out the terminology of
"strongly typed," "statically typed," "safe," etc., and found it
amazingly difficult.... The usage of these terms is so various as
to render them almost useless."
======================================================================

Kent M Pitman

Jul 10, 2006, 8:21:55 PM
cst...@news.dtpq.com (Christopher C. Stacy) writes:

> ...


> ======================================================================
> Strongly-typed programming language
> From Wikipedia, the free encyclopedia

> ...


> Programming language expert Benjamin C. Pierce, author of Types and
> Programming Languages and Advanced Types and Programming Languages,
> has said:
>
> "I spent a few weeks... trying to sort out the terminology of
> "strongly typed," "statically typed," "safe," etc., and found it
> amazingly difficult.... The usage of these terms is so various as
> to render them almost useless."
> ======================================================================

So, to paraphrase... "The usage of these strong terms is so dynamically
variable as to render them weak."

William D Clinger

Jul 10, 2006, 9:57:52 PM
Christopher C. Stacy wrote:
> Programming language expert Benjamin C. Pierce, author of Types and
> Programming Languages and Advanced Types and Programming Languages,
> has said:
>
> "I spent a few weeks... trying to sort out the terminology of
> "strongly typed," "statically typed," "safe," etc., and found it
> amazingly difficult.... The usage of these terms is so various as
> to render them almost useless."

Almost. One consequence of that variety is that it's
silly to criticize an author's conclusions without first
making an attempt to understand his usage. Luca's
terminology was common at the time and remains common
today, especially among the POPL/PLDI/ICFP crowd that
was the primary audience for Luca's research.

It is also worth noting that Luca invented some of this
terminology, e.g. "typeful", and tried to promote a more
standard terminology through his papers. That Ben
thinks the attempt was unsuccessful does not detract
from the nobility of the attempt.

What I find puzzling is that so many Common Lispers,
whom I would have expected to be repelled by the fascist
connotations of strong typing, have regarded it as such
a desirable-sounding property that they have redefined it
so it can be said to be a property of Common Lisp.

Will

Kent M Pitman

Jul 10, 2006, 10:39:33 PM
"William D Clinger" <cesu...@yahoo.com> writes:

> What I find puzzling is that so many Common Lispers,
> whom I would have expected to be repelled by the fascist
> connotations of strong typing, have regarded it as such
> a desirable-sounding property that they have redefined it
> so it can be said to be a property of Common Lisp.

I can't speak for the others, but for myself: I just don't like it
when people pick terminology that appears to, literally by definition,
put me down. If it was "Static Typing" or "Type S Typing" or
something neutral like that vs "Dynamic Typing" or "Type D Typing",
who would care? But when it's "strong" vs "weak", it's veritably
challenging one to show one is not weak.

I can hear you saying "but it is stronger" in some sort of objective
way. Indeed, but that's only half the story. Early (sometimes to the
point of premature) type-checking is the enemy of flexibility. Said
less provocatively: There's often a balance to be struck between the
conflicting goals of telling the compiler everything (and having
nothing left to be flexible about later) and telling the compiler
nothing (and having code run slowly because you would sacrifice
nothing for flexibility). Most reasonable languages are somewhere in
the middle.

I think of the typing issue as a tool, and one thing that bugs me
about strong typing is that you have to elect it at language choice
time. That in itself is an issue of premature optimization. I'd
like to elect strong typing in on a per-program basis, not a
per-language basis, which is why I like CL. Most CL compilers
actually don't do as much as they could or perhaps should with types,
but the CMU effort showed there was lots that could be done with the
optional types that CL provides...
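A minimal sketch of that per-program election (hypothetical function; the
declarations are standard, how much a given compiler does with them varies):

(defun dot (xs ys)
  ;; Purely optional type information; without it, the same code runs
  ;; with the checks deferred to run time.
  (declare (type (simple-array double-float (*)) xs ys)
           (optimize (speed 3) (safety 1)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length xs) sum)
      (incf sum (* (aref xs i) (aref ys i))))))

;; A CMUCL/SBCL-class compiler uses the declarations to open-code the
;; float arithmetic and to warn at compile time about calls that pass
;; the wrong kind of array.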

But going back to your original question, Will, if you buy my claim
that, to some crude approximation, you can't simultaneously do strong
typing and retain the total definitional flexibility that type
dynamism implies, then there is a set of naming we could use, which
mostly we don't, that says we are a "Strongly Flexible" language and
that strongly typed languages are "Weakly Flexible". I think that
kind of naming silliness is not the way to go, but I point out the
option because I'm hoping it will make you (as my probably-unfairly
appointed defender of strong typing in this local section of the discussion)
leap to the defense of Strong Flexibility and then I can close with
a statement like (paraphrasing you from above):

< What I find puzzling is that so many [[ I'll think of a strongly
< typed language I can randomly insert here ]] users,
< whom I would have expected to be repelled by the fascist
< connotations of strong flexibility, have regarded it as such
< a desirable-sounding property that they have redefined it
< so it can be said to be a property of [[ randomly chosen
< language ]].

Of course, I'm mostly joking around about most of this linguistic and
terminological rivalry. But the point that isn't entirely a joke, and
is intended as a true answer to your question is that if you're trying
to understand why people sometimes take on a topic you'd not expect them
to be interested in, don't dismiss the possibility that the actual choice
of terms ("weak"/"strong") has an influence in provoking people.

And, in fairness, there are many people who aren't savvy about the
debate: managers and whatnot who have to make or approve choices
based on superficial knowledge. And when they see "language with
strong typing" and know nothing more, I gotta guess they look at it
with more respect than "language with weak typing". So it has some
potential real world budgetary effect, and that can also be reason to
want to claim the term. Who's to say that the subliminal cues aren't
at times critical?

William D Clinger

Jul 11, 2006, 10:40:09 AM
Kent Pitman wrote:
> I can't speak for the others, but for myself: I just don't like it
> when people pick terminology that appears to, literally by definition,
> put me down. If it was "Static Typing" or "Type S Typing" or
> something neutral like that vs "Dynamic Typing" or "Type D Typing",
> who would care? But when it's "strong" vs "weak", it's veritably
> challenging one to show one is not weak.

That's part of it, yes, but that reaction also involves
some defensiveness. If typing were truly an impediment
to programming, as some have asserted, then those who
believe that assertion would read "strongly typed" as a
synonym for "strongly impeding". I suspect that some
who proclaim the evil of typing don't truly believe it
themselves.

In the world of Common Lisp, I suspect that many, maybe
most, believe typing to be good, so long as checking of
type constraints is deferred to run time.

At any rate, it was the C/C++ programmers, not Lispers,
who destroyed the usefulness of "strongly typed" by
redefining it to mean "statically typed". I think this
was mostly a matter of ignorance born of laziness.
Programmers hear a technical phrase and, without reading
the technical papers that define it, just guess at its
meaning.

Often they guess wrong. Thus we see non-LL parsers
referred to as recursive descent, self-tail-call
optimization referred to as proper tail recursion,
semi-automated reference counting in C++ referred to
as garbage collection, and so on.

In the 1997 paper cited by the original poster, Cardelli
avoids the "strongly typed" phrase that, from misuse by
C/C++/Lisp programmers, has become so meaningless, and
uses "strongly checked" instead. Will you feel insulted
if I say that Lisp is not strongly checked in Cardelli's
sense?

Will

Tayssir John Gabbour

Jul 11, 2006, 2:32:30 PM
William D Clinger wrote:
> In the world of Common Lisp, I suspect that many, maybe
> most, believe typing to be good, so long as checking of
> type constraints is deferred to run time.

I don't think we often really get at the heart of why many Lisp users
seem to reject certain type systems. As Lisp is "just" a program, users
evaluate new type systems like we'd evaluate a new feature to any other
program -- how much cost does a new feature impose when you don't use
it? More cost = more scrutiny.

One case study might be CLOS. (Though keep in mind I wasn't there to
observe the no doubt wild debates, and I welcome corrections.) CLOS was
formulated in such a way that I can easily be oblivious to OOP
concepts. I don't need to know a thing about classes or methods in
everyday coding. Because of this, many even have the misconception that
Common Lisp doesn't support OOP, though its OOP features are unusually
powerful, and it even turns out that the humble number 1 is an OOP
object. (It's an instance of a built-in class, etc.)
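A tiny illustration of that last point (standard CLOS; the GENERALIZE
generic function is made up):

(class-of 1)       ; => #<BUILT-IN-CLASS FIXNUM>  (exact class is implementation-dependent)
(class-of "hi")    ; => a built-in STRING class

;; and methods can dispatch on those built-in classes like any others:
(defgeneric generalize (x))
(defmethod generalize ((x integer)) :a-number)
(defmethod generalize ((x string))  :some-text)

(generalize 1)     ; => :A-NUMBER
(generalize "hi")  ; => :SOME-TEXT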


> > I can't speak for the others, but for myself: I just don't like it
> > when people pick terminology that appears to, literally by definition,
> > put me down. If it was "Static Typing" or "Type S Typing" or
> > something neutral like that vs "Dynamic Typing" or "Type D Typing",
> > who would care? But when it's "strong" vs "weak", it's veritably
> > challenging one to show one is not weak.
>
> That's part of it, yes, but that reaction also involves
> some defensiveness. If typing were truly an impediment
> to programming, as some have asserted, then those who
> believe that assertion would read "strongly typed" as a
> synonym for "strongly impeding". I suspect that some
> who proclaim the evil of typing don't truly believe it
> themselves.

Is there any question why some Lisp users come on usenet and act
defensively? Why some Firefox users are defensive when it comes to
Internet Explorer? These issues impact them in a real way. The computer
industry has its share of hype and PR, with terms which sound
remarkably similar to "death tax."

(Incidentally, it's funny to work in the computing industry and squint
at a train station billboard portraying Seal entertaining a family,
somehow associating that imagery with the current Intel processor
line...)

If we want the type system discussion to be inclusive, why not shift to
terminology which includes people? Or are there higher priorities than
this inclusion?


Tayssir

Kent M Pitman

Jul 14, 2006, 2:10:15 AM
"William D Clinger" <cesu...@yahoo.com> writes:

> In the world of Common Lisp, I suspect that many, maybe
> most, believe typing to be good, so long as checking of
> type constraints is deferred to run time.

I mostly don't disagree with the points you made, but I think you're
actually selling CL a bit short here. I think CL is not about late
checking, it's about choice in the time of checking. Of course,
weakening it at all from being required entirely at compile time means
giving something up, rather like the precipitous drop in price a new
car endures when you drive it off a lot... there is simply a big
difference between "information reliably available at compile time"
and "information not reliably available at compile time". The
inability to universally quantify and the inability to cleverly force
the answers to certain questions that compilers love to have the
answer to certainly comes at a cost. But it brings a benefit, too,
and CL users like to make that trade-off dynamically per-program to
the extent they can. So much like the debate over lisp1/lispN (CL
really being a Lisp4, after all, not a Lisp2) ends up reflecting
itself in meta and becoming a "one point of view"/"many points of
view" debate, so too, the issue of type checking isn't "early" vs
"late", it's "fixed time" vs "programmer-controlled time". I think
CL people would like to see the checking done at the earliest
reasonable time that does not dictate the natural way in which their
program reveals truth ... that is, they want the program to control
the language semantics, not vice versa.

> At any rate, it was the C/C++ programmers, not Lispers,
> who destroyed the usefulness of "strongly typed" by
> redefining it to mean "statically typed". I think this
> was mostly a matter of ignorance born of laziness.

Perhaps. Or perhaps this was clever marketing on their part.
"Statically typed" lacks the marketing punch that "strongly typed"
has, if you are a manager and haven't a clue what "static" means,
other than that it's something your programmers give you when
you don't let them have their way.

> Programmers hear a technical phrase and, without reading
> the technical papers that define it, just guess at its
> meaning.

This is certainly true as a generic observation, I just don't
know that this is what drove the phenomenon in question.
Probably it was some of both...

> Often they guess wrong. Thus we see non-LL parsers
> referred to as recursive descent, self-tail-call
> optimization referred to as proper tail recursion,
> semi-automated reference counting in C++ referred to
> as garbage collection, and so on.

A great set of examples. You should start a web page on
these for people to reference. I bet people would contribute
others...



> In the 1997 paper cited by the original poster, Cardelli
> avoids the "strongly typed" phrase that, from misuse by
> C/C++/Lisp programmers, has become so meaningless, and
> uses "strongly checked" instead. Will you feel insulted
> if I say that Lisp is not strongly checked in Cardelli's
> sense?

Insulted? No, I don't take any of this personally unless someone says
they're doing it to annoy me. But I will think it's the wrong phrase.
Lisp stops as dead in its tracks as any other language when it detects
a violation. I'd maybe give you that CL is not as "aggressively
checked", even though there's reason to say that it's as aggressive as
the info it's given. What it isn't aggressive at is forcing the user
to 'fess up information he either doesn't have or doesn't have time to
offer or, importantly, doesn't feel the need to offer.

Keep in mind that by far the lion's share of programs are prototypes
that are never deployed, that run adequately fast and adequately
correctly even without type information, and that are discarded and
rewritten many times before the need for checking comes up. As was
proven (to my satisfaction anyway, even if it's not widely accepted)
in the late 1980's when Lisp was widely ditched for C++ in a number
of AI programs and people were heard to bogusly say "We've proven we
never needed Lisp in the first place", the real truth is that they
needed lisp to do the myriad iterations needed to come up with a design
that could be adequately reified into a conventional language. The
truth is that what so-called static languages are bad at is that they
front-load optimization that is often not needed, and make it nearly
impossible to get answers to important questions like "does this work
at all? is this the direction I want to go? etc."

And so CL programmers try a zillion things because they can, and learn
a lot in the process because most of their effort is going toward
domain-level tests of actual things that might do them some good. In
the end, they probably don't need a tenth of the flexibility they needed
to get there, because they've settled down, and once they've got
a clear strategy, they just don't have to flex as much as the language
would theoretically allow (at least until the bug reports start
rolling in). So a product strategy with Lisp on the backbone and
periodic translations of product into C++ or Java as the
productization step (but with new development continuing in Lisp)
seems an appropriate hybrid approach that uses Lisp's feature of late
checking best for what it does and these other languages features best
for what they do... in many cases anyway.

Obviously there are a few where having EVAL (or LOAD, at least),
down to the bitter end is still important. I'm not trying to say
that Lisp is of no value in deployment--just that there are plenty
of things in the world that don't need Lisp (which is why more people
don't clamor for it more often). They only ask for it when they
exceed what they can do with other languages, and that happens seldom
because working in other languages teaches them to dumb down their goals. :)

Well, I guess I'm drifting a bit from the issue of type checking, but
the question of static vs dynamism inevitably must work in lockstep
with the question of what you're checking. Clearly, an accounting
system doesn't do a lot of checking... It's just gonna roll right
through doing everyone's accounts pretty much the same. ... at least
as designed now. But that's what I meant about dumbing things
down. Maybe if we'd started with Lisp, we could have a lot more custom
control of how that kind of thing works... I certainly wish my bank
offered some more flexible options, and Quicken/Quickbooks, too. I
attribute a lot of why they don't to the language they're in... I
imagine it takes them a whole release cycle to move a few buttons
around or add a few more screen images. Pity.
