
PLOT: A non-parenthesized, infix Lisp!


Ray Dillinger

Apr 8, 2009, 10:00:40 PM

David Moon has created a programming language called PLOT,
for "Programming Language for Old-Timers."

His introduction to it can be found at:

http://users.rcn.com/david-moon/PLOT/

There's ongoing discussion on the Lambda-The-Ultimate Programming
languages website, at

http://lambda-the-ultimate.org/node/3253

It's interesting. It appears to be a lisp (in that it's a multiparadigm
programming language with a code-data correspondence, whose macros
work on the data form of code), but it manages to miss such traditional
lisp builtins as cons cells and fully parenthesized prefix syntax. So
far there is some skepticism about PLOT's macrology; it may turn out to
be more complicated and harder to compose than regular lisp macrology.
Well, okay, I'm personally skeptical about it. I think if you don't
use an absolutely regular syntax like traditional lisp syntax you
probably can't avoid having your macrology become more complicated
and harder to compose.

PLOT's design goes to some lengths to keep expression nesting levels
shallow compared to traditional lisps, and prefers to use indentation
rather than parens to denote expression nesting. It has objects,
generic functions, user-defined first-class types, and macros, so
it appears to be a full-on multiparadigm red-pill language.

It's mostly-functional. Pure-functional algorithms work and are easy
to express in it, but assignment is not impossible as it is in the
blue-pill functional languages.

It's also mostly-OO. Object-Oriented programming is supported with
Objects, Methods, and Generic functions, but doesn't dominate all
possible ways of expressing algorithms as it does in the blue-pill
OO languages.

I thought folks would be interested. Especially that minority that
prefers to gripe about cons cells and parens and prefix notation,
instead of actually developing alternatives that have the power of
Lisp to do syntactic abstraction.

Bear

budden

Apr 9, 2009, 3:31:19 AM

Hi Ray, list!

I'm a rather marginal person at comp.lang.lisp, but here are my
thoughts:

> it may turn out to be more complicated and harder to compose than regular
> lisp macrology

This is OK. Easy macros are harmful. The greatest failure of CL is
that the presence of macros serves as an excuse for having a
disgusting default syntax. It took me years to understand that. People
think "if I need sweeter syntax, I'll write macros". So they write
macros like aif and functions like length=. Other people are then
forced to learn those macros and functions instead of learning one
good syntax once.

> I think if you don't use an absolutely regular syntax like traditional lisp
> syntax you probably can't avoid having your macrology become more
> complicated and harder to compose.

My advice is to take a look at Prolog and Mathematica. They can
handle infix expressions in a very simple way. Quasiquoting can be
implemented in an infix syntax too (boo has it). And even in an HTML
syntax (PHP).

Maybe PLOT is good for someone, but this

> fully powerful macros (but hygienic!)

is not acceptable for me. Sometimes I want my macros to capture
variables with some fixed names, or with names constructed from
parameters. Macros really lose half of their power when they are
hygienic. Variable capture is a non-issue in practice when we have
with-gensyms and are just careful.
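
For readers who haven't met these idioms, here is a minimal Common Lisp
sketch of both sides of this trade-off: AIF deliberately captures the
fixed name IT (exactly what a hygienic system forbids by default), while
the second macro uses GENSYM to avoid accidental capture (WITH-GENSYMS
is just a common utility wrapper around this pattern):

```lisp
;; Deliberate capture: AIF binds the result of TEST to the fixed name IT,
;; so the THEN branch can refer to it.
(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; (aif (find 3 '(1 2 3)) (* it 10)) evaluates to 30.

;; Avoiding *accidental* capture the classic CL way: a fresh symbol from
;; GENSYM cannot collide with any variable in the caller's code.
(defmacro square (x)
  (let ((tmp (gensym)))
    `(let ((,tmp ,x))
       (* ,tmp ,tmp))))

;; (square (incf n)) evaluates (incf n) exactly once.
```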

It looks like the most conceptually correct modern language for a
Lisper is boo. It is mostly statically-typed (and hence fast) but
allows for dynamic typing too. It has type inference and duck typing.
It is not purely functional, but it has closures and anonymous
functions, and functions are first-class objects.

It is dynamic and it has macros. Better macros than Scheme. It has a
really extensible compiler. Its only serious design disadvantage vs.
Lisp is its Pythonish syntax, which is harder to manipulate in a text
editor. Lisp has a great advantage in that you can juggle sexprs in an
editor with very few keystrokes.

Technically, boo seems not to be completely mature, but it is hosted
on the .NET platform, which is now rather portable and rich. And it
has been around since 2003. I don't know whether I'd prefer boo or
Clojure. Unfortunately there is a great amount of Lisp code, and there
is some beauty in CL which I still can't repudiate.

http://boo.codehaus.org/

But my real language of choice should be not only like boo; it should
also support multiple backends, like haXe. I know of no such
language...

Michele Simionato

Apr 9, 2009, 3:42:57 AM

On Apr 9, 9:31 am, budden <budden-l...@mail.ru> wrote:
> It is mostly statically-typed (and hence fast)

This implication is utterly wrong.

> It is dynamic and it has macros. Better macros than Scheme.

This is also wrong.

Pascal Costanza

Apr 9, 2009, 5:05:06 AM

budden wrote:

> Maybe PLOT is good for someone, but this
>
>> fully powerful macros (but hygienic!)
>
> is not acceptable for me. Sometimes I want my macros to capture
> variables with some fixed names, or with names constructed from
> parameters. Macros really lose half of their power when they are
> hygienic. Variable capture is a non-issue in practice when we have
> with-gensyms and are just careful.

PLOT's macro system allows you to break macro hygiene on demand.


Pascal

--
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Kenneth Tilton

Apr 9, 2009, 5:39:47 AM

Ray Dillinger wrote:
> David Moon has created a programming language called PLOT,
> for "Programming Language for Old-Timers."
>
...

> PLOT... prefers to use indentation
> rather than parens to denote expression nesting.

Game over. But...why was this not cross-posted to python?! They live for
indentation!

kt

Pillsy

Apr 9, 2009, 9:15:01 AM

On Apr 9, 3:31 am, budden <budden-l...@mail.ru> wrote:
[...]

> My advice is to take a look at Prolog and Mathematica. They can
> handle infix expressions in a very simple way.

I don't know a whole helluva lot about Prolog, but I sure wouldn't
describe Mathematica's handling of infix syntax as "very simple". Most
of the time, when I need to do any but the most trivial manipulations
of Mma expressions, I end up defaulting to the underlying, Lispoid
prefix syntax, where

a = b + c;
d = Sin[a]

becomes

CompoundExpression[Set[a, Plus[b, c]], Set[d, Sin[a]]]

A bit prolix, but then if being prolix bugged me, why would I like the
language that gave us MULTIPLE-VALUE-BIND?

There's also the 97 million levels of precedence you have to keep
track of, and somehow they all manage to be annoying. There are some
genuinely neat aspects of Mma syntax, but they all involve the way you
can use a close approximation of real, 2-dimensional mathematical
notation in your programs, and post-date the plain infix stuff by
about a decade.

Cheers,
Pillsy

Tamas K Papp

Apr 9, 2009, 10:08:47 AM

On Thu, 09 Apr 2009 00:31:19 -0700, budden wrote:

> But my real language of choice should be not only like boo, it should
> support multiple backends like haXe. I know no such a language...

Be careful, you are treading on thin ice here. If your search for the
Ideal Language ever terminates, you will no longer have an excuse for
not writing actual applications.

Tamas

Raffael Cavallaro

Apr 9, 2009, 3:47:06 PM

Indeed, PLOT really is a plot, and the acronym secretly stands for:


Pythonic Load Of Turds

or

Pythonic Lisp Of Tomorrow

depending on how you feel about significant whitespace

;^)

Raffael Cavallaro

Apr 9, 2009, 3:51:55 PM

On Apr 9, 3:47 pm, Raffael Cavallaro <raffaelcavall...@gmail.com>
wrote:

How long, I wonder, till someone makes a pun about Dave *Moon*?

Ray Dillinger

Apr 9, 2009, 8:37:38 PM

Kenneth Tilton wrote:

> Ray Dillinger wrote:

>> PLOT... prefers to use indentation
>> rather than parens to denote expression nesting.

> Game over. But...why was this not cross-posted to python?! They live for
> indentation!

Yah, significant whitespace isn't my favorite invention either,
but it *can* work in languages designed for it.

Not crossposted to Python 'cause I don't like the attitude of
the Pythonistas. So sue me. Even the Lisp/Scheme crosspost is
less likely to result in a flamewar than a Python/either would
be. Also, because PLOT is (semantically) more like a Lisp than
it is like Python.

Bear

Kaz Kylheku

Apr 9, 2009, 11:28:34 PM

On 2009-04-09, Ray Dillinger <be...@sonic.net> wrote:
>
> David Moon has created a programming language called PLOT,
> for "Programming Language for Old-Timers."

For the rest of us:

Parenthesized, Indented, (but otherwise) Free-Form Lisp Expressions.

PIFFLE!

:)

Chris Barts

Apr 10, 2009, 1:19:04 AM

budden <budde...@mail.ru> writes:

> Hi Ray, list!
>
> I'm a rather marginal person at comp.lang.lisp, but here are my
> thoughts:
>
>> it may turn out to be more complicated and harder to compose than regular
>> lisp macrology
>
> This is OK. Easy macros are harmful.

This is like saying easy functions are harmful, because it shows up
deficiencies in the standard library. It's a non-sequitur.

I'll also say that what you think of as good syntax and what I think
of as good syntax are two very different things, based on what you say
about Prolog's syntax below. Erlang is a great language in many ways
but its adoption of Prolog's bizarre, inflexible syntax makes it
unpleasant to actually work with.

> My advice is to take a look at Prolog and Mathematica. They can
> handle infix expressions in a very simple way.

Someone else addressed Mathematica already. I'll just throw another
piece of mud at Prolog by saying 'simple' and 'sane' are two very
different things.

> It looks like the most conceptually correct modern language for a
> Lisper is boo.

I'll look into it. I don't think I've ever seen it before.

> It is mostly statically-typed (and hence fast)

This is both wrong and wrong-headed. It's wrong because a
well-optimized dynamic language (like Common Lisp) can be a lot faster
than a badly-optimized static language (like C, as the semantics of C
don't allow a lot of wiggle room for an optimizer to work). It's
wrong-headed because it sacrifices productivity on the altar of
performance and nails the working programmer to a cross of gold, er,
machine language.

Robbert Haarman

Apr 10, 2009, 2:39:13 AM

On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
> budden <budde...@mail.ru> writes:
> >
> > It looks like the most conceptually correct modern language for a
> > Lisper is boo.
>
> I'll look into it. I don't think I've ever seen it before.
>
> > It is mostly statically-typed (and hence fast)
>
> This is both wrong and wrong-headed. It's wrong because a
> well-optimized dynamic language (like Common Lisp) can be a lot faster
> than a badly-optimized static language (like C, as the semantics of C
> don't allow a lot of wiggle room for an optimizer to work). It's
> wrong-headed because it sacrifices productivity on the altar of
> performance and nails the working programmer to a cross of gold, er,
> machine language.

I disagree. You are implying that dynamic typing leads to greater
productivity than static typing. I don't think this is the case.

Taking "static typing" to mean that programs that cannot be typed correctly at
compile time are rejected at compile time, whereas "dynamic typing"
means type errors lead to rejection at run-time, static typing means, by
definition, rejecting bad programs early. It seems to me this would be a
productivity gain.

Also, requiring types to be checked at compile time requires the types
to be determined at compile time, which means the knowledge of types is
available to perform optimizations.

Now, in theory, you could perform all the same type inference and type
checking on a dynamically typed language that you could perform on a
statically typed language, as long as your program is written in a style
that we know how to do type inference for. In practice, this is often
not the case. The result is that programs written in dynamically typed
languages will often not have all their types known and checked at
compile time, leading to less efficient code generation and the
possibility of type errors at run time.
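
A small Common Lisp sketch of the point about type knowledge (SBCL is
the usual example; the exact optimizations are implementation-dependent):
optional declarations give the compiler the same information a static
type system would have, without requiring it everywhere:

```lisp
;; With these declarations a compiler such as SBCL can open-code fixnum
;; arithmetic instead of dispatching on numeric types at run time.
;; Without them, the same code still runs, just more generically.
(defun sum-below (n)
  (declare (type fixnum n)
           (optimize (speed 3) (safety 1)))
  (let ((acc 0))
    (declare (type fixnum acc))
    (dotimes (i n acc)
      (setf acc (+ acc i)))))
```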

What makes these kinds of comparisons difficult is that there are few if
any cases where the only difference between two languages or
implementations thereof is static vs. dynamic typing. There are always
other features and implementation details that muddy the waters.

Just my 2 cents.

Bob

--
The sendmail configuration file is one of those files that looks like someone
beat their head on the keyboard. After working with it... I can see why!
-- Harry Skelton


Pascal Costanza

Apr 10, 2009, 3:08:09 AM

Robbert Haarman wrote:
> On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
>> budden <budde...@mail.ru> writes:
>>> It looks like the most conceptually correct modern language for a
>>> Lisper is boo.
>> I'll look into it. I don't think I've ever seen it before.
>>
>>> It is mostly statically-typed (and hence fast)
>> This is both wrong and wrong-headed. It's wrong because a
>> well-optimized dynamic language (like Common Lisp) can be a lot faster
>> than a badly-optimized static language (like C, as the semantics of C
>> don't allow a lot of wiggle room for an optimizer to work). It's
>> wrong-headed because it sacrifices productivity on the altar of
>> performance and nails the working programmer to a cross of gold, er,
>> machine language.
>
> I disagree. You are implying that dynamic typing leads to greater
> productivity than static typing. I don't think this is the case.
>
> Taking "static typing" to mean that programs that cannot be typed correctly at
> compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf

Mark Wooding

Apr 10, 2009, 6:30:48 AM

Robbert Haarman <comp.la...@inglorion.net> writes:

> Taking "static typing" to mean that programs that cannot be typed correctly at
> compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

There's a downside to static typing, though. The compiler doesn't just
reject programs that it can prove are incorrect: it rejects programs
which it fails to prove are correct. As a consequence, compilers for
statically typed languages actually reject a nontrivial class of correct
programs. Since the kinds of programs that I write in dynamically typed
languages, such as Lisp or Python, are most certainly in this class,
they would assuredly be rejected by a compiler for a statically typed
language.

I don't see how having the programs I'd like to write be rejected is a
productivity win.

(If I take the time to decorate my Lisp program with type declarations,
a decent compiler will indeed warn me about type errors. Admittedly,
Lisp will warn me about programs which it proves to be /incorrect/,
which is the other kind of error, but it's still useful -- and I get to
write the programs which naturally occur to me to write rather than the
ones I'm forced to write by the type system.)
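
A short sketch of what this parenthetical describes, assuming a checking
compiler like SBCL (how much is caught varies by implementation):

```lisp
;; The declaration tells the compiler that COUNT-CHARS takes a string.
(defun count-chars (s)
  (declare (type string s))
  (length s))

;; When both functions are compiled together, a compiler like SBCL can
;; warn at compile time that 42 is provably not a string; without the
;; declaration the error would surface only at run time.
(defun broken ()
  (count-chars 42))
```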

> Also, requiring types to be checked at compile time requires the types
> to be determined at compile time, which means the knowledge of types is
> available to perform optimizations.
>
> Now, in theory, you could perform all the same type inference and type
> checking on a dynamically typed language that you could perform on a
> statically typed language, as long as your program is written in a style
> that we know how to do type inference for. In practice, this is often
> not the case. The result is that programs written in dynamically typed
> languages will often not have all their types known and checked at
> compile time, leading to less efficient code generation and the
> possibility of type errors at run time.

You mean that you acknowledge that programmers using dynamically typed
languages tend to write programs which static type systems would reject.
There must be a reason for this, and I'd claim that it's not just ill
discipline or ignorance. In fact, I'd guess that the reason is that
those programs are quicker to write. This certainly casts doubt on your
claim that static typing is an untrammelled productivity win.

-- [mdw]

jeff

Apr 10, 2009, 7:45:01 AM

Is there any actual code here?

Dmitry A. Kazakov

Apr 10, 2009, 8:03:30 AM

On Fri, 10 Apr 2009 11:30:48 +0100, Mark Wooding wrote:

> Robbert Haarman <comp.la...@inglorion.net> writes:
>
>> Taking "static typing" to mean that programs that cannot be typed correctly at
>> compile time are rejected at compile time, whereas "dynamic typing"
>> means type errors lead to rejection at run-time, static typing means, by
>> definition, rejecting bad programs early. It seems to me this would be a
>> productivity gain.
>
> There's a downside to static typing, though. The compiler doesn't just
> reject programs that it can prove are incorrect: it rejects programs
> which it fails to prove are correct.

Firstly, it is a property of *any* compiling system. There is no compiler
that could compile a correct program 2**9999999999999999 characters
long.

Secondly, as an alternative to Lisp I propose a random generator of
hexadecimal machine code. Any sequence of machine code is a correct
program. It would not smoke the CPU, you know.

> I don't see how having the programs I'd like to write be rejected is a
> productivity win.

A random generator is greatly more productive. In fact, "bug" is an
artefact of checks. If you check nothing, there are no bugs...

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

Kenneth Tilton

Apr 10, 2009, 8:49:50 AM

Robbert Haarman wrote:
> On Thu, Apr 09, 2009 at 11:19:04PM -0600, Chris Barts wrote:
>> budden <budde...@mail.ru> writes:
>>> It looks like the most conceptually correct modern language for a
>>> Lisper is boo.
>> I'll look into it. I don't think I've ever seen it before.
>>
>>> It is mostly statically-typed (and hence fast)
>> This is both wrong and wrong-headed. It's wrong because a
>> well-optimized dynamic language (like Common Lisp) can be a lot faster
>> than a badly-optimized static language (like C, as the semantics of C
>> don't allow a lot of wiggle room for an optimizer to work). It's
>> wrong-headed because it sacrifices productivity on the altar of
>> performance and nails the working programmer to a cross of gold, er,
>> machine language.
>
> I disagree. You are implying that dynamic typing leads to greater
> productivity than static typing. I don't think this is the case.
>
> Taking "static typing" to mean that programs that cannot be typed correctly at
> compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

This old debate? The problem is how much effort goes into getting one's
code past the compiler, and the nature of working with dynamic vs
statically typed languages. How fast can I try a new idea and find out
it was a bad one? If I have to refactor everything before running once,
I am slower to find out I have had a bad idea. If that is the case, I am
less likely to explore new ideas, some of which work out fine. Suddenly
the tail is wagging the dog: static typing, meant to make us more
effective, is now in the way, costing more than it is worth. But...

... to some that is not the case. They work deliberately and
methodically anyway, and they have a low tolerance for uncertainty. Some
people balance their checkbooks to the penny every month, some people
check every other month for unexpected $5k discrepancies. Some people
wait for the light to turn green, some people can't because there is no
light, they are in the middle of the block and reading the newspaper
talking on the cell phone as they cross.

And it's no good asking one programmer to play another's game: these
emotiopsychosocial deals affect our productivity.

kt

Tamas K Papp

Apr 10, 2009, 8:55:41 AM

On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:

> I disagree. You are implying that dynamic typing leads to greater
> productivity than static typing. I don't think this is the case.
>
> Taking "static typing" to mean that programs that cannot be typed correctly at
> compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

Your classification is flawed: CL is certainly not statically typed,
but my compiler (SBCL) does analyze the functions I compile and warn
me about a lot of things.

Also, "static typing means, by definition, rejecting bad programs
early" is sheer idiocy: a lot of "bad" programs are not caught by
type checking. Most of the "badness" in my programs arises from
conceptual mistakes or inappropriate algorithms; no compiler would be
able to catch those.

> Now, in theory, you could perform all the same type inference and type
> checking on a dynamically typed language that you could perform on a
> statically typed language, as long as your program is written in a style
> that we know how to do type inference for. In practice, this is often
> not the case. The result is that programs written in dynamically typed

You should update your knowledge of modern compilers. In practice,
modern CL compilers are able to perform a lot of optimizations, even
when they are unaided by declarations. With appropriate declarations,
CL can generate very fast code.

I never aim to write my programs "in a style that we know how to do
type inference for", I try to write them in a style that is clear,
concise and comfortable to me. I leave optimization to my compiler,
this approach works very well for me.

Tamas

Tamas K Papp

Apr 10, 2009, 9:10:15 AM

On Fri, 10 Apr 2009 08:49:50 -0400, Kenneth Tilton wrote:

> statically typed languages. How fast can I try a new idea and find out
> it was a bad one? If I have to refactor everything before running once,
> I am slower to find out I have had a bad idea. If that is the case, I am
> less likely to explore new ideas, some of which work out fine. Suddenly
> the tail is wagging the dog: static typing, meant to make us more
> effective is now in the way costing more than it is worth. But...

Ah, a fellow Fortran enthusiast :-)

Tamas

Raffael Cavallaro

Apr 10, 2009, 10:05:03 AM

On Apr 10, 8:03 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

I think you're merely being sarcastic, but just in case you actually
think these points count as refutations...

1. Absurd "counterexamples" don't disprove general points. In fact,
having to reach for absurdities is usually a sign your argument is
failing.

2. Reasonable measures of productivity count *useful* worker output,
not random trash.

namekuseijin

Apr 10, 2009, 11:15:21 AM

On Apr 10, 9:55 am, Tamas K Papp <tkp...@gmail.com> wrote:
> You should update your knowledge of modern compilers.  In practice,
> modern CL compilers are able to perform a lot of optimizations, even
> when they are unaided by declarations.  With appropriate declarations,
> CL can generate very fast code.
>
> I never aim to write my programs "in a style that we know how to do
> type inference for", I try to write them in a style that is clear,
> concise and comfortable to me.  I leave optimization to my compiler,
> this approach works very well for me.

Indeed. The Stalin Scheme compiler is like that, performing type
inference and several optimizations on a normal Scheme program and
producing very fast binaries, matching and at times surpassing
hand-coded C:

http://justindomke.wordpress.com/2009/02/23/the-stalin-compiler/

It's very, very slow, though: it's a whole-program compiler. The idea
is to take a normal Scheme program that was developed, debugged and
tested in a Scheme interpreter and, at the end of the development
cycle, send it to Stalin to produce a final fast version.

Dmitry A. Kazakov

Apr 10, 2009, 11:44:56 AM

On Fri, 10 Apr 2009 07:05:03 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 8:03 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 11:30:48 +0100, Mark Wooding wrote:
>>> Robbert Haarman <comp.lang.m...@inglorion.net> writes:
>>
>>>> Taking "static typing" to mean that programs that cannot be typed correctly at
>>>> compile time are rejected at compile time, whereas "dynamic typing"
>>>> means type errors lead to rejection at run-time, static typing means, by
>>>> definition, rejecting bad programs early. It seems to me this would be a
>>>> productivity gain.
>>
>>> There's a downside to static typing, though.  The compiler doesn't just
>>> reject programs that it can prove are incorrect: it rejects programs
>>> which it fails to prove are correct.
>>
>> Firstly, it is a property of *any* compiling system. There is no compiler
>> that could compile a correct program 2**9999999999999999 characters
>> long.
>>
>> Secondly, as an alternative to Lisp I propose a random generator of
>> hexadecimal machine code. Any sequence of machine code is a correct
>> program. It would not smoke the CPU, you know.
>>
>>> I don't see how having the programs I'd like to write be rejected is a
>>> productivity win.
>>
>> A random generator is greatly more productive. In fact, "bug" is an
>> artefact of checks. If you check nothing, there are no bugs...
>

> I think you're merely being sarcastic,

Surely I am.

> but just in case you actually
> think these point count as refutations...
>
> 1. Absurd "counterexamples" don't disprove general points. In fact,
> having to reach for absurdities is usually a sign your argument is
> failing.

An absurd counterexample disproves an absurd point.

> 2. Reasonable measures of productivity count *useful* worker output,
> not random trash.

So we can agree that the original point about productivity was absurd,
made without providing any measurements.

I would like to see them: precisely, the number of man-hours required to
achieve a given rate of software failure, at a given severity level, per
source code line, per second of execution.

Further, I would also like to see an explanation of how later or fewer
checks could improve this rate and thus productivity. Especially the
issue of how program correctness can be defined without checks, which,
according to the point, need to be reduced in order to improve
"productivity." Otherwise, you fall into the trivial: no checks, no
bugs, infinite productivity.

Pillsy

Apr 10, 2009, 12:13:46 PM

On Apr 10, 11:44 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
[...]

> Further, I would also like to see an explanation of how later or fewer
> checks could improve this rate and thus productivity.

How could such an obvious point need explanation? Eliminating
*irrelevant* checks will clearly increase productivity, because
irrelevant checks are, by definition, a waste of time.

> Especially the issue of how program correctness can be defined without
> checks, which, according to the point, need to be reduced in order
> to improve "productivity."

This is a really pitiful strawman. Just because you can't define
program correctness without the idea of conforming to *some* set of
checks hardly means that you can't define program correctness without
conforming to *every possible* set of checks.

Cheers,
Pillsy

Kaz Kylheku

Apr 10, 2009, 12:37:25 PM

On 2009-04-10, Robbert Haarman <comp.la...@inglorion.net> wrote:
> I disagree. You are implying that dynamic typing leads to greater
> productivity than static typing. I don't think this is the case.
>
> Taking "static typing" to mean that programs that cannot be typed correctly at
> compile time are rejected at compile time, whereas "dynamic typing"
> means type errors lead to rejection at run-time, static typing means, by
> definition, rejecting bad programs early. It seems to me this would be a
> productivity gain.

Yes, it would, if the problem of identifying bad programs were decidable.

Doh!

Dmitry A. Kazakov

Apr 10, 2009, 12:42:26 PM

Wow, now it becomes interesting. So type checks are irrelevant. That's
honest, at least!

But that was not the original point. It was that type checks are great
to perform later. You should have argued for untyped languages.

However, that does not surprise me. Dynamic typing consequently leads
to no typing. No need to be ashamed of it, guys; just speak your mind.
How are you going to define correctness outside of types (sets of
values and operations on them)? I am curious.

Pillsy

Apr 10, 2009, 1:07:36 PM

On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
[...]

> > This is a really pitiful strawman. Just because you can't define
> > program correctness without the idea of conforming to *some* set of
> > checks hardly means that you can't define program correctness without
> > conforming to *every possible* set of checks.

> Wow, now it becomes interesting. So type checks are irrelevant.

In *some* circumstances, they are.

You seem to be repeatedly excluding the middle.

> But that was not the original point. It was that type checks are great to
> perform later.

Yes, because that's one easy way of providing finer-grained control
over what type checks are performed. If I'm doing exploratory
programming (which is something I certainly do a lot of), being
dropped into the debugger due to the occasional type error is a lot
more convenient than spending the time up front to get all my types
right, because that slows down the exploration that was the ultimate
point of the exercise.

Once I've got something that I think is pretty good, then getting
earlier checking is a lot more useful, and having a static type-system
is a lot more appealing, but it's not appealing enough to make dealing
with that system worth the trouble in the earlier stages of
development.

Cheers,
Pillsy

Tamas K Papp

Apr 10, 2009, 1:16:13 PM

On Fri, 10 Apr 2009 18:42:26 +0200, Dmitry A. Kazakov wrote:

> However, that does not surprise me. Dynamic typing consequently leads to no
> typing. No need to be ashamed of it, guys; just speak your mind. How are

Nope, dynamic typing leads to dynamic typing. If you don't understand
what dynamic typing is, that is fine, just don't engage in discussions
that involve the concept because doing so exposes your ignorance.

Cheers,

Tamas


Robbert Haarman

Apr 10, 2009, 1:24:06 PM

On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>>
>> Taking "static typing" to mean that programs that cannot be typed correctly
>> at compile time are rejected at compile time, whereas "dynamic typing"
>> means type errors lead to rejection at run-time, static typing means,
>> by definition, rejecting bad programs early. It seems to me this would
>> be a productivity gain.
>
> ...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf

Reading the PDF failed to convince me.

> 2.1 Statically Checked Implementation of Interfaces

You're going about it the wrong way. You shouldn't declare that your
class implements the interface when it doesn't, then add stub methods
until it does, and hope you remember to fix it later.

You should implement the interface first, and then you declare that your
class implements it. From that point on, you can have the compiler check
that you have actually implemented the interface.

In practice, what happens is often that you use an IDE which lets you
declare the interfaces you implement. The IDE then generates the stubs
for you, with a little reminder in each to tell you you still need to
make that stub do something useful. A good IDE will also tell you if you
haven't done that yet. This works, as long as you don't ignore your
IDE's warnings.

Having these warnings is the best scenario you can hope for with dynamic
typing, because dynamic typing, by its nature, is not allowed to reject
your program before run time. Many implementations of dynamically typed
languages will not provide any warning at all.

I agree with you that returning a default value is wrong and you should
signal an error instead if a stub method is called. But that's
orthogonal to static typing.
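The two points above (let tooling hold you to a declared interface, and make stubs signal an error rather than return a default) can both be approximated at run time; an illustrative Python sketch using the standard `abc` module, with invented class names:

```python
from abc import ABC, abstractmethod

class Serializer(ABC):                  # hypothetical "interface"
    @abstractmethod
    def dump(self, obj) -> str: ...

class HalfDone(Serializer):             # declares, but does not implement
    pass

# The incomplete class is rejected as soon as instantiation is attempted,
# which is the closest dynamic analogue of the compile-time interface
# check discussed above.
try:
    HalfDone()
except TypeError as exc:
    print("rejected:", exc)

class Done(Serializer):                 # actually implements the interface
    def dump(self, obj) -> str:
        return repr(obj)

print(Done().dump([1, 2]))
```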

> 2.2 Statically Checked Exceptions

These have their pros and cons. It would be nice if you could prove at
compile time that any error condition that could arise at run time is
handled in some way. However, I am not aware of any languages that
actually provide such guarantees. Either way, I really dislike the way
exceptions work in Java but that, again, is orthogonal to static typing.

> 2.3 Checking Feature Availability

> Checking if a resource provides a specific feature and actually using
> that feature should be an atomic step in the face of multiple access
> paths to that resource. Otherwise, that feature might get lost in
> between the check and the actual use.

Yes, race conditions are a problem. But the problem here is not with
static typing. In fact, the problem here is that you are breaking static
typing! And the end result is that you get the same thing you would have
gotten under dynamic typing.

As an aside, I think this example highlights one of the deficiencies of
the objects-with-methods flavor of object orientation. The example would
map to a relational universe much better.

Regards,

Bob

--
Sed quis custodiet ipsos custodes?
-- Juvenal

Dmitry A. Kazakov

Apr 10, 2009, 1:46:45 PM
On Fri, 10 Apr 2009 10:07:36 -0700 (PDT), Pillsy wrote:

> On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
> [...]
>>> This is a really pitiful strawman. Just because you can't define
>>> program correctness without the idea of conforming to *some* set of
>>> checks hardly means that you can't define program correctness without
>>> conforming to *every possible* set of checks.
>
>> Wow, now it becomes interesting. So type checks are irrelevant.
>
> In *some* circumstances, they are.

It is interesting to learn the cases when a type violation becomes
irrelevant. The only such case is when there is no type at all. In all other
cases it is just a bug.

> You seem to be repeatedly excluding the middle.

It is not me; it is logic that does. Unless you deploy multi-valued or
fuzzy logic here.

>> But that was not the original point. It was, that type checks are great to
>> perform later.
>
> Yes, because that's one easy way of providing finer-grained control
> over what type checks are performed. If I'm doing exploratory
> programming (which is something I certainly do a lot of), being
> dropped into the debugger due to the occasional type error is a lot
> more convenient than spending the time up front to get all my types
> right, because that slows down the exploration that was the ultimate
> point of the exercise.
>
> Once I've got something that I think is pretty good, then getting
> earlier checking is a lot more useful, and having a static type-system
> is a lot more appealing, but it's not appealing enough to make dealing
> with that system worth the trouble in the earlier stages of
> development.

The question is what the name for "something" is. How do you describe
"something" to yourself? You would have to describe it as something other
than an algebraic presentation (a set of values bound by operations),
because that is what a type is.

So either your language does not provide an adequate model for the
concept you have in mind, or else you have something really uncommon. My
guess is that it is the former.

Note that I do not mean incomplete descriptions of types, where
"something" is still being explored and it is not yet clear which values
and which operations it will have in the end. No, that cannot prevent you
from stating that "something" really is that thing and not, say, a complex
matrix or a commuter train schedule. What you want is a different thing:
you want to plug a microphone into a 220V wall outlet in order to explore
what happens. You should not do it. Trust the compiler; there were people
who already tried...

Robbert Haarman

Apr 10, 2009, 2:08:59 PM
On Fri, Apr 10, 2009 at 12:55:41PM +0000, Tamas K Papp wrote:
> On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:
>
> > Taking "static typing" to mean that programs that cannot be correctly typed at
> > compile time are rejected at compile time, whereas "dynamic typing"
> > means type errors lead to rejection at run-time, static typing means, by
> > definition, rejecting bad programs early. It seems to me this would be a
> > productivity gain.
>
> Your classification is flawed: CL is certainly not statically typed,
> but my compiler (SBCL) does analyze the functions I compile and warns
> me about a lot of things.

For the record, I am aware of that (I use SBCL myself).

> Also, "static typing means, by definition, rejecting bad programs
> early" is sheer idiocy - a lot of "bad" programs are not caught by
> type checking. Most of the "badness" in my programs arise from
> conceptual mistakes or inappropriate algorithms, no compiler would be
> able to catch those.

My bad. I should have clarified that by "bad" I meant "invalid,
according to the type system". I thought that would have been clear from
the context, but, clearly, this wasn't the case.

> > Now, in theory, you could perform all the same type inference and type
> > checking on a dynamically typed language that you could perform on a
> > statically typed language, as long as your program is written in a style
> > that we know how to do type inference for. In practice, this is often
> > not the case. The result is that programs written in dynamically typed
>
> You should update your knowledge of modern compilers. In practice,
> modern CL compilers are able to perform a lot of optimizations, even
> when they are unaided by declarations. With appropriate declarations,
> CL can generate very fast code.

Again, I am aware of this.

> I never aim to write my programs "in a style that we know how to do
> type inference for", I try to write them in a style that is clear,
> concise and comfortable to me. I leave optimization to my compiler,
> this approach works very well for me.

Of course. I am not suggesting there is anything wrong with this
approach.

What I am saying is that, all things considered, it is nice to have
guarantees. Static typing provides one such guarantee: that there are no
type errors in the program. (Of course, unless the type system is broken
- which the type systems of many languages are.)

Current research on type systems focuses on expressing ever more
properties of the code in the type system. Combined with static typing,
this allows the compiler to check more and more properties of the code,
and reject a program if these properties are not as they should be.

Another poster in this discussion commented that it takes so much time
to get a program through the type checker. As a counter point to that, I
would like to repeat what I have heard from many Haskell programmers:

It takes me a long time before I get my programs to pass the type
checker, but after that, they work flawlessly.

This, I think, shows the power of static checking: instead of allowing
an incorrect program to run (with all the consequences of doing so), it
attempts to catch erroneous programs and prevent them from
ever running. It's another way of letting the compiler work for you.

Regards,

Bob

--
An eye for an eye makes the whole world blind.
-- Gandhi


Raffael Cavallaro

Apr 10, 2009, 2:14:15 PM
On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> It is interesting to learn the cases when a type violation becomes
> irrelevant. The only such case is when there is no type at all. In all other
> cases it is just a bug.

Only because you're thinking only of one class of program, where the
code to be run is known at compile time. But dynamically typed
languages let you write systems like this one:

<http://nm.wu-wien.ac.at/research/publications/b335.pdf>

where the code to be run isn't known at compile time because it is
created by end users, not programmers. This code can't be statically
type checked at compile time for a simple reason; it doesn't exist yet
at compile time.

Now, turing completeness being what it is, one could of course have
written such a system in a statically typed language. Of course doing
so would mean implementing a good part of a dynamically typed language
on top of the statically typed language. If one is going to do this,
one might as well start with an existing, well specified, debugged,
tested dynamically typed language, rather than writing one's own on
top of a statically typed language.

The same concern applies the other way round; if one wants static type
checks for safety reasons (for example, potentially lethal medical
treatment software) it is of course possible to implement a statically
typed language compiler on top of a dynamically typed language. But
why go that route when perfectly good statically typed languages
already exist?

IOW, there are problem domains where dynamic typing is simply
necessary because we don't yet know at compile time what we'll be
running. In these cases, dynamically typed languages let us write
programs that static type checkers cannot prove correct and won't let
us compile (short of the reductio ad absurdum of implementing dynamic
typing on top of our static type system).

Conversely, there exist problem domains where we don't really care if
we end up rejecting some programs that could possibly be correct at
runtime because we want strong guarantees of safety before anything is
ever allowed to run. In such domains static type checking provides
added security at a cost that is inconsequential in that domain.

Robbert Haarman

Apr 10, 2009, 2:33:37 PM
On Fri, Apr 10, 2009 at 10:07:36AM -0700, Pillsy wrote:
>
> Once I've got something that I think is pretty good, then getting
> earlier checking is a lot more useful, and having a static type-system
> is a lot more appealing, but it's not appealing enough to make dealing
> with that system worth the trouble in the earlier stages of
> development.

Ok, so what you are saying is basically that you accept errors in your
program during the exploratory stage, but it would be nice to have all
the errors found and fixed in the final program. I think this is
something we can all agree on.

It also makes a good case for having optional static checking. Disable
the checks while exploring, then enable the checks when you are working
to transform the result of your exploration into a final program. Best
of both worlds.
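Optional checking of the kind proposed here can be sketched as a toggle: run unchecked while exploring, then flip a flag to enforce declared types. A hedged Python sketch (the decorator and flag names are invented for illustration):

```python
import functools

STRICT = False   # leave off while exploring, turn on when hardening

def checked(fn):
    """Enforce the function's annotated argument types, but only when
    the module-level STRICT flag is set."""
    @functools.wraps(fn)
    def wrapper(*args):
        if STRICT:
            for (name, expected), value in zip(fn.__annotations__.items(), args):
                if name != "return" and not isinstance(value, expected):
                    raise TypeError(f"{name} must be {expected.__name__}")
        return fn(*args)
    return wrapper

@checked
def scale(x: int, factor: int):
    return x * factor

print(scale("ab", 3))    # unchecked exploration: str * int happens to work
STRICT = True            # now the same call would raise TypeError up front
```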

However, why would you want to allow errors in your exploratory phase?
There seems to be an assumption that this makes you more productive, but
is that really the case?

I can see why not having to go and fix all the breakage immediately
saves you time in going from one explorative step to the next, but,
eventually, that breakage does have to be fixed. And static checking is
a great asset here, because it can verify that you have, indeed, fixed
all the breakage, or, if you haven't, point you to exactly those places
you still need to fix.

Until we have an answer to the question if static checking does actually
impair productivity, I fear static vs. dynamic typing is a discussion
that can go on forever, without making any real progress. The bad news
is that I don't have the answer. The good news is that there are plenty
of programming languages to choose from, and if none of them are good
enough for you, you can always write your own. :-)

Regards,

Bob

--
God is dead. - Nietzsche
Nietzsche is dead. - God

Robbert Haarman

Apr 10, 2009, 2:38:10 PM
On Fri, Apr 10, 2009 at 11:14:15AM -0700, Raffael Cavallaro wrote:
> On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
> > It is interesting to learn the cases when a type violation becomes
> > irrelevant. The only such case is when there is no type at all. In all other
> > cases it is just a bug.
>
> Only because you're thinking only of one class of program, where the
> code to be run is known at compile time. But dynamically typed
> languages let you write systems like this one:
>
> <http://nm.wu-wien.ac.at/research/publications/b335.pdf>
>
> where the code to be run isn't known at compile time because it is
> created by end users, not programmers. This code can't be statically
> type checked at compile time for a simple reason; it doesn't exist yet
> at compile time.

Sure it does. It may not yet exist at the time when the large system is
compiled, but it does exist once the user creates it. You can perform
your checking at any time after that - including before you run it.
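A minimal sketch of that point: user-created code can be checked after it exists but before it is ever run. Python's built-in compile() only catches syntax errors; the assumption is that a real system would invoke its type checker at the same stage.

```python
def load_user_code(source):
    """Check user-supplied code first; execute it only if the check passes."""
    code = compile(source, "<user input>", "exec")   # reject bad code here,
    namespace = {}                                   # before it ever runs
    exec(code, namespace)
    return namespace

env = load_user_code("def double(x):\n    return 2 * x\n")
print(env["double"](21))

try:
    load_user_code("def broken(:\n")    # rejected before execution
except SyntaxError as exc:
    print("rejected:", exc)
```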

Regards,

Bob

--
Life is too short to be taken seriously.

Larry Coleman

Apr 10, 2009, 2:38:53 PM
On Apr 10, 2:08 pm, Robbert Haarman <comp.lang.m...@inglorion.net>
wrote:

> Another poster in this discussion commented that it takes so much time
> to get a program through the type checker. As a counter point to that, I
> would like to repeat what I have heard from many Haskell programmers:
>
>   It takes me a long time before I get my programs to pass the type
>   checker, but after that, they work flawlessly.
>
> This, I think, shows the power of static checking: instead of allowing
> an incorrect program to run (with all the consequences of doing so), it
> attempts to catch erroneous programs and prevent them from
> ever running. It's another way of letting the compiler work for you.
>

I've done some Haskell programming, and what you hear from them is
correct, but I don't think it's only because of static typing. I think
it's mostly because of the lack of side effects, and the pure
functional style necessary as a result. When writing a program
involves mostly arranging function calls, most missteps will cause
type errors. Languages that allow side effects and imperative control
structures also allow more opportunities for missteps that don't cause
type errors. For example, I've also used Ocaml and F#, and fought with
the type checker only to find that the imperative parts of my program
were still bug-ridden even after I was done fighting.

As a side note, Dr. Harrop is conspicuous by his absence in this
thread. He seems to be busy on clf trying to convince everyone there
that Haskell is too slow.

Larry

Tamas K Papp

Apr 10, 2009, 2:45:45 PM
On Fri, 10 Apr 2009 20:08:59 +0200, Robbert Haarman wrote:

> What I am saying is that, all things considered, it is nice to have
> guarantees. Static typing provides one such guarantee: that there are no
> type errors in the program. (Of course, unless the type system is broken
> - which the type systems of many languages are.)

When we are considering all things, we are doing a cost/benefit
analysis. For me the costs of a static type system outweigh the
benefits, but since these things are subjective, this may be different
for you.

> Current research on type systems focuses on expressing ever more
> properties of the code in the type system. Combined with static typing,
> this allows the compiler to check more and more properties of the code,
> and reject a program if these properties are not as they should be.

I have seen some of that research and was not too impressed. I think
that increasing costs/diminishing returns kick in very quickly when
you try to "check" everything with static typing. For example, I have
seen Haskell code that checks conformability of matrices statically.
I use a lot of matrix computations in my programs, but I don't really
see the value of this: whenever I make a mistake, I just pop into the
debugger, find the offending piece of code, correct it and recompile,
and I am done. Using static typing to check for this would transform
a minor, occasional inconvenience into a constant pain in the ass; not
a trade-off I would prefer.
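For contrast, the matrix-conformability example handled the dynamic way: a run-time shape check that raises an ordinary error to fix in the debugger. A pure-Python sketch with invented names:

```python
def matmul(a, b):
    """Multiply matrices given as lists of rows, checking conformability
    at run time rather than in the type system."""
    if len(a[0]) != len(b):
        raise ValueError(f"cannot multiply {len(a)}x{len(a[0])} "
                         f"by {len(b)}x{len(b[0])} matrices")
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

print(matmul([[1, 2]], [[3], [4]]))    # conformable: 1x2 times 2x1
try:
    matmul([[1, 2]], [[3, 4]])         # non-conformable, caught at run time
except ValueError as exc:
    print(exc)
```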

> Another poster in this discussion commented that it takes so much time
> to get a program through the type checker. As a counter point to that, I
> would like to repeat what I have heard from many Haskell programmers:
>
> It takes me a long time before I get my programs to pass the type
> checker, but after that, they work flawlessly.

I am sorry to say this, but he/she was clearly bullshitting. Static
typing does not guarantee that your programs work flawlessly. I
thought this was obvious, but apparently not.

> This, I think, shows the power of static checking: instead of allowing
> an incorrect program to run (with all the consequences of doing so), it
> attempts to catch erroneous programs and prevent them from ever
> running. It's another way of letting the compiler work for you.

I frequently write programs which are incorrect and would not get past
a compiler, then run them and discover that I should rewrite them
completely, not because of errors that type checking would have
caught, but because of other design issues. In this case, the
compiler would prevent me from doing my work.

Reconstructing from their arguments, proponents of static typing have
the following scenario in mind: you design your whole program, then
you sit down and type it in, occasionally making some typos, which the
compiler catches for you and everyone is happy. Real life is rarely
ever like this.

Tamas

Pillsy

Apr 10, 2009, 2:50:24 PM
On Apr 10, 2:33 pm, Robbert Haarman <comp.lang.m...@inglorion.net>
wrote:

> On Fri, Apr 10, 2009 at 10:07:36AM -0700, Pillsy wrote:

> > Once I've got something that I think is pretty good, then getting
> > earlier checking is a lot more useful, and having a static type-system
> > is a lot more appealing, but it's not appealing enough to make dealing
> > with that system worth the trouble in the earlier stages of
> > development.

> Ok, so what you are saying is basically that you accept errors in your
> program during the exploratory stage, but it would be nice to have all
> the errors found and fixed in the final program. I think this is
> something we can all agree on.

That is, indeed, what I am saying.

> It also makes a good case for having optional static checking. Disable
> the checks while exploring, then enable the checks when you are working
> to transform the result of your exploration into a final program. Best
> of both worlds.

Sure. My favorite Lisp implementation will do static type checks with
the proper compiler settings.

> However, why would you want to allow errors in your exploratory phase?

Because the time I spend dealing with the occasional type error during
the exploratory phase is less than the time I would spend getting all
the declarations right ahead of time. If I figure I'm going to throw
away or fix 9 versions of a function before I get one that does what I
want it to do, and will spend time tracking down various bugs in the
early version, it's not necessarily a huge sacrifice to have to throw
away or fix a tenth version because I made a type error a static
checker would have caught but that ended up dropping me into the
debugger instead.

> There seems to be an assumption that this makes you more productive, but
> is that really the case?

It aligns well with my experience of what makes me more productive.
Other people have different styles of development, work in different
domains, and have different approaches to problem-solving, and may well
have had completely different experiences.
[...]


> Until we have an answer to the question if static checking does actually
> impair productivity, I fear static vs. dynamic typing is a discussion
> that can go on forever, without making any real progress.

This is quite possible. My belief is that different people will
have different answers to the question, and that those answers will
probably be right for them. I think discussions that occur on the
level of, "This works for me, and here's why it works for me...." are
often worth having, even though by their very nature there may not be
a definitive answer.

Cheers,
Pillsy

Robbert Haarman

Apr 10, 2009, 2:58:34 PM
On Fri, Apr 10, 2009 at 11:38:53AM -0700, Larry Coleman wrote:
>
> I've done some Haskell programming, and what you hear from them is
> correct, but I don't think it's only because of static typing. I think
> it's mostly because of the lack of side effects, and the pure
> functional style necessary as a result. When writing a program
> involves mostly arranging function calls, most missteps will cause
> type errors. Languages that allow side effects and imperative control
> structures also allow more opportunities for missteps that don't cause
> type errors. For example, I've also used Ocaml and F#, and fought with
> the type checker only to find that the imperative parts of my program
> were still bug-ridden even after I was done fighting.

Good points. Thanks for sharing.

> As a side note, Dr. Harrop is conspicuous by his absence in this
> thread. He seems to be busy on clf trying to convince everyone there
> that Haskell is too slow.

I haven't been seeing him on c.l.misc lately, either. I assumed he had
found better uses for his time than having the same discussions over and
over again.

Regards,

Bob

--
Wise men talk because they have something to say; fools, because they
have to say something.

-- Plato

Paul Wallich

Apr 10, 2009, 3:00:19 PM
Pillsy wrote:
> On Apr 10, 12:42 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
> [...]
>>> This is a really pitiful strawman. Just because you can't define
>>> program correctness without the idea of conforming to *some* set of
>>> checks hardly means that you can't define program correctness without
>>> conforming to *every possible* set of checks.
>
>> Wow, now it becomes interesting. So type checks are irrelevant.
>
> In *some* circumstances, they are.
>
> You seem to be repeatedly excluding the middle.
>
>> But that was not the original point. It was, that type checks are great to
>> perform later.
>
> Yes, because that's one easy way of providing finer-grained control
> over what type checks are performed. If I'm doing exploratory
> programming (which is something I certainly do a lot of), being
> dropped into the debugger due to the occasional type error is a lot
> more convenient than spending the time up front to get all my types
> right, because that slows down the exploration that was the ultimate
> point of the exercise.

Another way to think of this is in terms of dead-code elimination or
short-circuit evaluation. None of the code that gets discarded during
the exploratory phase is in any of the execution paths of the final
product, or even of the preliminary versions. Performing extensive
series of tests on it is a little like insisting that office workers
carefully flatten every sheet of paper they throw into the wastebasket.

Robbert Haarman

Apr 10, 2009, 3:09:03 PM
On Fri, Apr 10, 2009 at 06:45:45PM +0000, Tamas K Papp wrote:
> On Fri, 10 Apr 2009 20:08:59 +0200, Robbert Haarman wrote:
>
> > What I am saying is that, all things considered, it is nice to have
> > guarantees. Static typing provides one such guarantee: that there are no
> > type errors in the program. (Of course, unless the type system is broken
> > - which the type systems of many languages are.)
>
> When we are considering all things, we are doing a cost/benefit
> analysis. For me the costs of a static type system outweigh the
> benefits, but since these things are subjective, this may be different
> for you.

You are correct, it is a cost-benefit analysis. This is also why various
people have pointed out that they prefer dynamic typing in general, but
see the value of static typing for some types of program (e.g. in
safety-critical applications).

I have to wonder, however, what the cost of static typing really is.
Granted, it depends on what you check and how you check it. The cost can
be enormous. But does it have to be?

> > Another poster in this discussion commented that it takes so much time
> > to get a program through the type checker. As a counter point to that, I
> > would like to repeat what I have heard from many Haskell programmers:
> >
> > It takes me a long time before I get my programs to pass the type
> > checker, but after that, they work flawlessly.
>
> I am sorry to say this, but he/she was clearly bullshitting. Static
> typing does not guarantee that your programs work flawlessly. I
> thought this was obvious, but apparently not.

No, you are right. Unless you somehow manage to express "flawless" as
something that can be statically checked, the fact that the type checker
finds no errors certainly doesn't mean there aren't any. However, that
is not what is being claimed, either. Rather, the claim is that, for the
programs these programmers wrote in Haskell, no bugs were found that
weren't found by the type checker. That's certainly possible.

Regards,

Bob

--
This sentence no verb


Raffael Cavallaro

Apr 10, 2009, 3:18:24 PM
On Apr 10, 2:38 pm, Robbert Haarman <comp.lang.m...@inglorion.net>
wrote:

Only if your end user is a programmer who is willing to put up with
the cryptic error messages of static type checkers. When your end user
is not a programmer but a biologist, and you subject her to the
typical output of a static type checker, she simply gives up in
frustration and won't use your system.

Read the linked article and see why you need a system that insulates
the end user from such concerns, and why such a system in effect
requires you to either be running a dynamically typed language, or to
build one on top of your statically typed language.

Robbert Haarman

Apr 10, 2009, 3:24:44 PM
On Fri, Apr 10, 2009 at 11:50:24AM -0700, Pillsy wrote:
> On Apr 10, 2:33 pm, Robbert Haarman <comp.lang.m...@inglorion.net>
> wrote:
>
> > There seems to be an assumption that this makes you more productive, but
> > is that really the case?
>
> It aligns well with my experience of what makes me more productive.
> Other people have different styles of development, work in different
> domains, and have different approachs to problem-solving, and may well
> have had completely experiences.
> [...]

Yes. It is very difficult to answer the question, because there are so
many variables. For example, programming languages don't usually differ
from one another only in that one is statically typed and the other is
dynamically typed.

> > Until we have an answer to the question if static checking does actually
> > impair productivity, I fear static vs. dynamic typing is a discussion
> > that can go on forever, without making any real progress.
>
> This is quite possible. My belief is that different people will
> have different answers to the question, and that those answers will
> probably be right for them. I think discussions that occur on the
> level of, "This works for me, and here's why it works for me...." are
> often worth having, even though by their very nature there may not be
> a definitive answer.

Indeed. In the end, the best choice may well depend on the
circumstances. In that case, knowing which choice is best in the given
circumstance, or even knowing that a choice is "pretty good" in the
given circumstances, is much more interesting than which choice, if any,
is best in general.

Regards,

Bob

--
If source code is outlawed, only outlaws will have source code.


budden

Apr 10, 2009, 3:54:36 PM
Hi Pillsy,

> I don't know a whole helluva lot about Prolog, but
> I sure wouldn't describe Mathematica's handling of
> infix syntax as "very simple".
It is not "very simple", it is just "reasonably simple".
Most CL projects I've seen have about 90% of functions
and about 10% of macros. Of macros, 60% is quasiquoting.
So, about 96% of the time I don't need to know how my
"sweet" syntax is presented internally.
For me, I'd rather have simpler syntax 96% of the time,
and have some difficulty 4% of the time.
As for Mathematica's metaprogramming, it
was really hard to understand after Lisp and it
seems it is less convenient. I mean
just that Mathematica can easily access and
manipulate the underlying "FullForm".

Robbert Haarman

Apr 10, 2009, 3:55:50 PM
On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
>
> Only if your end user is a programmer who is willing to put up with
> the ctyptic error messages of static type checkers. When your end user
> is not a programmer but a biologist, and you subject her to the
> typical output of a static type checker, she simply gives up in
> frustration and won't use your system.

Fair enough, but I contend this is not a problem with static checking,
but rather with cryptic messages. If your messages are cryptic, users
will get frustrated, no matter if you use static or dynamic typing.

> Read the linked article and see why you need a system that insulates
> the end user from such concerns,

You can't get around the fact that not all operations can be
meaningfully applied to all values. At one point or another, a user is
going to instruct the system to do something that can't be done. If
anything, I would expect the frustration to be less if the system told
the user up front "this isn't going to work", instead of waiting until
things actually blow up and then having the user retrace the steps.

In the situation discussed in the article, I would have tried to detect
as many errors as early as possible. With respect to types, that could
mean keeping track of the type of each expression and giving an
indication as soon as the user breaks the type rules.
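Tracking the type of each expression and flagging a mismatch the moment it is entered can be sketched with a toy checker over a small expression tree (everything here is invented for illustration):

```python
def infer(expr):
    """Return the type name of an expression, where an expression is a
    literal or a tuple (op, lhs, rhs); raise as soon as the two sides of
    an operator disagree, before anything is evaluated."""
    if isinstance(expr, (int, float, str)):
        return type(expr).__name__
    op, lhs, rhs = expr
    lt, rt = infer(lhs), infer(rhs)
    if lt != rt:
        raise TypeError(f"cannot apply {op!r} to {lt} and {rt}")
    return lt

print(infer(("+", 1, ("+", 2, 3))))    # consistent: int throughout
try:
    infer(("+", 1, "two"))             # flagged before evaluation
except TypeError as exc:
    print(exc)
```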

I believe Eclipse does this for Java: it knows the type of each
expression and it knows the types that each method can take. If you
enter something that isn't right, it will underline the code. You can
then request information about why it thinks the code is wrong.

Having said this, I have never worked on a system for biologists. If the
system Catherine and Uwe built works well for the users, they did a good
job.

Regards,

Bob

--
"The first casualty of war is truth."


Dmitry A. Kazakov

Apr 10, 2009, 3:59:50 PM
On Fri, 10 Apr 2009 11:14:15 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
>> It is interesting to learn the cases when a type violation becomes
>> irrelevant. The only such case is when there is no type at all. In all other
>> cases it is just a bug.
>
> Only because you're thinking only of one class of program, where the
> code to be run is known at compile time.

This is a logical fallacy. Trivially, all code is known to its compiler
before it runs. Otherwise we are talking about different code and
different compilers, as in the case you presented.

It also does not show that a violation can be helpful when it slips
through undetected or is detected later than necessary.

Really, the notion that an inability is an advantage cannot be
defended. The only reason a check shouldn't be made early is that it
would be technically impossible or too expensive to do. Because there
exist types which are obviously trivial to check, pure dynamic typing (=
static typing with exactly one type) is not defensible. Surely we can talk
about concrete types, and there are many, which are undecidable to
compare at different stages of the program life cycle. That is perfectly OK.
Nobody seriously argues that all types have to be checked statically or that
all program semantics have to be mapped to types. The point is that we should
check as much as we can.

> IOW, there are problem domains where dynamic typing is simply
> necessary because we don't yet know at compile time what we'll be
> running. In these cases, dynamically typed languages let us write
> programs that static type checkers cannot prove correct and won't let
> us compile (short of the reductio ad absurdum of implementing dynamic
> typing on top our static type system).

Nevertheless, we can always narrow the class of types we deal with. This is
what generic programming is about. It is never so that we can say
absolutely nothing about the expected types. (Again, your example,
simplified to a compiler/interpreter which does not know what the code it
translates would actually do, is invalid. Compilers do not program.)

> Conversely, there exist problem domains where we don't really care if
> we end up rejecting some programs that could possibly be correct at
> runtime because we want strong guarantees of safety before anything is
> ever allowed to run. In such domains static type checking provides
> added security at a cost that is inconsequential in that domain.

Well, as known in pattern recognition, you can push it towards minimum of
false negatives or to minimum of false positives, but you never can exclude
either or both.

Robbert Haarman

unread,
Apr 10, 2009, 4:12:05 PM4/10/09
to
On Fri, Apr 10, 2009 at 09:59:50PM +0200, Dmitry A. Kazakov wrote:
>
> Really, the notion that an inability is an advantage cannot be
> defended. The only reason a check shouldn't be made early is that it
> would be technically impossible or too expensive to do.

But that is exactly the argument. Remember, the original claim which I
took issue with is the claim that Boo leads to great programmer
productivity, because it is dynamically typed. The implication, then, is
that static typing is too expensive, in terms of programmer
productivity. I am not convinced that this is the case, but if it is the
case, then this is indeed a valid argument for not performing static
type checking.

Regards,

Bob

--
"Beware of bugs in the above code; I have only proved it correct, but not
tried it."
-- Donald Knuth


Raffael Cavallaro

unread,
Apr 10, 2009, 4:16:57 PM4/10/09
to
On Apr 10, 3:59 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> Really, the notion that an inability is an advantage cannot be
> defended. The only reason a check shouldn't be made early is that it
> would be technically impossible or too expensive to do.

Or because it would be a waste of time, such as forcing programmers to
stub out some portion of a program that will not be executed yet,
before allowing them to test another part now. Dynamically typed
languages allow programmers to test one part of a program even when
other parts don't yet exist, and those of us who use such languages
find this feature saves us a lot of time and unnecessary effort.
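The workflow described here is easy to see in any dynamic language: a call to a not-yet-written function only fails if it is actually reached. A minimal Python sketch (function names invented for illustration, not taken from the thread):

```python
def classify(sample):
    # Finished part: works now.
    if sample >= 0:
        return "non-negative"
    # Unfinished part: analyze_negative does not exist yet. In a dynamic
    # language this is only an error if this branch actually runs, so the
    # finished part can be exercised immediately, with no stub required.
    return analyze_negative(sample)

print(classify(5))    # → non-negative

# Reaching the unwritten branch fails only at that moment:
try:
    classify(-1)
except NameError as err:
    print("deferred error:", err)
```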

> Nobody seriously argues that all types have to be checked statically or that
> all program semantics have to be mapped to types. The point is that we should
> check as much as we can.

Which would be fine if that's what static type checkers did. But they
often won't let legal programs run because they insist on checking
things that can't yet be checked, such as bits of the program that
aren't written yet.

> Nevertheless, we can always narrow the class of types we deal with. This is
> what generic programming is about. It is never so that we can say
> absolutely nothing about expected types.

But now we're back to the reductio ad absurdum argument because often
we can only narrow the type down to lisp-object, or scheme-object, or,
in the case of biok, string.

Dmitry A. Kazakov

unread,
Apr 10, 2009, 4:37:30 PM4/10/09
to
On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:

> On Fri, Apr 10, 2009 at 09:59:50PM +0200, Dmitry A. Kazakov wrote:
>>
>> Really, the point of considering disability as an advantage cannot be
>> defended. The only reason why a check shouldn't be made early is because
>> this would be technically impossible or too expensive to do.
>
> But that is exactly the argument. Remember, the original claim which I
> took issue with is the claim that Boo leads to great programmer
> productivity, because it is dynamically typed. The implication, then, is
> that static typing is too expensive, in terms of programmer
> productivity. I am not convinced that this is the case, but if it is the
> case, then this is indeed a valid argument for not performing static
> type checking.

This cannot apply to static typing as a whole, because there obviously
exist trivial cases which simply cannot limit productivity. I cannot
imagine how multiplying a string by a string can suddenly become productive
when the strings are not declared as strings.

To make the argument work, he should have presented a concrete class of
types for which 1) declaring them would hinder productivity and 2)
productivity would be measured in a reasonable way.

budden

unread,
Apr 10, 2009, 4:44:15 PM4/10/09
to
On 10 Apr, 15:45, jeff <jeffb...@gmail.com> wrote:
> Is there any actual code here?
Yes, that is an interesting question :)
If this is only a spec, I think I could do a better one.

Some comments on the language proposed:
> Define as much as possible by the binding of names to values.
I think it is not a good policy. A 2-lisp is fine. Indeed,
people use N-lisps. E.g. I use lisp to generate SQL. Consider
a stored procedure database.foo. What should the symbol foo mean?
A. It should mean a reference to the metadata of database.foo.
B. It should invoke database.foo from lisp.
C. It should represent database.foo when generating trigger code.
In fact, I want it to have all three meanings. I also want to be
able to navigate through all the code I created for database.foo
from my EMACS M-. command. If the language is intentionally a 1-lisp,
that becomes hard.

> case-insensitive names
Unacceptable. It is a great tragedy that CL is
case-insensitive by default and standard symbols are
uppercase. It is very inconvenient to do metaprogramming for
case-sensitive targets.

> Minimize the use of punctuation and maximize the use of
> whitespace, for readability
I'm not sure this is OK. Indented code is definitely more readable,
but the kill-sexp and forward-sexp EMACS commands are very cool.

> Everything about the language is to be defined in the
> language itself.
Very good.

> The language is to be fully extensible by users, with
> no magic.
Very good, but CL is already almost ideal in this respect. I'm not
sure it can be improved significantly.

> In Lisp, code walking requires ad hoc code to understand every
> "special form." This is unmodular, error-prone, and a waste of time.
> As always, the solution to this type of problem is object
> orientation. In PLOT there is a well-defined, object-oriented
> interface to the Abstract Syntax Tree, scopes, and definitions.
> This is one reason why objects are better than S-expressions
> as a representation for program source code.
I'm unsure. It looks like having one good standard code walker
solves that problem too.

> As mentioned above, a token-stream keeps track of source
> locations.
Nice to have this built in. CL sucks here. The reader should be
able to annotate every cons it reads. To fix this, one needs to
redefine the entire CL reader. The result is difficulty finding
error locations, which greatly reduces CL coding productivity.
It is especially horrible trying to find errors in macroexpanded
code.

Also, it looks like the language abandons the ability
to print data readably, which is an extremely powerful
feature of lisp. Or maybe I just didn't find it.
Having an index of all symbols would be useful here.

Pillsy

unread,
Apr 10, 2009, 4:54:53 PM4/10/09
to
On Apr 10, 4:37 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:
[...]

> This cannot apply to static typing as a whole, because there obviously
> exist trivial cases which simply cannot limit productivity. I cannot
> imagine how multiplying a string by a string can suddenly become productive
> when the strings are not declared as strings.

But surely you can imagine how having to take the time to declare that
the two things being multiplied are numbers ahead of time could limit
productivity. It can force the programmer to do needless work now in
order to get the program to run at all, even if it's in a prototype
stage and might be throwaway code.
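The trade-off can be illustrated by writing the same function in both styles; Python happens to permit both (the example and names are mine, not from the thread):

```python
from decimal import Decimal

def scale_prototype(x, y):
    # Untyped prototype: runs immediately on anything that supports *.
    return x * y

def scale_declared(x: float, y: float) -> float:
    # Declared version: a static checker would pin this to floats,
    # rejecting e.g. Decimal arguments even though they would work.
    return x * y

print(scale_prototype(2, 3))                 # → 6
print(scale_prototype(Decimal("1.5"), 2))    # → 3.0
```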

It's exactly the converse of one of the major performance benefits of
static typing, where the compiler gets to generate code that doesn't
have to check types because it knows the checks aren't needed.

Cheers,
Pillsy

Dmitry A. Kazakov

unread,
Apr 10, 2009, 4:54:48 PM4/10/09
to
On Fri, 10 Apr 2009 13:16:57 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 3:59 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
>> Really, the notion that an inability is an advantage cannot be
>> defended. The only reason a check shouldn't be made early is that it
>> would be technically impossible or too expensive to do.
>
> Or because it would be a waste of time, such as forcing programmers to
> stub out some portion of a program that will not be executed yet,
> before allowing them to test another part now. Dynamically typed
> languages allow programmers to test one part of a program even when
> other parts don't yet exist, and those of us who use such languages
> find this feature saves us a lot of time and unnecessary effort.

Hmm, this has nothing to do with static typing. It is merely separate
compilation.

BTW, static analysis shines here, because if you statically know that the
code is unreachable or unreferenced, the compiler can safely drop it,
giving a warning. In the dynamic case it cannot do this, because it cannot
know. You propose to carry the garbage along?

>> Nobody seriously argues that all types have to be checked statically or that
>> all program semantics have to be mapped to types. The point is that we should
>> check as much as we can.
>
> Which would be fine if that's what static type checkers did. But they
> often won't let legal programs run because they insist on checking
> things that can't yet be checked, such as bits of the program that
> aren't written yet.

See above; it is not a necessary property of a statically typed system. Yes,
there are languages with types declared globally, and languages which do not
really support separate compilation, encapsulation, and modularity. Those
are just bad languages, statically typed or not.

>> Nevertheless, we can always narrow the class of types we deal with. This is
>> what generic programming is about. It is never so that we can say
>> absolutely nothing about expected types.
>
> But now we're back to the reductio ad absurdum argument because often
> we can only narrow the type down to lisp-object, or scheme-object, or,
> in the case of biok, string.

No, we can always say a lot more. One of the advantages of static typing is
that it encourages the programmer to think about what distinguishes these
objects from others. Thus it aids refactoring, reuse, consistency,
verifiability, and overall better design.

Pillsy

unread,
Apr 10, 2009, 5:07:38 PM4/10/09
to
On Apr 10, 1:46 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 10:07:36 -0700 (PDT), Pillsy wrote:
[...]

> > In *some* circumstances, they are.

> It is interesting to learn the cases when type violation becomes
> irrelevant.

I didn't say type violations were going to be irrelevant, I said type
*checks* were going to be irrelevant. There's a big difference!

> In all other cases it is just a bug.

Well, sure, but sometimes coding defensively to avoid a certain class
of bug is going to take more effort than just dealing with the bug
when it comes up. Some bugs are really simple to figure out and kind
of hard to blunder into by accident---the sort where you subtract a
hash table from a string is one of those.
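The hash-table-minus-string case is concrete in any dynamic language; in Python the analogous mistake fails immediately with a message naming both operand types (sketch mine, not from the thread):

```python
# Subtracting a dict (Python's hash table) from a string is the sort of
# bug that is hard to make by accident and trivial to diagnose when it
# happens: the runtime error names both operand types on the spot.
try:
    "total" - {"count": 1}
except TypeError as err:
    message = str(err)

print(message)    # e.g. unsupported operand type(s) for -: 'str' and 'dict'
```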

Cheers,
Pillsy

Dmitry A. Kazakov

unread,
Apr 10, 2009, 5:09:20 PM4/10/09
to
On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:

> On Apr 10, 4:37 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 22:12:05 +0200, Robbert Haarman wrote:
> [...]
>> This cannot apply to static typing as a whole, because there obviously
>> exist trivial cases which simply cannot limit productivity. I cannot
>> imagine how multiplying a string by a string can suddenly become productive
>> when the strings are not declared as strings.
>
> But surely you can imagine how having to take the time to declare that
> the two things being multiplied are numbers ahead of time could limit
> productivity.

No, because I would never come to this. Why should I declare it a number if
I know that I do not need multiplication? This is exactly why typing is so
helpful. I think in terms of types and their interfaces. I know which
operations I will use. The compiler just follows me.

> It can force the programmer to do needless work now in
> order to get the program to run at all, even if it's in a prototype
> stage and might be throwaway code.

In fact, I don't really know what throwaway code is good for. To me
try-and-fail does not go beyond compilation time. If I wish to check
something, I do it using the compiler. I practically never run it. I am too
lazy for that. (:-))

Raffael Cavallaro

unread,
Apr 10, 2009, 5:24:17 PM4/10/09
to
On Apr 10, 4:54 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> BTW, static analysis shines here, because if you statically know that the
> code is unreachable or unreferenced, the compiler can safely drop it,
> giving a warning. In the dynamic case it cannot do this, because it cannot
> know. You propose to carry the garbage along?

As programs are being written and changed they are not yet complete
wholes which can be statically analyzed but are instead growing,
changing things which cannot be fully analyzed because they are
incomplete. The portions that cannot be typed yet are *not* dead code;
they just contain calls to as-yet undefined functions. I *don't* want
the compiler to eliminate these functions. I just want the compiler to
issue an undefined-function warning and run what I ask it to without
either requiring me to stub out the as-yet undefined function, or
eliminating the routine that calls it as dead code because it cannot,
as yet, be called.

Pillsy

unread,
Apr 10, 2009, 5:41:28 PM4/10/09
to
On Apr 10, 5:09 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:
[...]

> > But surely you can imagine how having to take the time to declare that
> > the two things being multiplied are numbers ahead of time could limit
> > productivity.

> No, because I would never come to this. Why should I declare it a number if
> I know that I do not need multiplication?

You shouldn't. But conversely, why should you declare it a number if
you know you *do* need multiplication? What the heck else are you going
to multiply?
[...]


> In fact, I don't really know what throwaway code is good for.

If your major question is whether your approach is fast enough, or
numerically stable enough, or whether your simulation captures the
relevant features of the system being simulated, you are probably
going to write a lot of throwaway code.

Or at least, I'm going to write a lot of throwaway code. I really
shouldn't make that sort of assumption about how you approach
problems.

Cheers,
Pillsy

eric-and-jane-smith

unread,
Apr 10, 2009, 6:49:13 PM4/10/09
to
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in
news:g8b1tvxfcacp$.bwsoedn441ti$.d...@40tude.net:

> I would like to see them. Precisely the number of man-hours required
> to achieve a given rate of software failure at a given severity level
> per source code line per one second of execution.

Even if someone published such statistics, the information would not be
reliable. There are too many ways for errors and misunderstandings to
make it useless. The best we can probably do is tally the opinions of
programmers with enough experience in different paradigms for their
opinions to be meaningful.

I've been programming for decades, from assembler language through dozens
of programming languages to Common Lisp. For the past 7 to 8 years I
have been using Common Lisp and have been of the opinion that it is by
far the best programming language of all the ones I have used. Not
necessarily the best for everyone, but the best for me.

I see a lot of advantages and disadvantages in both static and dynamic
typing vs each other. I'm not convinced that either one of them is a
clear winner.

From my point of view, the best reason to use dynamic typing is that Lisp
uses it. If anyone could ever convince me they invented a better
programming language, and it used static typing, I would be glad to use a
good implementation of it. But all the languages I have looked at had
too many deficiencies, and would have impaired my productivity. That
includes Haskell and OCaml, both of which are outstandingly good
programming languages, but would nevertheless impair my programming
productivity relative to Common Lisp.

I don't even consider error checking to be the biggest advantage of
static typing. One other advantage is being able to overload function
names, which can increase productivity, because it fits in better with
the way we think and the way our natural languages work. Another is that
the information the compiler gets for optimization does not require as
much compiler intelligence to use well, so the same amount of compiler
sophistication can give more optimization.

But the advantages of dynamic typing seem just as good overall to me.
E.g. that the programmer doesn't have to think of variables as having
types, or that the effective type of a variable is "whatever fits the
situation" with no need to expend mental effort on that, which would
distract important mental attention away from other, more important
considerations.

When you think of a variable as having a particular type, you often miss
opportunities to create more generic code, which could come in handy in
unexpected ways. You often duplicate a lot of effort before you finally
see the common factors. And that duplicated effort often turns out to be
orders of magnitude more costly than you would expect, such as when you
fix a subtle bug in one set of code, but the same bug remains in another,
with much rarer symptoms and much harder to diagnose, until years later
it leads to huge amounts of wasted effort, having finally become far too
important, yet far too obscure.

In any case, I can only judge a programming language as a whole. Static
vs dynamic typing would never be a major consideration to me, because I
can see the advantages and disadvantages of both, and could never be
completely satisfied with either. For the time being, Common Lisp is the
big winner of my small increment of mindshare, and will retain it
indefinitely, until someone finally comes up with something that seems
really better to me.

Rob Warnock

unread,
Apr 10, 2009, 8:59:04 PM4/10/09
to
Dmitry A. Kazakov <mai...@dmitry-kazakov.de> wrote:
+---------------
| Secondly, as an alternative to Lisp I propose a random generator of
| hexadecimal machine code. Any sequence of machine codes is a correct
| program.
+---------------

Not true! In most ISPs there are sequences which will cause
"Illegal Instruction" traps and/or machine checks. Such programs
are, by definition, incorrect.

+---------------
| It would not smoke the CPU, you know.
+---------------

That's also not true on certain CPUs. [E.g., on most of the MIPS ISPs,
which have software-maintained fully-associative TLBs, putting multiple
PTEs into the TLB which overlap address ranges *can* literally "smoke"
(or at least overheat to the point of damage) the CPU!]

+---------------
| > I don't see how having the programs I'd like to write be rejected
| > is a productivity win.
|
| Random generator is greatly more productive.
+---------------

Hmmm... Random code ==> Causes machine check which cannot be cleared
without power-cycling the machine. Strange definition of "productive"...


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Chris Barts

unread,
Apr 11, 2009, 12:56:04 AM4/11/09
to
Robbert Haarman <comp.la...@inglorion.net> writes:

> On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
>>
>> Only if your end user is a programmer who is willing to put up with
>> the cryptic error messages of static type checkers. When your end user
>> is not a programmer but a biologist, and you subject her to the
>> typical output of a static type checker, she simply gives up in
>> frustration and won't use your system.
>
> Fair enough, but I contend this is not a problem with static checking,
> but rather with cryptic messages. If your messages are cryptic, users
> will get frustrated, no matter if you use static or dynamic typing.

The paradigmatic languages here (that is, domain-specific languages
used by domain experts not trained as programmers) are Excel (for
accountants), R (for statisticians), Mathematica (for mathematicians),
and Perl (for people doing bioinformatics). They're all dynamically
typed languages that do very little static checking, AFAIK. I think
that's important to mention at this juncture.

>
>> Read the linked article and see why you need a system that insulates
>> the end user from such concerns,
>
> You can't get around the fact that not all operations can be
> meaningfully applied to all values. At one point or another, a user is
> going to instruct the system to do something that can't be done. If
> anything, I would expect the frustration to be less if the system told
> the user up front "this isn't going to work", instead of waiting until
> things actually blow up and then having the user retrace the steps.

Again, this presumes the type system isn't going to reject
reasonable programs because it thinks some variable might at some
point contain a class of value it's never going to
contain. Convincing them to test their functions (which they have to
do anyway, really) is more reasonable than expecting them to learn
about the difference between a number and a string (and gods help them
if the language differentiates between types of numbers!).

Static language proponents can't get around the fact that testing is a
fact of life, and type checking is rather coarse-grained compared to a
well-designed battery of tests. Especially if the tests are designed
by a domain expert and represent a reasonable workload for the code.

> In the situation discussed in the article, I would have tried to detect
> as many errors as early as possible. With respect to types, that could
> mean keeping track of the type of each expression and giving an
> indication as soon as the user breaks the type rules.

OK, that will make the biologists scream obscenities and jab you with
infected needles. Prophylaxis against type errors is nothing compared
to prophylaxis against poly-resistant TB.

> I believe Eclipse does this for Java: it knows the type of each
> expression and it knows the types that each method can take. If you
> enter something that isn't right, it will underline the code. You can
> then request information about why it thinks the code is wrong.

Eclipse is a bad example: You don't want to remind us all of Java's
horribly losing type system at this juncture. You want to remind us
about Haskell, perhaps, or some other (likely ML-family) language with
a type system that's complex enough to diagnose (or at least flag)
non-obvious errors without forcing us to explicitly declare everything
all the time.

Robbert Haarman

unread,
Apr 11, 2009, 2:06:55 AM4/11/09
to
On Fri, Apr 10, 2009 at 10:56:04PM -0600, Chris Barts wrote:
> Robbert Haarman <comp.la...@inglorion.net> writes:
>
> > On Fri, Apr 10, 2009 at 12:18:24PM -0700, Raffael Cavallaro wrote:
> >>
> >> Only if your end user is a programmer who is willing to put up with
> >> the cryptic error messages of static type checkers. When your end user
> >> is not a programmer but a biologist, and you subject her to the
> >> typical output of a static type checker, she simply gives up in
> >> frustration and won't use your system.
> >
> > Fair enough, but I contend this is not a problem with static checking,
> > but rather with cryptic messages. If your messages are cryptic, users
> > will get frustrated, no matter if you use static or dynamic typing.
>
> The paradigmatic languages here (that is, domain-specific languages
> used by domain experts not trained as programmers) are Excel (for
> accountants), R (for statisticians), Mathematica (for mathematicians),
> and Perl (for people doing bioinformatics). They're all dynamically
> typed languages that do very little static checking, AFAIK. I think
> that's important to mention at this juncture.

Correct. And don't forget Python and JavaScript, which are also used by
millions of non-programmers successfully, and are also dynamically
typed.

It does make one wonder what it is that makes these languages so
successful, and if it might perhaps be dynamic typing that plays a role
here. Indeed, I have wondered this, and come to the following
conclusion.

Many beginning programmers (whether they do programming as their main
activity or not) do not have a concept of a type system. Forcing them to
learn about types before they can write a program raises the barrier to
entry for your language. Therefore, explicit typing is a disadvantage:
what is this int and char stuff? In fact, this goes for anything that is
basically boilerplate; it is much easier to grasp 'print "Hello, world"'
when it isn't surrounded by half a screenful of boilerplate code.

In short, you want your programmers to have to learn as little as possible
about your language before they can get started. Dynamic typing works
here, because it does not require you to learn anything about the type
system at all before you can write programs. However, that does not mean
that dynamic typing is the only way to go, nor that dynamic typing is
actually what makes these languages successful (more often than not, I
think you will find the language is more or less dictated by the domain;
e.g. you can't do scripting on web pages in anything other than
JavaScript).

What I think, but I admit I don't have any empirical data to back this
up, is that it is not dynamic typing, but implicit typing that lowers
the barrier to entry. In other words, the important thing is not when
type errors are signalled, but whether or not you need to mention types
in your program.

> >
> >> Read the linked article and see why you need a system that insulates
> >> the end user from such concerns,
> >
> > You can't get around the fact that not all operations can be
> > meaningfully applied to all values. At one point or another, a user is
> > going to instruct the system to do something that can't be done. If
> > anything, I would expect the frustration to be less if the system told
> > the user up front "this isn't going to work", instead of waiting until
> > things actually blow up and then having the user retrace the steps.
>
> Again, this is presuming the type system isn't going to reject
> reasonable programs because it thinks some variable might at some
> point contain a class of variable it's never going to
> contain.

I have a difficult time imagining a program that one would want to write
that is not amenable to static type checking. Perhaps that is just a
limitation on my part, however. If you could point me to some examples,
that would be great.

> Convincing them to test their functions (which they have to
> do anyway, really) is more reasonable than expecting them to learn
> about the difference between a number and a string

I agree with you that nothing is a good substitute for actual testing.
As for the differences between strings and numbers: they are going to
have to learn about those one way or another. Static typing or dynamic
typing, you can't divide "hello" by 2, unless this is defined in some
meaningful way.

> (and gods help them if the language differentiates between types of
> numbers!).

Honestly, I wish more languages bothered their users with the difference
between exact and inexact numbers.
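The exact/inexact distinction is easy to demonstrate in a few lines (Python sketch of mine; `Fraction` stands in for Scheme-style exact rationals):

```python
from fractions import Fraction

# Inexact binary floats: 0.1 + 0.2 is not exactly 0.3.
inexact_sum = 0.1 + 0.2
print(inexact_sum == 0.3)             # → False

# Exact rationals preserve the identity.
exact_sum = Fraction(1, 10) + Fraction(2, 10)
print(exact_sum == Fraction(3, 10))   # → True
```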

> > I believe Eclipse does this for Java: it knows the type of each
> > expression and it knows the types that each method can take. If you
> > enter something that isn't right, it will underline the code. You can
> > then request information about why it thinks the code is wrong.
>
> Eclipse is a bad example: You don't want to remind us all of Java's
> horribly losing type system at this juncture.

Oh, I agree. But the example wasn't about Java, it was about signalling
type errors as soon as they are entered. I could have used any other
combination of programming environment and programming language that
does this, except that I am not aware of any.

Regards,

Bob

--
The surest way to remain a winner is to win once, and then not play any more.


Kaz Kylheku

unread,
Apr 11, 2009, 2:41:15 AM4/11/09
to
On 2009-04-10, Dmitry A. Kazakov <mai...@dmitry-kazakov.de> wrote:
> It is interesting to learn

Is it now? You might want to test this hypothesis.

Chris Barts

unread,
Apr 11, 2009, 2:17:08 AM4/11/09
to
Robbert Haarman <comp.la...@inglorion.net> writes:

> What I am saying is that, all things considered, it is nice to have
> guarantees. Static typing provides one such guarantee: that there are no
> type errors in the program. (Of course, unless the type system is broken
> - which the type systems of many languages are.)

That is a vacuous guarantee in Java, for example, as it is in most
languages that took their type systems from Algol: The idea that an
integer, a rational, and a complex number are not universally
interchangeable in most circumstances is a flaw. At best, restricting
the domain of a function to some subset of the numeric tower is a
performance hack and should only be done late in development if it
proves advantageous.

In a larger sense, type checking is no replacement for testing. In
fact, the errors you can catch via a type system are a small
subset of the errors you can reasonably catch during
testing. For example, consider a function of two numbers that requires
that the numbers be relatively prime. No type system can guarantee that,
because it isn't determinable at compile time. Only testing can
catch an error of that sort. Type checking only obviates the tests
that careful coding is likely to obviate as well.
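The relatively-prime example can be made concrete: the precondition is easy to state as a runtime check or a test, but not as a type in mainstream type systems. A Python sketch with an invented function name:

```python
from math import gcd

def combine(a: int, b: int) -> int:
    # The types say only "two ints"; the real precondition -- that a and b
    # be relatively prime -- is checked at runtime, not by the type system.
    if gcd(a, b) != 1:
        raise ValueError(f"{a} and {b} are not relatively prime")
    return a * b   # placeholder body; the precondition is the point

print(combine(3, 4))    # → 12
```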

So you have to test anyway. What, then, does always-on static typing
get me? We can go back and forth over this (and we are, and we're
likely to continue) but my answer is 'not enough'.

> Current research on type systems focuses on expressing ever more
> properties of the code in the type system. Combined with static typing,
> this allows the compiler to check more and more properties of the code,
> and reject a program if these properties are not as they should be.

This reminds me of AI research, except AI research abandoned
type-system-like inference systems decades ago in favor of fuzzier
pattern recognition of the type Google uses.

> Another poster in this discussion commented that it takes so much time
> to get a program through the type checker. As a counter point to that, I
> would like to repeat what I have heard from many Haskell programmers:
>
> It takes me a long time before I get my programs to pass the type
> checker, but after that, they work flawlessly.

I like Haskell when I'm not beating my head against its type
checker. I will admit that it can catch some non-obvious errors when
I'm writing code that works with complex data structures, but in
general I'm just fighting it to accept code I know is reasonable.

> This, I think, shows the power of static checking: instead of allowing
> an incorrect program to run (with all the consequences of doing so),

What consequences? Are you expecting static typing to allow you to
deploy untested code in the real world? If you don't have a good
testing regime in place, no type system can save you.

Dmitry A. Kazakov

unread,
Apr 11, 2009, 4:31:37 AM4/11/09
to
On Sat, 11 Apr 2009 08:06:55 +0200, Robbert Haarman wrote:

> On Fri, Apr 10, 2009 at 10:56:04PM -0600, Chris Barts wrote:

>> Convincing them to test their functions (which they have to
>> do anyway, really) is more reasonable than expecting them to learn
>> about the difference between a number and a string
>
> I agree with you that nothing is a good substitute for actual testing.

Really? In fact, nothing is a substitute for formal proof of
correctness, since branch-coverage testing is 1) technically infeasible
and 2) requires a specification anyway.

If they indeed wanted to test (rather than just to *probe*), they would
have to invest a huge amount of up-front work, compared to the trivial
effort of annotating their variables with types. And then the
specification changes... God help them! (But of course, they actually
test nothing, and what they do is technically non-testable.)

Dmitry A. Kazakov

unread,
Apr 11, 2009, 5:17:02 AM4/11/09
to
On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:

> On Apr 10, 5:09 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 13:54:53 -0700 (PDT), Pillsy wrote:
> [...]
>>> But surely you can imagine how having to take the time to declare that
>>> the two things being multiplied are numbers ahead of time could limit
>>> productivity.
>
>> No, because I would never come to this. Why should I declare it a number if
>> I know that I do not need multiplication?
>
> You shouldn't. But conversely, why should you declare it a number if
> you know you *do* need multiplication? What the heck else are you
> multiplying?

Elements of a multiplicative group, a matrix by a vector, ... the list
is infinite.

> [...]
>> In fact, I don't really know what throwaway code is good for.
>
> If your major question is whether your approach is fast enough, or
> numerically stable enough, or if your simulation captures the
> relevant features of the system being simulated, you are probably
> going to write a lot of throwaway code.

I don't see how this changes the types. You are talking about
implementation issues, which should by no means influence the
specification. If one numerical method is unstable, it does not mean
that another method would be non-numeric, does it?

> Or at least, I'm going to write a lot of throwaway code. I really
> shouldn't make that sort of assumption about how you approach
> problems.

If you write a lot of throwaway code, how can you claim higher
productivity? I hope that the throwaway code is not counted as production
code?

BTW, how do you distinguish throwaway and production code in order to
ensure that the former will never be treated as the latter?

Dmitry A. Kazakov

unread,
Apr 11, 2009, 5:38:15 AM4/11/09
to
On Fri, 10 Apr 2009 22:49:13 GMT, eric-and-jane-smith wrote:

> But the advantages of dynamic typing seem just as good overall to me.
> E.g. that the programmer doesn't have to think of variables as having
> types, or that the effective type of a variable is "whatever fits the
> situation" with no need to expend mental effort on that, which would
> distract important mental attention away from other, more important
> considerations.

How is the reader of the program supposed to judge the "situation" at
hand? Typing is a way to convey what it is all about. The advantages of
static typing you mentioned are merely consequences of this: the
compiler and the programmer are on the same page.

> When you think of a variable as having a particular type, you often miss
> opportunities to create more generic code, which could come in handy in
> unexpected ways.

Exactly the opposite. When I annotate a variable as constrained to some
type or subtype, I bring the constraint into the design. It is an
explicit and visible design decision, a subject of consideration and
justification. When you impose your constraints implicitly, nobody,
including you, is aware of them. So you cannot reason about the merits
of these constraints and remove them in order to generalize or refactor
your code.

> You often duplicate a lot of efforts before you finally
> see the common factors. And that duplicated effort often turns out to be
> orders of magnitude more costly than you would expect. Such as when you
> fix a subtle bug in one set of code, but the same bug remains in another,
> with much rarer symptoms, much harder to diagnose, till a number of years
> later it leads to huge amounts of wasted effort, when it finally becomes
> way too important, but way too obscure.

To clarify things: what you are talking about is substitutability.
Substitutability is undecidable, and you cannot claim that fixing a bug
in context A will not introduce another bug when the code is used in
context B. As a matter of fact, typing is hugely helpful in dealing
with substitutability.

> In any case, I can only judge a programming language as a whole. Static
> vs dynamic typing would never be a major consideration to me, because I
> can see the advantages and disadvantages of both, and could never be
> completely satisfied with either. For the time being, Common Lisp is the
> big winner of my small increment of mindshare, and will retain it
> indefinitely, until someone finally comes up with something that seems
> really better to me.

That is fair enough. You like it because you like Lisp. I dislike it
because I dislike Lisp. In this case this explanation works... (:-))

Dmitry A. Kazakov

unread,
Apr 11, 2009, 5:49:59 AM4/11/09
to
On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:

> On Apr 10, 4:54 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>
>> BTW, static analysis shines here. Because if you statically know that the
>> code is unreachable or unreferenced, the compiler can safely drop it,
>> giving a warning. In dynamic case it cannot do this, because it cannot
>> know. You propose to carry the garbage with?
>
> As programs are being written and changed they are not yet complete
> wholes which can be statically analyzed but are instead growing,
> changing things which cannot be fully analyzed because they are
> incomplete.

The same argument can be applied to show that an incomplete program
cannot be executed.

For the very same reason that you cannot analyse an incomplete (tightly
coupled, non-modular) program, you cannot say anything meaningful about
its observed behavior.

You have not a language problem, but a design problem. The advantage of
statically typed systems is that they detect potential design problems
like this, and do it early, because design problems are expensive to
fix.

> The portions that cannot be typed yet are *not* dead code;
> they just contain calls to as-yet undefined functions.

Do you call these functions or not? If you don't, there is no problem
in *either* case.

> I *don't* want
> the compiler to eliminate these functions. I just want the compiler to
> issue an undefined-function warning and run what I ask it to without
> either requiring me to stub out the as-yet undefined function, or
> eliminating the routine that calls it as dead code because it cannot,
> as yet, be called.

So you actually do not know if and when you call which functions. That
is certainly not the way I write programs...

Mark Wooding

unread,
Apr 11, 2009, 6:54:04 AM4/11/09
to
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:

> If they indeed wanted to test (rather than just to *probe*), they would
> have to invest a huge amount of up-front work, compared to the trivial
> effort of annotating their variables with types.

I must be reading this wrong. You appear to be claiming that static
type checking is a substitute for testing. But that's obviously crazy,
since even well-typed programs can fail to meet their specifications.

> And then the specification changes... God help them! (But of course,
> they actually test nothing, and what they do is technically
> non-testable.)

Nope, I've got no idea what the second half of the parenthetical remark
means -- because it can't mean what it looks like it means, since it's
so obviously wrong.

-- [mdw]

Pascal Costanza

unread,
Apr 11, 2009, 7:00:10 AM4/11/09
to
Robbert Haarman wrote:
> On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
>> Robbert Haarman wrote:
>>> Taking "static typing" to mean that programs that cannot be correctly
>>> typed at compile time are rejected at compile time, whereas "dynamic
>>> typing" means type errors lead to rejection at run time, static typing
>>> means, by definition, rejecting bad programs early. It seems to me this
>>> would be a productivity gain.
>> ...but that's a wrong conclusion: http://p-cos.net/documents/dynatype.pdf
>
> Reading the PDF failed to convince me.

Of course not. ;)

>> 2.1 Statically Checked Implementation of Interfaces
>
> You're going about it the wrong way. You shouldn't declare that your
> class implements the interface when it doesn't, then add stub methods
> until it does, and hope you remember to fix it later.
>
> You should implement the interface first, and then you declare that your
> class implements it. From that point on, you can have the compiler check
> that you have actually implemented the interface.

Who are you to tell me what I should and shouldn't do?

_I_ don't want to be interrupted in my flow of thinking. The programming
language shouldn't tell me what to focus on, I should have control over
the programming language to tell it what to focus on.

> In practice, what happens is often that you use an IDE which lets you
> declare the interfaces you implement. The IDE then generates the stubs
> for you, with a little reminder in each to tell you you still need to
> make that stub do something useful. A good IDE will also tell you if you
> haven't done that yet. This works, as long as you don't ignore your
> IDE's warnings.

Read my paper again. The erroneous situation was partially caused by
such a "smart" IDE!

> Having these warnings is the best scenario you can hope for with dynamic
> typing, because dynamic typing, by its nature, is not allowed to reject
> your program before run time. Many implementations of dynamically typed
> languages will not provide any warning at all.
>
> I agree with you that returning a default value is wrong and you should
> signal an error instead if a stub method is called. But that's
> orthogonal to static typing.

The problem I describe in the paper is a problem I have with a
statically typed language, and that I don't have with a dynamically
typed language. It may be true that dynamically typed languages have
their own problems, but it's certainly not true that statically typed
languages prevent problems from happening. To the contrary, in this
specific situation, the concrete statically typed language at hand made
the situation worse.

>> 2.2 Statically Checked Exceptions
>
> These have their pros and cons. It would be nice if you could prove at
> compile time that any error condition that could arise at run time is
> handled in some way.

You mean, like the program running into an endless loop, for example? :-P

> However, I am not aware of any languages that
> actually provide such guarantees.

No big surprise there, because it's impossible.

> Either way, I really dislike the way
> exceptions work in Java but that, again, is orthogonal to static typing.

Is it? They are part of the static type system in Java...

>> 2.3 Checking Feature Availability
>
>> Checking if a resource provides a specific feature and actually using
>> that feature should be an atomic step in the face of multiple access
>> paths to that resource. Otherwise, that feature might get lost in
>> between the check and the actual use.
>
> Yes, race conditions are a problem. But the problem here is not with
> static typing. In fact, the problem here is that you are breaking static
> typing!

Incorrect. The static type system makes me a promise here that it
cannot keep. So why does it promise me that?

> And the end result is that you get the same thing you would have
> gotten under dynamic typing.

Nope, with a dynamic language I can just invoke the method without
further effort, I don't first have to ensure that it's there. At the
time the runtime system makes the decision that the method is there,
there is no gap anymore in which it can be removed before its actual
execution.

> As an aside, I think this example highlights one of the deficiencies of
> the objects-with-methods flavor of object orientation. The example would
> map to a relational universe much better.

That's a different discussion.


Pascal

--
ELS'09: http://www.european-lisp-symposium.org/
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Pascal Costanza

unread,
Apr 11, 2009, 7:06:09 AM4/11/09
to
Dmitry A. Kazakov wrote:
> On Fri, 10 Apr 2009 09:13:46 -0700 (PDT), Pillsy wrote:
>
>> On Apr 10, 11:44 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
>> wrote:
>> [...]
>>> Further I would also like an explanation of how later or fewer checks
>>> could improve this rate and thus productivity.
>> How could such an obvious point need explanation? Eliminating
>> *irrelevant* checks will clearly increase productivity, because
>> irrelevant checks are, by definition, a waste of time.
>>
>>> Especially the issue of how program correctness can be defined without
>>> checks, which, according to that point, need to be reduced in order
>>> to improve "productivity."
>> This is a really pitiful strawman. Just because you can't define
>> program correctness without the idea of conforming to *some* set of
>> checks hardly means that you can't define program correctness without
>> conforming to *every possible* set of checks.
>
> Wow, now it becomes interesting. So type checks are irrelevant. That's
> honest. At least!
>
> But that was not the original point. It was, that type checks are great to
> perform later. You should have argued for untyped languages.
>
However, that does not surprise me. Dynamic typing consequently leads
to no typing.

That's incorrect.

This is SBCL 1.0.27, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
* (type-of 42)

(INTEGER 0 536870911)
* (typep 42 'number)

T
* (defun foo ()
    (flet ((bar (x) (+ x 42)))
      (bar "string")))
; in: LAMBDA NIL
; (+ X 42)
;
; caught WARNING:
; Asserted type NUMBER conflicts with derived type
; (VALUES (SIMPLE-ARRAY CHARACTER (6)) &OPTIONAL).
; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition

FOO


> No need to be ashamed of guys, just speak your mind. How are going
> to define correctness outside types (sets of values and operations on
> them)? I am curious.

There is no such thing as program correctness. See
http://doi.acm.org/10.1145/379486.379512

Dmitry A. Kazakov

unread,
Apr 11, 2009, 8:07:27 AM4/11/09
to
On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>
>> If they indeed wanted to test (rather than just to *probe*), they should
>> have to invest a huge amount of up front work compared to trivial
>> attributing their variables with types.
>
> I must be reading this wrong. You appear to be claiming that static
> type checking is a substitute for testing.

Sure it is. You don't need to test for type errors, since types are
checked.

I repeat the point I made before: the only consistent dynamic typing is
no typing, since the only way out is to claim that type errors need not
be checked at all!

> But that's obviously crazy,
> since even well-typed programs can fail to meet their specifications.

No, they never fail to meet their type specifications. You are mixing
two different kinds of errors. The point is that testing is always the
worst-case scenario: you test only for what cannot be proved
statically.

>> And then the specification changes... God help them! (But of course,
>> they actually test nothing, and what they do is technically
>> non-testable.)
>
> Nope, I've got no idea what the second half of the parenthetical remark
> means -- because it can't mean what it looks like it means, since it's
> so obviously wrong.

One can only test against a specification. If merely specifying types
is already too much work to think about, then how is a much more
detailed thing like a specification not? Look, these guys complain that
merely considering whether the thingy is a number or else a
commuter-train schedule is too big a burden. How could they design,
implement and evaluate a test scenario for something they don't even
care to know what it actually is?

Tamas K Papp

unread,
Apr 11, 2009, 9:12:06 AM4/11/09
to
On Sat, 11 Apr 2009 11:17:02 +0200, Dmitry A. Kazakov wrote:

> If you write a lot of throwaway code, how can you claim higher
> productivity? I hope that the throwaway code is not counted as
> production code?

Architects/designers usually make blueprints, physical scale models
and lately, 3d renderings of the objects with a computer. If you
don't understand why they do this, you will never understand the role
of throwaway code.

Tamas

Robbert Haarman

unread,
Apr 11, 2009, 9:19:27 AM4/11/09
to
On Sat, Apr 11, 2009 at 01:00:10PM +0200, Pascal Costanza wrote:
> Robbert Haarman wrote:
>> On Fri, Apr 10, 2009 at 09:08:09AM +0200, Pascal Costanza wrote:
>
>>> 2.1 Statically Checked Implementation of Interfaces
>>
>> You're going about it the wrong way. You shouldn't declare that your
>> class implements the interface when it doesn't, then add stub methods
>> until it does, and hope you remember to fix it later.
>>
>> You should implement the interface first, and then you declare that
>> your class implements it. From that point on, you can have the compiler
>> check that you have actually implemented the interface.
>
> Who are you to tell me what I should and shouldn't do?

About as much as you are when you tell people that the anti-patterns
you mention in your paper are not the way to go.

What I am saying is that I agree with you that the anti-pattern isn't
the way to do it, but that I disagree with the analysis in your paper.

> _I_ don't want to be interrupted in my flow of thinking. The programming
> language shouldn't tell me what to focus on, I should have control over
> the programming language to tell it what to focus on.

But you have that power. Nothing prevents you from developing your
program step by step, without being forced to implement all the methods
of a certain interface.

Only if _you_ declare that your class implements an interface will the
compiler check for you that the class does, in fact, implement that
interface.

The anti-pattern here is not that the compiler checks that you have
actually implemented CharSequence when you tell it to; the anti-pattern
is that you tell the compiler to check that you have implemented
CharSequence when you don't actually want the compiler to check that.
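A small illustration of this opt-in behaviour, using a hypothetical
half-finished class I'll call `Draft` (not from the paper): without the
`implements CharSequence` clause, the compiler imposes no completeness
check, and the incomplete class remains usable.

```java
// While Draft does NOT declare "implements CharSequence", the compiler
// makes no demands about the interface's method set.
public class Draft {
    private final String s = "draft";

    public int length() { return s.length(); }
    public char charAt(int i) { return s.charAt(i); }
    // subSequence(), toString(), chars(), etc. are not written yet.
    // Changing the declaration to
    //     public class Draft implements CharSequence
    // would fail to compile until every abstract method exists --
    // that clause is the opt-in to the completeness check.

    public static void main(String[] args) {
        System.out.println(new Draft().length()); // usable while incomplete
    }
}
```

The check is thus not forced by static typing as such; it is requested
by the `implements` declaration.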

>> In practice, what happens is often that you use an IDE which lets you
>> declare the interfaces you implement. The IDE then generates the stubs
>> for you, with a little reminder in each to tell you you still need to
>> make that stub do something useful. A good IDE will also tell you if
>> you haven't done that yet. This works, as long as you don't ignore your
>> IDE's warnings.
>
> Read my paper again. The erroneous situation was partially caused by
> such a "smart" IDE!

You mean the part about Eclipse? I thought I had already addressed that
in my previous post. Or are you talking about something else? In that
case, could you point me to the relevant part of the paper?

>> Having these warnings is the best scenario you can hope for with
>> dynamic typing, because dynamic typing, by its nature, is not allowed
>> to reject your program before run time. Many implementations of
>> dynamically typed languages will not provide any warning at all.
>>
>> I agree with you that returning a default value is wrong and you should
>> signal an error instead if a stub method is called. But that's
>> orthogonal to static typing.
>
> The problem I describe in the paper is a problem I have with a
> statically typed language, and that I don't have with a dynamically
> typed language.

Perhaps I don't fully understand what you are claiming the problem is.

The way I understand the paper is that you don't want the compiler to
check that you have actually implemented all the methods mandated by
CharSequence. There is nothing about static typing that forces this
check on you; it only comes about when you tell the compiler to perform
this check for you by means of "implements CharSequence". If you don't
want the compiler to perform the check, simply don't tell it to perform
the check.

> It may be true that dynamically typed languages have their own
> problems, but it's certainly not true that statically typed languages
> prevent problems from happening.

I like to think that they do prevent some problems from happening.
Specifically, type errors in production systems. It is true that not all
errors are type errors, and that exhaustive testing would also reveal
type errors. However, exhaustive testing is not always (usually not)
performed. There is a good reason for that: testing is expensive, and,
at some point, it just doesn't make economic sense to test more (whether
the expense is measured in time or money or both). Static type checking
allows you to find some errors very quickly, and static typing prevents
those errors from ever making it into a production system.

> To the contrary, in this specific situation, the concrete statically
> typed language at hand made the situation worse.

Still assuming I understand the situation correctly, I don't agree. You
told the compiler that your class implemented CharSequence, but it
didn't. The compiler dutifully pointed this out to you. This is not
worse, this is better.

>>> 2.2 Statically Checked Exceptions
>>
>> These have their pros and cons. It would be nice if you could prove at
>> compile time that any error condition that could arise at run time is
>> handled in some way.
>
> You mean, like the program running into an endless loop, for example? :-P

Sure, why not? I have seen plenty of programs that are supposed to
run in endless loops, but don't when certain error conditions occur,
because the error causes the whole program to terminate.

>> However, I am not aware of any languages that actually provide such
>> guarantees.
>
> No big surprise there, because it's impossible.

Oh?

>> Either way, I really dislike the way exceptions work in Java but that,
>> again, is orthogonal to static typing.
>
> Is it? They are part of the static type system in Java...

Oh, yes. I forgot. Java is the only statically typed language, and there
is no other way to design a statically typed language! ;-)

Seriously, though, most languages I know, even statically typed ones,
don't have both checked and unchecked exceptions like Java has.

>>> 2.3 Checking Feature Availability
>>
>>> Checking if a resource provides a specific feature and actually using
>>> that feature should be an atomic step in the face of multiple access
>>> paths to that resource. Otherwise, that feature might get lost in
>>> between the check and the actual use.
>>
>> Yes, race conditions are a problem. But the problem here is not with
>> static typing. In fact, the problem here is that you are breaking
>> static typing!
>
> Incorrect. The static type system promises me something here that it
> cannot hold. So why does it promise me that?

The static type system promises you that you can call getEmployer() on
an instance of Employee. This is true.

What the static type system does not promise you is that dilbert is an
Employee. This is because you have not told it to prove that; you've
only told it to prove that dilbert is a Person.

You then tell the compiler "oh, by the way, whatever you believe,
dilbert is actually an Employee". Rather than taking this claim at face
value (like a C compiler would), the compiler inserts a run-time check
to verify that this is actually the case. It is that run-time check that
fails, and it fails if and only if your claim is actually false and
dilbert is not an Employee. This may or may not actually happen, because
there is a race condition in your program.
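The mechanics can be sketched as follows (the class names echo the
paper's example; the bodies are stand-ins of mine). The cast compiles,
and the JVM's inserted run-time check is what fails when the claim is
false:

```java
// Checked downcast: the compiler accepts the cast, the JVM verifies it.
class Person { }

class Employee extends Person {
    String getEmployer() { return "Initech"; } // placeholder body
}

public class Downcast {
    public static void main(String[] args) {
        Person dilbert = new Employee();
        // Claim "dilbert is an Employee": true, so the check passes.
        System.out.println(((Employee) dilbert).getEmployer());

        Person visitor = new Person();
        try {
            // Same claim, now false: the run-time check fails here.
            ((Employee) visitor).getEmployer();
        } catch (ClassCastException e) {
            System.out.println("not an Employee");
        }
    }
}
```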

>> And the end result is that you get the same thing you would have
>> gotten under dynamic typing.
>
> Nope, with a dynamic language I can just invoke the method without
> further effort, I don't first have to ensure that it's there. At the
> time the runtime system makes the decision that the method is there,
> there is no gap anymore in which it can be removed before its actual
> execution.

Yes, and you could have done the same thing in the Java program. The
only change you would need to make is removing the instanceof check.
Alternatively, you could add such a check to the program in a
dynamically typed language. In both cases, the program in the
dynamically typed language would exhibit the exact same behavior as the
Java program.

Regards,

Bob

--
#include <stdio.h>
#define SIX 1 + 5
#define NINE 8 + 1
int main(int argc, char **argv) {
  return printf("When you multiply SIX by NINE, you get %d\n",
                SIX * NINE);
}


Dmitry A. Kazakov

unread,
Apr 11, 2009, 9:30:50 AM4/11/09
to

I am afraid you are confusing models with what they ought to model, as
well as tools with goals.

I am surprised to hear that a physical scale model (throwaway code)
requires less effort than labeling the project folder "it will be a
bridge, guys, not a stall" (type annotation).

Tamas K Papp

unread,
Apr 11, 2009, 9:51:18 AM4/11/09
to
On Sat, 11 Apr 2009 15:30:50 +0200, Dmitry A. Kazakov wrote:

> On 11 Apr 2009 13:12:06 GMT, Tamas K Papp wrote:
>
>> On Sat, 11 Apr 2009 11:17:02 +0200, Dmitry A. Kazakov wrote:
>>
>>> If you write a lot of throwaway code, how can you claim higher
>>> productivity? I hope that the throwaway code is not counted as
>>> production code?
>>
>> Architects/designers usually make blueprints, physical scale models and
>> lately, 3d renderings of the objects with a computer. If you don't
>> understand why they do this, you will never understand the role of
>> throwaway code.
>
> I am afraid you are confusing models with what they ought to model as
> well as tools with goals.

Nope, the separation you introduce is artificial and reflects your
lack of experience with Lisp. Lisp is the best model for Lisp
code. Experienced Lisp programmers sketch their programs in Lisp, and
continuously refine them until they arrive at the end result.

Static typing prevents this from happening, because it distracts the
programmer and the flow is broken. Haskell and its ilk are very good
among languages with static typing since they implement static typing
in a minimally intrusive way, but it still manages to be a pain.

> I am surprised to hear that a physical scale model (throwaway code)
> requires less efforts than labeling the project folder "it will be a
> bridge, guys, not a stall" (type annotation).

That is fine when your components are well-known things and you are
just repackaging them. But when you are developing something new, you
can't just label it because you have to say more about the object.
Good dynamic languages allow you to sketch the description of the
object in the language itself. The emphasis is on "sketch": it is
something that is a good first approximation, but implementing it in a
language with static typing would imply a lot of wasted effort just to
please the compiler.

Tamas

Raffael Cavallaro

unread,
Apr 11, 2009, 9:51:57 AM4/11/09
to
On Apr 11, 5:49 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> The same argument can be applied in order to show that incomplete program
> cannot be executed.

Lisp users run incomplete programs all the time - we just get a
runtime error, are dropped into the debugger, and either add the
missing bit in the debugger and continue merrily on our way, or back
out and refactor, etc.

You have a rigidly static notion of what can/can't/should/shouldn't be
allowed to run, which is the enemy of exploratory programming and
programmer productivity. [1]

[1] used here in the sense, possibly unfamiliar to you, of usable
code, not random sequences of instructions.

Robbert Haarman

unread,
Apr 11, 2009, 10:04:31 AM4/11/09
to
On Fri, Apr 10, 2009 at 10:49:13PM +0000, eric-and-jane-smith wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in
> news:g8b1tvxfcacp$.bwsoedn441ti$.d...@40tude.net:
>
> > I would like to see them. Precisely the number of man-hours required
> > to achieve the rate software failure at the given severity level per
> > source code line per one second of execution.
>
> Even if someone published such statistics, the information would not be
> reliable. There are too many ways for errors and misunderstandings to
> make it useless.

I think this is very likely to be true. This is why I object to
statements like "X is dynamically typed, leading to greater programmer
productivity". I am simply not convinced this is the case. In the
absence of any advantages that come from dynamic typing, I would rather
have static typing's assurance that the program is free of type errors.

> The best we can probably do is tally the opinions of
> programmers with enough experience in different paradigms for their
> opinions to be meaningful.

This, of course, does not establish that there is an actual difference
in productivity, let alone that such a difference is due to dynamic
typing. Which is not saying that listening to experienced programmers
sharing their accumulated wisdom isn't useful!

> From my point of view, the best reason to use dynamic typing is that Lisp
> uses it.

I think that is the right way to see it. Lisp is a great language, so
you use it. Lisp also happens to have dynamic typing, so you get that
with the bargain.

> I don't even consider error checking to be the biggest advantage of
> static typing. One other advantage is being able to overload function
> names, which can increase productivity, because it fits in better with
> the way we think and the way our natural languages work.

I agree that this is a great advantage, but you don't need static typing
for this to work. Common Lisp supports it, for example, while still
being dynamically typed.

> Another is that the information the compiler gets for optimization
> does not require as much compiler intelligence to use well, so the
> same amount of compiler sophistication can give more optimization.

This is certainly a point to consider.

> But the advantages of dynamic typing seem just as good overall to me.
> E.g. that the programmer doesn't have to think of variables as having
> types, or that the effective type of a variable is "whatever fits the
> situation" with no need to expend mental effort on that, which would
> distract important mental attention away from other, more important
> considerations.

I think the important consideration is "can this operation be applied to
that value?" When you implement a function, you make certain assumptions
about the values that are passed in. Static typing will check some of
these assumptions, and stop you from running your program if the
assumptions are not correct.

I don't see how you need to do less thinking about the types of your
values when writing

(defun my-reverse (xs)
  (labels ((aux (xs ys)
             (if (null xs) ys
                 (aux (cdr xs) (cons (car xs) ys)))))
    (aux xs '())))

than when writing

let my_reverse xs =
  let rec aux xs ys = match xs with
    | [] -> ys
    | (x :: others) -> aux others (x :: ys)
  in aux xs []

The difference comes when you write

(defun wrong () (my-reverse 42))

or

let wrong _ = my_reverse 42

In Lisp, you have now defined a function that will fail when called. In
OCaml, you have now failed to define a function.

Regards,

Bob

--
A journey of a thousand miles starts under one's feet.
-- Lao Tze


Tamas K Papp

Apr 11, 2009, 10:15:40 AM

I had a look at Dmitry's homepage; apparently he uses Ada 95/2005. So
it is possible that exploratory programming, among other things, is
unfamiliar to him.

From Ada's wikipedia entry:

"Ada also supports a large number of compile-time checks to help avoid
bugs that would not be detectable until run-time in some other
languages or would require explicit checks to be added to the source
code. Ada is designed to detect small problems in very large, massive
software systems. For example, Ada detects each misspelled variable
(due to the rule to declare each variable name), and Ada pinpoints
unclosed if-statements, which require "END IF" rather than mismatching
with any "END" token."

What a truly amazing language! Whereas in CL, when I write

(lambda (x)
  (if (statement-involving-x)
      value1
      value2))

I sometimes wonder which of the last two )'s belong to (lambda and
(if. Sometimes I mix up the two )'s and write it like this:

(lambda (x)
  (if (statement-involving-x)
      value1
      value2))

and my dumb compiler doesn't even give me an error message.

Tamas

Pillsy

Apr 11, 2009, 10:16:16 AM
On Apr 11, 5:17 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:
[...]

> > If your major question is whether your approach is fast enough, or
> > numerically stable enough, or if your simulation captures the
> > relevant features of the system being simulated, you are probably
> > going to write a lot of throwaway code.

> I don't see how this changes types.

It doesn't change *types*.

The issue isn't what the types are, it's whether it's worth the effort
to deal with static type *checks*.

Types and type checks just aren't the same thing.

Cheers,
Pillsy

Pascal Costanza

Apr 11, 2009, 10:55:34 AM

I may want to run tests by passing instances of the class I'm developing
to a third-party method that declares that it expects a CharSequence as a
parameter, which means I may have no choice.

>>> In practice, what happens is often that you use an IDE which lets you
>>> declare the interfaces you implement. The IDE then generates the stubs
>>> for you, with a little reminder in each to tell you you still need to
>>> make that stub do something useful. A good IDE will also tell you if
>>> you haven't done that yet. This works, as long as you don't ignore your
>>> IDE's warnings.
>> Read my paper again. The erroneous situation was partially caused by
>> such a "smart" IDE!
>
> You mean the part about Eclipse? I thought I had already addressed that
> in my previous post.

I don't see where you did.

> Or are you talking about something else? In that
> case, could you point me to the relevant part of the paper?

Eclipse offers to introduce stub methods that return arbitrary values
which happen to make the type checker happy, but which are erroneous.
So, no guarantee at all provided by the type system.

>> The problem I describe in the paper is a problem I have with a
>> statically typed language, and that I don't have with a dynamically
>> typed language.
>
> Perhaps I don't fully understand what you are claiming the problem is.
>
> The way I understand the paper is that you don't want the compiler to
> check that you have actually implemented all the methods mandated by
> CharSequence. There is nothing about static typing that forces this
> check on you; it only comes about when you tell the compiler to perform
> this check for you by means of "implements CharSequence". If you don't
> want the compiler to perform the check, simply don't tell it to perform
> the check.

See above.

>> It may be true that dynamically typed language have their own
>> problems, but it's certainly not true that statically typed languages
>> prevent problems from happening.
>
> I like to think that they do prevent some problems from happening.

But they also introduce other problems.

> Specifically, type errors in production systems. It is true that not all
> errors are type errors, and that exhaustive testing would also reveal
> type errors.

It's also true that not all type errors are errors. A programmer has to
actively work to create an overlap between type errors and actual
errors; otherwise he won't get any benefit from the static type checker.

It's a tool that, if it's in line with how the programmer thinks about
his program and is used by him accordingly, can be useful, but can be a
serious burden if it's either not in line with how he thinks about the
program or is not used correctly.

Static type systems by themselves don't do anything.

>> To the contrary, in this specific situation, the concrete statically
>> typed language at hand made the situation worse.
>
> Still assuming I understand the situation correctly, I don't agree. You
> told the compiler that your class implemented CharSequence, but it
> didn't. The compiler dutifully pointed this out to you. This is not
> worse, this is better.

See above.

>>> Either way, I really dislike the way exceptions work in Java but that,
>>> again, is orthogonal to static typing.
>> Is it? They are part of the static type system in Java...
>
> Oh, yes. I forgot. Java is the only statically typed language, and there
> is no other way to design a statically typed language! ;-)

It's an example of a statically typed language which proves that the
presence of a static type system by itself doesn't mean much (and whose
design can be considered a good-faith effort to provide a 'good'
static type system; some people actually claim to like it).

> Seriously, though, most languages I know, even statically typed ones,
> don't have both checked and unchecked exceptions like Java has.

Sure.

>>>> 2.3 Checking Feature Availability
>>>> Checking if a resource provides a specific feature and actually using
>>>> that feature should be an atomic step in the face of multiple access
>>>> paths to that resource. Otherwise, that feature might get lost in
>>>> between the check and the actual use.
>>> Yes, race conditions are a problem. But the problem here is not with
>>> static typing. In fact, the problem here is that you are breaking
>>> static typing!
>> Incorrect. The static type system promises me something here that it
>> cannot hold. So why does it promise me that?
>
> The static type system promises you that you can call getEmployer() on
> an instance of Employee. This is true.
>
> What the static type system does not promise you is that dilbert is an
> Employee. This is because you have not told it to prove that; you've
> only told it to prove that dilbert is a Person.

if (dilbert instanceof Employee) {  // <<<=== this is the promise
                                    //        (Employee is a static type!)
    System.out.println("Employer: " +
                       ((Employee)dilbert).getEmployer().getName());
}

>>> And the end result is that you get the same thing you would have
>>> gotten under dynamic typing.
>> Nope, with a dynamic language I can just invoke the method without
>> further effort, I don't first have to ensure that it's there. At the
>> time the runtime system makes the decision that the method is there,
>> there is no gap anymore in which it can be removed before its actual
>> execution.
>
> Yes, and you could have done the same thing in the Java program. The
> only change you would need to make is removing the instanceof check.
> Alternatively, you could add such a check to the program in a
> dynamically typed language. It both cases, the program in the
> dynamically typed language would exhibit the exact same behavior as the
> Java program.

Yes, that's one of the claims of the paper: That in the examples given
in the paper, the right thing to do is to simulate what you would have
done in a dynamic language.

Pillsy

Apr 11, 2009, 11:02:42 AM
On Apr 11, 5:49 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
[...]

> > The portions that cannot be typed yet are *not* dead code;
> > they just contain calls to as-yet undefined functions.

> Do you call these functions or not? If you don't, there is no problem with
> that in *both* cases.

If you *do* call one of these functions, there's also no problem. You
get dropped into the debugger, go to another window and define the
function, rewind the stack a bit and start again.

It's not so different from what happens when you add 3 to NIL, and
it's really not a much bigger deal.

Cheers,
Pillsy

Mark Wooding

Apr 11, 2009, 11:06:17 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:

> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
> > I must be reading this wrong. You appear to be claiming that static
> > type checking is a substitute for testing.
>
> Sure it is. You don't need to test for type errors, since types are
> checked.

But type errors are only one (fairly small) class of errors. You still
need to test for all of the others, so you've actually won relatively
little.

> I repeat the point I made before. The only consistent dynamic typing is no
> typing. Since the only way out is to claim that type errors need not to be
> checked at all!

This is just a complete misunderstanding of what dynamic and static
typing mean. The two are in fact almost orthogonal, rather than
mutually exclusive: the names `static' and `dynamic' are unhelpful and
impede understanding.

* `Static typing' is better named `expression typing'. An expression
  is a syntactic entity which denotes a computation. Static typing
  assigns each expression a type, according to some rules, possibly
  based on other annotations in the source. If this assignment fails
  (e.g., there is no applicable rule to assign a type to an
  expression, or there are multiple rules that assign distinct types
  without any means of disambiguation), then the program is considered
  ill-formed.

* `Dynamic typing' is better named `value typing'. A value is a
  runtime entity which stores a (compound or atomic) datum. Dynamic
  typing assigns a type to each value, which can be checked at runtime
  by functions and operators acting on those values. If an operator
  or function is applied to a value with an inappropriate type, then
  an error can be signalled. This may indicate that the program is
  incorrect, or simply be a means of validating input data.

Just to show that these two concepts are mostly orthogonal:

* Forth is neither statically nor dynamically typed.

* C is statically typed, but not dynamically typed.

* Scheme is dynamically typed, but not statically typed.

* Java is both statically and dynamically typed.

To further confuse matters, there's the issue of `strong' versus `weak'
typing, which measures how easy the type system is to subvert. Both
static and dynamic type systems can be weak or strong.
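To illustrate the `value typing' side in Lisp terms (my sketch, not part
of the original post): the value carries its type at runtime, and a
misapplied operation signals a condition the program can inspect.

```lisp
;; In safe code, + signals a TYPE-ERROR when handed a non-number;
;; the condition object records the offending value and the type
;; that was expected.
(handler-case (+ 3 "oops")
  (type-error (c)
    (list (type-error-datum c)
          (type-error-expected-type c))))
;; e.g. => ("oops" NUMBER) under SBCL
```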

So, to return to your comment:

> I repeat the point I made before. The only consistent dynamic typing
> is no typing. Since the only way out is to claim that type errors need
> not to be checked at all!

This is clearly nonsense. Scheme, for example, has a perfectly coherent
type system. It has no static typing (or, if you like, trivial static
typing, since all expressions are assigned the same universal type).
But this doesn't mean that there is no typing at all: Scheme has strong
dynamic typing: if you apply the `*' procedure to strings or lists, an
error is signalled.

> > But that's obviously crazy, since even well-typed programs can fail
> > to meet their specifications.
>
> No, they never fail to meet types specifications.

But not all specifications are about types.

> You are mixing two different kinds of errors. The point is that
> testing is always the worst case scenario. You test only for something
> that cannot be proved statically.

`Beware of bugs in the above code; I have only proved it correct, not
tried it.'
-- Donald Knuth

> One can test only against a specification. When just typing is already
> too much work to think about it, then how a much more detailed thing
> as specification isn't? Look, guys complain that merely to consider if
> the thingy is a number or else a commuter train schedule is too big
> burden.

You've got this backwards. If I have a specification, and a program,
and a proof that the program correctly implements the specification,
then type annotations and static checking are worthless to me. The only
thing that a static type checker can tell me is that my program is
well-typed; but it must be if it's a correct program.

> How could they design, implement and evaluate a test scenario for what
> they don't care to know what it actually is?

And this is word salad.

-- [mdw]

Dmitry A. Kazakov

Apr 11, 2009, 11:16:09 AM
On Sat, 11 Apr 2009 07:16:16 -0700 (PDT), Pillsy wrote:

> On Apr 11, 5:17 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 14:41:28 -0700 (PDT), Pillsy wrote:
> [...]
>>> If your major question is whether your approach is fast enough, or
>>> numerically stable enough, or if your simulation captures the
>>> relevant features of the system being simulated, you are probably
>>> going to write a lot of throwaway code.
>
>> I don't see how this changes types.
>
> It doesn't change *types*.

Fine, so what is wrong with stating these types?

> The issue isn't what the types are, it's whether it's worth the effort
> to deal with static type *checks*.

Which effort? It is already done; the checks are performed by the compiler.

> Types and type checks just aren't the same thing.

Great. So we have agreed that

1. Types are needed.
2. Type errors have to be detected.

The only problem is that you don't want to detect these errors at compile
time, even if you could?

Dmitry A. Kazakov

Apr 11, 2009, 11:25:29 AM
On Sat, 11 Apr 2009 08:02:42 -0700 (PDT), Pillsy wrote:

> On Apr 11, 5:49 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
> [...]
>>> The portions that cannot be typed yet are *not* dead code;
>>> they just contain calls to as-yet undefined functions.
>
>> Do you call these functions or not? If you don't, there is no problem with
>> that in *both* cases.
>
> If you *do* call one of these functions, there's also no problem. You
> get dropped into the debugger, go to another window and define the
> function, rewind the stack a bit and start again.

So you agree that this is a bug. Therefore the compiler is right to tell
you so, *before* you start the program. Where is the problem? What is the
reason to wait until a debugger window pops up? After all, if you design a
GUI, it could take a lot of mouse clicks before that happens. Do you find
debugging fun? I don't. I hate debugging.

> It's not so different from what happens when you add 3 to NIL, and
> it's really not a much bigger deal.

I never wrote a program that adds 3 to null, because it is impossible in
the statically typed language I am using.

Ray Dillinger

Apr 11, 2009, 11:48:21 AM
Dmitry A. Kazakov wrote:

> The only problem is that you don't want to detect these errors at compile
> time, even if you could?

Dmitry, I don't think anyone objects to detecting type errors, or even
type warnings, before runtime. People who use dynamically typed languages
just prefer not to have such type errors or warnings stop them from
running the program.

Most coders I know and work with prefer finding logical errors in code
by running it and seeing what happens as it interacts with real data.
I'm sorry if you're offended by folks preferring to treat type errors
exactly the same as any other kind of logic error, but that's what
many folks do prefer.

Type errors detectable before runtime are mostly hypothetical in nature:
an operation has been used that is not defined on variables of this
type. Type errors found at runtime are very concrete and easy to
understand: hey, this code tried to add 23 to "The rain in Spain."
Having actual values rather than just hypothetical types makes it
easier to understand what went wrong and to debug it.

Aside from type errors per se, remember that there's still a
large class of type-correct programs that cannot be proven type-
correct. And the halting problem says that no matter how advanced
we get, that class of programs will remain nonempty.
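One classic shape of such a program (my sketch, not Ray's example): a
function whose branches return values of different types is perfectly
well-behaved dynamically, while a simple static checker would reject it
because it cannot unify the branch types.

```lisp
;; Callers that know which flag they passed always get a value
;; they can handle; there is no runtime type error anywhere.
(defun one-of (as-number-p)
  (if as-number-p
      1
      "one"))

(one-of t)    ; => 1
(one-of nil)  ; => "one"
```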

And also, sometimes these are applications that necessarily run
24/7 and which need to be modified with new types or new methods
while running. In a no-downtime situation, static typechecks
"before runtime" can never happen because there is no "before" to
work with. I like being able to write code, for example, that
manages lists of things, have it in a program, and then, while
the program is running, add new user-defined types whose methods
use my list code to manage lists of objects of the new type
(which didn't even exist when the list module started) and link
this into the program while it's running, without ever stopping
the program.
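A hedged sketch of that workflow (all names invented for illustration):
the generic list code is written first, and a brand-new class plugs into
it later, in the running image, without a restart.

```lisp
;; Pre-existing "list manager" code, written before the new type existed:
(defgeneric display-name (thing))

(defun show-all (things)
  (dolist (x things)
    (format t "~A~%" (display-name x))))

;; Later, while the image is running, a new class and a method on the
;; old generic function are defined and are immediately usable:
(defclass sensor ()
  ((id :initarg :id :reader sensor-id)))

(defmethod display-name ((s sensor))
  (format nil "sensor #~A" (sensor-id s)))

(show-all (list (make-instance 'sensor :id 7)))  ; prints: sensor #7
```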


Bear

Pillsy

Apr 11, 2009, 11:56:26 AM
On Apr 11, 11:25 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>

wrote:
> On Sat, 11 Apr 2009 08:02:42 -0700 (PDT), Pillsy wrote:
> > On Apr 11, 5:49 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> > wrote:
> >> On Fri, 10 Apr 2009 14:24:17 -0700 (PDT), Raffael Cavallaro wrote:
> > [...]
> >>> The portions that cannot be typed yet are *not* dead code;
> >>> they just contain calls to as-yet undefined functions.
>
> >> Do you call these functions or not? If you don't, there is no problem with
> >> that in *both* cases.
>
> > If you *do* call one of these functions, there's also no problem. You
> > get dropped into the debugger, go to another window and define the
> > function, rewind the stack a bit and start again.
>
> So you agree that this is a bug.

Of course there's a bug. So what? I'm still in the early testing and
design phases of the project, so I don't particularly care if there
are silly, easy-to-fix bugs yet.

> Therefore the compiler is right telling you so, *before* you start the
> program.

Yes...?

> Where is a problem?

The problem is that in order to catch those occasional bugs, I have to
waste a whole lot of time convincing the compiler to compile a program
that I know is pretty damned likely to be buggy anyway.

Cheers,
Pillsy

Dmitry A. Kazakov

Apr 11, 2009, 11:59:26 AM
On Sat, 11 Apr 2009 16:06:17 +0100, Mark Wooding wrote:

> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:
>
>> On Sat, 11 Apr 2009 11:54:04 +0100, Mark Wooding wrote:
>>> I must be reading this wrong. You appear to be claiming that static
>>> type checking is a substitute for testing.
>>
>> Sure it is. You don't need to test for type errors, since types are
>> checked.
>
> But type errors are only one (fairly small) class of errors. You still
> need to test for all of the others, so you've actually won relatively
> little.

OK, that brings us back to the metrics. Nobody has presented them.
Secondly, whatever percentage of errors it might be, they are caught.
There is absolutely no reason not to catch them.

>> I repeat the point I made before. The only consistent dynamic typing is no
>> typing. Since the only way out is to claim that type errors need not to be
>> checked at all!
>
> This is just a complete misunderstanding of what dynamic and static
> typing mean. The two are in fact almost orthogonal, rather than
> mutually exclusive: the names `static' and `dynamic' are unhelpful and
> impede understanding.

[...]
To clarify things: each value has a type in any model. What you refer to
as "dynamic" is merely a polymorphic value, which also has a type denoting
the class rooted in the most unspecific type. The tag of this value
specifies not the type of the compound, but the specific type of what it
contains.

There is no orthogonality between these models. Any modern statically
typed language supports dynamic polymorphism, and so dynamic typing, like
the Java you have mentioned. Nobody argues that there is no place for
dynamic typing. My point was that if you want to remove static typing
(because it is a burden, etc.), you have to go untyped.



>> I repeat the point I made before. The only consistent dynamic typing
>> is no typing. Since the only way out is to claim that type errors need
>> not to be checked at all!
>
> This is clearly nonsense. Scheme, for example, has a perfectly coherent
> type system. It has no static typing (or, if you like, trivial static
> typing, since all expressions are assigned the same universal type).

Yep

> But this doesn't mean that there is no typing at all: Scheme has strong
> dynamic typing: if you apply the `*' procedure to strings or lists, an
> error is signalled.

Is it an error? Note that an error (bug) cannot be signalled within the
program, which is erroneous = incorrect = has unpredictable behavior. It
can only be signalled to the programmer. So in fact, if what you call an
"error" does not kill the program, it is not an error but a legal state
of its execution, whose semantics is perfectly defined: for example,
propagation of an exception, a call to "method not understood", etc. So
any operation is legal for any object. This is why I call it effectively
untyped.

Yet another issue is the desire to keep such problems undetected. Again,
the only possible reason why is that you have no types. If you had types,
not formally but semantically, then you would want to prevent type errors.

>>> But that's obviously crazy, since even well-typed programs can fail
>>> to meet their specifications.
>>
>> No, they never fail to meet types specifications.
>
> But not all specifications are about types.

Nobody claimed that.

>> You are mixing two different kinds of errors. The point is that
>> testing is always the worst case scenario. You test only for something
>> that cannot be proved statically.
>
> `Beware of bugs in the above code; I have only proved it correct, not
> tried it.'
> -- Donald Knuth

(:-))

Yes. But errors in programs proved correct are errors in the
specifications. If we could prove the specifications correct, we would
not need to test them. It is an infinite regress. Even more important,
then, is to specify what the testee actually is.

>> One can test only against a specification. When just typing is already
>> too much work to think about it, then how a much more detailed thing
>> as specification isn't? Look, guys complain that merely to consider if
>> the thingy is a number or else a commuter train schedule is too big
>> burden.
>
> You've got this backwards. If I have a specification, and a program,
> and a proof that the program correctly implements the specification,
> then type annotations and static checking are worthless to me.

You cannot spell out the specifications without types at any reasonable
level of complexity.

> The only
> thing that a static type checker can tell me is that my program is
> well-typed; but it must be if it's a correct program.

The only thing that a test can tell you is that this test passed. It does
not show that the program is correct. You need a branch-coverage test in
order to show it correct. But in the absence of static type annotations
on inputs and outputs, branch coverage is uncountably infinite. By arguing
for tests, you are arguing for static typing!

Dmitry A. Kazakov

Apr 11, 2009, 12:19:53 PM
On Sat, 11 Apr 2009 08:56:26 -0700 (PDT), Pillsy wrote:

> The problem is that in order to catch those occasional bugs, I have to
> waste a whole lot of time convincing the compiler to compile a program
> that I know is pretty damned likely to be buggy anyway.

Maybe it is just that you are trying to convince yourself that it isn't
as buggy as you surely know it is. Now I understand your point: it would
be so easy to fool yourself if not for that damned compiler... (:-))

Marco Antoniotti

Apr 11, 2009, 12:21:40 PM
On Apr 10, 3:55 pm, Tamas K Papp <tkp...@gmail.com> wrote:
> On Fri, 10 Apr 2009 08:39:13 +0200, Robbert Haarman wrote:
> > I disagree. You are implying that dynamic typing leads to greater
> > productivity than static typing. I don't think this is the case.

>
> > Taking "static typing" to mean that programs that cannot be correctly typed at
> > compile time are rejected at compile time, whereas "dynamic typing"
> > means type errors lead to rejection at run-time, static typing means, by
> > definition, rejecting bad programs early. It seems to me this would be a
> > productivity gain.
>
> Your classification is flawed: CL is certainly not statically typed,
> but my compiler (SBCL) does analyze the functions I compile and warns
> me about a lot of things.
>
> Also, "static typing means, by definition, rejecting bad programs
> early" is sheer idiocy - a lot of "bad" programs are not caught by
> type checking.  Most of the "badness" in my programs arises from
> conceptual mistakes or inappropriate algorithms, no compiler would be
> able to catch those.

Yeah. What was that?


        Objective Caml version 3.11.0

# let rec fatt n =
    match n with
      0 -> 1
    | n -> n * fatt (n - 1)
  ;;
val fatt : int -> int = <fun>
# fatt 13;;
- : int = -215430144
#
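For contrast (my addition, not Marco's): the same function in CL quietly
promotes to bignums instead of wrapping around.

```lisp
(defun fatt (n)
  (if (zerop n)
      1
      (* n (fatt (1- n)))))

(fatt 13)  ; => 6227020800
```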


Cheers
--
Marco

Robbert Haarman

Apr 11, 2009, 12:25:57 PM
On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>
> This is just a complete misunderstanding of what dynamic and static
> typing mean. The two are in fact almost orthogonal, rather than
> mutually exclusive: the names `static' and `dynamic' are unhelpful and
> impede understanding.

Please note that I provided my definitions of static typing and dynamic
typing at the start of this discussion. The definitions are, in a
nutshell,

Static typing means type errors are signalled before the program is run.

Dynamic typing means type errors are signalled while the program is
running.

These two concepts are certainly mutually exclusive. Perhaps you do not
agree with my definitions, but that is a different discussion. If you
would be so kind, if you want to use different definitions, do so in a
different thread. This one is large enough as it is.

Regards,

Bob

--
Tis better to be silent and thought a fool, than to open your mouth
and remove all doubt.

-- Abraham Lincoln

William James

Apr 11, 2009, 12:39:36 PM
budden wrote:

> I'm a rather marginal person at comp.lang.lisp

Yes, not being a member of a pack of hyenas is admirable,
but don't boast about it.

--
Common Lisp is a significantly ugly language. --- Dick Gabriel
The good news is, it's not Lisp that sucks, but Common Lisp.
--- Paul Graham
Common LISP is the PL/I of Lisps. --- Jeffrey M. Jacobs

Tamas K Papp

Apr 11, 2009, 12:50:09 PM
On Sat, 11 Apr 2009 18:25:57 +0200, Robbert Haarman wrote:

> On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>>
>> This is just a complete misunderstanding of what dynamic and static
>> typing mean. The two are in fact almost orthogonal, rather than
>> mutually exclusive: the names `static' and `dynamic' are unhelpful and
>> impede understanding.
>
> Please note that I provided my definitions of static typing and dynamic
> typing at the start of this discussion. The definitions are, in a
> nutshell,
>
> Static typing means type errors are signalled before the program is run.
>
> Dynamic typing means type errors are signalled while the program is
> running.
>
> These two concepts are certainly mutually exclusive. Perhaps you do not

And they are also useless / idiotic. Consider

(defun foo (x y)
  (+ (expt x 9) (symbol-name y)))

compiled under SBCL:

; compiling (DEFUN FOO ...)

; file: /tmp/file4Z4YAi.lisp
; in: DEFUN FOO
; (+ (EXPT X 9) (SYMBOL-NAME Y))
;
; note: deleting unreachable code
;

; caught WARNING:
;   Asserted type NUMBER conflicts with derived type
;   (VALUES SIMPLE-STRING &OPTIONAL).


; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition

; printed 1 note

Using your definitions, SBCL is statically typed. Way to go. Care to
define something else, perhaps?

> agree with my definitions, but that is a different discussion. If you
> would be so kind, if you want to use different definitions, do so in a
> different thread. This one is large enough as it is.

So once you give us bogus definitions, we are not allowed to correct
them in this thread? Are you an elementary school teacher, who is
used to having absolute say in what is Right or Wrong, and sends
students who argue to stand in a corner? Can I leave the thread to go
to the potty?

Tamas

du...@franz.com

Apr 11, 2009, 12:53:53 PM
On Apr 11, 9:25 am, Robbert Haarman <comp.lang.m...@inglorion.net>
wrote:

> On Sat, Apr 11, 2009 at 04:06:17PM +0100, Mark Wooding wrote:
>
> > This is just a complete misunderstanding of what dynamic and static
> > typing mean.  The two are in fact almost orthogonal, rather than
> > mutually exclusive: the names `static' and `dynamic' are unhelpful and
> > impede understanding.
>
> Please note that I provided my definitions of static typing and dynamic
> typing at the start of this discussion. The definitions are, in a
> nutshell,
>
> Static typing means type errors are signalled before the program is run.

Agreed. It should also be noted that _all_ static type errors will be
caught (specification errors notwithstanding).

> Dynamic typing means type errors are signalled while the program is
> running.

Partially agreed, but not necessarily so. Such errors are only caught
(yes, at runtime) when they are encountered. It is possible (and in
fact likely) that during a unit test of the program a dynamic type
error will not be encountered, and so, while the programmer is
concentrating on getting Unit A working, he need not concern himself
with interruptions due to a type error in Unit B. This is especially
important when multiple programmers are working on multiple subunits
of a program where the types within each other's units are not even
known - it allows work to go on without having to stop the whole
project while the type system is worked out.

> These two concepts are certainly mutually exclusive. Perhaps you do not
> agree with my definitions, but that is a different discussion. If you
> would be so kind, if you want to use different definitions, do so in a
> different thread. This one is large enough as it is.

I don't think that there's any problem with your definitions, though
there are certainly alternate ones. Your definition of dynamic typing
certainly led you to summarize your definitions in a not-quite-correct
way, even if you agree that it was a minor mistake (i.e. that dynamic
type errors _are_ signaled at runtime, rather than _may_ _be_ signaled
at runtime).

But therein lies the rub. Those who are static typing advocates tend
to minimize the fact that dynamic typing doesn't catch all type
errors, or they claim it as a great bane on dynamic typing. But
dynamic typing advocates call this a _feature_, not a bug, and we lay
claim to great productivity increases, both personally and in group
efforts, because of the lazy-detection that dynamic typing offers.

Also, something that gets lost in these discussions is the presumed
dichotomy of static vs dynamic typing. It is clear that a static
typing advocate must by its definition maintain an absolute position
on static typing, if there are any errors in the compilation then the
program is wrong. However, dynamic-typing proponents are not limited
to viewing the world in terms of type-checks-at-runtime-only; CL
especially allows static type checking when types are known and when
the compiler can catch the type errors, and we who are dynamic-typing
proponents do indeed appreciate the efforts of good statically checked
program segments. As a CL implementor, I consider it to be important
to hug the terrain very closely; where types are known, the compiler
should propagate them, infer from them, and either warn or error as
appropriate. But where types are not yet known, such gaps in the
compile-time specification, even ones that merely provoke warnings, must
not deter the programmer from his desired thought process.

Duane

Mark Wooding

Apr 11, 2009, 1:23:49 PM4/11/09
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> writes:

> OK, that brings us back to the metrics. Nobody has presented
> them. Secondly whatever percentage of errors it might be, they are
> caught. There is absolutely no reason not to catch them.

Yes, there is, as I mentioned much earlier in the thread. A static type
checker exists to prove properties about your program -- in particular,
to prove that certain kinds of errors cannot occur at runtime. But, as
we know, proving nontrivial properties about arbitrary programs is very
difficult, and trying to do it algorithmically is doomed to failure
since most nontrivial properties are noncomputable. So the static type
checker errs conservatively, rejecting programs which it cannot prove to
be free of the kinds of errors it's checking for -- even if the program
is, in fact, correct.

For example, consider

foo (x : int, y : int, z : int) =
  3 * (if x /= 0 and y /= 0 and z /= 0 and
          x^3 + y^3 == z^3 then
         "hello, world"
       else
         2)

I doubt that many type checkers are capable of following Andrew Wiles in
their ability to show the type-correctness of this program (and its
equivalence to a program which evaluates its arguments, discards the
results, and returns 6).
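To make the example concrete, here is the same function sketched in Python (the rendering is mine, not from the post). By Fermat's Last Theorem for n = 3, the condition in the first branch is unsatisfiable for nonzero integer arguments, so a dynamically checked run never reaches the string:

```python
def foo(x, y, z):
    # By Fermat's Last Theorem (n = 3), no nonzero integers satisfy
    # x**3 + y**3 == z**3, so this branch is unreachable for integer inputs.
    if x != 0 and y != 0 and z != 0 and x**3 + y**3 == z**3:
        return 3 * "hello, world"
    else:
        return 3 * 2

print(foo(1, 2, 3))   # 6
print(foo(0, 0, 0))   # 6
```

A dynamic type system accepts this without complaint, because the "ill-typed" branch never executes; a conservative static checker must reject it, even though the program is, in fact, correct.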

> >> I repeat the point I made before. The only consistent dynamic
> >> typing is no typing. Since the only way out is to claim that type
> >> errors need not to be checked at all!
> >
> > This is just a complete misunderstanding of what dynamic and static
> > typing mean. The two are in fact almost orthogonal, rather than
> > mutually exclusive: the names `static' and `dynamic' are unhelpful
> > and impede understanding.
>
> [...]
> To clarify things. Each value has a type in any model. What you refer
> as "dynamic" is merely a polymorphic value, which also has a type
> denoting the class rooted in the most unspecific type. The tag of this
> value specifies not the type of the compound, but the specific type of
> what it contains.

That's a clarification? It makes use of several terms, such as `class'
and `compound' which don't seem to be defined anywhere. Dynamic typing
occurs in languages which have no class structure, e.g., pre-CLOS Common
Lisp (or do you count structure inclusion?) or Scheme.

> There is no orthogonality between these models. Any modern statically
> typed language supports dynamic polymorphism and so dynamic typing,
> like Java, you have mentioned.

Why restrict the discussion to modern languages? C is still heavily
used; C++'s dynamic typing (RTTI) is rather limited.

> Nobody argues that there is no place for dynamic typing. My point was,
> that if you want to remove static typing (because it is a burden etc),
> you have to go untyped.

This seems to be a trivial tautology: if you remove static typing, there
is no static typing. In a language like Scheme, there is still run-time
type checking.

> >> I repeat the point I made before. The only consistent dynamic typing
> >> is no typing. Since the only way out is to claim that type errors need
> >> not to be checked at all!
> >
> > This is clearly nonsense. Scheme, for example, has a perfectly coherent
> > type system. It has no static typing (or, if you like, trivial static
> > typing, since all expressions are assigned the same universal type).
>
> Yep
>
> > But this doesn't mean that there is no typing at all: Scheme has strong
> > dynamic typing: if you apply the `*' procedure to strings or lists, an
> > error is signalled.
>
> Is it an error? Note that an error (bug) cannot be signalled within the
> program, which is erroneous = incorrect = has unpredictable behavior.

Checking my copy of R6RS, it seems that I misspoke, and the correct term
in Scheme is `raising an exception' (5.3). Since the standard
procedures in Scheme are defined to check their arguments (5.4, 6.2) and
raise exceptions (of defined condition types) if the requirements are
not met, this behaviour can be relied upon by correct programs.
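The same reliance on strong dynamic checks is easy to demonstrate; a minimal sketch in Python, used here only as a stand-in for Scheme (the example is mine):

```python
# Strong dynamic typing: a misapplied operation raises a well-defined
# exception at runtime, rather than silently proceeding with garbage.
caught = False
try:
    "three" * "four"      # multiplication of two strings is a type error
except TypeError:
    caught = True

print(caught)             # True: the error was signalled, and can be handled
```

As with Scheme's standard procedures, a correct program can rely on this behaviour and even recover from it with an exception handler.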

> It can only be to the programmer. So in fact, if what you call "error"
> does not kill the program, it is not an error, but a legal state of
> its execution, which semantics is perfectly defined. For example,
> propagation of an exception, call to "method not understood" etc. So
> any operation is legal for any object.

Yes, I suppose one could describe the situation in those terms, though I
don't think that it's the most useful way of thinking about it.

> This is why I call it effectively untyped.

As mentioned, I don't think this is a fruitful way of approaching a
non-statically typed but strongly and richly dynamically typed language
such as Scheme. For example, Scheme's numeric tower is unusually rich
and powerful, making clear distinctions between exact and inexact,
integer, rational, real and complex numbers; furthermore, there are many
other kinds of data, both atomic and compound, which can be manipulated
in Scheme, and different operations are appropriate to different kinds
of values. While an implementation will allow you to apply `cdr' to the
integer 3, this is not usually a helpful thing to do, and one tends to
consider it erroneous -- even though in fact the program's behaviour
remains well-defined.

> Yet another issue is the desire to keep such problems undetected. Again,
> the only possible reason why is that you have no types.

No. The desire is to be able to write programs which are, in fact, free
of type errors (in the sense of there not being any unplanned exceptions
raised, say), though a static type checker might be unable to prove
this.

> If you had types, not formally, but semantically, then you would like
> to prevent type errors.

I don't know what this means.

> >>> But that's obviously crazy, since even well-typed programs can fail
> >>> to meet their specifications.
> >>
> >> No, they never fail to meet types specifications.
> >
> > But not all specifications are about types.
>
> Nobody claimed that.

Then why is your comment `they never fail to meet types specifications'
a useful response to my claim that well-typed programs can fail to meet
their specifications?

> You cannot spell the specifications without types at any reasonable level
> of complexity.

True. But the types in the specification need not map onto types in my
implementation. The types in the Z specification language are very
abstract (simply sets of values), and don't map exactly onto any
implementation language that I know -- and I know a lot of them. But
that doesn't matter: all you have to prove is that, for argument values
in the specified domain, the function computes result values according
to the specification. Static types in the implementation language are
entirely unnecessary to this process.

> > The only thing that a static type checker can tell me is that my
> > program is well-typed; but it must be if it's a correct program.
>
> The only thing that a test can tell is that this test is passed. It does
> not show that the program is correct.

True, but irrelevant to my point.

> You need a branch coverage test in order to show it correct. But in
> absence of statically typed annotation of inputs and outputs, branch
> coverage is uncountably infinite. Arguing for tests you do for static
> typing!

Even that is insufficient. In particular, it won't tell you that the
program as written fails to detect and handle particular kinds of
special cases in its input. Consider a binary tree implementation which
corrupts the tree when deleting entries represented in interior nodes
(say by truncating the entire branch): a test suite which only tests the
implementation when deleting leaf nodes can achieve full branch
coverage, yet it is still an incorrect (though possibly well-typed)
implementation.
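A sketch of such a buggy implementation in Python (the code is mine, invented only to illustrate the point): a leaf-only test suite exercises every branch of delete() and passes, while interior-node deletion silently corrupts the tree.

```python
# A deliberately buggy BST: deleting an interior node simply truncates
# its entire subtree, yet leaf-only tests pass with full branch coverage.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    return node

def delete(node, key):
    if node is None:
        return None
    if key < node.key:
        node.left = delete(node.left, key)
    elif key > node.key:
        node.right = delete(node.right, key)
    else:
        return None          # BUG: drops the node's whole subtree
    return node

def keys(node):
    return [] if node is None else keys(node.left) + [node.key] + keys(node.right)

t = None
for k in [5, 3, 8, 2, 4]:
    t = insert(t, k)

t = delete(t, 2)             # leaf in left subtree
assert keys(t) == [3, 4, 5, 8]
t = delete(t, 8)             # leaf in right subtree; all branches now covered
assert keys(t) == [3, 4, 5]

t = delete(t, 3)             # interior node: 4 silently vanishes with it
print(keys(t))               # [5] -- the tree is corrupted, tests never noticed
```

The suite above is "branch complete" for delete() yet never observes the corruption, because no test deletes a node that has children.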

-- [mdw]

Dmitry A. Kazakov

Apr 11, 2009, 1:25:53 PM4/11/09
On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), du...@franz.com wrote:

> This is especially
> important when multiple programmers are working on multiple subunits
> of a program where the types within each other's units are not even
> known - it allows work to go on without having to stop the whole
> project while the type system is worked out.

Impressive. One can sew sails while another prepares the harness for the
horse...

> But therein lies the rub. Those who are static typing advocates tend
> to minimize the fact that dynamic typing doesn't catch all type
> errors, or they claim it as a great bane on dynamic typing. But
> dynamic typing advocates call this a _feature_, not a bug, and we lay
> claim to great productivity increases, both personally and in group
> efforts, because of the lazy-detection that dynamic typing offers.

I think this confuses dynamic typing and weak typing. Dynamic refers to the
time of binding. A possibility to check for type errors at given time is
merely a consequence of. The "feature" of ignoring typing is to have typing
weak, rather than strong.

Let me repeat it once again: an honest advocate of dynamic typing argues
against typing, in favor of no typing. Thank you for illustrating this
point.

du...@franz.com

Apr 11, 2009, 1:40:21 PM4/11/09

Correcting myself:

On Apr 11, 9:53 am, du...@franz.com wrote:

> > Static typing means type errors are signalled before the program is run.
>
> Agreed.  It should also be noted that _all_ static type errors will be
> caught (specification errors notwithstanding).

The parenthesized phrase was unfortunate. What I meant was "(modulo
specification errors)".

Duane

Mark Wooding

Apr 11, 2009, 1:37:25 PM4/11/09
Robbert Haarman <comp.la...@inglorion.net> writes:

> Please note that I provided my definitions of static typing and dynamic
> typing at the start of this discussion.

My newsreader tells me that Ray Dillinger started this discussion. This
thread is crossposted to three newsgroups, and seems to be relevant to
all of them; none of them is moderated. I'm at a loss to see why you
consider yourself to be especially privileged in further guiding the
discussion.

> The definitions are, in a nutshell,
>
> Static typing means type errors are signalled before the program is run.
>
> Dynamic typing means type errors are signalled while the program is
> running.
>
> These two concepts are certainly mutually exclusive.

Some languages -- those, such as Java and Common Lisp which provide both
typing disciplines -- can report type errors at both compile- and
runtime. These counterexamples disprove your claim of mutual exclusion
and leave your definitions meaningless.

What's missing, of course, is that the word `type' doesn't mean quite
the same thing in static and dynamic type systems, and static types
don't apply to the same things as dynamic types. As I explained --
relatively clearly, I hope.

> Perhaps you do not agree with my definitions, but that is a different
> discussion. If you would be so kind, if you want to use different
> definitions, do so in a different thread. This one is large enough as
> it is.

Since your definitions are defective and misleading, we shall have to go
elsewhere for some replacements. Any ideas?

-- [mdw]

Raffael Cavallaro

Apr 11, 2009, 1:56:01 PM4/11/09
On Apr 11, 1:25 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> Impressive. One can sew sails while another prepares the harness for the
> horse...

So you're admitting that your preferred form of type checking would
never let a team develop a sailboat that can be drawn by horses along
a canal. IOW, that static type restrictions can severely impede
programmer creativity.

Raffael Cavallaro

Apr 11, 2009, 2:05:33 PM4/11/09
On Apr 11, 10:15 am, Tamas K Papp <tkp...@gmail.com> wrote:

> and my dumb compiler doesn't even give me an error message.

Yes. This is why xml is so much superior to sexps - each closing tag
explicitly (and, more importantly, quite verbosely) matches an equally
explicit and verbose opening tag.

;^)


du...@franz.com

Apr 11, 2009, 2:23:52 PM4/11/09
On Apr 11, 10:25 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:

> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), du...@franz.com wrote:
> > This is especially
> > important when multiple programmers are working on multiple subunits
> > of a program where the types within each other's units are not even
> > known - it allows work to go on without having to stop the whole
> > project while the type system is worked out.
>
> Impressive. One can sew sails while another prepares the harness for the
> horse...

Of course. But why limit yourself to horses? Why not expand your
horizons and harness the craft with wings, or even a jet engine? With
slight modification to the hull (little matters of being airtight and
ability to withstand pressure) one could use the sails to capture
winds in space that are not atmospheric. You'd never get that far if
you were bogged down with a compiler that insisted that your sails
were made of cloth.

> > But therein lies the rub.  Those who are static typing advocates tend
> > to minimize the fact that dynamic typing doesn't catch all type
> > errors, or they claim it as a great bane on dynamic typing.  But
> > dynamic typing advocates call this a _feature_, not a bug, and we lay
> > claim to great productivity increases, both personally and in group
> > efforts, because of the lazy-detection that dynamic typing offers.
>
> I think this confuses dynamic typing and weak typing.

I've seen you state this confusion, and I'm sorry for you. That
doesn't make the concept confusing; it's just you that are confused.
Remember, I am accepting Mr Haarman's definitions of static and
dynamic types; if you want to argue within another definitional set,
then state your definitions.

> Dynamic refers to the time of binding.

Agreed.

> A possibility to check for type errors at given time is merely a consequence of.

Please finish your sentence. The above sentence is not complete.

> The "feature" of ignoring typing is to have typing weak, rather than strong.

Here's where you're confused. Dynamic typing and weak typing are
orthogonal. You've been told this before. If you want to get
anywhere with us, you must state your basic assumptions, and we must
agree to discuss within those assumptions.

> Let me repeat it once again: an honest advocate of dynamic typing argues
> against typing, in favor of no typing.

You keep repeating again and again this incorrect mantra, with the
expectation that something will magically change. You should fix your
expectation and figure out why we don't accept your conclusions.
Otherwise you're likely to be one frustrated boy.

> Thank you for illustrating this point.

You're welcome. Of course, the point I illustrated (and the trap you
yet again fell into) was not the one you expected...

Duane

Kenneth Tilton

Apr 11, 2009, 2:43:07 PM4/11/09

> On Apr 11, 10:25 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), du...@franz.com wrote:
>>> This is especially
>>> important when multiple programmers are working on multiple subunits
>>> of a program where the types within each other's units are not even
>>> known - it allows work to go on without having to stop the whole
>>> project while the type system is worked out.
>> Impressive. One can sew sails while another prepares the harness for the
>> horse...

You just broke Tilton's Law, perhaps the pre-eminent one:

Solve the right problem.

If on prototype-assembly day your power team showed up looking for the
mast and your drive train team showed up looking for horses, the problem
is not that they did not prepare 500 page design specifications before
picking up their tools, it is that they were not talking.

I would say "project management was not looking", but project management
never looks.

btw, if one eschews the desperation that is pre-planning everything you
would be surprised how few people you need to build complex systems,
thereby eliminating a large chunk of the communication issue.

I recommend e-mail for the rest.

hth, kenny

Dmitry A. Kazakov

Apr 11, 2009, 2:50:45 PM4/11/09
On Sat, 11 Apr 2009 11:23:52 -0700 (PDT), du...@franz.com wrote:

> On Apr 11, 10:25 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:

>> On Sat, 11 Apr 2009 09:53:53 -0700 (PDT), du...@franz.com wrote:

>>> This is especially
>>> important when multiple programmers are working on multiple subunits
>>> of a program where the types within each other's units are not even
>>> known - it allows work to go on without having to stop the whole
>>> project while the type system is worked out.
>>
>> Impressive. One can sew sails while another prepares the harness for the
>> horse...
>
> Of course. But why limit yourself to horses? Why not expand your
> horizons and harness the craft with wings, or even a jet engine? With
> slight modification to the hull (little matters of being airtight and
> ability to withstand pressure) one could use the sails to capture
> winds in space that are not atmospheric. You'd never get that far if
> you were bogged down with a compiler that insisted that your sails
> were made of cloth.

Ah, so this is also your understanding of how aircraft are produced. I am
afraid we have different ideas of how and when scientific research is done
and how that differs from engineering. I can only hope that the methods of
software engineering you advocate weren't used during the design of flight
control systems.

>> A possibility to check for type errors at given time is merely a consequence of.
>
> Please finish your sentence.

It is possible to check types at compile time because that is the time of
binding.

>> The "feature" of ignoring typing is to have typing weak, rather than strong.
>
> Here's where you're confused. Dynamic typing and weak typing are
> orthogonal.

I am not. I merely pointed out that the "feature" you described is
characteristic of weak typing, not of the time of binding. Weak typing can
be static or dynamic.
