
What is Expressiveness in a Computer Language


Xah Lee

Jun 9, 2006, 1:04:47 AM
in March, i posted an essay “What is Expressiveness in a Computer
Language”, archived at:
http://xahlee.org/perl-python/what_is_expresiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
x...@xahlee.org
http://xahlee.org/

Joe Marshall

Jun 9, 2006, 10:34:47 AM

The gist of the paper is this: Some computer languages seem to be
`more expressive' than others. But anything that can be computed in
one Turing complete language can be computed in any other Turing
complete language. Clearly the notion of expressiveness isn't
concerned with ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness in
terms of semantic equivalences of small, local constructs. In his
definition, wholesale program transformation is disallowed, so you
cannot appeal to Turing completeness to claim program equivalence.

Expressiveness isn't necessarily a good thing. For instance, in C, you
can express the addresses of variables by using pointers. You cannot
express the same thing in Java, and most people consider this to be a
good idea.

Simon Richard Clarkstone

Jun 9, 2006, 11:51:33 AM
Joe Marshall wrote:

> Xah Lee wrote:
>>On the Expressive Power of Programming Languages, by Matthias
>>Felleisen, 1990.
>>http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> The gist of the paper is this: Some computer languages seem to be
> `more expressive' than others. But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language. Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
>
> Felleisen's paper puts forth a formal definition of expressiveness in
> terms of semantic equivalences of small, local constructs. In his
> definition, wholesale program transformation is disallowed so you
> cannot appeal to Turing completeness to claim program equivalence.

I suspect that the distinction between small, local transformations and
global transformations also has to do with the practice of not saying
the same thing twice. Everything from subroutines to LISP macros helps
here, increasing language expressiveness.

> Expressiveness isn't necessarily a good thing. For instance, in C,
> you can express the addresses of variables by using pointers. You
> cannot express the same thing in Java, and most people consider this
> to be a good idea.

Assuming the more-expressive feature does not preclude the
less-expressive one, good/bad depends on the programmer. I know *I*
can't be trusted with pointers ;-) , but I know many programmers benefit
greatly from them. Of course, knowing that the programmer cannot do
something does help the compiler stop you shooting yourself in the foot.

--
Simon Richard Clarkstone: s.r.cl?rkst?n?@durham.ac.uk/s?m?n.cl?rkst?n?@
hotmail.com ### "I have a spelling chequer / it came with my PC /
it plainly marks for my revue / Mistake's I cannot sea" ...
by: John Brophy (at: http://www.cfwf.ca/farmj/fjjun96/)

Message has been deleted

Kaz Kylheku

Jun 9, 2006, 2:32:18 PM
Xah Lee wrote:
> Has anyone read this paper? And, would anyone be interested in giving a
> summary?

Not you, of course. Too busy preparing the next diatribe against UNIX,
Perl, etc. ;)

Ken Tilton

Jun 9, 2006, 5:27:35 PM

Thanks for the summary.

Me, I would like to see a definition of expressiveness that would
exclude a programming mechanism from "things to be expressed".

If the subject is programmer productivity, well, I write programs to get
some behavior out of them, such as operating an ATM cash dispenser. If I
need to keep a list of transactions, I need to express the abstraction
"list" in some data structure or other, but below that level of
abstraction I am just hacking code, not expressing myself -- well, that
is the distinction for which I am arguing.

heck, in this case I will even give you as "thing to express" getting
back multiple values from a function. That comes up all the time, and it
can be an aggravation or a breeze. But then I would score C down because
it does not really return multiple values. One still has some heavy
lifting to do to fake the expressed thing. But I would still give it an
edge over Java because Java's fakery would have to be a composite object
-- one could not have a primary return value as the function result and
ancillary values "somewhere else".
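A concrete sketch of the three styles Ken contrasts, in Python rather than any of the languages he names, with invented function names: direct multiple values (as in Lisp's VALUES, or Python tuples), C-style fakery via an out-parameter, and Java-style fakery via a composite object.

```python
# 1. Direct multiple values: primary and ancillary results together.
def divmod_direct(a, b):
    return a // b, a % b

# 2. C-style fakery: primary value as the function result, the
#    ancillary value through an out-parameter (emulated with a list).
def divmod_c_style(a, b, out_rem):
    out_rem[0] = a % b
    return a // b

# 3. Java-style fakery: a composite object holding both values.
class DivResult:
    def __init__(self, quot, rem):
        self.quot, self.rem = quot, rem

def divmod_java_style(a, b):
    return DivResult(a // b, a % b)

q, r = divmod_direct(7, 2)
rem = [None]
q2 = divmod_c_style(7, 2, rem)
res = divmod_java_style(7, 2)
print(q, r, q2, rem[0], res.quot, res.rem)
```

Note that in style 3 the caller can no longer use the call directly as an expression denoting the primary value; it must first unpack the object.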

kt

--
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
-- Smiling husband to scowling wife, New Yorker cartoon

Xah Lee

Jun 14, 2006, 5:03:46 AM
hi Joe,

Joe Marshall wrote:
« Expressiveness isn't necessarily a good thing. For instance, in C,
you can express the addresses ...»

we gotta be careful here, because soon we gonna say binaries are the
most expressive. For instance, in assembly, you can express the
registers and stuff.

Expressiveness, with respect to — for lack of refined terms —
semantics, is a good thing, period. When discussing a language's
semantic expressiveness, it goes without saying that a “domain”
is understood, or needs to be defined. This is almost never mentioned
because it is very difficult. Put it in driveler's chant for better
understanding: we can't “compare apples with oranges”.

Let me give an example. Let's say i invented a language where there's
no addition of numbers, but phaserfy and realify with respective
operators ph and re. So, in my language, to do 1+2, you write “ph 1
re ph 2”, which means: phaserfy 1 and phaserfy 2, then realify
their results, which results in 3. Now, this language is the most
expressive, because it can deal with concepts of phaserfy and realify
that no other lang can.
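The toy language can be made concrete with a few lines of Python; the representation of a "phaser value" as a tagged pair is an invented detail, since the post only describes the operators informally.

```python
# A sketch of the hypothetical phaserfy/realify arithmetic:
# "ph" wraps a number as a phaser value, "re" combines two phaser
# values back into an ordinary number. "ph 1 re ph 2" denotes 1+2.

def ph(n):
    return ("phaser", n)        # phaserfy: tag the number

def re(p1, p2):
    return p1[1] + p2[1]        # realify: combine the tagged payloads

print(re(ph(1), ph(2)))
```

The point of the example survives the translation: the language computes ordinary addition, but forces the programmer through concepts that exist only in the language, not in the problem.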

This may seem ridiculous, but it is in fact what a lot of imperative
languages do. I won't go long here, but for instance, the addresses or
references of C and Perl are such. And in Java and a few other OOP
langs, the “iterator” and “enumerator” things are likewise immaterial.

As to OOP's iterator and enumerator things, and the general perspective
of extraneous concepts in languages, i'll have to write an essay in
detail some other day.

----

Thanks for the summary.

Is there no one else who is able to read that paper?

Xah
x...@xahlee.org
http://xahlee.org/

> Xah Lee wrote:
> > in March, i posted a essay "What is Expressiveness in a Computer
> > Language", archived at:
> > http://xahlee.org/perl-python/what_is_expresiveness.html

> > ...


> > On the Expressive Power of Programming Languages, by Matthias
> > Felleisen, 1990.

> > http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf

Torben Ægidius Mogensen

Jun 14, 2006, 9:42:25 AM
"Joe Marshall" <eval....@gmail.com> writes:


> > On the Expressive Power of Programming Languages, by Matthias
> > Felleisen, 1990.
> > http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>

> The gist of the paper is this: Some computer languages seem to be
> `more expressive' than others. But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language. Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
>
> Felleisen's paper puts forth a formal definition of expressiveness
> in terms of semantic equivalences of small, local constructs. In
> his definition, wholesale program transformation is disallowed so
> you cannot appeal to Turing completeness to claim program
> equivalence.

I think expressiveness is more subtle than this. Basically, it boils
down to: "How quickly can I write a program to solve my problem?".

There are several aspects relevant to this issue, some of which are:

- Compactness: How much do I have to type to do what I want?

- Naturality: How much effort does it take to convert the concepts of
my problem into the concepts of the language?

- Feedback: Will the language provide sensible feedback when I write
nonsensical things?

- Reuse: How much effort does it take to reuse/change code to solve a
similar problem?

Compactness is hard to measure. It isn't really about the number of
characters needed in a program, as I don't think one-character symbols
instead of longer keywords make a language more expressive. It is
better to count lexical units, but if there are too many different
predefined keywords and operators, this isn't reasonable either.
Also, the presence of opaque one-liners doesn't make a language
expressive. Additionally, as mentioned above, Turing-completeness
(TC) allows you to implement any TC language in any other, so above a
certain size, the choice of language doesn't affect size. But
something like (number of symbols in program)/log(number of different
symbols) is not too bad. If programs are allowed to use standard
libraries, the identifiers in the libraries should be counted in the
number of different symbols.
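Torben's rough measure can be written down directly. The sketch below uses a crude whitespace tokenizer just to make the formula concrete; what counts as a "symbol" is of course the hard part he alludes to.

```python
import math

# Rough compactness measure from the post:
#   (number of symbols in program) / log(number of different symbols)
def compactness(tokens):
    n = len(tokens)
    distinct = len(set(tokens))
    return n / math.log(distinct)

# Pre-tokenized toy "program"; a real tokenizer would produce this.
tokens = "for i in range ( 10 ) : print ( i )".split()
print(compactness(tokens))
```

By this measure a program gets "cheaper" per symbol as its vocabulary grows, which matches the intuition that one-character symbols alone don't buy expressiveness.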

Naturality is very difficult to get a grip on, and it strongly depends
on the type of problem you want to solve. So it only makes sense to
talk about expressiveness relative to a set of problem domains. If
this set is small, domain-specific languages win hands down, so if you
want to compare expressiveness of general-purpose languages, you need
a large set of very different problems. And even with a single
problem, it is hard to get an objective measure of how difficult it is
to map the problem's concepts to those of the language. But you can
normally observe whether you need to overspecify the concept (i.e.,
you are required to make arbitrary decisions when mapping from concept
to data), whether the mapping is onto (i.e., whether you can construct
data that isn't sensible in the problem domain), and how much
redundancy your representation has.

Feedback is a mixture of several things. Partly, it is related to
naturality, as a close match between problem concepts and language
concepts makes it less likely that you will express nonsense (relative
to the problem domain) that makes sense in the language. For example,
if you have to code everything as natural numbers, untyped pure lambda
calculus or S-expressions, there is a good chance that you can get
nonsense past the compiler. Additionally, it is about how difficult
it is to tie an observation about a computed result to a point in the
program.

Measuring reuse depends partly on what is meant by problems being
similar, and also on whether, at the time you write the original code,
you can predict what types of problems you might later want to solve,
i.e., whether you can prepare the code for reuse. Some languages provide
strong mechanisms for reuse (templates, object hierarchies, etc.), but
many of those require that you can predict how the code is going to be
reused. So, maybe, you should measure how difficult it is to reuse a
piece of code that is _not_ written with reuse in mind.

This reminds me a bit of last year's ICFP contest, where part of the
problem was adapting to a change in specification after the code was
written.

> Expressiveness isn't necessarily a good thing. For instance, in C,
> you can express the addresses of variables by using pointers. You
> cannot express the same thing in Java, and most people consider this
> to be a good idea.

I think this is pretty much covered by the above points on naturality
and feedback: Knowing the address of a value or object is an
overspecification unless the address maps back into something in the
problem domain.

On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true. However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.

Torben

Raffael Cavallaro

Jun 14, 2006, 10:51:05 AM
On 2006-06-14 09:42:25 -0400, tor...@app-1.diku.dk (Torben Ægidius
Mogensen) said:

> It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

Though I agree with much of your post, I would say that many here find
the opposite to be true: it takes us longer to get a program working
in a statically typed language because we have to keep adding/changing
things to get the compiler to stop complaining and actually compile and
run a program which would be perfectly permissible in a dynamically
typed language such as Common Lisp -- for example, heterogeneous lists
and forward references to as yet nonexistent functions.

Rob Thorpe

Jun 14, 2006, 11:12:12 AM
Torben Ægidius Mogensen wrote:
> On a similar note, is a statically typed language more or less
> expressive than a dynamically typed language? Some would say less, as
> you can write programs in a dynamically typed language that you can't
> compile in a statically typed language (without a lot of encoding),
> whereas the converse isn't true. However, I think this is misleading,
> as it ignores the feedback issue: It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

From the point of view purely of expressiveness, I'd say it's rather
different.

If a language can express constraints of one kind, that is an increase
in expressiveness.
If a language requires constraints to be expressed in one particular
way, that's a decrease in expressiveness.

So I would say languages that can be statically typed and can be
dynamically typed are the most expressive. Languages that require
static typing or are dynamic but cannot express static typing are less
expressive.

This matches my experience of what's useful in practice too: static
typing for everything is painful for writing simple code. Pure dynamic
typing is painful when writing complex code because it makes impossible
a layer of error checking that could otherwise be useful.
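A later development illustrates the middle ground Rob describes: Python's optional type annotations (an anachronistic sketch here, since they postdate this thread) let the same language be used in both styles, with an external checker such as mypy enforcing only the annotations you choose to write.

```python
# Dynamically typed: any x that supports * will do; no checker involved.
def quick_hack(x):
    return x * 2

# Statically annotated: a separate checker (e.g. mypy) can verify call
# sites against these annotations before the program runs.
def checked(x: int) -> int:
    return x * 2

print(quick_hack("ab"), checked(21))
```

At runtime both functions behave identically; the annotations add a layer of error checking without forcing it on the simple-code case.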

Joachim Durchholz

Jun 14, 2006, 2:55:40 PM
Torben Ægidius Mogensen schrieb:

> For example,
> if you have to code everything as natural numbers, untyped pure lambda
> calculus or S-expressions, there is a good chance that you can get
> nonsense past the compiler.

Also past peer review and/or debugging runs. And, most importantly, past
your own mental review processes.

Regards,
Jo

Joachim Durchholz

Jun 14, 2006, 3:04:34 PM
Raffael Cavallaro schrieb:

> On 2006-06-14 09:42:25 -0400, tor...@app-1.diku.dk (Torben Ægidius
> Mogensen) said:
>
>> It takes longer for the average
>> programmer to get the program working in the dynamically typed
>> language.
>
> Though I agree with much of your post I would say that many here find
> the opposite to be true - it takes us longer to get a program working in
> a statically typed language because we have to keep adding/changing
> things to get the compiler to stop complaining and actually compile and
> run

I think Torben was assuming a language with type inference. You write
only those type annotations that really carry meaning (and some people
let the compiler infer even these).

> a program which would be perfectly permissible in a dynamically
> typed language such as common lisp - for example - heterogeneous lists
> and forward references to as yet non-existent functions.

Um... heterogeneous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogeneous lists
(though these might then contain closures or OO objects).
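The transformation Joachim describes can be sketched as follows (in Python, whose lists are heterogeneous anyway, so the example only illustrates the shape of the rewrite): each element is wrapped behind one common interface, here a zero-argument closure, so the resulting list is homogeneously "a list of render functions".

```python
# Heterogeneous data...
hetero = [42, "hello", 3.14]

# ...rewritten as a homogeneous list of closures, each of type () -> str.
# (The v=v default pins each element at closure-creation time.)
homo = [lambda v=v: "<%s>" % v for v in hetero]

print([f() for f in homo])
```

In a statically typed language the closures would all share one function type, which is what makes the list homogeneous to the type checker.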

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

I don't hold that they are a sign of *in*expressiveness either. They are
just typical of highly dynamic programming environments such as Lisp or
Smalltalk.

Regards,
Jo

Joachim Durchholz

Jun 14, 2006, 3:09:08 PM
Rob Thorpe schrieb:

>
> If a language can express constraints of one kind, that is an increase
> in expressiveness.

Agreed.

> If a language requires constraints to be expressed in one particular
> way, that's a decrease in expressiveness.

Unless alternatives would be redundant.
Having redundant ways to express the same thing doesn't make a language
more or less expressive (but programs written in it become more
difficult to maintain).

> So I would say languages that can be statically typed and can be
> dynamically typed are the most expressive. Languages that require
> static typing or are dynamic but cannot express static typing are less
> expressive.

Note that this is a different definition of expressiveness.
(The term is very diffuse...)

I think Felleisen's paper defines something that should be termed
"conciseness".
Whether there's a way to express constraints or other static properties
of the software is something different. I don't have a good word for it,
but "expressiveness" covers too much for my taste to really fit.

Regards,
Jo

Pascal Costanza

Jun 14, 2006, 4:18:03 PM
Torben Ægidius Mogensen wrote:

> On a similar note, is a statically typed language more or less
> expressive than a dynamically typed language? Some would say less, as
> you can write programs in a dynamically typed language that you can't
> compile in a statically typed language (without a lot of encoding),
> whereas the converse isn't true.

It's important to get the levels right here: A programming language with
a rich static type system is more expressive at the type level, but less
expressive at the base level (for some useful notion of expressiveness ;).

> However, I think this is misleading,
> as it ignores the feedback issue: It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be worthwhile
because of some benefits wrt correctness they claim to get in return.)


Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

Pascal Bourguignon

Jun 14, 2006, 4:36:52 PM
Joachim Durchholz <j...@durchholz.org> writes:
> Raffael Cavallaro schrieb:

>> a program which would be perfectly permissible in a dynamically
>> typed language such as common lisp - for example - heterogeneous
>> lists and forward references to as yet non-existent functions.
>
> Um... heterogeneous lists are not necessarily a sign of
> expressiveness. The vast majority of cases can be transformed to
> homogeneous lists (though these might then contain closures or OO
> objects).

In lisp, all lists are homogeneous: lists of T.


--
__Pascal Bourguignon__ http://www.informatimago.com/

ADVISORY: There is an extremely small but nonzero chance that,
through a process known as "tunneling," this product may
spontaneously disappear from its present location and reappear at
any random place in the universe, including your neighbor's
domicile. The manufacturer will not be responsible for any damages
or inconveniences that may result.

Neelakantan Krishnaswami

Jun 14, 2006, 6:57:58 PM
In article <4fb97sF...@individual.net>, Pascal Costanza wrote:
> Torben Ægidius Mogensen wrote:
>
>> On a similar note, is a statically typed language more or less
>> expressive than a dynamically typed language? Some would say less, as
>> you can write programs in a dynamically typed language that you can't
>> compile in a statically typed language (without a lot of encoding),
>> whereas the converse isn't true.
>
> It's important to get the levels right here: A programming language
> with a rich static type system is more expressive at the type level,
> but less expressive at the base level (for some useful notion of
> expressiveness ;).

This doesn't seem obviously the case to me. If you have static
information about your program, the compiler can use this information
to automate a lot of grunt work away.

Haskell's system of typeclasses works this way. If you tell the
compiler how to print integers and how to print lists, then when you
call a print function on a list of lists of integers, the compiler
will automatically figure out the right print function using your base
definitions. This yields an increase in Felleisen-expressiveness over
a dynamically typed language, because you would need to globally
restructure your program to achieve a similar effect.

More dramatic are the "polytypic" programming languages, which let you
automate even more by letting you write generic map, fold, and print
functions which work at every type.
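A runtime analogue of the typeclass example can be sketched in Python with `functools.singledispatch` (this is dispatch at run time, not the compile-time resolution Haskell performs, so it only illustrates the compositionality, not the static guarantee): define rendering for integers and for lists, and rendering a list of lists of integers composes automatically, with no global restructuring.

```python
from functools import singledispatch

@singledispatch
def render(x):
    raise TypeError("no rendering defined for %r" % type(x))

@render.register(int)
def _(x):
    return str(x)

@render.register(list)
def _(xs):
    # The recursive call picks the right base definition per element.
    return "[" + ", ".join(render(x) for x in xs) + "]"

print(render([[1, 2], [3]]))
```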

> This doesn't seem to capture what I hear from Haskell programmers
> who say that it typically takes quite a while to convince the
> Haskell compiler to accept their programs. (They perceive this to be
> worthwhile because of some benefits wrt correctness they claim to
> get in return.)

This is true, if you are a novice learning the language, or if you are
an expert programming with good style.

If you encode your invariants in the types, then type errors will
signal broken invariants. But: learning how to use the type system to
encode significantly complex invariants (e.g., that an abstract syntax
tree representing an HTML document actually satisfies all of the
constraints on valid HTML) takes experience to do well.

--
Neel Krishnaswami
ne...@cs.cmu.edu

Raffael Cavallaro

Jun 15, 2006, 1:42:33 AM
On 2006-06-14 15:04:34 -0400, Joachim Durchholz <j...@durchholz.org> said:

> Um... heterogeneous lists are not necessarily a sign of expressiveness.
> The vast majority of cases can be transformed to homogeneous lists
> (though these might then contain closures or OO objects).
>
> As to references to nonexistent functions - heck, I never missed these,
> not even in languages without type inference :-)
>
> I don't hold that they are a sign of *in*expressiveness either. They
> are just typical of highly dynamic programming environments such as
> Lisp or Smalltalk.

This is a typical static type advocate's response when told that users
of dynamically typed languages don't want their hands tied by a type
checking compiler:

"*I* don't find those features expressive so *you* shouldn't want them."

You'll have to excuse us poor dynamically typed language rubes -- we
find these features expressive, and we don't want to give them up just
to silence a compiler whose static type checks are of dubious value in
a world where user inputs of an often unpredictable nature can come at
a program from across a potentially malicious internet, making run-time
checks a practical necessity.

Raffael Cavallaro

Jun 15, 2006, 1:58:24 AM
On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <p...@informatimago.com> said:

> In lisp, all lists are homogenous: lists of T.

CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
(type-of elt))
(CHARACTER FIXNUM DOUBLE-FLOAT RATIO)

i.e., "heterogeneous" in the common lisp sense: having different dynamic
types, not in the H-M sense in which all lisp values are of the single
union type T.
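The same observation in Python, for comparison with the Common Lisp snippet above: the list is "just a list" to the language, but its elements carry four different dynamic types.

```python
from fractions import Fraction

# Python analogue of (loop for elt in ... collect (type-of elt)):
# a character (string), an integer, a float, and a rational.
print([type(x).__name__ for x in ["c", 1, 2.0, Fraction(2, 3)]])
```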

Pascal Costanza

Jun 15, 2006, 3:26:29 AM
Neelakantan Krishnaswami wrote:
> In article <4fb97sF...@individual.net>, Pascal Costanza wrote:
>> Torben Ægidius Mogensen wrote:
>>
>>> On a similar note, is a statically typed language more or less
>>> expressive than a dynamically typed language? Some would say less, as
>>> you can write programs in a dynamically typed language that you can't
>>> compile in a statically typed language (without a lot of encoding),
>>> whereas the converse isn't true.
>> It's important to get the levels right here: A programming language
>> with a rich static type system is more expressive at the type level,
>> but less expressive at the base level (for some useful notion of
>> expressiveness ;).
>
> This doesn't seem obviously the case to me. If you have static
> information about your program, the compiler can use this information
> to automate a lot of grunt work away.
>
> Haskell's system of typeclasses work this way. If you tell the
> compiler how to print integers, and how to print lists, then when you
> call a print function on a list of list of integers, then the compiler
> will automatically figure out the right print function using your base
> definitions. This yields an increase in Felleisen-expressiveness over
> a dynamically typed language, because you would need to globally
> restructure your program to achieve a similar effect.
>
> More dramatic are the "polytypic" programming languages, which let you
> automate even more by letting you write generic map, fold, and print
> functions which work at every type.

Yes, but these decisions are taken at compile time, without running the
program.

Torben Ægidius Mogensen

Jun 16, 2006, 5:04:33 AM

What's the difference? Dynamically typed values _are_ all members of
a single tagged union type. The main difference is that the tags
aren't always visible and that there are only a fixed, predefined
number of them.
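Torben's point can be made concrete with a small sketch (the class and tag names are invented): a dynamically typed value modelled as a (tag, payload) pair, where the implementation rather than the programmer attaches the tag, and every operation dispatches on it at run time.

```python
class Value:
    """One member of the single tagged union of all runtime values."""
    def __init__(self, tag, payload):
        self.tag, self.payload = tag, payload

def add(a, b):
    # Runtime tag dispatch, as a dynamic language's "+" would do.
    if a.tag == "int" and b.tag == "int":
        return Value("int", a.payload + b.payload)
    raise TypeError("cannot add %s and %s" % (a.tag, b.tag))

v = add(Value("int", 1), Value("int", 2))
print(v.tag, v.payload)
```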

Torben

Torben Ægidius Mogensen

Jun 16, 2006, 5:07:57 AM
Pascal Costanza <p...@p-cos.net> writes:

> Torben Ægidius Mogensen wrote:
>
> > On a similar note, is a statically typed language more or less
> > expressive than a dynamically typed language? Some would say less, as
> > you can write programs in a dynamically typed language that you can't
> > compile in a statically typed language (without a lot of encoding),
> > whereas the converse isn't true.
>
> It's important to get the levels right here: A programming language
> with a rich static type system is more expressive at the type level,
> but less expressive at the base level (for some useful notion of
> expressiveness ;).
>
> > However, I think this is misleading,
> > as it ignores the feedback issue: It takes longer for the average
> > programmer to get the program working in the dynamically typed
> > language.
>
> This doesn't seem to capture what I hear from Haskell programmers who
> say that it typically takes quite a while to convince the Haskell
> compiler to accept their programs. (They perceive this to be
> worthwhile because of some benefits wrt correctness they claim to get
> in return.)

That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language. So while it may take longer to get a program that gets
past the compiler, it takes less time to get a program that works.

Torben

Joachim Durchholz

Jun 16, 2006, 5:22:08 AM
Raffael Cavallaro schrieb:

> On 2006-06-14 15:04:34 -0400, Joachim Durchholz <j...@durchholz.org> said:
>
>> Um... heterogeneous lists are not necessarily a sign of expressiveness.
>> The vast majority of cases can be transformed to homogeneous lists
>> (though these might then contain closures or OO objects).
>>
>> As to references to nonexistent functions - heck, I never missed
>> these, not even in languages without type inference :-)
>>
>> [[snipped - doesn't seem to relate to your answer]]

>
> This is a typical static type advocate's response when told that users
> of dynamically typed languages don't want their hands tied by a type
> checking compiler:
>
> "*I* don't find those features expressive so *you* shouldn't want them."

And this is a typical dynamic type advocate's response when told that
static typing has different needs:

"*I* don't see the usefulness of static typing so *you* shouldn't want
it, either."

No ad hominem arguments, please. If you find my position undefendable,
give counterexamples.
Give a heterogeneous list that would be too awkward to live in a
statically-typed language.
Give a case of calling nonexistent functions that's useful.
You'll get your point across far better that way.

Regards,
Jo

Pascal Costanza

Jun 16, 2006, 5:47:09 AM
Torben Ægidius Mogensen wrote:
> Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:
>
>> On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <p...@informatimago.com> said:
>>
>>> In lisp, all lists are homogenous: lists of T.
>> CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
>> (type-of elt))
>> (CHARACTER FIXNUM DOUBLE-FLOAT RATIO)
>>
>> i.e., "heterogenous" in the common lisp sense: having different
>> dynamic types, not in the H-M sense in which all lisp values are of
>> the single union type T.
>
> What's the difference? Dynamically typed values _are_ all members of
> a single tagged union type.

Yes, but that's mostly a meaningless statement in a dynamically typed
language. In a dynamically typed language, you typically don't care
about the static types.

> The main difference is that the tags
> aren't always visible and that there are only a fixed, predefined
> number of them.

Depending on the language, the number of "tags" is not fixed.

Pascal Costanza

Jun 16, 2006, 5:59:25 AM
Torben Ægidius Mogensen wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>
>> Torben Ægidius Mogensen wrote:
>>
>>> On a similar note, is a statically typed language more or less
>>> expressive than a dynamically typed language? Some would say less, as
>>> you can write programs in a dynamically typed language that you can't
>>> compile in a statically typed language (without a lot of encoding),
>>> whereas the converse isn't true.
>> It's important to get the levels right here: A programming language
>> with a rich static type system is more expressive at the type level,
>> but less expressive at the base level (for some useful notion of
>> expressiveness ;).
>>
>>> However, I think this is misleading,
>>> as it ignores the feedback issue: It takes longer for the average
>>> programmer to get the program working in the dynamically typed
>>> language.
>> This doesn't seem to capture what I hear from Haskell programmers who
>> say that it typically takes quite a while to convince the Haskell
>> compiler to accept their programs. (They perceive this to be
>> worthwhile because of some benefits wrt correctness they claim to get
>> in return.)
>
> That's the point: Bugs that in dynamically typed languages would
> require testing to find are found by the compiler in a statically
> typed language.

Yes. However, unfortunately statically typed languages also reject
programs that don't have such bugs. It's a tradeoff whether you want to
spend time dealing with them or not.

> So while it may take longer to get a program that gets
> past the compiler, it takes less time to get a program that works.

That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps - especially
Figure 3.

Sacha

Jun 16, 2006, 6:10:17 AM

"Joachim Durchholz" <j...@durchholz.org> wrote in message
news:e6tt7j$b41$1...@online.de...

> Raffael Cavallaro schrieb:
>> On 2006-06-14 15:04:34 -0400, Joachim Durchholz <j...@durchholz.org> said:
>>
>>> Um... heterogeneous lists are not necessarily a sign of expressiveness.
>>> The vast majority of cases can be transformed to homogeneous lists
>>> (though these might then contain closures or OO objects).
>>>
>>> As to references to nonexistent functions - heck, I never missed these,
>>> not even in languages without type inference :-)
>>>
>>> [[snipped - doesn't seem to relate to your answer]]
>>
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Many lists are heterogenous, even in statically typed languages.
For instance, Lisp code consists of lists, with several kinds of atoms
and sub-lists.
A car dealer will sell cars, trucks and equipment.
In a statically typed language you would need to type the list on a
common ancestor. What would then be the point of static typing, as you
still need to type-check each element in order to process that list?
Sure, you can do this in a statically typed language, you just need to
make sure some relevant ancestor exists. In my experience you'll end up
with the base object class more often than not, and that's what I call
dynamic typing.
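[A hedged sketch of the alternative under discussion, with invented type and field names: in a language with algebraic data types, the "common ancestor" can be a sum type, so the list stays homogeneous at one type while holding three kinds of things. Processing still examines each element's case, as the post says, but the compiler checks that no case is forgotten.]

```haskell
-- Hypothetical car-dealer inventory as a Haskell sum type.
data Item = Car String        -- model name
          | Truck Int         -- payload in kg
          | Equipment String  -- description

-- Dispatch per element still happens, but incomplete matches are
-- reported at compile time.
label :: Item -> String
label (Car m)       = "car " ++ m
label (Truck p)     = "truck (" ++ show p ++ " kg)"
label (Equipment d) = "equipment: " ++ d

inventory :: [Item]
inventory = [Car "sedan", Truck 3000, Equipment "car lift"]
```

Running `mapM_ (putStrLn . label) inventory` would print one line per item.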

> Give a case of calling nonexistent functions that's useful.

I might want to test some other parts of my program before writing this
function. Or maybe my program will compile that function depending on
user input. As long as I get a warning for calling a non-existent
function, everything is fine.

Sacha


Hendrik Maryns

Jun 16, 2006, 6:28:54 AM

Sacha wrote:

In my experience you won’t. I almost never have a List<Object> (Java),
and when I have one, I start thinking about how I can improve the code to
get rid of it.

H.
--
Hendrik Maryns

==================
http://aouw.org
Ask smart questions, get good answers:
http://www.catb.org/~esr/faqs/smart-questions.html

Rob Thorpe

Jun 16, 2006, 6:40:46 AM

In my experience the opposite is true for many programs.
Having to actually know the precise type of every variable while
writing the program is not necessary; it's a detail often not relevant
to the core problem. The language should be able to take care of
itself.

In complex routines it can be useful for the programmer to give types
and for the compiler to issue errors when they are contradicted. But
for most routines it's just an unnecessary chore that the compiler
forces on the programmer.

Torben Ægidius Mogensen

Jun 16, 2006, 7:38:26 AM
Pascal Costanza <p...@p-cos.net> writes:

> Torben Ægidius Mogensen wrote:
> > So while it may take longer to get a program that gets
> > past the compiler, it takes less time to get a program that works.
>
> That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
> especially Figure 3.

There are many other differences between these languages than static
vs. dynamic types, and some of these differences are likely to be more
significant. What you need to test is languages with similar features
and syntax, except one is statically typed and the other dynamically
typed.

And since these languages would be quite similar, you can use the same
test persons: First let one half solve a problem in the statically
typed language and the other half the same problem in the dynamically
typed language, then swap for the next problem. If you let a dozen
persons each solve half a dozen problems, half in the statically typed
language and half in the dynamically typed language (using different
splits for each problem), you might get a useful figure.

Torben

Torben Ægidius Mogensen

Jun 16, 2006, 7:53:10 AM
"Rob Thorpe" <robert...@antenova.com> writes:

> Torben Ægidius Mogensen wrote:

Indeed. So use a language with type inference.

Torben

Dr.Ruud

Jun 16, 2006, 8:16:07 AM
Torben Ægidius Mogensen wrote:

> Bugs that in dynamically typed languages would
> require testing to find are found by the compiler in a statically

> typed language. So whil[e ]it may take [l]onger to get a program that[ ]
> gets past the compiler, it takes less time to get a program that works.

If it were that simple, I would say: compile time type inference is the
only way to go.

--
Affijn, Ruud

"Gewoon is een tijger."


Pascal Costanza

Jun 16, 2006, 10:48:28 AM
Torben Ægidius Mogensen wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>

...and until then claims about the influence of static type systems on
the speed with which you can implement working programs are purely
guesswork. That's the only point I need to make to show that your
original unqualified statement, namely that it takes less time to get a
program that works, is incorrect.

genea

Jun 16, 2006, 11:13:42 AM

Torben Ægidius Mogensen wrote:

> There are several aspects relevant to this issue, some of which are:
> - Compactness: How much do I have to type to do what I want?

......


> - Naturality: How much effort does it take to convert the concepts of
> my problem into the concepts of the language?
> - Feedback: Will the language provide sensible feedback when I write
> nonsensical things?
> - Reuse: How much effort does it take to reuse/change code to solve a
> similar problem?

.....

I am fairly new to Haskell, but the compactness of the language and the
way you can express a lot in a very small amount of real estate is very
important. I used to program a lot in Forth back in the 80's... okay,
I'm a dinosaur!... but "good" definitions were usually very short and
sweet. Unicon/Icon, which I used {still do!} in the imperative world, is
very compact. I will give an example that covers compactness and
reusability, and which, because of static typing, will give back
mis-type info when you load a new "a" or "xs" into it to form a new
function.
-- What is happening below: a is being replaced with the curried
-- function ((*) 3) and xs with a list, [1..6]. When that definition of
-- f is used, it is type checked to see if the elements in xs match the
-- type of a; so if this were going to be compiled, it would be checked
-- and guaranteed to work.

Prelude> let f a xs = putStr $ foldr (++) "\n" $ map (((++) "\n") . show . a) xs
Prelude> :t f
f :: forall a b. (Show b) => (a -> b) -> [a] -> IO ()
Prelude> f ((*) 3) [1..6]
3
6
9
12
15
18
Prelude>

Another substitution of parameters, using the same definition of f: the
polymorphic parameters allowed in Haskell add to the versatility and
reusability angle:

Prelude> f sqrt [0.5,1.0..4]
0.7071067811865476
1.0
1.224744871391589
1.4142135623730951
1.5811388300841898
1.7320508075688772
1.8708286933869707
2.0

Same function f, now used with a different type:
[0.5,1.0..4] :: forall a. (Fractional a, Enum a) => [a]

I don't know, but this just makes programming fun, for me anyway, and
if it is fun, it is expressive. I've heard this statement made about
Ruby and Unicon, to name a few; some would say Python. But it really
applies to the functional languages too: with all their strict typing,
given the type inference mechanisms, it isn't usually that big a deal.
If you just get into it, and can learn to take some constructive
criticism from your compiler, well hey, it is a really patient
teacher... you might get frustrated at times, but the compiler will
happily remind you of the same type mis-matches until you get a handle
on some concept, and never once complain.
Happy Programming to all!
-- gene

Raffael Cavallaro

Jun 16, 2006, 11:49:23 AM
On 2006-06-16 05:22:08 -0400, Joachim Durchholz <j...@durchholz.org> said:

> And this is a typical dynamic type advocate's response when told that
> static typing has different needs:
>
> "*I* don't see the usefulness of static typing so *you* shouldn't want
> it, either."

But I haven't made this sort of argument. I never said you shouldn't
use static typing if you want to. There are indeed types of software
where one wants the guarantees provided by static type checks. For
example, software that controls irreplaceable or very expensive
equipment such as space craft, or software that can kill people if it
fails such as software for aircraft or medical devices. The problem for
static typing advocates is that most software is not of this type.

There is a very large class of software where user inputs are
unpredictable and/or where input data comes from an untrusted source.
In these cases run-time checks are going to be needed anyway so the
advantages of static type checking are greatly reduced - you end up
doing run-time checks anyway, precisely the thing you were trying to
avoid by doing static analysis. In software like this it isn't worth
satisfying a static type checker because you don't get much of the
benefit anyway and it means forgoing such advantages of dynamic typing
as being able to run and test portions of a program before other parts
are written (forward references to as yet nonexistent functions).

Ideally one wants a language with switchable typing - static where
possible and necessary, dynamic elsewhere. To a certain extent this is
what common lisp does but it requires programmer declarations. Some
implementations try to move beyond this by doing type inference and
alerting the programmer to potential static guarantees that the
programmer could make that would allow the compiler to do a better job.

In effect the argument comes down to which kind of typing one thinks
should be the default. Dynamic typing advocates think that static
typing is the wrong default. The notion that static typing can prove
program correctness is flawed - it can only prove that type constraints
are not violated but not necessarily that program logic is correct. It
seems to me that if we set aside that class of software where safety is
paramount - mostly embedded software such as aircraft and medical
devices - we are left mostly with efficiency concerns. The 80-20 rule
suggests that most code doesn't really need the efficiency provided by
static guarantees. So static typing should be invoked for that small
portion of a program where efficiency is really needed and that dynamic
typing should be the default elswhere. This is how common lisp works -
dynamic typing by default with static guarantees available where one
needs them.

Darren New

Jun 16, 2006, 12:45:46 PM
Joachim Durchholz wrote:
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Write a function that takes an arbitrary set of arguments and stores
them into a structure allocated on the heap.

> Give a case of calling nonexistent functions that's useful.

See the Tcl "unknown" proc, used for interactive command expansion,
dynamic loading of code on demand, etc.

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

Darren New

Jun 16, 2006, 12:59:41 PM
Joachim Durchholz wrote:
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Printf()?

Matthias Blume

Jun 16, 2006, 2:10:34 PM
Darren New <dn...@san.rr.com> writes:

> Joachim Durchholz wrote:
>> Give a heterogenous list that would be too awkward to live in a
>> statically-typed language.
>
> Printf()?

Very good statically typed versions of printf exist. See, e.g.,
Danvy's unparsing combinators.
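[As a rough illustration of the idea only (this is not Danvy's actual code; the combinator names are invented here): the format is a composition of functions rather than a string, so the type checker sees exactly how many arguments, of which types, the format consumes.]

```haskell
-- Continuation-passing "format combinators" in the spirit of
-- functional unparsing. Each combinator extends an accumulated string.
lit :: String -> (String -> a) -> String -> a
lit s k acc = k (acc ++ s)

int :: (String -> a) -> String -> Int -> a
int k acc n = k (acc ++ show n)

str :: (String -> a) -> String -> String -> a
str k acc s = k (acc ++ s)

-- Run a format by feeding it the identity continuation and an empty
-- accumulator; the format's own type determines the arguments taken.
sprintf :: ((String -> String) -> String -> a) -> a
sprintf fmt = fmt id ""
```

For example, `sprintf (str . lit " is " . int) "age" 42` evaluates to `"age is 42"`, while `sprintf (str . lit " is " . int) "age" "42"` is rejected at compile time.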

Darren New

Jun 16, 2006, 3:06:46 PM
Matthias Blume wrote:
> Very good statically typed versions of printf exist. See, e.g.,
> Danvy's unparsing combinators.

That seems to ignore the fact that the pattern is a string, which means
that printf's first argument in Danvy's mechanism has to be a literal.
You can't read the printf format from a configuration file (for example)
to support separate languages. It doesn't look like the version of
printf that can print its arguments in an order different from the order
provided in the argument list is supported either; something like "%3$d"
or some such.

Second, what's the type of the argument that printf, sprintf, fprintf,
kprintf, etc all pass to the subroutine that actually does the
formatting? (Called vprintf, I think?)

Matthias Blume

Jun 16, 2006, 4:45:26 PM
Darren New <dn...@san.rr.com> writes:

> Matthias Blume wrote:
>> Very good statically typed versions of printf exist. See, e.g.,
>> Danvy's unparsing combinators.
>
> That seems to ignore the fact that the pattern is a string, which
> means that printf's first argument in Danvy's mechanism has to be a
> literal.

In Danvy's solution, the format argument is not a string.

> You can't read the printf format from a configuration file
> (for example) to support separate languages.

You don't need to do that if you want to support separate languages.
Moreover, reading the format string from external input is a good way
of opening your program to security attacks, since ill-formed data on
external media are then able to crash your program.

> It doesn't look like the
> version of printf that can print its arguments in an order different
> from the order provided in the argument list is supported either;
> something like "%3$d" or some such.

I am not familiar with the version of printf you are referring to, but
I am sure one could adapt Danvy's solution to support such a thing.

> Second, what's the type of the argument that printf, sprintf, fprintf,
> kprintf, etc all pass to the subroutine that actually does the
> formatting? (Called vprintf, I think?)

Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
library) is not necessarily structured that way. I don't see the
problem with typing, though.

Matthias

Darren New

Jun 16, 2006, 5:10:50 PM
Matthias Blume wrote:
> In Danvy's solution, the format argument is not a string.

That's what I said, yes.

>>You can't read the printf format from a configuration file
>>(for example) to support separate languages.

> You don't need to do that if you want to support separate languages.

That's kind of irrelevant to the discussion. We're talking about
collections of dynamically-typed objects, not the best mechanisms for
supporting I18N.

> Moreover, reading the format string from external input is a good way
> of opening your program to security attacks, since ill-formed data on
> external media are then able to crash you program.

Still irrelevant to the point.

> I am sure one could adapt Danvy's solution to support such a thing.

I'm not. It's consuming arguments as it goes, from what I understood of
the paper. It's translating, essentially, into a series of function
calls in argument order.

> Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
> library) is not necessarily structured that way. I don't see the
> problem with typing, though.

You asked for an example of a heterogenous list that would be awkward in
a statically strongly-typed language. The arguments to printf() count,
methinks. What would the second argument to apply be if the first
argument is printf (since I'm reading this in the LISP group)?

Joachim Durchholz

Jun 16, 2006, 5:57:10 PM
Sacha wrote:

>
> Many lists are heterogenous, even in statically typed languages.
> For instance, Lisp code consists of lists, with several kinds of atoms
> and sub-lists.

Lisp isn't exactly a statically-typed language :-)

> A car dealer will sell cars, trucks and equipment..
> In a statically typed language you would need to type the list on a common
> ancestor...

Where's the problem with that?

BTW the OO way isn't the only way to set up a list from heterogenous data.
In statically-typed FPL land, lists require homogenous data types all
right, but the list elements aren't restricted to data - they can be
functions as well.
Now the other specialty of FPLs is that you can construct functions at
run-time - you take a function, fill some of its parameters and leave
others open - the result is another function. And since you'll iterate
over the list and will do homogenous processing over it, you construct
the function so that it will do all the processing that you'll later need.

The advantage of the FPL way over the OO way is that you can use ad-hoc
functions. You don't need precognition to know which kinds of data
should be lumped under a common supertype - you simply write and/or
construct functions of a common type that will go into the list.
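[A minimal sketch of that technique, with invented names: the "heterogeneous" data disappears into partially applied functions, and the list itself stays homogeneous at a single function type.]

```haskell
-- Each entry originated as different data (a fixed fee, a discount
-- rate, a rebate), but the processing loop only needs Int -> Int,
-- so we store closures of that one type.
adjustments :: [Int -> Int]
adjustments = [ (+ 500)                  -- add a fixed fee
              , (\p -> p * 9 `div` 10)   -- apply a 10% discount
              , subtract 250             -- subtract a rebate
              ]

-- Homogeneous processing over the list: thread a price through
-- every adjustment in order.
applyAll :: Int -> Int
applyAll price = foldl (\p f -> f p) price adjustments
```

The point of the design is that no common supertype has to be invented up front; any ad-hoc closure of type `Int -> Int` can join the list.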

> What would then be the point of static typing, as you still need to
> type-check each element in order to process that list?

Both OO and FPL construction allow static type checks.

> Sure you can do this in a statically-typed language, you just need to
> make sure some relevant ancestor exists. In my experience you'll end up
> with the base object-class more often than not, and that's what I call
> dynamic typing.

Not quite - the common supertype is more often than not actually useful.

However, getting the type hierarchy right requires a *lot* of
experimentation and fine-tuning. You can easily spend a year or more
(sometimes *much* more) with that (been there, done that). Even worse,
once the better hierarchy is available, you typically have to adapt all
the client code that uses it (been there, done that, too).

Those are the problems in OO land. FPL land doesn't have them - if the
element type is just "function mapping two integers to another integer",
reworking the data types that went into the functions of the list
doesn't require those global changes.

>> Give a case of calling nonexistent functions that's useful.
>
> I might want to test some other parts of my program before writing this
> function.

That's unrelated to dynamic typing. All that's needed is an environment
that throws an exception once such an undefined function is called,
instead of aborting the compilation.

I'll readily admit that very few static languages offer such an
environment. (Though, actually, C interpreters do exist.)

> Or maybe will my program compile that function depending on user input.

Hmm... do I really want this kind of power in the user's hands in the age
of malware?

> As long as i get a warning for calling a non-existing function, everything
> is fine.

That depends.
For software that's written to run once (or very few times), and where
somebody who's able to correct problems is always nearby, that's a
perfectly viable strategy.
For safety-critical software where problems must be handled within
seconds (or an even shorter period of time), you want to statically
ensure as many properties as you can. You'll take not just static
typing, you also want to ascertain value ranges and dozens of other
properties. (In Spark, an Ada subset, this is indeed done.)

Between those extremes, there's a broad spectrum.

Joachim Durchholz

Jun 16, 2006, 5:59:07 PM
Raffael Cavallaro wrote:

> There is a very large class of software where user inputs are
> unpredictable and/or where input data comes from an untrusted source. In
> these cases run-time checks are going to be needed anyway so the
> advantages of static type checking are greatly reduced - you end up
> doing run-time checks anyway, precisely the thing you were trying to
> avoid by doing static analysis.

There's still a large class of errors that *can* be excluded via type
checking.

> Ideally one wants a language with switchable typing - static where
> possible and necessary, dynamic elsewhere.

That has been my position for a long time now.

> To a certain extent this is
> what common lisp does but it requires programmer declarations. Some
> implementations try to move beyond this by doing type inference and
> alerting the programmer to potential static guarantees that the
> programmer could make that would allow the compiler to do a better job.

I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
add static checks.
With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they interfere
just minimally with the programming process. (A classical example is
Hindley-Milner type inference systems. Typical reports from languages
with HM systems say that you can have it verify thousand-line programs
without a single type annotation in the code. That's actually far better
than you'd need - you'd *want* to document the types at least on the
major internal interfaces after all *grin*.)
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.

> It seems to
> me that if we set aside that class of software where safety is paramount
> - mostly embedded software such as aircraft and medical devices - we are
> left mostly with efficiency concerns.

Nope. Efficiency has taken a back seat. Software is getting slower
(barely offset by increasing machine speed), and newer languages even
don't statically typecheck everything (C++, Java). (Note that the
impossibility to statically typecheck everything in OO languages doesn't
mean that it's impossible to do rigorous static checking in general.
FPLs have been quite rigorous about static checks; the only cases when
an FPL needs to dynamically typecheck its data structures is after
unmarshalling one from an untyped data source such as a network stream,
a file, or an IPC interface.)

The prime factor nowadays seems to be maintainability.

And the difference here is this:
With dynamic typing, I have to rely on the discipline of the programmers
to document interfaces.
With static typing, the compiler will infer (and possibly document) at
least part of their semantics (namely the types).

> So static typing should be invoked for that small portion of a program
> where efficiency is really needed and that dynamic typing should be the
> default elswhere. This is how common lisp works - dynamic typing by
> default with static guarantees available where one needs them.

Actually static typing seems to become more powerful at finding errors
as the program size increases.
(Yes, that's a maintainability argument. Efficiency isn't *that*
important; since maintenance is usually the most important single
factor, squelching bugs even before testing is definitely helpful.)

Regards,
Jo

Joachim Durchholz

Jun 16, 2006, 6:09:26 PM
Darren New wrote:

> Joachim Durchholz wrote:
>> Give a heterogenous list that would be too awkward to live in a
>> statically-typed language.
>
> Write a function that takes an arbitrary set of arguments and stores
> them into a structure allocated on the heap.

If the set of arguments is really arbitrary, then the software can't do
anything with it. In that case, the type is simply "opaque data block",
and storing it in the heap requires nothing more specific than that of
"opaque data block".
There's more to this. If we see a function with a parameter type of
"opaque data block", and there's no function available except copying
that data and comparing it for equality, then from simply looking at the
function's signature, we'll know that it won't inspect the data. More
interestingly, we'll know that funny stuff in the data can't trigger
bugs in the code - in the context of a security audit, that's actually a
pretty strong guarantee, since the analysis can stop at the function's
interface and doesn't have to dig into the function's implementation.
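[A hedged Haskell sketch of storing mixed values in one structure while recording, in the type, exactly which operations remain available on them. It uses the ExistentialQuantification extension; all names here are invented for the example.]

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Each element may be any type at all, as long as it supports Show.
-- Consumers of the list can do nothing with the payload except show
-- it - the kind of interface-level guarantee discussed above.
data Opaque = forall a. Show a => Opaque a

heapStore :: [Opaque]
heapStore = [Opaque (42 :: Int), Opaque "hello", Opaque True]

render :: Opaque -> String
render (Opaque x) = show x
```

The list is heterogeneous in contents but homogeneous in type, and the type `Opaque` documents the complete capability surface of its elements.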

>> Give a case of calling nonexistent functions that's useful.
>
> See the Tcl "unknown" proc, used for interactive command expansion,
> dynamic loading of code on demand, etc.

Not related to dynamic typing, I fear - I can easily envision
alternatives to that in a statically-typed context.

Of course, you can't eliminate *all* run-time type checking. I already
mentioned unmarshalling data from an untyped source; another possibility
is run-time code compilation (highly dubious in a production system but
of value in a development system).

However, that's some very specialized applications, easily catered for
by doing a dynamic type check plus a thrown exception in case the types
don't match. I still don't see a convincing argument for making dynamic
typing the standard policy.

Regards,
Jo

Raffael Cavallaro

Jun 16, 2006, 6:25:09 PM
On 2006-06-16 17:59:07 -0400, Joachim Durchholz <j...@durchholz.org> said:

> I think it's easier to start with a good (!) statically-typed language
> and relax the checking, than to start with a dynamically-typed one and
> add static checks.
> With the right restrictions, a language can make all kinds of strong
> guarantees, and it can make it easy to construct software where static
> guarantees abound. If the mechanisms are cleverly chosen, they
> interfere just minimally with the programming process. (A classical
> example it Hindley-Milner type inference systems. Typical reports from
> languages with HM systems say that you can have it verify thousand-line
> programs without a single type annotation in the code. That's actually
> far better than you'd need - you'd *want* to document the types at
> least on the major internal interfaces after all *grin*.)
> With a dynamically-typed language, programming style tends to evolve in
> directions that make it harder to give static guarantees.

This is purely a matter of programming style. For explorative
programming it is easier to start with dynamic typing and add static
guarantees later rather than having to make decisions about
representation and have stubs for everything right from the start. The
lisp programming style is arguably all about using heterogenous lists
and forward references in the repl for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program gels.
Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also ensures
that you never find the right program to code in the first place. This
is because you don't have the freedom to explore possible solutions
without having to break your concentration to satisfy the nagging of a
static type checker.

Joachim Durchholz

Jun 17, 2006, 7:03:19 AM
Raffael Cavallaro wrote:

> On 2006-06-16 17:59:07 -0400, Joachim Durchholz <j...@durchholz.org> said:
>
>> I think it's easier to start with a good (!) statically-typed language
>> and relax the checking, than to start with a dynamically-typed one and
>> add static checks.
>
> This is purely a matter of programming style. For explorative
> programming it is easier to start with dynamic typing and add static
> guarantees later rather than having to make decisions about
> representation and have stubs for everything right from the start.

Sorry for being ambiguous - I meant to talk about language evolution.

I agree that static checking could (and probably should) be slightly
relaxed: compilers should still do all the diagnostics that current-day
technology allows, but any problems shouldn't abort the compilation.
It's always possible to generate code that will throw an exception as
soon as a problematic piece of code becomes actually relevant; depending
on the kind of run-time support, this might abort the program, abort
just the computation, or open an interactive facility to correct and/or
modify the program on the spot (the latter is the norm in highly dynamic
systems like those for Lisp and Smalltalk, and I consider this actually
useful).

I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).

Regards,
Jo

Raffael Cavallaro

Jun 17, 2006, 9:08:59 AM
On 2006-06-17 07:03:19 -0400, Joachim Durchholz <j...@durchholz.org> said:

> I don't see static checking and explorative programming as opposites.
> Of course, in practice, environments that combine these don't seem to
> exist (except maybe in experimental or little-known state).

Right. Unfortunately the philosophical leanings of those who design
these two types of languages tend to show themselves as different
tastes in development style - for example, static type advocates don't
often want a very dynamic development environment that would allow a
program to run for testing even when parts of it arent defined yet, and
dynamic type advocates don't want a compiler preventing them from doing
so because the program can't yet be proven statically correct. Dynamic
typing advocates don't generally want a compiler error for ambiguous
typing - for example, adding a float and an int - but static typing
advocates generally do. Of course there's little reason one couldn't
have a language that allowed the full range to be switchable so that
programmers could tighten up compiler warnings and errors as the
program becomes more fully formed. Unfortunately we're not quite there
yet. For my tastes something like sbcl*, with its type inference and
very detailed warnings and notes is as good as it gets for now. I can
basically ignore warnings and notes early on, but use them to allow the
compiler to improve the code it generates once the program is doing
what I want correctly.

[*] I don't mean to exclude other common lisp implementations that do
type inference here - I just happen to use sbcl.

Torben Ægidius Mogensen

Jun 19, 2006, 4:19:05 AM
Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:

> This is purely a matter of programming style. For explorative
> programming it is easier to start with dynamic typing and add static
> guarantees later rather than having to make decisions about
> representation and have stubs for everything right from the start.

I think you are confusing static typing with having to write types
everywhere. With type inference, you only have to write a minimum of
type information (such as datatype declarations), so you can easily do
explorative programming in such languages -- I don't see any advantage
of dynamic typing in this respect.
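[A tiny illustration of that point, with an invented example: only the datatype declaration mentions types. The two functions below carry no signatures, yet are fully statically checked, with their types inferred.]

```haskell
-- The only type information written down is the datatype itself.
data Tree = Leaf | Node Tree Int Tree

-- No signatures: the compiler infers insert :: Int -> Tree -> Tree
-- and toList :: Tree -> [Int], and rejects misuse at compile time.
insert x Leaf = Node Leaf x Leaf
insert x (Node l y r)
  | x < y     = Node (insert x l) y r
  | otherwise = Node l y (insert x r)

toList Leaf = []
toList (Node l x r) = toList l ++ [x] ++ toList r
```

For example, `toList (foldr insert Leaf [3,1,2])` gives `[1,2,3]`, and a call like `insert "a" Leaf` would fail to compile rather than fail at run time.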

> The
> lisp programming style is arguably all about using heterogenous lists
> and forward references in the repl for everything until you know what
> it is that you are doing, then choosing a more appropriate
> representation and filling in forward references once the program
> gels. Having to choose representation right from the start and needing
> working versions (even if only stubs) of *every* function you call may
> ensure type correctness, but many programmers find that it also
> ensures that you never find the right program to code in the first
> place.

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function. So you can't really do much exploration until
you have some definitions.

I expect a lot of the exploration you do with incomplete programs
amount to the feedback you get from type inference.

> This is because you don't have the freedom to explore possible
> solutions without having to break your concentration to satisfy the
> nagging of a static type checker.

I tend to disagree. I have programmed a great deal in Lisp, Scheme,
Prolog (all dynamically typed) and SML and Haskell (both statically
typed). And I don't find that I need more stubs etc. in the latter.
In fact, I do a lot of explorative programming when writing programs
in ML and Haskell. And I find type inference very helpful in this, as
it guides the direction of the exploration, so it is more like a
safari with a local guide than a blindfolded walk in the jungle.

Torben

Rob Thorpe

Jun 19, 2006, 4:48:10 AM

Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables. I
occasionally use CMUCL and SBCL which do type inference, which is
useful at improving generated code quality. It can also warn the
programmer if they reuse a variable in a context implying that
it's a different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing. But I suppose you can see it that way
around.

Torben Ægidius Mogensen

Jun 19, 2006, 6:03:29 AM
"Rob Thorpe" <robert...@antenova.com> writes:

> Torben Ægidius Mogensen wrote:
> > "Rob Thorpe" <robert...@antenova.com> writes:
> >
> > > Torben Ægidius Mogensen wrote:
> >
> > Indeed. So use a language with type inference.
>
> Well, for most purposes that's the same as dynamic typing since the
> compiler doesn't require you to label the type of your variables.

That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations. Dynamic typing means
that such violations are detected at run-time. This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all. The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value. ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.

Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed. It just moves the burden of assigning the
types from the programmer to the compiler. And (for HM type systems)
the compiler doesn't "guess" at a type -- it finds the unique most
general type from which all other legal types (within the type system)
can be found by instantiation.
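A sketch of "the unique most general type from which all other legal types can be found by instantiation" (written in TypeScript, which is not an HM language, so the general type must be spelled out; in SML or Haskell it would itself be inferred from the bare definition `compose f g x = f (g x)`):

```typescript
// The most general type of function composition. In an HM language
// this polymorphic type is inferred with no annotations at all.
function compose<A, B, C>(f: (b: B) => C, g: (a: A) => B): (a: A) => C {
  return (x) => f(g(x));
}

// Every legal concrete type is an instantiation of the general one:
const inc = (n: number) => n + 1;
const toStr = (n: number) => String(n);
// here A = number, B = number, C = string
const incThenShow = compose(toStr, inc);
```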

> I
> occasionally use CMUCL and SBCL which do type inference, which is
> useful at improving generated code quality. It can also warn the
> programmer if they reuse a variable in a context implying that
> it's a different type, which is useful.
>
> I see type inference as an optimization of dynamic typing rather than a
> generalization of static typing. But I suppose you can see it that way
> around.

Some compilers for dynamically typed languages will do a type analysis
similar to type inference, but they will happily compile a program
even if they can't guarantee static type safety.

Such "type inference" can be seen as an optimisation of dynamic
typing, as it allows the compiler to omit _some_ of the runtime type
checks. I prefer the term "soft typing" for this, though, so as not
to confuse with static type inference.

Soft typing can give feedback similar to that of type inference in
terms of identifying potential problem spots, so in that respect it is
similar to static type inference, and you might get similar fast code
development. You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.

Torben

George Neuner

Jun 19, 2006, 6:11:15 AM
On 19 Jun 2006 10:19:05 +0200, tor...@app-3.diku.dk (Torben Ægidius
Mogensen) wrote:


>If you don't have definitions (stubs or complete) of the functions you
>use in your code, you can only run it up to the point where you call
>an undefined function. So you can't really do much exploration until
>you have some definitions.

Well in Lisp that just drops you into the debugger where you can
supply the needed return data and continue. I agree that it is not
something often needed.


>I expect a lot of the exploration you do with incomplete programs
>amount to the feedback you get from type inference.

The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.

I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place. I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

George
--
for email reply remove "/" from address

Torben Ægidius Mogensen

Jun 19, 2006, 7:53:01 AM
George Neuner <gneuner2/@comcast.net> writes:

> On 19 Jun 2006 10:19:05 +0200, tor...@app-3.diku.dk (Torben Ægidius
> Mogensen) wrote:

> >I expect a lot of the exploration you do with incomplete programs
> >amount to the feedback you get from type inference.
>
> The ability to write functions and test them immediately without
> writing a lot of supporting code is _far_ more useful to me than type
> inference.

I can't see what this has to do with static/dynamic typing. You can
test individual functions in isolation in statically typed languages
too.



> I'm not going to weigh in on the static v dynamic argument ... I think
> both approaches have their place. I am, however, going to ask what
> information you think type inference can provide that substitutes for
> algorithm or data structure exploration.

None. But it can provide a framework for both and catch some types of
mistakes early.

Torben

Chris Smith

Jun 19, 2006, 11:20:20 AM
Torben Ægidius Mogensen <tor...@app-3.diku.dk> wrote:
> That's not really the difference between static and dynamic typing.
> Static typing means that there exists a typing at compile-time that
> guarantees against run-time type violations. Dynamic typing means
> that such violations are detected at run-time. This is orthogonal to
> strong versus weak typing, which is about whether such violations are
> detected at all. The archetypal weakly typed language is machine code
> -- you can happily load a floating point value from memory, add it to
> a string pointer and jump to the resulting value. ML and Scheme are
> both strongly typed, but one is statically typed and the other
> dynamically typed.

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed". If anyone considers "untyped" to be pejorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed." We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations". At runtime, there is merely correct code and incorrect
code. To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed"). In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.

This cleaner terminology eliminates a lot of confusion. For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error. For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.

This also leads us to define something like "language safety" to
encapsulate what we previously would have meant by the phrase "strongly
dynamically typed language". This also is a more general concept than
we had before. Language safety refers to a language having well-defined
behavior for as many operations as feasible, so that it's less likely
that someone will do something spectacularly bad. Language safety may
be achieved either by a type system or by runtime checks. Again, it's
not absolute... I'm aware of no language that is completely determinate,
at least if it supports any kind of concurrency.

This isn't just a matter of preference in terminology. The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone. This is simply
not so. If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all. Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to conceive as being
solvable by types.

> Anyway, type inference for statically typed langauges don't make them
> any more dynamically typed.

Indeed it does not. Unless it weakens the ability of a compiler to
prove the absence of certain program behaviors (which type inference
does not), it doesn't move anything on the scale toward a type-free
language.

That being said, though, it is considered a feature of some programming
languages that the programmer is asked to repeat type information in a
few places. The compiler may not need the information, but for
precisely the reason that the information is redundant, the compiler is
then able to check the consistency of the programmer in applying the
type. I won't get into precisely how useful this is, but it is
nevertheless present as an advantage to outweigh the wordiness.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation

Rob Thorpe

Jun 19, 2006, 11:44:49 AM
Chris Smith wrote:
> Torben Ægidius Mogensen <tor...@app-3.diku.dk> wrote:
> > That's not really the difference between static and dynamic typing.
> > Static typing means that there exists a typing at compile-time that
> > guarantees against run-time type violations. Dynamic typing means
> > that such violations are detected at run-time. This is orthogonal to
> > strong versus weak typing, which is about whether such violations are
> > detected at all. The archetypal weakly typed language is machine code
> > -- you can happily load a floating point value from memory, add it to
> > a string pointer and jump to the resulting value. ML and Scheme are
> > both strongly typed, but one is statically typed and the other
> > dynamically typed.
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed". If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed." We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.

I don't think dynamic typing is that nebulous. I remember this being
discussed elsewhere some time ago, I'll post the same reply I did then
..


A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

Some people use dynamic typing as a word for latent typing, others use
it to mean something slightly different. But for most purposes the
definition above works for dynamic typing also.

Untyped and type-free mean something else: they mean no type checking
is done.

Pascal Costanza

Jun 19, 2006, 12:22:02 PM
Chris Smith wrote:
> Torben Ægidius Mogensen <tor...@app-3.diku.dk> wrote:
>> That's not really the difference between static and dynamic typing.
>> Static typing means that there exists a typing at compile-time that
>> guarantees against run-time type violations. Dynamic typing means
>> that such violations are detected at run-time. This is orthogonal to
>> strong versus weak typing, which is about whether such violations are
>> detected at all. The archetypal weakly typed language is machine code
>> -- you can happily load a floating point value from memory, add it to
>> a string pointer and jump to the resulting value. ML and Scheme are
>> both strongly typed, but one is statically typed and the other
>> dynamically typed.
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed". If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed." We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.

The words "untyped" or "type-free" only make sense in a purely
statically typed setting. In a dynamically typed setting, they are
meaningless, in the sense that there are _of course_ types that the
runtime system respects.

Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.
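A minimal sketch of the type-tag view (TypeScript/JavaScript, where `typeof` literally reads the tag a value carries at run time; `dynAdd` is a made-up name):

```typescript
// Dynamic checking modeled directly: `typeof` inspects the tag a
// value carries at run time, and a mismatch is detected only when
// the operation actually executes, not at compile time.
function dynAdd(a: unknown, b: unknown): number {
  if (typeof a === "number" && typeof b === "number") {
    return a + b; // tags checked, safe to proceed
  }
  throw new TypeError("runtime type error: expected two numbers");
}
```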

> By way of extending the point, let me mention that there is no such
> thing as a universal class of things that are called "run-time type
> violations". At runtime, there is merely correct code and incorrect
> code.

No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.

> To the extent that anything is called a "type" at runtime, this
> is a different usage of the word from the usage by which we may define
> languages as being statically typed (which means just "typed"). In
> typed OO languages, this runtime usage is often called the "class", for
> example, to distinguish it from type.

What type of person are you to tell other people what terminology to use? ;)

Ha! Here I used "type" in just another sense of the word. ;)

It is important to know the context in which you are discussing things.
For example, "we" Common Lispers use the term "type" as defined in
http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_t.htm . You
cannot possibly argue that "our" use of the word "type" is incorrect
because in "our" context, when we talk about Common Lisp, the use of the
word "type" better be consistent with that definition. (You can say that
you don't like the definition, that it is unsound, or whatever, but that
doesn't change anything here.)

> This cleaner terminology eliminates a lot of confusion. For example, it
> clarifies that there is no binary division between strongly typed
> languages and weakly typed languages, since the division between a "type
> error" and any other kind of error is arbitrary, depending only on
> whether the type system in a particular language happens to catch that
> error. For example, type systems have been defined to try to catch unit
> errors in scientific programming, or to catch out-of-bounds array
> indices... yet these are not commonly called "type errors" only because
> such systems are not in common use.

What type system catches division by zero? That is, statically? Would
you like to program in such a language?

> This isn't just a matter of preference in terminology. The definitions
> above (which are, in my experience, used widely by most non-academic
> language design discussions) actually limit our understanding of
> language design by pretending that certain delicate trade-offs such as
> the extent of the type system, or which language behavior is allowed to
> be non-deterministic or undefined, are etched in stone. This is simply
> not so. If types DON'T mean a compile-time method for proving the
> absence of certain program behaviors, then they don't mean anything at
> all. Pretending that there's a distinction at runtime between "type
> errors" and "other errors" serves only to confuse things and
> artificially limit which problems we are willing to conceive as being
> solvable by types.

Your problem doesn't exist. Just say "types" when you're amongst your
own folks, and "static types" when you're amongst a broader audience,
and everything's fine. Instead of focusing on terminology, just focus on
the contents.

Chris Smith

Jun 19, 2006, 12:31:55 PM
Rob Thorpe <robert...@antenova.com> wrote:
> A language is latently typed if a value has a property - called its
> type - attached to it, and given its type it can only represent values
> defined by a certain class.

I'm assuming you mean "class" in the general sense, rather than in the
sense of a specific construct of some subset of OO programming
languages.

Now I define a class of values called "correct" values. I define these
to be those values for which my program will produce acceptable results.
Clearly there is a defined class of such values: (1) they are
immediately defined by the program's specification for those lines of
code that produce output; (2) if they are defined for the values that
result from any expression, then they are defined for the values that
are used by that expression; and (3) for any value for which correctness
is not defined by (1) or (2), we may define its "correct" values as the
class of all possible values. Now, by your definition, any language
which provides checking of that property of correctness for values is
latently typed. Of course, there are no languages that assign this
specific class of values; but ANY kind of correctness checking on values
that a language does (if it's useful at all) is a subset of the perfect
correctness checking system above. Apparently, we should call all such
systems "latent type systems". Can you point out a language that is not
latently typed?

I'm not trying to poke holes in your definition for fun. I am proposing
that there is no fundamental distinction between the kinds of problems
that are "type problems" and those that are not. Types are not a class
of problems; they are a class of solutions. Languages that solve
problems in ways that don't assign types to variables are not typed
languages, even if those same problems may have been originally solved
by type systems.

> Untyped and type-free mean something else: they mean no type checking
> is done.

Hence, they don't exist, and the definitions being used here are rather
pointless.

George Neuner

Jun 19, 2006, 1:33:23 PM
On 19 Jun 2006 13:53:01 +0200, tor...@app-1.diku.dk (Torben Ægidius
Mogensen) wrote:

>George Neuner <gneuner2/@comcast.net> writes:
>
>> On 19 Jun 2006 10:19:05 +0200, tor...@app-3.diku.dk (Torben Ægidius
>> Mogensen) wrote:
>
>> >I expect a lot of the exploration you do with incomplete programs
>> >amount to the feedback you get from type inference.
>>
>> The ability to write functions and test them immediately without
>> writing a lot of supporting code is _far_ more useful to me than type
>> inference.
>
>I can't see what this has to do with static/dynamic typing. You can
>test individual functions in isolation in statically typed languages
>too.

It has nothing to do with static/dynamic typing and that was the point
... that support for exploratory programming is orthogonal to the
language's typing scheme.

Chris Smith

Jun 19, 2006, 1:57:08 PM
Pascal Costanza <p...@p-cos.net> wrote:
> Types can be represented at runtime via type tags. You could insist on
> using the term "dynamically tagged languages", but this wouldn't change
> a lot. Exactly _because_ it doesn't make sense in a statically typed
> setting, the term "dynamically typed language" is good enough to
> communicate what we are talking about - i.e. not (static) typing.

Okay, fair enough. It's certainly possible to use the same sequence of
letters to mean two different things in different contexts. The problem
arises, then, when Torben writes:

: That's not really the difference between static and dynamic typing.
: Static typing means that there exists a typing at compile-time that
: guarantees against run-time type violations. Dynamic typing means
: that such violations are detected at run-time.

This is clearly not using the word "type" to mean two different things
in different contexts. Rather, it is speaking under the mistaken
impression that "static typing" and "dynamic typing" are varieties of
some general thing called "typing." In fact, the phrase "dynamically
typed" was invented to do precisely that. My argument is not really
with LISP programmers talking about types, by which they would mean
approximately the same thing Java programmers mean by "class." My point
here concerns the confusion that results from the conception that there
is this binary distinction (or continuum, or any other simple
relationship) between a "statically typed" and a "dynamically typed"
language.

Torben's (and I don't mean to single out Torben -- the terminology is
used quite widely) classification of dynamic versus static type systems
depends on the misconception that there is some universal definition to
the term "type error" or "type violation" and that the only question is
how we address these well-defined things. It's that misconception that
I aim to challenge.

> No, there is more: There is safe and unsafe code (i.e., code that throws
> exceptions or that potentially just does random things). There are also
> runtime systems where you have the chance to fix the reason that caused
> the exception and continue to run your program. The latter play very
> well with dynamic types / type tags.

Yes, I was oversimplifying.

> What type system catches division by zero? That is, statically?

I can define such a type system trivially. To do so, I simply define a
type for integers, Z, and a subtype for non-zero integers, Z'. I then
define the language such that division is only possible in an expression
that looks like << z / z' >>, where z has type Z and z' has type Z'.
The language may then contain an expression:

z == 0 ? t1 : t2

in which t1 is evaluated in the parent type environment, but t2 is
evaluated in the type environment augmented by (z -> Z'), the type of
the expression is the intersection type of t1 and t2 evaluated in those
type environments, and the evaluation rules are defined as you probably
expect.
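A rough approximation of the Z / Z' construction (using a TypeScript "branded" subtype rather than intersection types over refined environments; `NonZero`, `nonZero`, and `safeDiv` are illustrative names):

```typescript
// Z' as a branded subtype of number whose values can only be
// produced by passing the zero test.
type NonZero = number & { readonly __brand: "NonZero" };

// The runtime zero test that "refines" a plain number into a Z'
// value, playing the role of the post's conditional.
function nonZero(z: number): NonZero | null {
  return z === 0 ? null : (z as NonZero);
}

// Division is only expressible with a NonZero divisor, so a
// division by zero cannot be constructed through this interface.
function safeDiv(z: number, zp: NonZero): number {
  return Math.trunc(z / zp);
}
```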

> Would you like to program in such a language?

No. Type systems for real programming languages are, of course, a
balance between rigor and usability. This particular set of type rules
doesn't seem to exhibit a good balance. Perhaps there is a way to
achieve it in a way that is more workable, but it's not immediately
obvious.

As another example, from Pierce's text "Types and Programming
Languages", Pierce writes: "Static elimination of array-bounds checking
is a long-standing goal for type system designers. In principle, the
necessary mechanisms (based on dependent types) are well understood, but
packaging them in a form that balances expressive power, predictability
and tractability of typechecking, and complexity of program annotations
remains a significant challenge." Again, this could quite validly be
described as a type error, just like division by zero or ANY other
program error... it's just that the type system that solves it doesn't
look appealing, so everyone punts the job to runtime checks (or, in some
cases, to the CPU's memory protection features and/or the user's ability
to fix resulting data corruption).

Why aren't these things commonly considered type errors? There is only
one reason: there exists no widely used language which solves them with
types. (I mean in the programming language type theory sense of "type";
since many languages "tag" arrays with annotations indicating their
dimensions, I guess you could say that we do solve them with types in
the LISP sense).

> Your problem doesn't exist. Just say "types" when you're amongst your
> own folks, and "static types" when you're amongst a broader audience,
> and everything's fine.

I think I've explained why that's not the case. I don't have a
complaint about anyone speaking of types. It's the confusion from
pretending that the two definitions are comparable that I'm pointing
out.

Matthias Blume

Jun 19, 2006, 2:09:28 PM
George Neuner <gneuner2/@comcast.net> writes:

> I am, however, going to ask what
> information you think type inference can provide that substitutes for
> algorithm or data structure exploration.

Nobody wants to do such a substitution, of course. In /my/
experience, however, I find that doing algorithm and data structure
exploration is greatly aided by a language with static types and type
inference. YMMV.

Yet Another Dan

Jun 19, 2006, 2:21:56 PM
Chris Smith <cds...@twu.net> wrote in
news:MPG.1f007f6a4...@news.altopia.net:

> Rob Thorpe <robert...@antenova.com> wrote:
>> A language is latently typed if a value has a property - called its
>> type - attached to it, and given its type it can only represent
>> values defined by a certain class.
>

> Now I define a class of values called "correct" values. I define
> these to be those values for which my program will produce acceptable
> results. Clearly there is a defined class of such values: (1) they
> are immediately defined by the program's specification for those lines
> of code that produce output; ...

> I'm not trying to poke holes in your definition for fun. I am
> proposing that there is no fundamental distinction between the kinds
> of problems that are "type problems" and those that are not.

That sounds like a lot to demand of a type system. It almost sounds like
it's supposed to test and debug the whole program. In general, defining the
exact set of values for a given variable that generate acceptable output
from your program will require detailed knowledge of the program and all
its possible inputs. That goes beyond simple typing. It sounds more like
contracts. Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.
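That distinction can be sketched as follows (TypeScript; `lookupAt` is a made-up helper): the index's type is checked statically from the variable alone, while the bounds check needs the array too, so it stays at run time.

```typescript
// The index being a number is enforced by the static type, checked
// "based on only the variable itself"; whether it is in bounds
// depends on the array, so that check happens at run time.
function lookupAt<T>(xs: readonly T[], i: number): T | undefined {
  return i >= 0 && i < xs.length ? xs[i] : undefined;
}
```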

--
YAD

Matthias Blume

Jun 19, 2006, 2:24:47 PM
"Rob Thorpe" <robert...@antenova.com> writes:

> I don't think dynamic typing is that nebulous. I remember this being
> discussed elsewhere some time ago, I'll post the same reply I did then
> ..
>
>
> A language is statically typed if a variable has a property - called
> its type - attached to it, and given its type it can only represent
> values defined by a certain class.

By this definition, all languages are statically typed (by making that
"certain class" the set of all values). Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.

Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.

> A language is latently typed if a value has a property - called its
> type - attached to it, and given its type it can only represent values
> defined by a certain class.

This "definition" makes little sense. Any given value can obviously
only represent one value: itself. "Dynamic types" are nothing more
than sets of values, often given by computable predicates.

> Untyped and type-free mean something else: they mean no type checking
> is done.

Look up "untyped lambda calculus".

Pascal Costanza

Jun 19, 2006, 2:34:33 PM
Matthias Blume wrote:
> "Rob Thorpe" <robert...@antenova.com> writes:
>
>> I don't think dynamic typing is that nebulous. I remember this being
>> discussed elsewhere some time ago, I'll post the same reply I did then
>> ..
>>
>>
>> A language is statically typed if a variable has a property - called
>> its type - attached to it, and given its type it can only represent
>> values defined by a certain class.
>
> By this definition, all languages are statically typed (by making that
> "certain class" the set of all values). Moreover, this "definition",
> when read the way you probably wanted it to be read, requires some
> considerable stretch to accommodate existing static type systems such
> as F_\omega.
>
> Perhaps better: A language is statically typed if its definition
> includes (or even better: is based on) a static type system, i.e., a
> static semantics with typing judgments derivable by typing rules.
> Usually typing judgments associate program phrases ("expressions") with
> types given a typing environment.

How does your definition exclude the trivial type system in which the
only typing judgment states that every expression is acceptable?

Pascal Costanza

Jun 19, 2006, 2:40:05 PM

There is an overlap in the sense that some static type systems cover
only types as sets of values whose correct use could as well be checked
dynamically.

Yes, it's correct that more advanced static type systems can provide
more semantics than that (and vice versa).

Chris Smith

Jun 19, 2006, 3:17:15 PM
Pascal Costanza <p...@p-cos.net> wrote:
> How does your definition exclude the trivial type system in which the
> only typing judgment states that every expression is acceptable?

It is not necessary to exclude that trivial type system. Since it is
useless, no one will implement it. However, if pressed, I suppose one
would have to admit that that definition includes a type system that is
just useless.

I do, though, prefer Pierce's definition:

A type system is a tractable syntactic method for proving the
absence of certain program behaviors by classifying phrases
according to the kinds of values they compute.

(Benjamin Pierce, Types and Programming Languages, MIT Press, pg. 1)

Key words include:

- tractable: it's not sufficient to just evaluate the program

- syntactic: types are tied to the kinds of expressions in the language

- certain program behaviors: while perhaps confusing out of context,
there is nowhere in the book a specification of which program
behaviors may be prevented by type systems and which may not. In
context, the word "certain" there is meant to make it clear that type
systems should be able to specifically identify which behaviors they
prevent, and not that there is some universal set.

Matthias Blume

unread,
Jun 19, 2006, 3:21:19 PM6/19/06
to
Pascal Costanza <p...@p-cos.net> writes:

It does not.

Joe Marshall

unread,
Jun 19, 2006, 3:56:29 PM6/19/06
to

Chris Smith wrote:
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed".

Allow me to strenuously object. The static typing community has its
own set of terminology and that's fine. However, we Lisp hackers are
not used to this terminology. It confuses us. *We* know what we mean
by `dynamically typed', and we suspect *you* do, too.


> This cleaner terminology eliminates a lot of confusion.

Hah! Look at the archives.

> This isn't just a matter of preference in terminology.

No?

> If types DON'T mean a compile-time method for proving the
> absence of certain program behaviors, then they don't mean anything at
> all.

Nonsense.

Chris Smith

unread,
Jun 19, 2006, 4:23:46 PM6/19/06
to
Joe Marshall <eval....@gmail.com> wrote:
>
> Chris Smith wrote:
> >
> > Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> > interject my plea not to abuse the word "type" with a phrase like
> > "dynamically typed".
>
> Allow me to strenuously object. The static typing community has its
> own set of
> terminology and that's fine. However, we Lisp hackers are not used to
> this terminology.
> It confuses us. *We* know what we mean by `dynamically typed', and we
> suspect *you* do, too.

I know what you mean by types in LISP. The phrase "dynamically typed,"
though, was explicitly introduced as a counterpart to "statically
typed" in order to imply (falsely) that the word "typed" has related
meanings in those two cases. Nevertheless, I did not really object,
since it's long since passed into common usage, until Torben attempted
to give what I believe are rather meaningless definitions to those
words, in terms of some mythical thing called "type violations" that he
seems to believe exist apart from any specific type systems.
(Otherwise, how could you define kinds of type systems in terms of when
they catch type violations?)

> > This cleaner terminology eliminates a lot of confusion.
>
> Hah! Look at the archives.

I'm not sure what you mean here. You would like me to look at the
archives of which of the five groups that are part of this conversation?
In any case, the confusion I'm referring to pertains to comparison of
languages, and it's already been demonstrated once in the half-dozen or
so responses to this branch of this thread.

> > If types DON'T mean a compile-time method for proving the
> > absence of certain program behaviors, then they don't mean anything at
> > all.
>
> Nonsense.

Please accept my apologies for not making the context clear. I tried to
clarify, in my response to Pascal, that I don't mean that the word
"type" can't have any possible meaning except for the one from
programming language type theory. I should modify my statement as
follows:

An attempt to generalize the definition of "type" from programming
language type theory to eliminate the requirement that they are
syntactic in nature yields something meaningless. Any concept of
"type" that is not syntactic is a completely different thing from
static types.

Basically, I start objecting when someone starts comparing "statically
typed" and "dynamically typed" as if they were both varieties of some
general concept called "typed". They aren't. Furthermore, these two
phrases were invented under the misconception that they are. If you
mean something else by types, such as the idea that a value has a tag
indicating its range of possible values, then I tend to think it would
be less confusing to just say "type" and then clarify the meaning if it
comes into doubt, rather than adopting language that implies that those
types are somehow related to types from type theory.

Marshall

unread,
Jun 19, 2006, 4:41:10 PM6/19/06
to
Chris Smith wrote:
>
> Basically, I start objecting when someone starts comparing "statically
> typed" and "dynamically typed" as if they were both varieties of some
> general concept called "typed". They aren't. Furthermore, these two
> phrases were invented under the misconception that they are. If you
> mean something else by types, such as the idea that a value has a tag
> indicating its range of possible values, then I tend to think it would
> be less confusing to just say "type" and then clarify the meaning if it
> comes into doubt, rather than adopting language that implies that those
> types are somehow related to types from type theory.

While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.


Marshall

PS. Hi Chris!

Chris Smith

unread,
Jun 19, 2006, 4:56:26 PM6/19/06
to
Marshall <marshal...@gmail.com> wrote:
> While I am quite sympathetic to this point, I have to say that
> this horse left the barn quite some time ago.

I don't think so. Perhaps it's futile to go scouring the world for uses
of the phrase "dynamic type" and eliminating them. It's not useless to
point out when the term is used in a particularly confusing way, though,
as when it's implied that there is some class of "type errors" that is
strictly a subset of the class of "errors". Terminology is often
confused for historical reasons, but incorrect statements eventually get
corrected.

> PS. Hi Chris!

Hi! Where are you posting from these days?

Marshall

unread,
Jun 19, 2006, 6:08:38 PM6/19/06
to
Chris Smith wrote:
> Marshall <marshal...@gmail.com> wrote:
> > While I am quite sympathetic to this point, I have to say that
> > this horse left the barn quite some time ago.
>
> I don't think so. Perhaps it's futile to go scouring the world for uses
> of the phrase "dynamic type" and eliminating them. It's not useless to
> point out when the term is used in a particularly confusing way, though,
> as when it's implied that there is some class of "type errors" that is
> strictly a subset of the class of "errors". Terminology is often
> confused for historical reasons, but incorrect statements eventually get
> corrected.

That's fair.

One thing that is frustrating to me is that I really want to build
an understanding of what dynamic typing is and what its
advantages are, but it's difficult to have a productive discussion
on the static vs. dynamic topic. Late binding of every function
invocation: how does that work, what are the implications of that,
what does that buy you?

I have come to believe that the two actually represent
very different ways of thinking about programming. Which
goes a long way to explaining why there are such difficulties
communicating. Each side tries to describe to the other how
the tools and systems they use facilitate doing tasks that
don't exist in the other side's mental model.


> > PS. Hi Chris!
>
> Hi! Where are you posting from these days?

I'm mostly on comp.databases.theory, but I also lurk
on comp.lang.functional, which is an awesome group, and
if you're reading Pierce then you might like it too.


Marshall

Dimitri Maziuk

unread,
Jun 19, 2006, 6:02:55 PM6/19/06
to
Yet Another Dan sez:

> ... Requiring an array index to be an integer is considered a typing
> problem because it can be checked based on only the variable itself,
> whereas checking whether it's in bounds requires knowledge about the array.

You mean like
subtype MyArrayIndexType is INTEGER range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;

Dima
--
We're sysadmins. Sanity happens to other people. -- Chris King

Chris F Clark

unread,
Jun 19, 2006, 7:24:00 PM6/19/06
to
Chris Smith wrote:
> Marshall <marshal...@gmail.com> wrote:
> > While I am quite sympathetic to this point, I have to say that
> > this horse left the barn quite some time ago.
>
> I don't think so. Perhaps it's futile to go scouring the world for uses
> of the phrase "dynamic type" and eliminating them. It's not useless to
> point out when the term is used in a particularly confusing way, though,
> as when it's implied that there is some class of "type errors" that is
> strictly a subset of the class of "errors". Terminology is often
> confused for historical reasons, but incorrect statements eventually get
> corrected.

Ok, so you (Chris Smith) object to the term "dynamic type". From a
historical perspective it makes sense, particularly in the sense of
tags. A dynamic type system involves associating tags with values and
expresses variant operations in terms of those tags, with certain
disallowed combinations checked (dynamically) at run-time. A static
type system eliminates some set of tags on values by syntactic
analysis of annotations (types) written with or as part of the program
and detects some of the disallowed computations (statically) at compile
time.

Type errors are the errors caught by one's personal sense of which
annotations are expressible, computable, and useful. Thus, each
person has a distinct sense of which errors can (and/or should) be
statically caught.

A personal example, I read news using Emacs, which as most readers in
these newsgroups will know is a dialect of lisp that includes
primitives to edit files. Most of emacs is quite robust, perhaps due
to it being lisp. However, some commands (reading news being one of
them) are exceptionally fragile and fail in the most undebuggable
ways, often just complaining about "nil being an invalid argument" (or
"stack overflow in regular expression matcher".)

This is particularly true, when I am composing lisp code. If I write
some lisp code and make a typographical error, the code may appear to
run on some sample case and fail due to a path not explored in my
sample case when applied to real data.

I consider such an error to be a "type error" because I believe that
if I used a language that required more type annotations, the compiler
would have caught my typographical error (and no, I'm not making a pun
on type and typographical). Because I do a lot of this (it is easy
enough for me to conjure up a small lisp macro to make some edit, so
macro-writing is a primary tool in my toolkit), I wish I could make my
doing so more robust. It would not bother me to have to type more to
do so. I simply make too many stupid errors to use emacs lisp as
effectively as I would like.

Now, this has nothing to do with real lisp programming, and so I do
not wish to offend those who do that. However, I personally would
like a staticly typed way of writing macros (and more importantly of
annotating some of the existing emacs code) to remove some of the
fragilities that I experience. I'm not taking advantage of the
exploratory nature of lisp, except in the sense that the exploratory
nature has created lots of packages which mostly work most of the
time.

Now, I will return this group to its much more erudite discussion of
the issue.

Thank you,
-Chris

*****************************************************************************
Chris Clark Internet : com...@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------

Joe Marshall

unread,
Jun 19, 2006, 7:50:04 PM6/19/06
to

Chris Smith wrote:
> Joe Marshall <eval....@gmail.com> wrote:
> >
> > Chris Smith wrote:
> > >
> > > Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> > > interject my plea not to abuse the word "type" with a phrase like
> > > "dynamically typed".
> >
> > Allow me to strenuously object. The static typing community has its
> > own set of
> > terminology and that's fine. However, we Lisp hackers are not used to
> > this terminology.
> > It confuses us. *We* know what we mean by `dynamically typed', and we
> > suspect *you* do, too.
>
> I know what you mean by types in LISP. The phrase "dynamically typed,"
> though, was explicitly introduced as a counterpart to "statically
> typed" in order to imply (falsely) that the word "typed" has related
> meanings in those two cases.

They *do* have a related meaning. Consider this code fragment:
(car "a string")

Obviously this code is `wrong' in some way. In static typing terms,
we could say that we have a type error because the primitive procedure
CAR doesn't operate on strings. Alternatively, we could say that since
Lisp has one universal type (in static type terms) the code is
correctly statically typed - that is, CAR is defined on all input, but
its definition is to raise a runtime exception when passed a string.

But regardless of which way you want to look at it, CAR is *not* going
to perform its usual computation and it is *not* going to return a
value. The reason behind this is that you cannot take the CAR of a
string. A string is *not* a valid argument to CAR. Ask anyone why and
they will tell you `It's the wrong type.'

Both `static typing' and `dynamic typing' (in the colloquial sense) are
strategies to detect this sort of error.
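
In Python terms (a sketch; the `head` helper is hypothetical, standing
in for CAR), the same "wrong type" error surfaces as a runtime
TypeError rather than a compile-time rejection:

```python
def head(pair):
    # A stand-in for Lisp's CAR: return the first element of a sequence.
    return pair[0]

print(head(("a", "b")))   # a valid argument

try:
    head(3)               # 3 is "the wrong type": not subscriptable
except TypeError as err:
    print("runtime type error:", err)
```

Either way the misuse is detected; what differs is when, and by what
machinery.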

> Nevertheless, I did not really object,
> since it's long since passed into common usage,

Exactly. And you are far more likely to encounter this sort of usage
outside of a type theorist's convention.


> until Torben attempted
> to give what I believe are rather meaningless definitions to those
> words, in terms of some mythical thing called "type violations" that he
> seems to believe exist apart from any specific type systems.

It's hardly mythical. (car "a string") is obviously an error and you
don't need a static type system to know that.

> > > This cleaner terminology eliminates a lot of confusion.
> >
> > Hah! Look at the archives.
>
> I'm not sure what you mean here. You would like me to look at the
> archives of which of the five groups that are part of this conversation?
> In any case, the confusion I'm referring to pertains to comparison of
> languages, and it's already been demonstrated once in the half-dozen or
> so responses to this branch of this thread.

I mean that this has been argued time and time again in comp.lang.lisp
and probably the other groups as well. You may not like the fact that
we say that Lisp is dynamically typed, but *we* are not confused by
this usage. In fact, we become rather confused when you say `a
correctly typed program cannot go wrong at runtime' because we've seen
plenty of runtime errors from code that is `correctly typed'.

> > > If types DON'T mean a compile-time method for proving the
> > > absence of certain program behaviors, then they don't mean anything at
> > > all.
> >
> > Nonsense.
>
> Please accept my apologies for not making the context clear. I tried to
> clarify, in my response to Pascal, that I don't mean that the word
> "type" can't have any possible meaning except for the one from
> programming language type theory. I should modify my statement as
> follows:
>
> An attempt to generalize the definition of "type" from programming
> language type theory to eliminate the requirement that they are
> syntactic in nature yields something meaningless. Any concept of
> "type" that is not syntactic is a completely different thing from
> static types.

Agreed. That is why there is the qualifier `dynamic'. This indicates
that it is a completely different thing from static types.

> Basically, I start objecting when someone starts comparing "statically
> typed" and "dynamically typed" as if they were both varieties of some
> general concept called "typed". They aren't.

I disagree. There is clearly a concept that there are different
varieties of data and they are not interchangeable. In some languages,
it is up to the programmer to ensure that mistakes in data usage do not
happen. In other languages, the computer can detect such mistakes and
prevent them. If this detection is performed by syntactic analysis
prior to running the program, it is static typing. Some languages like
Lisp defer the detection until the program is run. Call it what you
want, but here in comp.lang.lisp we tend to call it `dynamic typing'.

> Furthermore, these two
> phrases were invented under the misconception that they are. If you
> mean something else by types, such as the idea that a value has a tag
> indicating its range of possible values, then I tend to think it would
> be less confusing to just say "type" and then clarify the meaning if it
> comes into doubt, rather than adopting language that implies that those
> types are somehow related to types from type theory.

You may think it would be less confusing, but if you look at the
archives of comp.lang.lisp you would see that it is not.

We're all rubes here, so don't try to educate us with your
high-falutin' technical terms.

Marshall

unread,
Jun 19, 2006, 8:58:22 PM6/19/06
to
Joe Marshall wrote:
>
> They *do* have a related meaning. Consider this code fragment:
> (car "a string")
> [...]

> Both `static typing' and `dynamic typing' (in the colloquial sense) are
> strategies to detect this sort of error.

The thing is, though, that putting it that way makes it seem as
if the two approaches are doing the same exact thing, but
just at different times: runtime vs. compile time. But they're
not the same thing. Passing the static check at compile
time is universally quantifying the absence of the class
of error; passing the dynamic check at runtime is existentially
quantifying the absence of the error. A further difference is
the fact that in the dynamically typed language, the error is
found during the evaluation of the expression; in a statically
typed language, errors are found without attempting to evaluate
the expression.
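
A small Python sketch of that asymmetry (the buggy branch is contrived
for illustration): one successful run only demonstrates the absence of
the error for the inputs actually tried, while a static checker rejects
the bad branch without evaluating it:

```python
def describe(n):
    if n >= 0:
        return "non-negative: " + str(n)
    else:
        return "negative: " + n   # bug: str + int, but only on this branch

print(describe(5))   # runs fine: the buggy branch was never evaluated

try:
    describe(-1)     # the dynamic check fires only when the branch runs
except TypeError as err:
    print("caught only now, at run time:", err)
```

The passing call is the existential claim; a checker that flags the
`else` branch before any run is making the universal one.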

I find everything about the differences between static and
dynamic to be frustratingly complex and subtle.

(To be clear, I do know that Joe understands these issues
quite well.)

So I kind of agree with Chris, insofar as I think the terminology
plays a role in obscuring rather than illuminating the differences.

On the other hand I agree with Joe in that:

> I mean that this has been argued time and time again in comp.lang.lisp
> and probably the other groups as well. You may not like the fact that
> we say that Lisp is dynamically typed, but *we* are not confused by
> this usage. In fact, we become rather confused when you say `a
> correctly typed program cannot go wrong at runtime' because we've seen
> plenty of runtime errors from code that is `correctly typed'.

Yes; as I said earlier, the horse has left the barn on this one.

The conversation I would *really* like to have is the one where we
discuss what all the differences are, functionally, between the two,
and what the implications of those differences are, without trying
to address which approach is "right" or "better", because those are
dependent on the problem domain anyway, and because I can
make up my own mind just fine about which one I prefer.

The comp.lang.functional and comp.lang.lisp people are probably
two of the smartest groups on usenet. (I do not consider myself
a member of either group.) You wouldn't *think* that conversation
would be *so* hard to have.


Marshall

Chris Smith

unread,
Jun 19, 2006, 11:26:54 PM6/19/06
to
Joe Marshall <eval....@gmail.com> wrote:
> They *do* have a related meaning. Consider this code fragment:
> (car "a string")

My feeling is that this code is obviously wrong. It is so obviously
wrong, in fact, that the majority of automated error detection systems,
if written for Lisp, would probably manage to identify it as wrong at
some point. This includes the Lisp runtime. So far, I haven't
mentioned types.

> A string is *not* a valid argument to CAR. Ask anyone why and
> they will tell you `It's the wrong type.'

Indeed, they will. We're assuming, of course, that they know a little
Lisp... otherwise, it may be entirely reasonable for someone to expect
that (car "a string") is 'a' and (cdr "a string") is " string"... but
I'll ignore that, even though I haven't yet convinced myself that it's
not relevant to why this is colloquially considered a type error.

I believe that in this sense, the 'anyone' actually means "type" in the
sense that you mean "type". The fact that a static type system detects
this error is somewhat coincidental (except for the fact that, again,
any general error-detection scheme worth its salt probably detects this
error), and orthogonal to whether it is considered a type error by our
hypothetical 'anyone'.

> Both `static typing' and `dynamic typing' (in the colloquial sense) are
> strategies to detect this sort of error.

I will repeat that static typing is a strategy for detecting errors in
general, on the basis of tractable syntactic methods. There are some
types of errors that are easier to detect in such a system than
others... but several examples have been given of problems solved by
static type systems that are not of the colloquial "It's the wrong
type" variety that you mention here. The examples so far have included
detecting division by zero, or array bounds checking. Other type
systems can check dimensionality (correct units).

Another particularly interesting example may be the following from
Ocaml:

let my_sqrt x = if x < 0.0 then None else Some(sqrt(x));;

Then, if I attempt to use my_sqrt in a context that requires a float,
the compiler will complain about a type violation, since the type of the
expression is "float option". So this is a type error *in Ocaml*, but
it's not the kind of thing that gets intuitively classified as a type
error. In fact, it's roughly equivalent to a NullPointerException at
runtime in Java, and few Java programmers would consider a
NullPointerException to be somehow "actually a type error" that the
compiler just doesn't catch. In this case, when the error appears in
Ocaml, it appears to be "obviously" a type error, but that's only
because the type system was designed to catch some class of program
errors, of which this is a member.
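
A rough Python analogue of the Ocaml example (names hypothetical):
returning None for the out-of-domain case moves the failure to the use
site, where it shows up as a runtime TypeError, much like the
NullPointerException case, rather than a compile-time complaint about
"float option":

```python
import math

def my_sqrt(x):
    # Return sqrt(x), or None when x is negative (an option-style result).
    return None if x < 0.0 else math.sqrt(x)

y = my_sqrt(-1.0)
try:
    print(y + 1.0)   # using the optional result as if it were a plain float
except TypeError as err:
    print("fails only here, at run time:", err)
```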

> It's hardly mythical. (car "a string") is obviously an error and you
> don't need a static type system to know that.

Sure. The question is whether it means much to say that it's a "type
error". So far, I'd agree with either of two statements, depending on
the usage of the word "type":

a) Yes, it means something, but Torben's definition of a static type
system was wrong, because static type systems are not specifically
looking for type errors.

or

b) No, "type error" just means "error that can be caught by the type
system", so it is circular and meaningless to use the phrase in defining
a kind of type system.

> I mean that this has been argued time and time again in comp.lang.lisp
> and probably the other groups as well.

My apologies, then. It has not been discussed so often in any newsgroup
that I followed up until now, though Marshall has now convinced me to
read comp.lang.functional, so I might see these endless discussions from
now on.

> In fact, we become rather confused when you say `a
> correctly typed program cannot go wrong at runtime' because we've seen
> plenty of runtime errors from code that is `correctly typed'.

Actually, I become a little confused by that as well. I suppose it
would be true of a "perfect" static type system, but I haven't seen one
of those yet. (Someone did email me earlier today to point out that the
type system of a specification language called LOTOS supposedly is
perfect in that sense, that every correctly typed program is also
correct, but I've yet to verify this for myself. It seems rather
difficult to believe.)

Unless I suddenly have some kind of realization in the future about the
feasibility of a perfect type system, I probably won't make that
statement that you say confuses you.

> > An attempt to generalize the definition of "type" from programming
> > language type theory to eliminate the requirement that they are
> > syntactic in nature yields something meaningless. Any concept of
> > "type" that is not syntactic is a completely different thing from
> > static types.
>
> Agreed. That is why there is the qualifier `dynamic'. This indicates
> that it is a completely different thing from static types.

If we agree about this, then there is no need to continue this
discussion. I'm not sure we do agree, though, because I doubt we'd be
right here in this conversation if we did.

This aspect of being a "completely different thing" is why I objected to
Torben's statement of the form: static type systems detect type
violations at compile time, whereas dynamic type systems detect type
violations at runtime. The problem with that statement is that "type
violations" means different things in the first and second half, and
that's obscured by the form of the statement. It would perhaps be
clearer to say something like:

Static type systems detect some bugs at compile time, whereas
dynamic type systems detect type violations at runtime.

Here's one interesting consequence of the change. It is easy to
recognize that static and dynamic type systems are largely orthogonal to
each other in the sense above. Java, for example (since that's one of
the newsgroups on the distribution list for this thread), restricting
the field of view to reference types for simplicity's sake, clearly has
both a very well-developed static type system, and a somewhat well-
developed dynamic type system. There are dynamic "type" errors that
pass the compiler and are caught by the runtime; and there are errors
that are caught by the static type system. There is indeed considerable
overlap involved, but nevertheless, neither is made redundant. In fact,
one way of understanding the headaches of Java 1.5 generics is to note
that the two different meanings of "type errors" are no longer in
agreement with each other!
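
A loose Python analogue of the two overlapping kinds of "type error"
(variable names illustrative; the annotation syntax is Python's
optional static typing): the annotation is checked by external static
tools such as mypy, while the interpreter only objects later, when the
value is actually misused:

```python
from typing import List

xs: List[int] = []
xs.append("oops")     # a static checker flags this line; the runtime does not

try:
    print(xs[0] + 1)  # the dynamic "type error" only appears here
except TypeError as err:
    print("runtime complaint:", err)
```

As with Java generics, the static notion of a type error and the
runtime one need not coincide.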

> We're all rubes here, so don't try to educate us with your
> high-falutin' technical terms.

That's not my intention.

Chris Smith

unread,
Jun 19, 2006, 11:31:37 PM6/19/06
to
I <cds...@twu.net> wrote:
> Static type systems detect some bugs at compile time, whereas
> dynamic type systems detect type violations at runtime.

PS: In order to satisfy the Python group (among others not on the cross-
post list), we'd need to include "duck typing," which fits neither of
the two definitions above. We'd probably need to modify the definition
of dynamic type systems, since most sources tend to classify it as a
dynamic type system. It's getting rather late, though, and I don't
intend to think about how to do that.

Ron Garret

unread,
Jun 20, 2006, 12:30:11 AM6/20/06
to
In article <1150765102....@h76g2000cwa.googlegroups.com>,
"Marshall" <marshal...@gmail.com> wrote:

> The conversation I would *really* like to have is the one where we
> discuss what all the differences are, functionally, between the two,
> and what the implications of those differences are, without trying
> to address which approach is "right" or "better", because those are
> dependent on the problem domain anyway, and because I can
> make up my own mind just fine about which one I prefer.
>
> The comp.lang.functional and comp.lang.lisp people are probably
> two of the smartest groups on usenet. (I do not consider myself
> a member of either group.) You wouldn't *think* that conversation
> would be *so* hard to have.

It's hard to have because there's very little to say, which leaves the
academics without enough to do to try to justify their existence. This
is the long and the short of it:

1. There are mismatches between the desired behavior of code and its
actual behavior. Such mismatches are generally referred to as "bugs" or
"errors" (and, occasionally, "features").

2. Some of those mismatches can be detected before the program runs.
Some can be detected while the program runs. And some cannot be
detected until after the program has finished running.

3. The various techniques for detecting those mismatches impose varying
degrees of burden upon the programmer and the user.

That's it. Everything else, including but not limited to quibbling over
the meaning of the word "type", is nothing but postmodernist claptrap.

IMHO of course.

rg

Vesa Karvonen

unread,
Jun 20, 2006, 1:05:14 AM6/20/06
to
In comp.lang.functional Chris Smith <cds...@twu.net> wrote:
[...]

> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed". If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed." We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.
[...]

FWIW, I agree and have argued similarly on many occasions (both on the
net (e.g. http://groups.google.fi/group/comp.programming/msg/ba3ccfde4734313a?hl=fi&)
and person-to-person). The widely used terminology (statically /
dynamically typed, weakly / strongly typed) is extremely confusing to
beginners and even to many with considerable practical experience.

-Vesa Karvonen

Rob Thorpe

unread,
Jun 20, 2006, 5:12:25 AM6/20/06
to
Chris Smith wrote:
> Rob Thorpe <robert...@antenova.com> wrote:
> > A language is latently typed if a value has a property - called its
> > type - attached to it, and given its type it can only represent values
> > defined by a certain class.
>
> I'm assuming you mean "class" in the general sense, rather than in the
> sense of a specific construct of some subset of OO programming
> languages.

Yes.

> Now I define a class of values called "correct" values. I define these
> to be those values for which my program will produce acceptable results.
> Clearly there is a defined class of such values: (1) they are
> immediately defined by the program's specification for those lines of
> code that produce output; (2) if they are defined for the values that
> result from any expression, then they are defined for the values that
> are used by that expression; and (3) for any value for which correctness
> is not defined by (1) or (2), we may define its "correct" values as the
> class of all possible values.

I'm not talking about correctness, I'm talking about typing.

> Now, by your definition, any language
> which provides checking of that property of correctness for values is
> latently typed.

No, that isn't what I said. What I said was:


"A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class."

I said nothing about the values producing correct outputs, or being
correct inputs. I only said that they have types.

What I mean is that each value in the language is defined by two pieces
of information: its contents X and its type T.

> Of course, there are no languages that assign this
> specific class of values; but ANY kind of correctness checking on values
> that a language does (if it's useful at all) is a subset of the perfect
> correctness checking system above. Apparently, we should call all such
> systems "latent type systems".

No, I'm speaking about type checking not correctness.

> Can you point out a language that is not
> latently typed?

Easy: any statically typed language is not latently typed. Values have
no type associated with them; instead, variables have types.

If I tell a C program to print out a string but give it a number, it
will give an error telling me that the types mismatch, at the point
where the variable holding the number is assigned to a variable that
must hold a string. Similarly, if I have a Lisp function that prints
out a string, it will also fail when given a number, but it will fail at
a different point: when the type of the value is examined at runtime
and found to be incorrect.

> I'm not trying to poke holes in your definition for fun. I am proposing
> that there is no fundamental distinction between the kinds of problems
> that are "type problems" and those that are not. Types are not a class
> of problems; they are a class of solutions.

Exactly. Which is why they are only tangentially associated with
correctness.
Typing is a set of rules put in place to aid correctness, but it is not
a system that attempts to create correctness itself.

> Languages that solve
> problems in ways that don't assign types to variables are not typed
> languages, even if those same problems may have been originally solved
> by type systems.

Well, you can think of things that way. But to the rest of the
computing world languages that don't assign types to variables but do
assign them to values are latently typed.

> > Untyped and type-free mean something else: they mean no type checking
> > is done.
>
> Hence, they don't exist, and the definitions being used here are rather
> pointless.

No they aren't: types of data exist even if there is no system in place
to check them. Ask an assembly programmer whether his programs have
strings and integers in them and he will probably tell you that they do.

Andreas Rossberg

unread,
Jun 20, 2006, 5:29:58 AM6/20/06
to
Chris F Clark wrote:
>
> A static
> type system eliminates some set of tags on values by syntactic
> analysis of annotations (types) written with or as part of the program
> and detects some of the disallowed computations (statically) at compile
> time.

Explicit annotations are not a necessary ingredient of a type system,
nor is "eliminating tags" very relevant to its function.

- Andreas

Pascal Costanza

unread,
Jun 20, 2006, 6:01:34 AM6/20/06
to
Marshall wrote:

> The conversation I would *really* like to have is the one where we
> discuss what all the differences are, functionally, between the two,
> and what the implications of those differences are, without trying
> to address which approach is "right" or "better", because those are
> dependent on the problem domain anyway, and because I can
> make up my own mind just fine about which one I prefer.

My current take on this is that static typing and dynamic typing are
incompatible, at least in their "extreme" variants.

The simplest examples I have found are this:

- In a statically typed language, you can have variables that contain
only first-class functions at runtime that are guaranteed to have a
specific return type. Other values are rejected, and the rejection
happens at compile time.

In dynamically typed languages, this is impossible because you can never
be sure about the types of return values - you cannot predict the
future. This can at best be approximated.


- In a dynamically typed language, you can run programs successfully
that are not acceptable by static type systems. Here is an example in
Common Lisp:

; A class "person" with no superclasses and with the only field "name":
(defclass person ()
  (name))

; A test program:
(defun test ()
  (let ((p (make-instance 'person)))
    (eval (read))
    (slot-value p 'address)))

(slot-value p 'address) is an attempt to access the field 'address in
the object p. In many languages, the notation for this is p.address.

Although the class definition for person doesn't mention the field
address, the call to (eval (read)) allows the user to change the
definition of the class person and update its existing instances.
Therefore at runtime, the call to (slot-value p 'address) has a chance to
succeed.

(Even without the call to (eval (read)), in Common Lisp the call to
(slot-value p 'address) would raise an exception which gives the user a
chance to fix things and continue from the point in the control flow
where the exception was raised.)

I cannot imagine a static type system which has a chance to predict that
this program can successfully run without essentially accepting all
kinds of programs.

At least, development environments for languages like Smalltalk, Common
Lisp, Java, etc., make use of such program updates to improve
edit-compile-test cycles. However, it is also possible (and done in
practice) to use such program updates to minimize downtimes when adding
new features to deployed systems.

Andreas Rossberg

unread,
Jun 20, 2006, 8:01:20 AM6/20/06
to
Rob Thorpe wrote:
>
> No, that isn't what I said. What I said was:
> "A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent values
> defined by a certain class."

"it [= a value] [...] can [...] represent values"?

> Easy, any statically typed language is not latently typed. Values have
> no type associated with them, instead variables have types.

A (static) type system assigns types to (all) *expressions*. This
includes values as well as variables.

Don't confuse type assignment with type annotation (which many
mainstream languages enforce for, but also only allow for, variable
declarations).

- Andreas

Rob Thorpe

unread,
Jun 20, 2006, 9:11:58 AM6/20/06
to
Andreas Rossberg wrote:
> Rob Thorpe wrote:
> >
> > No, that isn't what I said. What I said was:
> > "A language is latently typed if a value has a property - called it's
> > type - attached to it, and given it's type it can only represent values
> > defined by a certain class."
>
> "it [= a value] [...] can [...] represent values"?

???

> > Easy, any statically typed language is not latently typed. Values have
> > no type associated with them, instead variables have types.
>
> A (static) type system assigns types to (all) *expressions*.

That's right most of the time, yes; I probably should have said
expressions. Though I can think of statically typed languages where the
resulting type of an expression depends on the type of the variable it
is being assigned to.

> This includes values as well as variables.

Well, I haven't programmed in any statically typed language where values
have types themselves. Normally the language only knows that a value
is of type Q because it's held in a variable of type Q, or, as you
mention, in an expression of type Q. There are many things that could
be considered more than one type. The integer 45 could be unsigned 45
or signed 45 or even float 45 depending on the variable it's in, but
without context it doesn't have a precise type.

It does imply the type to some extent though, you could imagine a
language where every value has a precise type. So, you've found a hole
in my definition.

Maybe a better definition would be:-

if (variables have types || expressions have types) <lang is statically typed>
else if (values have types) <lang is latently/dynamically typed>
else <lang is untyped>

That seems to fit usage, quite well.

Even then there are problems. Perl has static types for arrays, hashes
and scalars. But scalars themselves can be integers, strings, etc.

> Don't confuse type assignment with type annotation (which many
> mainstream languages enforce for, but also only allow for, variable
> declarations).

Point taken.

Ketil Malde

unread,
Jun 20, 2006, 9:16:32 AM6/20/06
to
Andreas Rossberg <ross...@ps.uni-sb.de> writes:

>> "A language is latently typed if a value has a property - called it's
>> type - attached to it, and given it's type it can only represent values
>> defined by a certain class."

I thought the point was to separate the (bitwise) representation of a
value from its interpretation (which is its type). In a static
system, the interpretation is derived from context, in a dynamic
system values must carry some form of tags specifying which
interpretation to use.

I think this applies - conceptually, at least - also to expressions?

My impression is that dynamic typers tend to include more general
properties in their concept of types (div by zero, sqrt of negatives,
etc).

-k
--
If I haven't seen further, it is by standing in the footprints of giants

Andreas Rossberg

unread,
Jun 20, 2006, 9:53:29 AM6/20/06
to
Rob Thorpe wrote:
>>
>>>No, that isn't what I said. What I said was:
>>>"A language is latently typed if a value has a property - called it's
>>>type - attached to it, and given it's type it can only represent values
>>>defined by a certain class."
>>
>>"it [= a value] [...] can [...] represent values"?
>
> ???

I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.

>>A (static) type system assigns types to (all) *expressions*.
>
> That's right most of the time yes, I probably should have said
> expressions. Though I can think of static typed languages where the
> resulting type of an expression depends on the type of the variable it
> is being assigned to.

Yes, but that's no contradiction. A type system does not necessarily
assign *unique* types to individual expressions (consider overloading,
subtyping, polymorphism, etc).

> Well I haven't programmed in any statically typed language where values
> have types themselves.

They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T. So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).

- Andreas

Chris Smith

unread,
Jun 20, 2006, 10:19:53 AM6/20/06
to
Rob Thorpe <robert...@antenova.com> wrote:
> I'm not talking about correctness, I'm talking about typing.
>

Since you wrote that, I've come to understand that you meant something
specific by "property" which I didn't understand at first. From my
earlier perspective, it was obvious that correctness was a property of a
value. I now realize that you meant a property that's explicitly
associated with the value and plays a role in determining the behavior
of the language. Sorry for the confusion.

> No, that isn't what I said. What I said was:
> "A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent values
> defined by a certain class."

No, to answer Andreas' concern, you would only need to say:

... if a value has a property - called it's type - attached to it,
and the language semantics guarantees that only values defined by a
certain class may have that same property attached.

> Easy, any statically typed language is not latently typed.

I'm actually not sure I agree with this at all. I believe that
reference values in Java may be said to be latently typed. This is the
case because each reference value (except null, which may be tested
separately) has an explicit property (called its "class", but surely the
word doesn't make any difference), such that the language semantics
guarantees that only a certain class of values may have that same
property, and the property is used to determine behavior of the language
in many cases (for example, in the case of type-based polymorphism, or
use of Java's instanceof operator). Practically all class-based OO
languages are subject to similar consideration, as it turns out.

I'm unsure whether to consider explicitly stored array lengths, which
are present in most statically typed languages, to be part of a "type"
in this sense or not.

David Squire

unread,
Jun 20, 2006, 10:29:36 AM6/20/06
to
Andreas Rossberg wrote:
> Rob Thorpe wrote:
>>>
>>>> No, that isn't what I said. What I said was:
>>>> "A language is latently typed if a value has a property - called it's
>>>> type - attached to it, and given it's type it can only represent values
>>>> defined by a certain class."
>>>
>>> "it [= a value] [...] can [...] represent values"?
>>
>> ???
>
> I just quoted, in condensed form, what you said above: namely, that a
> value represents values - which I find a strange and circular definition.
>

But you left out the most significant part: "given it's type it can only
represent values *defined by a certain class*" (my emphasis). In C-ish
notation:

unsigned int x;

means that x can only represent integer elements of the set (class) of
values [0, UINT_MAX]. Negative numbers and non-integer numbers are
excluded, as are all sorts of other things.

You over-condensed.

DS

NB. This is not a comment on static, latent, derived or other typing,
merely on summarization.

Rob Thorpe

unread,
Jun 20, 2006, 10:57:41 AM6/20/06
to
Andreas Rossberg wrote:
> Rob Thorpe wrote:
> >>
> >>>No, that isn't what I said. What I said was:
> >>>"A language is latently typed if a value has a property - called it's
> >>>type - attached to it, and given it's type it can only represent values
> >>>defined by a certain class."
> >>
> >>"it [= a value] [...] can [...] represent values"?
> >
> > ???
>
> I just quoted, in condensed form, what you said above: namely, that a
> value represents values - which I find a strange and circular definition.

Yes, but the point is, as the other poster mentioned: values defined by
a class.
For example, in lisp:
"xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
fixed-point number.
Each item that can be stored has a type; no item can be stored without
one. The types can be tested. Most dynamically typed languages are like
that.

Compare this to an untyped language where types cannot generally be
tested.

> >>A (static) type system assigns types to (all) *expressions*.
> >
> > That's right most of the time yes, I probably should have said
> > expressions. Though I can think of static typed languages where the
> > resulting type of an expression depends on the type of the variable it
> > is being assigned to.
>
> Yes, but that's no contradiction. A type system does not necessarily
> assign *unique* types to individual expressions (consider overloading,
> subtyping, polymorphism, etc).

That's a fair way of looking at it.

> > Well I haven't programmed in any statically typed language where values
> > have types themselves.
>
> They all have - the whole purpose of a type system is to ensure that any
> expression of type T always evaluates to a value of type T.

But it only guarantees this because the variables themselves have a
type; the values themselves do not. Most of the time the fact that the
variable they are held in has a type implies that the value does too.
But the value itself has no type: in a C program, for example, I can
take the value from some variable and recast it in any way I feel, and
the language cannot correct any errors I make because there is no
information attached to the value to indicate what type it is.

> So when you
> look at type systems formally then you certainly have to assign types to
> values, otherwise you couldn't prove any useful property about those
> systems (esp. soundness).

Yes, but indirectly.

Andreas Rossberg

unread,
Jun 20, 2006, 10:42:14 AM6/20/06
to
David Squire wrote:
> Andreas Rossberg wrote:
>
>> Rob Thorpe wrote:
>>
>>>>
>>>>> No, that isn't what I said. What I said was:
>>>>> "A language is latently typed if a value has a property - called it's
>>>>> type - attached to it, and given it's type it can only represent
>>>>> values
>>>>> defined by a certain class."
>>>>
>>>>
>>>> "it [= a value] [...] can [...] represent values"?
>>>
>>>
>>> ???
>>
>> I just quoted, in condensed form, what you said above: namely, that a
>> value represents values - which I find a strange and circular definition.
>
> But you left out the most significant part: "given it's type it can only
> represent values *defined by a certain class*" (my emphasis).

That qualification does not remove the circularity from the definition.

> In C-ish notation:
>
> unsigned int x;
>
> means that x can only represent integer elements of the set (class) of
> values [0, UINT_MAX]. Negative numbers and non-integer numbers are
> excluded, as are all sorts of other things.

I don't see how that example is relevant, since the above definition
does not mention variables.

- Andreas

Ketil Malde

unread,
Jun 20, 2006, 11:34:21 AM6/20/06
to
"Rob Thorpe" <robert...@antenova.com> writes:

> But it only guarantees this because the variables themselves have a
> type; the values themselves do not.

I think statements like this are confusing, because there are
different interpretations of what a "value" is. I would say that the
integer '4' is a value, and that it has type Integer (for instance).
This value is different from 4 the Int16, or 4 the double-precision
floating point number. From this viewpoint, all values in statically
typed languages have types, but I think you use 'value' to denote the
representation of a datum in memory, which is a different thing.

Matthias Blume

unread,
Jun 20, 2006, 11:45:42 AM6/20/06
to
Pascal Costanza <p...@p-cos.net> writes:

> - In a dynamically typed language, you can run programs successfully
> that are not acceptable by static type systems.

This statement is false.

For every program that can run successfully to completion there exists
a static type system which accepts that program. Moreover, there is
at least one static type system that accepts all such programs.

What you mean is that for static type systems that are restrictive
enough to be useful in practice there always exist programs which
(after type erasure in an untyped setting, i.e., by switching to a
different language) would run to completion, but which are rejected by
the static type system.

By the way, the parenthetical remark is important: If a language's
definition is based on a static type system, then there are *no*
programs in that language which are rejected by its type checker.
That's trivially so because strings that do not type-check are simply
not considered programs.


> Here is an example in Common Lisp:
>
> ; A class "person" with no superclasses and with the only field "name":
> (defclass person ()
>   (name))
>
> ; A test program:
> (defun test ()
>   (let ((p (make-instance 'person)))
>     (eval (read))
>     (slot-value p 'address)))
>
> (slot-value p 'address) is an attempt to access the field 'address in
> the object p. In many languages, the notation for this is p.address.
>
> Although the class definition for person doesn't mention the field
> address, the call to (eval (read)) allows the user to change the
> definition of the class person and update its existing
> instances. Therefore at runtime, the call to (slot-value p 'address)
> has a chance to succeed.

I am quite comfortable with the thought that this sort of evil would
get rejected by a statically typed language. :-)

Matthias Blume

unread,
Jun 20, 2006, 11:49:23 AM6/20/06
to
David Squire <David....@no.spam.from.here.au> writes:

> Andreas Rossberg wrote:
>> Rob Thorpe wrote:
>>>>
>>>>> No, that isn't what I said. What I said was:
>>>>> "A language is latently typed if a value has a property - called it's
>>>>> type - attached to it, and given it's type it can only represent values
>>>>> defined by a certain class."
>>>>
>>>> "it [= a value] [...] can [...] represent values"?
>>>
>>> ???
>> I just quoted, in condensed form, what you said above: namely, that
>> a value represents values - which I find a strange and circular
>> definition.
>>
>
> But you left out the most significant part: "given it's type it can
> only represent values *defined by a certain class*" (my emphasis). In
> C-ish notation:
>
> unsigned int x;
>
> means that x can only represent elements that are integers elements of
> the set (class) of values [0, MAX_INT]. Negative numbers and
> non-integer numbers are excluded, as are all sorts of other things.

This x is not a value. It is a name of a memory location.

> You over-condensed.

Andreas condensed correctly.

David Squire

unread,
Jun 20, 2006, 11:51:43 AM6/20/06
to

I should have stayed out of this. I had not realised that it had
degenerated to point-scoring off someone typing "value" when it is clear
from context that he meant "variable".

Bye.

DS

Darren New

unread,
Jun 20, 2006, 11:52:46 AM6/20/06
to
Rob Thorpe wrote:
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.

A value can only represent one value, right? Or can a value have
multiple values?

> For example, in lisp:
> "xyz" is a string,

"xyz" cannot represent values from the class of strings. It can only
represent one value.

I think that's what the others are getting at.

>>They all have - the whole purpose of a type system is to ensure that any
>>expression of type T always evaluates to a value of type T.
>
> But it only guarantees this because the variables themselves have a
> type; the values themselves do not.

Sure they do. 23.5e3 is a "real" in Pascal, and there's no variable there.

("hello" % "there") is illegal in most languages, because the modulo
operator doesn't apply to strings. How could this be determined at
compile time if "hello" and "there" don't have types?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

Matthias Blume

unread,
Jun 20, 2006, 11:53:19 AM6/20/06
to
"Rob Thorpe" <robert...@antenova.com> writes:

> Andreas Rossberg wrote:
>> Rob Thorpe wrote:
>> >>
>> >>>No, that isn't what I said. What I said was:
>> >>>"A language is latently typed if a value has a property - called it's
>> >>>type - attached to it, and given it's type it can only represent values
>> >>>defined by a certain class."
>> >>
>> >>"it [= a value] [...] can [...] represent values"?
>> >
>> > ???
>>
>> I just quoted, in condensed form, what you said above: namely, that a
>> value represents values - which I find a strange and circular definition.
>
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.
> For example, in lisp:
> "xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
> fixed-point number.
> Each item that can be stored has a type, no item can be stored without
> one. The types can be tested. Most dynamic typed languages are like
> that.

Your "types" are just predicates.

> Compare this to an untyped language where types cannot generally be
> tested.

You mean there are no predicates in untyped languages?

>> They all have - the whole purpose of a type system is to ensure that any
>> expression of type T always evaluates to a value of type T.
>
> But it only guarantees this because the variables themselves have a
> type; the values themselves do not.

Of course they do.

> Most of the time the fact that the
> variable they are held in has a type implies that the value does too.
> But the value itself has no type: in a C program, for example, I can
> take the value from some variable and recast it in any way I feel, and
> the language cannot correct any errors I make because there is no
> information attached to the value to indicate what type it is.

Casting in C takes values of one type to values of another type.

>> So when you
>> look at type systems formally then you certainly have to assign types to
>> values, otherwise you couldn't prove any useful property about those
>> systems (esp. soundness).
>
> Yes, but indirectly.

No, this is usually done very directly and very explicitly.

Matthias Blume

unread,
Jun 20, 2006, 11:55:02 AM6/20/06
to
David Squire <David....@no.spam.from.here.au> writes:

If he really had meant "variable" then he would have been completely wrong.

> Bye.

Bye.

Andreas Rossberg

unread,
Jun 20, 2006, 11:39:28 AM6/20/06
to
Rob Thorpe wrote:
>Andreas Rossberg wrote:
>>Rob Thorpe wrote:
>>
>>>>>"A language is latently typed if a value has a property - called it's
>>>>>type - attached to it, and given it's type it can only represent values
>>>>>defined by a certain class."
>>>>
>>>>"it [= a value] [...] can [...] represent values"?
>>>
>>>???
>>
>>I just quoted, in condensed form, what you said above: namely, that a
>>value represents values - which I find a strange and circular definition.
>
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.

I'm sorry, but I still think that the definition makes little sense.
Obviously, a value simply *is* a value, it does not "represent" one, or
several ones, regardless how you qualify that statement.

>>>Well I haven't programmed in any statically typed language where values
>>>have types themselves.
>>
>>They all have - the whole purpose of a type system is to ensure that any
>>expression of type T always evaluates to a value of type T.
>
> But it only guarantees this because the variables themselves have a
> type,

No, variables are insignificant in this context. You can consider a
language without variables at all (such languages exist, and they can
even be Turing-complete) and still have evaluation, values, and a
non-trivial type system.

> But the value itself has no type

You mean that the type of the value is not represented at runtime? True,
but that's simply because the type system is static. It's not the same
as saying it has no type.

> in a C program for example I can take
> the value from some variable and recast it in any way I feel and the
> language cannot correct any errors I make because their is no
> information in the variable to indicate what type it is.

Nothing in the C spec precludes an implementation from doing just that.
The problem with C rather is that its semantics is totally
underspecified. In any case, C is about the worst example to use when
discussing type systems. For starters, it is totally unsound - which is
what your example exploits.

- Andreas

Chris F Clark

unread,
Jun 20, 2006, 12:18:37 PM6/20/06
to
Chris F Clark wrote:
> A static
> type system eliminates some set of tags on values by syntactic
> analysis of annotations (types) written with or as part of the program
> and detects some of the disallowed computations (statically) at compile
> time.

Andreas replied:


> Explicit annotations are not a necessary ingredient of a type system,
> nor is "eliminating tags" very relevant to its function.

While this is true, I disagree at some level with the second part. By
eliminating tags, I mean allowing one to perform "type safe"
computations without requiring the values to be tagged. One could
argue that the tags were never there. However, many of the
interesting polymorphic computations require either that the values
be tagged or that some other process assure that at each point one
can determine a priori which variant of the computation applies.

To me, a static type system is one which does this a priori
determination. A dynamic type system does not work a priori, and
instead includes explicit information in the values being computed to
select the correct variant computations.

In that sense, a static type system is eliminating tags, because the
information is pre-computed and not explicitly stored as a part of the
computation. Now, you may not view the tag as being there, but in my
mind, if there exists a way of performing the computation that requires
tags, the tag was there and that tag has been eliminated.

To put it another way, I consider the tags to be axiomatic. Most
computations involve some decision logic that is driven by distinct
values that have previously been computed. The separation of the
values which drive the computation one way versus another is a tag.
That tag can potentially be eliminated by some apriori computation.

In what I do, it is very valuable to move information from being
explicitly represented in the computed result into the tag, so that I
often have distinct "types" (i.e. tags) for an empty list, a list with
one element, a list with two elements, and a longer list. In that
sense, I agree with Chris Smith's assertion that "static typing" is
about asserting general properties of the algorithm/data. These
assertions are important to the way I am manipulating the data. They
are part of my type model, but they may not be part of anyone else's,
and to me, to pass an empty list to a function requiring a list with
two elements is a "type error", because it is something I expect the
type system to detect as incorrect. The fact that no one else may
have that expectation does not seem relevant to me.

Now, to carry this farther, since I expect the type system to validate
that certain values are of certain types and only be used in certain
contexts, I am happy when it does not require certain "tags" to be
actually present in the data. However, because other bits of code are
polymorphic, I do expect certain values to require tags. In the end,
this is still a win for me. I had certain data elements that in the
abstract had to be represented explicitly. I have encoded that
information into the type system and in some cases the type system is
not using any bits in the computed representation to hold that
information. Whenever that happens, I win and that solves one of the
problems that I need solved.

Thus, a type system is a way for me to express certain axioms about my
algorithm. A dynamic type system encodes those facts as part of the
computation. A static type system pre-calculates certain "theorems"
from my axioms and uses those theorems to allow my algorithm to be
computed without all the facts being stored as part of the computation.

Hopefully, this makes my point of view clear.

-Chris

*****************************************************************************
Chris Clark Internet : com...@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------

Rob Thorpe

unread,
Jun 20, 2006, 12:33:59 PM6/20/06
to
Ketil Malde wrote:
> "Rob Thorpe" <robert...@antenova.com> writes:
>
> > But it only gaurantees this because the variables themselves have a
> > type, the values themselves do not.
>
> I think statements like this are confusing, because there are
> different interpretations of what a "value" is. I would say that the
> integer '4' is a value, and that it has type Integer (for instance).
> This value is different from 4 the Int16, or 4 the double-precision
> floating point number. From this viewpoint, all values in statically
> typed languages have types, but I think you use 'value' to denote the
> representation of a datum in memory, which is a different thing.

Well I haven't been consistent so far :)

But I mean the value as the semantics of the program itself sees it.
Which mostly means the datum in memory.
