
Benefits of Dynamic Typing


Marshall Spight

Sep 24, 2005, 3:56:29 PM
Hi all,

I am curious as to what people see as the primary benefits
of dynamic typing. I have read lots of threads in which
the static and dynamic advocates debate, and I find them
largely unsatisfying. In general, it strikes me that the
advocates of one camp do a poor job of characterizing the
costs and benefits of the other camp, which should perhaps
not be surprising. That is why I am more interested in
hearing the benefits of dynamic typing from those who
practice it. I am sure it will be different for
different people.

I would like to hear what the dynamic proponents consider
the strengths of dynamic typing, and perhaps also what
aspects of it don't matter so much. For example, I have
heard people talk of the benefits of being able to execute
code before it is fully elaborated; in a dynamic language,
one never needs to code stub methods just to execute what
one has coded so far. I can see how this would be of value.
IIUC, in a dynamic language, a variable may contain values
of different types over its lifespan; is *this* valuable?
It strikes me that it might be confusing.
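Both points can be made concrete in a short Python sketch (the Report class and its methods are invented purely for illustration): code that refers to a not-yet-written method runs fine until that path is actually reached, and the same variable is free to hold values of different types over its lifetime.

```python
class Report:
    def render(self, full=False):
        text = self.header()
        if full:
            text += self.footer()  # footer() is not defined yet;
                                   # this only fails if the branch runs
        return text

    def header(self):
        return "Quarterly Report"

# The incomplete class is already executable -- no stub method needed:
r = Report()
assert r.render() == "Quarterly Report"

# And one variable may hold values of different types over its lifespan:
x = 42
x = "forty-two"
x = [4, 2]
assert isinstance(x, list)
```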

I am hopeful (but not confident) of not starting a flamewar,
despite the contentiousness of the topic. My experience with
this newsgroup is that it is full of highly educated,
well-mannered folk, with good perspective on language design,
which is why I picked this forum to ask the question. I
hope we can concentrate on statements in the positive form
("I like dynamically-typed language X because I can do Y.")
rather than in the negative form ("I dislike statically-typed
language X because I can't do Y.")

Also, if one has any references to good books or papers on
the topic, I'd be appreciative. I have ten books on my shelf
about various topics in type theory; they are all of them
written from a static-typing perspective. I have read (much of)
CTM, but that's not really what I mean; while the book's
code is in a dynamically typed language, the book isn't *about*
dynamic typing.

Thanks,


Marshall

Jon Harrop

Sep 24, 2005, 10:07:58 PM
Marshall Spight wrote:
> I am curious as to what people see as the primary benefits
> of dynamic typing. I have read lots of threads in which
> the static and dynamic advocates debate, and I find them
> largely unsatisfying. In general, it strikes me that the
> advocates of one camp do a poor job of characterizing the
> costs and benefits of the other camp, which should perhaps
> not be surprising. That is why I am more interested in
> hearing the benefits of dynamic typing from those who
> practice it. I am sure it will be different for
> different people.

I am also very interested to hear people's opinions on this.

My answer isn't exactly what you were asking for. I use both dynamically
typed (Mathematica) and statically typed (OCaml) languages. However, I use
dynamically typed languages when the benefits of their capabilities
outweigh the pain of dynamic typing. I would prefer it if the dynamically
typed languages were more statically typed.

I've recently been playing with MetaOCaml. This is a fascinating project
that allows you to defer computations (like Lisp, but faster, shorter,
statically type checked and more elegant ;-). My latest attempts indicate
that static typing can be a hindrance here. Specifically, I can't write a
staged interpreter without having the generated code box most values.
Perhaps this is an example of a place where we'd be better off without
static typing. I'm not sure though, perhaps the solution is no typing
rather than dynamic typing.

Another possibility is GUI code. This can be very dynamic in nature and is
potentially the remit of dynamic typing. However, despite this being a
pedagogical example of OOP, I've found variants and HOFs to work just fine.

So I'm also still looking for a killer example of the utility of dynamic
typing...

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com

Ben Rudiak-Gould

Sep 25, 2005, 9:07:12 PM
Marshall Spight wrote:
> I am curious as to what people see as the primary benefits
> of dynamic typing.

I rarely use dynamically typed languages, but...

I think that the major advantage of dynamic typing is that it's much easier
to design a good dynamically typed language than a good statically typed
language. When your language is statically typed, and especially when it's
Hindley-Milner, new language features often interact in tricky ways with the
type system, and therefore with each other. It's a difficult theoretical
problem to figure out how to add objects, runtime reflection,
metaprogramming, dynamic linking, on-line code replacement, etc. to a
statically typed language. In a dynamically typed language, you can usually
just add things in the most straightforward way, and it'll work as long as
the programmer follows some straightforward rules. And you don't have to
worry about your extension precluding some other useful extension in the future.
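As a small concrete illustration in Python, a dynamically typed language (the Greeter class here is hypothetical): runtime reflection and on-line code replacement need no type-system support at all, because classes are ordinary mutable values.

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
assert g.greet() == "hello"

# Hot-swap the method on the live class; existing instances see the change:
Greeter.greet = lambda self: "bonjour"
assert g.greet() == "bonjour"

# Runtime reflection is equally direct:
assert hasattr(g, "greet")
```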

> Also, if one has any references to good books or papers on
> the topic, I'd be appreciative. I have ten books on my shelf
> about various topics in type theory; they are all of them
> written from a static-typing perspective.

I don't think there is a type theory of dynamically typed languages. The
term "dynamically typed" is really a misnomer; these languages don't have
types at all, just values. Sure, the + function will check at runtime that
its arguments are numbers, but that's not really the same thing. This kind
of domain check is much closer to / checking at runtime that the divisor is
not zero, or car/head checking that the argument is not the empty list.
People sometimes say that "static typing means no type errors at runtime",
but I would say the same of dynamic typing. Both kinds of languages have
value errors at runtime.
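The parallel is easy to demonstrate in Python (a sketch of my own, not from any post in this thread): the "type error" from +, the domain error from /, and the empty-list error from head-style access all surface the same way, as runtime exceptions on bad values.

```python
def error_name(thunk):
    """Run a zero-argument function and report which exception it raises."""
    try:
        thunk()
        return None
    except Exception as e:
        return type(e).__name__

assert error_name(lambda: 1 + "a") == "TypeError"        # + checks its arguments
assert error_name(lambda: 1 / 0) == "ZeroDivisionError"  # / checks its divisor
assert error_name(lambda: [][0]) == "IndexError"         # head checks for emptiness
```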

There's plenty of work on "value theories" of dynamically typed languages --
for example, Church's original work on the untyped lambda calculus.

-- Ben

Joachim Durchholz

Sep 26, 2005, 6:50:22 AM
Ben Rudiak-Gould schrieb:

> I think that the major advantage of dynamic typing is that it's much
> easier to design a good dynamically typed language than a good
> statically typed language. When your language is statically typed, and
> especially when it's Hindley-Milner, new language features often
> interact in tricky ways with the type system, and therefore with each
> other. It's a difficult theoretical problem to figure out how to add
> objects, runtime reflection, metaprogramming, dynamic linking, on-line
> code replacement, etc. to a statically typed language.

Agreed.

> In a dynamically typed language, you can usually just add things in
> the most straightforward way, and it'll work as long as the
> programmer follows some straightforward rules. And you don't have to
> worry about your extension precluding some other useful extension in
> the future.

Um... I think the trade-off is somewhat different.

A static checker buys you a guarantee that there will be no unexpected
interactions between the various language features.

In a system without a checker, you're basically on your own.
In other words, things will mostly work, but the edge cases may come
back and bite you. Usually they won't (they are edge cases after all)...
though I suspect that they become more of a problem if systems get large
(i.e. more opportunities for an edge case to arise) or with
inexperienced programmers.
You said programmers would have to "follow some straightforward rules",
but I don't think the rules are straightforward - they are fuzzy,
defined by "what works", so you have to try whenever you approach one of
the limits (and you never know whether it will break tomorrow when you
add something else that stretches the limits).

Of course, these things are more important when you are like me - I
*love* stretching limits :-) (and maybe that's the reason why I find
dynamic systems unacceptable - this may also explain why static vs.
dynamic typing is such a controversial issue: if it's a question of
personal workstyle and personality, personal arguments prevail over
technical ones.)

Regards,
Jo

James

Sep 26, 2005, 10:06:11 AM
Ben Rudiak-Gould wrote:

>It's a difficult theoretical problem to figure out how to add objects, runtime
>reflection, metaprogramming, dynamic linking, on-line code replacement, etc.
>to a statically typed language.

I think you've generally hit upon the answer, but I'd rephrase your
"it's much easier to design a good dynamically typed language" as
"dynamic typing is the simpler solution."

So many arguments about functional languages come down to "well, that
would be possible given a sufficiently smart compiler" (see
http://c2.com/cgi/wiki?SufficientlySmartCompiler) or a sufficiently
smart type system or whatnot. Many times the off-the-cuff response is
along the lines of "Oh, you can do that in [fill-in name of defunct
research project language here]." Dynamic typing is a one-shot system
that deals with on-the-fly code reloading, integers that overflow in
all cases (not only when you use a special subtype of integers),
message passing between threads, etc. Yes you *can* do all this with
the proper type system, but it's not like there are many such languages
to choose from. The choice comes down to either "solving" these
problems today or putting off coding until an ideal type system is
developed.

And while I'm being controversial in my first c.l.f post in years:
Nobody--including me--is going to say that static typing is useless,
but I'd say that it tends to distract many great minds from more
important problems. Obviously static typing isn't a primary
attribute of usefulness, otherwise we wouldn't have Python and Perl and
Erlang and Lisp and Lua. So you have people who are spending all their
time doing research into type systems, when working programmers have
found that you get 90% of the way there by writing clean, concise code
and creating a test suite as you go. You still need the test suite in
statically typed languages, of course, because you can't tell if you
passed in the parameters to make_rect(x,y,w,h) in the right order (or
was it make_rect(x1,y1,x2,y2) or was it make_rect(x,w,y,h)?). It's
also easy enough to write a tool to help verify that functions are
being called in consistent ways, like dialyzer for Erlang.
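The make_rect point can be sketched in Python (the function and its corner-based convention are hypothetical): every argument is a number, so no type checker distinguishes the calling conventions; only a test does.

```python
def make_rect(x, y, w, h):
    """Build a rectangle as corner coordinates (x1, y1, x2, y2)."""
    return (x, y, x + w, y + h)

# All four parameters are numbers, so make_rect(0, 0, 10, 5) type-checks
# whether the caller meant (x, y, w, h) or (x1, y1, x2, y2).
# A unit test is what pins down the intended convention:
assert make_rect(0, 0, 10, 5) == (0, 0, 10, 5)
assert make_rect(2, 3, 10, 5) == (2, 3, 12, 8)
```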

Further, I'd say that the correct way to create many applications isn't
to write them in pure OCaml or Erlang or whatever, but to create a
language that encapsulates exactly what you need. Then you can check
whatever properties of your program that you like. You don't have to
rely on the type system of the underlying language. Use a dynamically
typed language for flexibility, but maybe the resulting language has
some statically typed elements.
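One way to read that last suggestion, sketched in Python with an entirely invented toy language: the embedded language carries its own checker, so the properties verified are the ones you care about, not the ones the host's type system happens to support.

```python
def check(expr):
    """Type-check a toy expression language: int literals and ("+", a, b)."""
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, tuple) and len(expr) == 3 and expr[0] == "+":
        if check(expr[1]) == "int" and check(expr[2]) == "int":
            return "int"
        raise TypeError("+ needs int operands")
    raise TypeError("unknown expression: %r" % (expr,))

assert check(("+", 1, ("+", 2, 3))) == "int"  # checkable before anything runs
try:
    check(("+", 1, "two"))                    # rejected by our own checker
except TypeError:
    pass
```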


Aaron Gray

Sep 26, 2005, 3:41:35 PM
> I am curious as to what people see as the primary benefits
> of dynamic typing.

Less physical typing at the keyboard :)

Typically there are no braces or begin...end's to type. The syntax is
generally more simplistic than statically typed languages.

Typically the development cycle is quicker.

...

Aaron


Joachim Durchholz

Sep 26, 2005, 4:03:02 PM
Aaron Gray schrieb:

>> I am curious as to what people see as the primary benefits of
>> dynamic typing.
>
> Less physical typing at the keyboard :)
>
> Typically there are no braces or begin...end's to type.

Lisp: (...)
Perl: {...}

Don't think that that's the real difference - I'd be hard-pressed to
name a language that has no block structure markers. (Well, some *very*
old Basics come to my mind, along with RPG IV and other nightmares...
but I'm pretty sure you didn't mean *these* *ggg*.)

>> The syntax is generally more simplistic than statically typed
> languages.

Agreed with that one - anybody got a clue why that's so?
Technical reasons? Reasons in the language designers' mentalities?

Regards,
Jo

Florian Weimer

Sep 26, 2005, 4:43:39 PM
* Joachim Durchholz:

>> The syntax is generally more simplistic than statically typed
>> languages.
>
> Agreed with that one - anybody got a clue why that's so?

Type declarations are quite complex ("access not null function return
access not null Integer") and a source of all kinds of oddities.

In addition, there seems to be a tendency to avoid ambiguities through
special syntax, for example "." and "->" in C, or, in C++, "<" and ">"
as template argument list delimiters and the "typename" keyword.

Joachim Durchholz

Sep 26, 2005, 5:46:40 PM
Florian Weimer schrieb:

Um, C++ isn't a particularly good example. It goes without saying that
you need additional syntax to express types; I'm more interested in why
statically typed languages tend to have a more complex syntax even in
areas where types aren't written down.
Think an FPL with type inference: you don't write down many types in
these, yet their syntax tends to be somewhat more complex than their
dynamically-typed equivalents. I.e. I'm comparing Haskell with
Lisp/Smalltalk and wondering why Haskell still needs somewhat more
syntactic sugar.

Of course, on the complex side of the spectrum, you find languages of
both kinds (e.g. Perl for dynamic typing, C++ for static typing). In
fact I'm not totally sure that the correlation of "dynamic typing means
less syntax" holds in general (and establishing a correlation of any
kind would be a dubious undertaking anyway).
But I'm still interested in "how lean can we get". (With the obvious
exception that a statically typed language needs a syntax for type
expressions.)

Regards,
Jo

Ulf Wiger

Sep 27, 2005, 6:18:32 AM
Ben Rudiak-Gould <br276d...@cam.ac.uk> writes:

> Marshall Spight wrote:
> > I am curious as to what people see as the primary benefits
> > of dynamic typing.

In part, the same as with type inference.

Add to that the rather widespread notion that it is
often easier to write a program in a dynamically typed
language that is _good enough_ (if not provably correct)
even though it wouldn't necessarily pass a type check.

Not all people agree that this is true, but I think this
is similar to the situation that some proponents of static
typing prefer to declare types up-front, while others
prefer the compiler to infer types automatically.



> I think that the major advantage of dynamic typing is that
> it's much easier to design a good dynamically typed language
> than a good statically typed language. When your language is
> statically typed, and especially when it's Hindley-Milner,
> new language features often interact in tricky ways with the type
> system, and therefore with each other.

I think there might be some truth to this.

In industry, most programmers are infidels, at least as
regards types. If typing helps, fine. If it gets in the
way, then it's out of there. If typing would save you
hundreds of hours down the road for the price of a few
hours up front, well, that's too bad. Industrial projects
don't necessarily work like that. Extra work up-front in
order to possibly gain advantage later doesn't always win,
partly because if you don't win the battle for market share,
there may not be a 'later'.

> It's a difficult theoretical problem to figure out how to add
> objects, runtime reflection, metaprogramming, dynamic linking,
> on-line code replacement, etc. to a statically typed language.
> In a dynamically typed language, you can usually just add things
> in the most straightforward way, and it'll work as long as the
> programmer follows some straightforward rules. And you don't
> have to worry about your extension precluding some other
> useful extension in the future.

Another, perhaps a bit provocative, view on this is that since
it's a more difficult theoretical problem, it is more academically
interesting. Thus, if you're going for a PhD in programming
language theory, obviously statically typed languages offer
more fertile ground for research. Many of the challenges in
commercial product development are, at least academically,
non-problems, in that there are known techniques to deal with
them. The only problem for industry is that these techniques
seem (note the wording) out of reach for practical purposes.


> I don't think there is a type theory of dynamically typed
> languages. The term "dynamically typed" is really a misnomer; these
> languages don't have types at all, just values.

I don't think this describes a dynamically typed language like
Erlang, for example. After all, what's the difference between
the two patterns [1,2,3] and {1,2,3}? The first is a list with
the values 1,2 and 3, and the second is a tuple with the values
1,2 and 3. Isn't it reasonable to say that the main difference
is that of type, not values? The Erlang compiler is in many
cases good enough to detect type discrepancies and refuse
to compile, but this would be in violation of the language
specification, since:

{1,2,3} = [1,2,3]

is perfectly valid in Erlang and _will_ lead to a runtime
exception if evaluated. It's not that this can't be detected
at compile-time; it's just "illegal" to make it a compile
error. In this explicit case, I can't come up with an argument
for why the compiler shouldn't reject the program, except
perhaps consistency. Having said this, the Erlang compiler
_will_ reject a program that uses record notation and refers
to an attribute not present in the record declaration.
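Ulf's point translates directly to Python (just an analogy, not Erlang semantics): a list and a tuple can hold exactly the same values and still be kept apart by the runtime, which is a distinction of type, not of value.

```python
xs = [1, 2, 3]   # list, like Erlang's [1,2,3]
ys = (1, 2, 3)   # tuple, like Erlang's {1,2,3}

assert list(xs) == list(ys)   # the same values inside...
assert xs != ys               # ...but never equal as wholes
assert type(xs) is list and type(ys) is tuple
```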

So basically there are weakly typed, or untyped dynamically
typed languages, and there are strongly typed dynamically
typed languages. Erlang is strongly typed, and IMO slowly
moving towards being optionally statically typed.

Using Dialyzer, you can get quite a lot of type warnings
at compile-time (but only warnings, see above). For example,
Dialyzer will tell you the expression above will crash, and
since it does global analysis with type propagation, it
will of course find much more than that.

At the ICFP Erlang workshop last weekend, there was a
discussion about adding type annotations to Erlang.
This has been suggested before on the Erlang mailing
list, and most people welcome it. One approach might be
that if you have added type annotations, it will be
a compilation error if the type annotation and the
actual code are in conflict. The TypEr tool, also
presented at the workshop, will automatically annotate
the code for you, if you want.

Does that make Erlang statically or dynamically typed?

/Uffe
--
Ulf Wiger, Senior Specialist,
/ / / Architecture & Design of Carrier-Class Software
/ / / Team Leader, Software Characteristics
/ / / Ericsson AB, IMS Gateways

Bruce Stephens

Sep 28, 2005, 6:23:51 AM
Joachim Durchholz <j...@durchholz.org> writes:

[...]

> I.e. I'm comparing Haskell with Lisp/Smalltalk and wondering why
> Haskell still needs somewhat more syntactic sugar.

I doubt that there's a need for more syntax. I imagine one can
construct an AST for any Haskell program, and presumably that AST
could be represented in sexps.
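That observation is easy to make concrete in Python (the mini-AST and its encoding are invented for illustration): any AST nests naturally as s-expressions, here modelled with tuples.

```python
# \x -> x + 1, encoded sexp-style as nested tuples:
ast = ("lambda", "x", ("+", ("var", "x"), ("lit", 1)))

def to_sexp(node):
    """Print a nested-tuple AST in s-expression form."""
    if isinstance(node, tuple):
        return "(" + " ".join(to_sexp(part) for part in node) + ")"
    return str(node)

assert to_sexp(ast) == "(lambda x (+ (var x) (lit 1)))"
```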

That kind of thing suggested (to me, anyway) a possibly interesting
student-type project.

There's been some suggestion that it might be interesting to have a
statically typed lisp (such things exist, of course), in that that
would fit better in CLR, for example (with its other statically typed
languages like C#).

Anyway, a similar idea would be to develop a lisp syntax for (say)
ocaml. OCaml has mostly implicit typing, and its environment has a
fairly rich set of modules and things, and it's not pure, which seems
likely to fit lisp better than a pure semantics would.

(Most of the statically typed lisps have been attempts at statically
typed variants of existing lisps, I think. But maybe that's not the
best way to approach things: maybe it would be better to go at a
target which has lots of good things already, and see if the lisp
approach to syntax brings something extra.)

[...]

Panu Kalliokoski

Sep 28, 2005, 6:43:19 AM
In <87hdc5s...@cenderis.demon.co.uk> Bruce Stephens <bruce+...@cenderis.demon.co.uk> writes:
>There's been some suggestion that it might be interesting to have a
>statically typed lisp (such things exist, of course), in that that
>would fit better in CLR, for example (with its other statically typed
>languages like C#).

One of probably the earliest approaches to this was the design of
pre-scheme, which was used to write an interpreter for Scheme. The idea
was (IIUC) to write the interpreter of Scheme in an easy-to-compile
language that is a strict subset of Scheme. This way, the interpreter
can run itself, while compiled code remains reasonably easy to produce.

Panu
--
personal contact: ate...@iki.fi, +35841 5323835, +3589 85619369
work contact: panu.kal...@helsinki.fi, +35850 3678003
kotisivu (henkkoht): http://www.iki.fi/atehwa/
homepage (technical): http://sange.fi/~atehwa/

Jon Harrop

Sep 28, 2005, 12:40:23 PM
Joachim Durchholz wrote:
> Aaron Gray schrieb:

>> The syntax is generally more simplistic than statically typed
>> languages.
>
> Agreed with that one - anybody got a clue why that's so?

I don't think it is true. Many dynamically typed languages (e.g. Python,
Perl, Tcl, Ruby, PHP, BASIC, Kogut) have syntaxes that are at least as
complex as simple MLs.

Lisp and Mathematica are dynamically typed. Both provide "code as data",
which leads to a simple underlying syntax (an "s-expr" in Lisp or a
"FullForm expr" in Mathematica). However, even here, Mathematica goes to
great lengths to parse, pretty print and typeset its exprs so most users
are never exposed to this internal representation.

So I think the assertion is only true if it is changed to "Lisp's syntax is
generally more simplistic than statically typed languages". Then the answer
is "Lisp was designed to have a simple syntax".

> Technical reasons? Reasons in the language designers' mentalities?

In Lisp's case, I think it is nothing more than religion. Mathematica has
shown that the same functionality can be provided without having to force
users to read and write raw ASTs.

Joe Hendrix

Sep 28, 2005, 3:09:02 PM
Joachim Durchholz wrote:
>> The syntax is generally more simplistic than statically typed
>> languages.
>
> Agreed with that one - anybody got a clue why that's so?
> Technical reasons? Reasons in the language designers' mentalities?

The fundamental reason is that a statically typed language needs
syntax for describing types. Type inference can alleviate some of
this, but not all.

The fundamentals however are obscured by individual preferences with
regard to syntax. For example, one could undoubtedly come up with
some type of many-sorted version of Lisp that would have types, but
whose syntax would be much simpler than that of almost every
dynamically typed language. I don't necessarily think that is a good
idea, but it is
possible.

Bruce Stephens

Sep 28, 2005, 5:06:43 PM
Panu Kalliokoski <ate...@sange.fi> writes:

> In <87hdc5s...@cenderis.demon.co.uk> Bruce Stephens <bruce+...@cenderis.demon.co.uk> writes:
>>There's been some suggestion that it might be interesting to have a
>>statically typed lisp (such things exist, of course), in that that
>>would fit better in CLR, for example (with its other statically typed
>>languages like C#).
>
> One of probably the earliest approaches to this was the design of
> pre-scheme, which was used to write an interpreter for Scheme. The
> idea was (IIUC) to write the interpreter of Scheme in an
> easy-to-compile language that is a strict subset of Scheme. This
> way, the interpreter can run itself even if the compiled code is
> reasonably easy to produce.

Yeah, I guess. I was more thinking of languages intended for general
purpose use (rather than for bootstrapping the real language). I'm
fairly sure I've seen statically typed lisp variants aimed at C/C++
and JVM recently, for example, although I can't seem to find them now.
(Not that they're recent; it's just that I'd never seen them before.)

Not that that matters; the problem I felt with them is that they were
trying to fit cleanly into an impoverished environment (so in order to
fit easily, they chose to give up closures, for example).

It seemed to me that if you chose something like OCaml, which (I
think, anyway) has closures and neat stuff like that, then you ought
to end up with something that's nice and high-level, and which is
probably as close to a lisp as is possible, but which has static
typing.

Maybe that turns out not to be interesting, though. Maybe the macros
that lisp's syntax makes convenient aren't so useful in a strongly
typed language (or maybe they're not useful in some strongly typed
languages that people have tried them in).

And maybe the syntactic simplicity doesn't actually help in learning
or remembering, because the things that the syntax expresses have to
be there somewhere, and it's just somewhere else.

George Neuner

Sep 29, 2005, 5:44:38 AM
On Wed, 28 Sep 2005 17:40:23 +0100, Jon Harrop <use...@jdh30.plus.com>
wrote:

>Joachim Durchholz wrote:
>> Aaron Gray schrieb:
>>> The syntax is generally more simplistic than statically typed
>>> languages.
>>

>In Lisp's case, I think it is nothing more than religion.

More an accident of history.

From the beginning, McCarthy intended Lisp to have an infix syntax (I
think he called it M-exprs) and to use S-exprs only as an internal
representation. The infix syntax never materialized because people
realized that writing code directly in the AST form had advantages -
such as you could easily write code that writes code - important in AI
which was Lisp's original target area.

Remember also that Lisp was designed at a time when a computer with
32K words of core memory was BIG. Lisp was interactive and the extra
cost of having an integrated infix reader on top of the S-expr reader
(which was still needed) was judged too much. Eliminating it made
room for other more meaningful features.

After a while everyone became used to writing in the AST form. By the
time Lisp was ported to more capable machines, most everyone had
forgotten about the original plan for infix syntax. Anyone who really
wanted a different programming syntax could implement it using macros.

George
--
for email reply remove "/" from address

David Hopwood

Sep 29, 2005, 11:24:06 AM
George Neuner wrote:
> Jon Harrop <use...@jdh30.plus.com> wrote:
>>Joachim Durchholz wrote:
>>>Aaron Gray schrieb:
>>>
>>>>The syntax is generally more simplistic than statically typed
>>>>languages.
>
>>In Lisp's case, I think it is nothing more than religion.
>
> More an accident of history.

It is an accident of history that has become a religion (maybe that's
true of religions in general ;-)

For true believers of this religion, the point that there are ways
to do metaprogramming just as easily without having the source code
correspond as literally to an AST as s-expressions do, seems to fall
on deaf ears.

> Anyone who really wanted a different programming syntax could implement
> it using macros.

This is another of the religion's peculiar dogmas. The fact is that
people who are turned off by s-expression syntax don't spend their time
writing macro packages or parsers to translate other syntaxes into Lisp;
they just don't use Lisp. This is probably their loss, but it's still
true.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Marcin 'Qrczak' Kowalczyk

Sep 29, 2005, 3:16:15 PM
"James" <james...@gmail.com> writes:

>>It's a difficult theoretical problem to figure out how to add
>>objects, runtime reflection, metaprogramming, dynamic linking,
>>on-line code replacement, etc. to a statically typed language.
>
> I think you've generally hit upon the answer, but I'd rephrase your
> "it's much easier to design a good dynamically typed language" as
> "dynamic typing is the simpler solution."

I agree, this is what I would answer too. Dynamic typing allows
a simpler language with the same expressivity.

For example ML has structures (modules) and functors as a separate
concept from records and functions only because of static typing.
They need to trade some properties for others to keep typing decidable.
They may contain public components of a type which is different in
different structures matching the same signature, but they are not
first-class.

* * *

There are two major families of static type systems:

1. Typical for functional languages (Haskell, Clean, SML, OCaml,
Mercury): type inference for global and local variables (but not
for object fields), no subtyping (or only limited subtyping with
explicit subsumption as in OCaml), easy parametric polymorphism,
no runtime type information.

2. Typical for OO languages (Java, C#, Eiffel): subtyping (with
subtype relationships explicitly declared at the point of the
definition of the subtype), explicitly specified type for most
variables, runtime type information, downcasting (needs RTTI),
parametric polymorphism added late.

(There is also a third family of poorly expressive type systems,
as in Pascal, C and C++, which I'm not considering here.)

The "functional" family emphasizes static guarantees provided by the
type system, and it grows features which allow one to accurately describe
more advanced types. I'm seeing that in Haskell and OCaml.

The "OO" family resorts to dynamic typing (RTTI and downcasting) when
it's not able to express some types statically (e.g. for dispatching
exceptions, and for the second parameter of binary methods).

While proponents of functional programming claim that type systems
which allow one to describe more scenarios without resorting to runtime
checking are superior, the "OO" family has an advantage in being able
to emulate dynamic typing when needed.

Consider the issue of exceptions. It's no problem for dynamically
typed languages nor for OO languages to check the type of a given
exception object at runtime (I'm ignoring the issue of checked
exceptions in Java). SML and OCaml have a separate language feature
for introducing new kinds of exceptions, which is mostly adequate,
although it doesn't allow one to conveniently group exception types
in a hierarchy. But Haskell doesn't provide a good way of having an
extensible exception type. It has a regular algebraic type for builtin
I/O errors, and Glasgow Haskell allows one to use a kludge of the Dynamic
type for exceptions (again without a hierarchy).
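The contrast with the "OO" family is easy to demonstrate in Python (the exception classes are hypothetical): a runtime-dispatched exception hierarchy is open for extension, with no change to the core handling logic.

```python
class AppError(Exception):
    pass

class NetworkError(AppError):  # added later, elsewhere -- the
    pass                       # hierarchy is extensible after the fact

def policy(exc):
    """Pick a recovery strategy by the exception's runtime type."""
    try:
        raise exc
    except NetworkError:
        return "retry"
    except AppError:
        return "abort"

assert policy(NetworkError()) == "retry"
assert policy(AppError()) == "abort"
```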

The issue which pushed me towards dynamic typing was making an
extensible abstract syntax tree. Again it's easy with dynamic typing
and with OO-style static typing. I had to do that in Haskell, and
since the AST types to be extended were a regular algebraic type,
I had to copy & paste them for extending. It seems there is no good
mechanism in Haskell to make such type extensible. Haskell and OCaml
have the most impressive type systems I've seen, and it's a pity that
they both failed this case.
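The extensible-AST case can be sketched in Python with invented node classes: because dispatch happens at runtime, a new node kind is an ordinary addition rather than a copy-and-paste of a closed algebraic type.

```python
class Num:
    def __init__(self, n):
        self.n = n
    def eval(self):
        return self.n

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self):
        return self.a.eval() + self.b.eval()

# Extension written later, in another module, with no edits above:
class Neg:
    def __init__(self, e):
        self.e = e
    def eval(self):
        return -self.e.eval()

assert Add(Num(2), Neg(Num(3))).eval() == -1
```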

I already knew about limitations of OO-style static typing, e.g. that
it doesn't provide an equivalent of Haskell / Clean / Mercury classes
(which, unlike OO-style interfaces, can have instances defined outside
the definition of the corresponding type). So dynamic typing was the
natural choice.

I appreciate and miss advantages of static typing, especially pointing
out many programmer errors at compile time. But I couldn't imagine a
sufficiently powerful static type system.

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Joachim Durchholz

Sep 29, 2005, 4:52:36 PM
Marcin 'Qrczak' Kowalczyk schrieb:

> While proponents of functional programming claim that type systems
> which allow one to describe more scenarios without resorting to runtime
> checking are superior, the "OO" family has an advantage in being able
> to emulate dynamic typing when needed.

Agreed.

> Consider the issue of exceptions. It's no problem for dynamically
> typed languages nor for OO languages to check the type of a given
> exception object at runtime (I'm ignoring the issue of checked
> exceptions in Java).

Hmm... from the an-exception-is-abortion perspective, you don't need to
map different kinds of errors to different exception types. If all that
an exception does is to abort a computation, the program doesn't care
much about data that the failed computation may hold inside. (The
/programmer/ would care, but the exception objects can hold that data in
string form, so there's no need of a multitude of types to get that to
work.)

At least that's my experience with exceptions-are-abortions regimes.
I found them to work quite well indeed, and usually better than
exceptions-are-shortcircuited-computations.
YMMV :-)

> SML and OCaml have a separate language feature for introducing new
> kinds of exceptions, which is mostly adequate, although it doesn't
> allow to conveniently group exception types in a hierarchy.

From the perspective shown above, this is a non-issue. If an exception
is a failed computation, the code that catches the exception isn't very
interested in the kind of failure. It needs a lot of information in
string form so that it can log a meaningful error message, but the exact
cause of the failure isn't much of the caller's concern.
In fact, the only strategies that it has are:
* Retry (if real-world objects with temporary glitches are involved)
* Try a different processing strategy, i.e.
* try a different algorithm/configuration/whatever
* degrade gracefully (if its own contract allows that!)
* Pass the buck up by aborting with an exception


IOW I think the example of exception types isn't a particularly lucky one.
I can't tell about the other examples you're bringing up :-)

Regards,
Jo

Vincenzo Ciancia
Sep 29, 2005, 11:09:11 PM
Marcin 'Qrczak' Kowalczyk wrote:

> The issue which pushed me towards dynamic typing was making an
> extensible abstract syntax tree.

> [cut]

> Haskell and OCaml
> have the most impressive type systems I've seen, and it's a pity that
> they both failed this case.

Maybe I am not following you, but I think that polymorphic variants should
be the right thing for an extensible abstract syntax tree.

V.

--
Please note that I do not read the e-mail address used in the from field but
I read vincenzo_ml at yahoo dot it

Jon Harrop
Sep 29, 2005, 5:42:48 PM
David Hopwood wrote:
> This is another of the religion's peculiar dogmas. The fact is that
> people who are turned off by s-expression syntax don't spend their time
> writing macro packages or parsers to translate other syntaxes into Lisp;
> they just don't use Lisp. This is probably their loss, but it's still
> true.

Apart from the "their loss", I agree. ;-)

Paul Rubin
Sep 29, 2005, 5:55:03 PM
David Hopwood <david.nosp...@blueyonder.co.uk> writes:
> This is another of the religion's peculiar dogmas. The fact is that
> people who are turned off by s-expression syntax don't spend their time
> writing macro packages or parsers to translate other syntaxes into Lisp;
> they just don't use Lisp. This is probably their loss, but it's still true.

Dylan?

Jon Harrop
Sep 29, 2005, 5:57:14 PM
Marcin 'Qrczak' Kowalczyk wrote:
> Dynamic typing allows a simpler language with the same expressivity.

I know this isn't what you mean but you cannot express static checks in a
dynamically typed language. I find that to be very useful in practice.

> SML and OCaml have a separate language feature
> for introducing new kinds of exceptions, which is mostly adequate,
> although it doesn't allow to conveniently group exception types
> in a hierarchy.

Modules act as hierarchical namespaces for exceptions, at least.
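A quick sketch of that point (module names invented for illustration): two modules can each define their own Error, and a handler selects them by qualified name.

```ocaml
module Parser = struct
  exception Error of string
end

module Lexer = struct
  exception Error of string
end

(* The module path disambiguates the two Error exceptions. *)
let run (f : unit -> string) : string =
  try f () with
  | Parser.Error msg -> "parser: " ^ msg
  | Lexer.Error msg -> "lexer: " ^ msg
```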

> The issue which pushed me towards dynamic typing was making an
> extensible abstract syntax tree. Again it's easy with dynamic typing
> and with OO-style static typing. I had to do that in Haskell, and
> since the AST types to be extended were a regular algebraic type,
> I had to copy & paste them for extending. It seems there is no good
> mechanism in Haskell to make such type extensible. Haskell and OCaml
> have the most impressive type systems I've seen, and it's a pity that
> they both failed this case.

Have you read Jacques Garrigue's paper on code reuse through polymorphic
variants? He demonstrates an extensible AST in OCaml.

> I appreciate and miss advantages of static typing, especially pointing
> out many programmer errors at compile time. But I couldn't imagine a
> sufficiently powerful static type system.

It seems odd to ditch static typing when dynamic typing is just a special
case. Did you consider providing an alternate syntax for dynamic typing
within a statically typed language?

Joe Hendrix
Sep 29, 2005, 6:34:44 PM
Marcin 'Qrczak' Kowalczyk wrote:
> There are two major families of static type systems:
>
> 1. Typical for functional languages (Haskell, Clean, SML, OCaml,
> Mercury): type inference for global and local variables (but not
> for object fields), no subtyping (or only limited subtyping with
> explicit subsumption as in OCaml), easy parametric polymorphism,
> no runtime type information.
>
> 2. Typical for OO languages (Java, C#, Eiffel): subtyping (with
> subtype relationships explicitly declared at the point of the
> definition of the subtype), explicitly specified type for most
> variables, runtime type information, downcasting (needs RTTI),
> parametric polymorphism added late.

In terms of common "major" type systems this is correct. Subtypes are
used extensively in "functional" programming languages based on term
rewriting, such as OBJ and Maude. Maude also has metaprogramming, though
it is type-checked at runtime.

There are also higher-order type systems that show up in proof
assistants like Coq. I don't know of any major languages that use them.

> (There is also a third family of poorly expressive type systems,
> as in Pascal, C and C++, which I'm not considering here.)

Lumping C++ in with Pascal and C seems quite wrong. I suspect that you
have not used the C++ STL extensively much less done any template meta
programming. C++ has a significantly more expressive type system than
Java or C#. C++ templates can avoid the problems with multimethods that
OO type systems face, and they are quite expressive. Through meta-programming
it is possible to make type lists and functions that operate on types.
The consequence is that the type system is not decidable, but I haven't
found that to be much of a problem in practice so far. The horrible
syntax one needs to use seems to be a bigger issue.

I don't personally have significant enough experience to compare the
type system in C++ with those of Haskell or ML.

Neelakantan Krishnaswami
Sep 29, 2005, 7:29:38 PM
In article <871x37u...@qrnik.zagroda>, Marcin 'Qrczak' Kowalczyk wrote:
>
> The issue which pushed me towards dynamic typing was making an
> extensible abstract syntax tree. Again it's easy with dynamic typing
> and with OO-style static typing. I had to do that in Haskell, and
> since the AST types to be extended were a regular algebraic type,
>
> I had to copy & paste them for extending. It seems there is no good
> mechanism in Haskell to make such type extensible. Haskell and OCaml
> have the most impressive type systems I've seen, and it's a pity that
> they both failed this case.

To do this, you need to leave the recursive knot untied. For example,
if you want to do unification, for example, you may want an extra case
for unification variables.

First, define a functor to tie the knot explicitly.

module Fix(F : sig type 'a t end) =
struct
  type t = [ `Fix of t F.t ]
end

Then you can do something like:

module Expr =
struct
  type 'a t = [ `Node of string * 'a list ]
end

module Extend =
struct
  type 'a t = [ 'a Expr.t
              | `Evar of string ]
end

module Base = Fix(Expr)
module Unif = Fix(Extend)

Now every term of type Base.t will also be an element of Unif.t.
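A possible usage sketch (repeating the definitions so the example stands alone; note that the subsumption from Base.t to Unif.t is via an explicit coercion, as OCaml requires):

```ocaml
module Fix(F : sig type 'a t end) =
struct
  type t = [ `Fix of t F.t ]
end

module Expr =
struct
  type 'a t = [ `Node of string * 'a list ]
end

module Extend =
struct
  type 'a t = [ 'a Expr.t | `Evar of string ]
end

module Base = Fix(Expr)
module Unif = Fix(Extend)

(* A term of the base language... *)
let sum : Base.t =
  `Fix (`Node ("+", [`Fix (`Node ("1", [])); `Fix (`Node ("2", []))]))

(* ...reused at the extended type, alongside a unification variable. *)
let goal : Unif.t = (sum :> Unif.t)
let with_var : Unif.t = `Fix (`Node ("+", [goal; `Fix (`Evar "x")]))
```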

--
Neel Krishnaswami
ne...@cs.cmu.edu

Marcin 'Qrczak' Kowalczyk
Sep 29, 2005, 8:00:26 PM
Joachim Durchholz <j...@durchholz.org> writes:

> From the perspective shown above, this is a non-issue. If an
> exception is a failed computation, the code that catches the
> exception isn't very interested in the kind of failure. It needs a
> lot of information in string form so that it can log a meaningful
> error message, but the exact cause of the failure isn't much of the
> caller's concern.

Most languages support distinguishing the kind of failures, which is
used for other things besides printing it to the user and exiting,
and I think it's useful enough to be worth supporting.

I/O errors are usually handled differently than program bugs, but the
mechanism of exceptions is suitable for both.

There is an analogous issue of asynchronous signals (Unix signals,
memory overflow, internal timeouts, asynchronous communication between
threads). Their kinds must be distinguished if they are supported, and
the set of kinds of signals is open. They also participate in the set of
exceptions because many of them are turned into exceptions by default.

Exceptions are suitable for expressing non-local exits.

Python used to have string-only exceptions. It was later changed to
support arbitrary objects, and their dynamic type is used to distinguish
the reasons.


Jon Harrop <use...@jdh30.plus.com> writes:

>> SML and OCaml have a separate language feature for introducing new
>> kinds of exceptions, which is mostly adequate, although it doesn't
>> allow to conveniently group exception types in a hierarchy.
>

> Modules act as hierarchical namespaces for exceptions, at least.

I meant grouping for the purpose of catching any exception from the
group.

> Have you read Jacques Garrigue's paper on code reuse through
> polymorphic variants? He demonstrates an extensible AST in OCaml.

Ok, maybe it would work. I more or less know how they work in theory,
although I haven't used them in practice.

> It seems odd to ditch static typing when dynamic typing is just a
> special case.

I don't understand. There are two families of dynamic typing:
1. the set of types is closed (Scheme R5RS, Erlang, Prolog, Logo)
2. the set of types is open (most dynamically typed languages)
and the two families of statically typed languages I mentioned are
distinguished by an analogous property: whether the set of "cases"
or "subtypes" of a type is closed or open.

Emulation becomes hard when a statically typed language with closed
sets of cases (the "functional" family) wants to embed a dynamically
typed language with open set of types. It's easy only in case of types
defined in the language itself, and for a fixed number of primitive
types. It breaks when extending the embedded language with types
defined in various libraries of the host language.

I did it once in OCaml by using a custom RTTI based on Obj.magic. This
is cheating because it's unsafe. There might be other, safer but still
ugly workarounds:
- using the exn type
- registering pairs of functions which update a global reference
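For reference, the second workaround (a "universal type" built from pairs of inject/project functions sharing a hidden reference) can be written without Obj.magic. This is only a sketch of the idiom, with invented names, and it is not thread-safe:

```ocaml
module Univ : sig
  type t
  (* Each call creates a fresh embedding of some type 'a into t. *)
  val embed : unit -> ('a -> t) * (t -> 'a option)
end = struct
  type t = unit -> unit
  let embed () =
    let r = ref None in
    let inject x = fun () -> r := Some x in
    let project v = r := None; v (); !r in
    (inject, project)
end
```

Each embedding owns a private reference: inject produces a thunk that stores its value there, and project runs the thunk and checks whether its own reference was the one written.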

> Did you consider providing an alternate syntax for dynamic typing
> within a statically typed language?

It can't be fixed on the syntactic level if the semantics doesn't
include the possibility of grouping an open set of types in a given
supertype.


Joe Hendrix <jhen...@uiuc.edu> writes:

> Lumping C++ in with Pascal and C seems quite wrong.

Ok, but it's a different dimension which is orthogonal to embedding
dynamic typing in a formally statically typed program.

By the way, I would say that C++ templates use a compile-time dynamic
type system, and perhaps this makes them powerful. Yes, it's all
ultimately checked before the program is run, but interfaces of
templates themselves aren't described by the language. Templates
generally fail with horrible error messages coming from their
internals if they are applied to a type with wrong properties,
just like when a function in a dynamically typed language is misused.
This is dynamic typing lifted to the metalevel.

OTOH Haskell, C# and Java have features with applications which
overlap with C++ templates, but they are fully statically typed,
i.e. interfaces of abstractions they introduce are described more
abstractly than by requirements deduced from their implementation.

It seems that there is a larger set of problems that templates are
suitable for but these features aren't, than the converse.

David Hopwood
Sep 29, 2005, 8:14:59 PM

Dylan is the exception that proves the rule.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Jon Harrop
Sep 29, 2005, 10:01:22 PM
Marcin 'Qrczak' Kowalczyk wrote:
> Jon Harrop <use...@jdh30.plus.com> writes:
>>> SML and OCaml have a separate language feature for introducing new
>>> kinds of exceptions, which is mostly adequate, although it doesn't
>>> allow to conveniently group exception types in a hierarchy.
>>
>> Modules act as hierarchical namespaces for exceptions, at least.
>
> I meant grouping for the purpose of catching any exception from the
> group.

Ah yes. Would it not be easy to implement catching any exception defined
within a module as an extension to ML?

>> Have you read Jacques Garrigue's paper on code reuse through
>> polymorphic variants? He demonstrates an extensible AST in OCaml.
>
> Ok, maybe it would work. I more or less know how they work in theory,
> although I haven't used them in practice.

Yes, I've played with them because I'm interested in extensible scene trees
for graphics. Basically the same problem.

>> It seems odd to ditch static typing when dynamic typing is just a
>> special case.
>
> I don't understand. There are two families of dynamic typing:
> 1. the set of types is closed (Scheme R5RS, Erlang, Prolog, Logo)
> 2. the set of types is open (most dynamically typed languages)
> and the two families of statically typed languages I mentioned are
> distinguished by an analogous property: whether the set of "cases"
> or "subtypes" of a type is closed or open.

Are polymorphic variants closed or open?

> Emulation becomes hard when a statically typed language with closed
> sets of cases (the "functional" family) wants to embed a dynamically
> typed language with open set of types. It's easy only in case of types
> defined in the language itself, and for a fixed number of primitive
> types. It breaks when extending the embedded language with types
> defined in various libraries of the host language.

Are you saying that the representation of values and types in a Lisp
interpreter written in ML (for example) is "hard"?

> I did it once in OCaml by using a custom RTTI based on Obj.magic. This
> is cheating because it's unsafe. There might be other, safer but still
> ugly workarounds:
> - using the exn type
> - registering pairs of functions which update a global reference

Yes, that's pretty horrific. :-)

>> Did you consider providing an alternate syntax for dynamic typing
>> within a statically typed language?
>
> It can't be fixed on the syntactic level if the semantics doesn't
> include the possibility of grouping an open set of types in a given
> supertype.

I think it can be fixed on the syntactic level as long as you're willing
to go most of the way to writing the dynamic typing bits of an
interpreter. That's a lot of extra code, it's ugly, and the dynamic type
checks are slow, but how much can be factored out into a neat little
syntax?

Jon Harrop
Sep 29, 2005, 10:07:22 PM
Joe Hendrix wrote:
> There are also higher order type systems that show up in proof
> assistants like Coq. I don't know of any major languages that use them.

Any ideas why they aren't used in ordinary languages? I have found that I
need to be quite familiar with the type system in order to make good use of
ML. Perhaps their type systems are just too complicated for Jo-programmer.

> I don't personally have significant enough experience to compare the
> type system in C++ with those of Haskell or ML.

IMHO, practical issues make C++'s type system unusable, as you said. In
addition to the hideous syntax there are appallingly slow compile times.
When I last used C++ I was seeing compile times of 1-24hrs for <10kLOC
projects.

Panu Kalliokoski
Sep 30, 2005, 12:26:31 AM
The AST and exception examples bring up very good points about the
differences of static and dynamic typing, but I'm still wondering about
this:

In <871x37u...@qrnik.zagroda> Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> writes:
>1. Typical for functional languages (Haskell, Clean, SML, OCaml,
> Mercury): type inference for global and local variables (but not
> for object fields), no subtyping (or only limited subtyping with
> explicit subsumption as in OCaml), easy parametric polymorphism,
> no runtime type information.

I fail to see how OCaml's subtyping is "limited", and I've always deemed
OCaml the language where type subsumption (of classes) is _implicit_,
based on the subsumption relation of their type signatures.

OCaml has these kinds of "must-support-at-least" and
"does-support-at-most" type signatures in at least polymorphic variants
and objects. Moreover, the exn type is extensible. (I think OCaml
would still benefit from a "dynamic" type.)

What OCaml lacks is RTTI. But as you admit, languages with RTTI are not
really statically typed at all, so they hardly qualify as a category of
statically typed languages. OCaml's FAQs urge you to use exceptions if
you need RTTI.

Joachim Durchholz
Sep 30, 2005, 4:10:49 AM
Marcin 'Qrczak' Kowalczyk wrote:

> Joachim Durchholz <j...@durchholz.org> writes:
>
>
>>From the perspective shown above, this is a non-issue. If an
>>exception is a failed computation, the code that catches the
>>exception isn't very interested in the kind of failure. It needs a
>>lot of information in string form so that it can log a meaningful
>>error message, but the exact cause of the failure isn't much of the
>>caller's concern.
>
> Most languages support distinguishing the kind of failures, which is
> used for other things besides printing it to the user and exiting,
> and I think it's useful enough to be worth supporting.

I disagree about that.

> I/O errors are usually handled differently than program bugs, but the
> mechanism of exceptions is suitable for both.

It's suitable, but is it necessary?

Regarding I/O errors, the system may assume that I/O errors "just don't
happen" (as is reasonable with today's harddisks), so the functions'
interfaces might not provide for failures. I.e. if you read the first 20
bytes of a file, the caller is supposed to assume that those 20 bytes
are accessible. If the low-level function cannot read the data, this is
a failure (actually not a bug, but handled just like one): it cannot
fulfill its contract and must throw an exception. If the caller checks
for the exception, it can react to read errors (e.g. by retrying); if it
doesn't, the exception will propagate up.
Alternatively, the software doesn't guarantee success. The contract of a
Read function now says "either here are your bytes, or we have an error
and here is the error data".

> There is an analogous issue of asynchronous signals (Unix signals,
> memory overflow, internal timeouts, asynchronous communication between
> threads). Their kinds must be distinguished if they are supported, the
> set of kinds of signals is open. They also participate in the set of
> exceptions because many of them are turned into exceptions by default.

I wouldn't use Unix signals as

> Exceptions are suitable for expressing non-local exits.

Yes, but non-local exits complicate issues. They closely couple
low-level and high-level activities.

One example: I'm dividing foo/bar and get a DivideByZero exception. The
knee-jerk reaction might be to assume that bar is zero and react
accordingly, but if foo is a function, the exception might have
originated there.

Of course, I could run each function separately and assign its results
to intermediates. But that deprives me of the possibility to rearrange
code at will.
Even if I were willing to take that trade-off, I'd be unable to properly
handle that DivideByZero exception. It might be a problem in foo with
known (or inferrable/diagnosable) causes, or it might be a problem
somewhere deep down the call stack. We could do some hefty analysis of
the call stack info that should be part of the exception, but then the
top-level function has to "know" a lot about the infrastructure that
it's calling, creating the above-mentioned tight coupling between high
and low software levels.

Isn't FPL programming about making dependencies explicit? Having the
low-level routines return sum types that either contain the desired
result or error information is just the right vehicle for this kind of
explicitness.
It even gives the caller the choice whether to handle an unusual status
as a bug or as something that needs to be handled. Handling as a bug
means: matching just for the normal variant; the language will throw an
exception. Handling it explicitly means matching for the normal case as
well as any other cases. We might even go a middle ground: for a file
I/O error, the code might choose to match on "medium removed" (this
could be a common occurrence with NFS files or removable media such as
CDs or USB sticks), but ignore "read error" or "end-of-file" (the first
being an external problem, the second probably a bug - but we don't
care, it's a failure outside the contract of the function, and all we
want is a useful error message).
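That middle ground can be sketched with an ordinary sum type (constructor names invented for illustration, a sketch rather than a prescription):

```ocaml
type read_error = Medium_removed | Read_error | Eof

type 'a read_result =
  | Got of 'a
  | Failed of read_error

(* Handle "medium removed" explicitly; treat any other failure as
   outside this function's contract. *)
let rec read_chunk (read : unit -> string read_result) : string =
  match read () with
  | Got bytes -> bytes
  | Failed Medium_removed -> read_chunk read          (* e.g. retry *)
  | Failed _ -> failwith "read failed"                (* bug/abort *)
```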

> Python used to have string-only exceptions. It was later changed to
> support arbitrary objects, and their dynamic type is used to distinguish
> the reasons.

As I said: using dynamic types is just convenient in OO languages. You
could equally well use nested sum types.

Regards,
Jo

Marcin 'Qrczak' Kowalczyk
Sep 30, 2005, 4:29:15 AM
Jon Harrop <use...@jdh30.plus.com> writes:

>> I meant grouping for the purpose of catching any exception from the
>> group.
>
> Ah yes. Would it not be easy to implement catching any exception
> defined within a module as an extension to ML?

Even if it was, it would not be sufficient, because it would not allow
extending preexisting groups. For example you couldn't add more "logic
error" exceptions from outside the standard library.

>> I don't understand. There are two families of dynamic typing:
>> 1. the set of types is closed (Scheme R5RS, Erlang, Prolog, Logo)
>> 2. the set of types is open (most dynamically typed languages)
>> and the two families of statically typed languages I mentioned are
>> distinguished by an analogous property: whether the set of "cases"
>> or "subtypes" of a type is closed or open.
>
> Are polymorphic variants closed or open?

Closed. You can't emulate exn using polymorphic variants because
adding a new variant changes the type ("supertype").

Well, they are more open than regular algebraic types, because
extending them merges the new variants with old variants, and if
the code using the old variants was kept polymorphic, then it can
be reused. Perhaps it's worth distinguishing 3 levels of openness
in the statically typed case.
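The reuse that this middle level of openness buys can be sketched in the style of Garrigue's paper (illustrative code, not from the thread):

```ocaml
(* Base cases, parameterized over the type of subterms. *)
type 'a expr0 = [ `Int of int | `Add of 'a * 'a ]

(* An evaluator over the base cases only, polymorphic in the recursion. *)
let eval0 (eval_rec : 'a -> int) : 'a expr0 -> int = function
  | `Int n -> n
  | `Add (a, b) -> eval_rec a + eval_rec b

(* Tie the knot for the closed base language... *)
type expr = expr expr0
let rec eval (e : expr) : int = eval0 eval e

(* ...and again for an extension, reusing eval0 unchanged. *)
type 'a ext0 = [ 'a expr0 | `Neg of 'a ]
type ext = ext ext0
let rec eval_ext (e : ext) : int =
  match e with
  | `Neg a -> - (eval_ext a)
  | #expr0 as e -> eval0 eval_ext e
```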

> Are you saying that the representation of values and types in a Lisp
> interpreter written in ML (for example) is "hard"?

Yes, when it includes the possibility to wrap new ML modules to make
their functionality available to the embedded Lisp without changing
the core of the interpreter (the representation of runtime values).

Marcin 'Qrczak' Kowalczyk
Sep 30, 2005, 4:43:37 AM
Panu Kalliokoski <ate...@sange.fi> writes:

> I fail to see how OCaml's subtyping is "limited", and I've always
> deemed OCaml the language where type subsumption (of classes) is
> _implicit_, based on the subsumption relation of their type
> signatures.

Ok, perhaps I should have used a different adjective instead of
"limited".

I insist on explicitness. The subtyping relation is implicit,
but subsumption (upcasting) is explicit.

> What OCaml lacks is RTTI. But as you admit, languages with RTTI are
> not really statically typed at all, so they hardly qualify as a
> category of statically typed languages.

That was my point: languages with RTTI merge statically typed and
dynamically typed paradigms, which is their advantage, while those
without are limited to pure static typing.

exn in OCaml indeed is extensible as far as adding new cases and
recognizing a given case is concerned. There are related operations
however that both dynamically typed languages and those statically
typed with RTTI can do, while OCaml can't for exn:

- Making an operation declared for an open family for cases, which
is then defined incrementally for each case, and dispatched when
applied.

Well, it could be done by making a list of functions which check
for the exn case they handle and pass others further. It's quite
inefficient with this linear search but would work.

- Extracting the type to use it as a dictionary key (this could be
used to implement dispatch).
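The linear-search workaround looks roughly like this (an illustrative sketch with invented names):

```ocaml
(* An open-ended "operation over exn", built by linear dispatch. *)
let describers : (exn -> string option) list ref = ref []

let register_describer f = describers := f :: !describers

let describe (e : exn) : string =
  let rec go = function
    | [] -> "unknown exception"
    | f :: fs -> (match f e with Some s -> s | None -> go fs)
  in
  go !describers

(* A new case, added from "outside", hooks itself in: *)
exception Parse_error of int
let () =
  register_describer (function
    | Parse_error line -> Some ("parse error at line " ^ string_of_int line)
    | _ -> None)
```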

Things are not that black & white.

Marcin 'Qrczak' Kowalczyk
Sep 30, 2005, 7:34:29 AM
Joachim Durchholz <j...@durchholz.org> writes:

>> Most languages support distinguishing the kind of failures, which is
>> used for other things besides printing it to the user and exiting,
>> and I think it's useful enough to be worth supporting.
>
> I disagree about that.

In my compiler (13k lines) there are three places where specific
exceptions are caught:

- On the toplevel of the compiler, to report all compile errors found
(sorted by source location) even if one of the errors was fatal and
caused the compiler to abort by throwing an exception.

- When checking whether the interface file would change, an absent or
malformed interface file is treated as one to be changed (catching
IO_ERROR and CODING_ERROR).

- For working around a bug when it's compiled using an old version of
itself which did not define hashing of rational numbers, the method
redefinition exception is caught there and ignored.

In my Scheme interpreter (2800 lines) specific exceptions are caught
on the toplevel. The toplevel distinguishes an exception translated
from the SIGINT handler (to return to the toplevel) and PROGRAM_EXIT
(if a Scheme function which exits is called, the interpreter exits).
Other exceptions are printed for the user and the REPL continues.

In my Tetris game (520 lines) there are some specific exceptions being
caught:

- Attempting to write a message about the terminal being too small
may fail with CURSES_ERROR if it's too small even for that message.
This error is ignored instead of causing the program to abort,
the user only sees the beginning of the message.

- When an attempt to hide the cursor fails, the error is ignored.

- An exception is used internally by the construct of providing a
non-local exit. The construct is used for exiting the main loop
when the user presses "q".

I claim that they are legitimate uses of exceptions. It would be
inconvenient and inefficient to make error propagation explicit
(or even impossible in the case of the asynchronous SIGINT), and it
would be fragile if reasons for exceptions were encoded as strings.

> Isn't FPL programming about making dependencies explicit?

I don't religiously adhere to general principles without looking at
consequences at a specific case. Yes, usually it's better to make
dependencies more explicit, up to a point. Too much and it will do
more harm than good.

In particular many people claim that Java checked exceptions were
a mistake. The essence of exceptions is that they transparently
propagate through code which is executing between the point of
throwing and catching. Making the propagation explicit in types makes
it hard to write reusable code which propagates different kinds of
exceptions outward depending on what is executed inside. It could be
improved by making it possible to parametrize methods and classes by
sets of possible exceptions, e.g. expressing that this method throws the
union of exception types thrown by parameters of these two generic
classes, but then the language would get more complex. I'm happy with
leaving this implicit, as in almost all languages besides Java.

Panu Kalliokoski
Sep 30, 2005, 7:48:02 AM
In <87oe6bq...@qrnik.zagroda> Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> writes:

>Panu Kalliokoski <ate...@sange.fi> writes:

>> deemed OCaml the language where type subsumption (of classes) is
>> _implicit_, based on the subsumption relation of their type
>> signatures.

[...]


>I insist on explicitness. The subtyping relation is implicit,
>but subsumption (upcasting) is explicit.

Okay, now I see what you're referring to. I took subsumption to be the
mathematical relation that says which type can be coerced to which, not
the actual action of coercion. The former is implicit in OCaml, whereas
the latter is explicit, as you say.

>> What OCaml lacks is RTTI. But as you admit, languages with RTTI are
>> not really statically typed at all, so they hardly qualify as a
>> category of statically typed languages.
>That was my point: languages with RTTI merge statically typed and
>dynamically typed paradigms, which is their advantage, while those
>without are limited to pure static typing.

Okay. But they are then IMO semantically equivalent to
Lisp-with-a-sufficiently-smart-compiler. That is, dynamically typed,
period.

>- Making an operation declared for an open family for cases, which
> is then defined incrementally for each case, and dispatched when
> applied.

Maybe I'm dizzy today but I'm not sure what you're talking about. Maybe
a concrete example would help? (Or pointing to a former example, if one
was made already.)

>Things are not that black & white.

Well, I haven't yet understood how "static typing + RTTI" is not really
dynamic typing, totally.

Marcin 'Qrczak' Kowalczyk
Sep 30, 2005, 8:39:34 AM
Panu Kalliokoski <ate...@sange.fi> writes:

> But they are then IMO semantically equivalent to
> Lisp-with-a-sufficiently-smart-compiler. That is, dynamically typed,
> period.

I won't fight over the terminology. Definitely it's closer to dynamic
typing than the functional type system is; in particular it can easily
emulate dynamic typing (modulo performance problems caused by the
inability of having unboxed integers and by lots of downcasts).

At the same time it's more static than typical dynamically typed
languages: the compiler will catch lots of type errors at compile
time in areas where the programming style used doesn't rely on that
"dynamic" typing, while in a true dynamically typed language a wrong
type is very rarely caught at compile time.

(I never liked these languages for other reasons, and jumped straight
to dynamic typing.)

>>- Making an operation declared for an open family for cases, which
>> is then defined incrementally for each case, and dispatched when
>> applied.
>
> Maybe I'm dizzy today but I'm not sure what you're talking about.
> Maybe a concrete example would help?

I was implementing an interpreter of a dynamically typed language
in OCaml, and the problem was in representing runtime values of the
embedded language in the host language. Requirements:

- Additional modules can wrap new types from the host language without
having to update a central type definition.

- When wrapping a type from the host language, it must be possible to
dynamically check whether the given wrapped value was constructed
from the given type, and if so, unwrap to give access to the
underlying value of the host language.

Rationale. For some libraries it's possible to live without this,
by wrapping the object in a record of functions which refer to the
value of the new type from their free variables, and exposing only
the external interface which uses only values of simple types. But
it's not sufficient for binary methods, and it would also force one
to fix the set of exposed functions working on a given type when
exporting that type.

- In the embedded language you can attempt to apply a value to
arguments. This may succeed not only for "functions", but also for
various other wrapped types. When wrapping a type, it is decided how
objects of this type implement application to arguments.

Getting a record field is unified with application to a symbol,
so it's used for all types which want to behave like records.

In one of versions of the interpreter I solved it this way:

type ktype = int  (* Unique tag *)

type native

type obj =
  | KFunction of (trace -> obj list -> obj list)
  | KVariable of obj ref
  | KInt of int
  | KBigInt of Big_int.big_int
  ... etc. for various primitive types ...
  | KNative of ktype * native * (trace -> native -> obj list -> obj list)

The KNative constructor is open for extensions. When a new kind of
extension is registered, it gets its unique type id and uses Obj.magic
to cast something to native. And then back, after checking the type id.

Application of an object to arguments dispatches on the primitive
types, and in the case of KNative it uses the stored function (which
is not a closure over the native value but gets it as an argument,
so it can be physically shared among all values of the given type).
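To make the idea concrete, here is a minimal self-contained sketch of such an open "native" case. It is not the interpreter's actual code: `trace` is omitted, the `obj` type is stripped down, and names like `register_type`, `wrap_pair`, and `apply_obj` are illustrative.

```ocaml
(* Sketch: an open KNative-style constructor with unique type ids,
   using Obj.magic to wrap and unwrap host-language values.
   All names here are illustrative, not from the actual interpreter. *)

type native  (* abstract placeholder for any wrapped host value *)

type obj =
  | KInt of int
  | KNative of int * native * (native -> obj list -> obj)
      (* type id, opaque payload, application function *)

let next_id = ref 0
let fresh_id () = let id = !next_id in incr next_id; id

(* Register a wrapper for a host type 'a; returns wrap/unwrap functions.
   The application function is shared by all values of the type. *)
let register_type (apply : 'a -> obj list -> obj) =
  let id = fresh_id () in
  let wrap (x : 'a) =
    KNative (id, Obj.magic x, (fun n args -> apply (Obj.magic n) args))
  in
  let unwrap = function
    | KNative (id', n, _) when id' = id -> Some (Obj.magic n : 'a)
    | _ -> None
  in
  (wrap, unwrap)

(* Example: wrap host pairs of ints; application returns the first field. *)
let (wrap_pair, unwrap_pair) =
  register_type (fun (a, _b) _args -> KInt a)

(* Application dispatches on the primitive cases; KNative uses the
   stored function, passing the payload as an argument. *)
let apply_obj o args =
  match o with
  | KNative (_, n, f) -> f n args
  | KInt _ -> failwith "not applicable"
```

As in the interpreter described above, unwrapping is only safe because the unique id is checked before the `Obj.magic` cast back.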

In a newer version of the interpreter I chose a more uniform
representation of values:

type kobj = {desc : kobj desc}

and 'a desc = {
  typeObj : ktype;
  idOfType : int;
  mutable apply : trace -> 'a -> kobj list -> kobj;
  mutable apply0 : trace -> 'a -> kobj;
  mutable apply1 : trace -> 'a -> kobj -> kobj;
  mutable apply2 : trace -> 'a -> kobj -> kobj -> kobj;
  mutable apply3 : trace -> 'a -> kobj -> kobj -> kobj -> kobj;
}

and ktype = {
  typeDesc : ktype desc;
  typeId : int;
  mutable typeName : kobj;
  mutable directSupertypes : ktype list
}

(Many "mutable" adjectives are needed only to tie recursive knots in
core parts of the library.)

Then concrete types are represented by records with the first field of
type 'a desc for some 'a, using Obj.magic to cast them to kobj and back,
after checking for the expected typeObj (or redundant idOfType). This is
again unsafe. Application of an object to arguments jumps straight to
the application code in the descriptor, without pattern matching.
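The following is a minimal self-contained sketch of how a concrete type might plug into such a descriptor scheme. It is heavily simplified relative to the description above: `trace`, `ktype`, and the mutable fields are dropped, and names like `kint` and `make_int` are illustrative.

```ocaml
(* Sketch: every value starts with a descriptor field; concrete types
   are cast to kobj with Obj.magic after checking the descriptor's id.
   Simplified and illustrative, not the interpreter's actual code. *)

type kobj = { desc : kobj desc }

and 'a desc = {
  type_id : int;
  apply : 'a -> kobj list -> kobj;  (* application jumps straight here *)
}

(* A concrete type: its first field must be its own descriptor,
   so the layout is compatible with kobj. *)
type kint = { int_desc : kint desc; value : int }

let kint_desc : kint desc = {
  type_id = 1;
  apply = (fun self _args -> Obj.magic self);  (* ints ignore application *)
}

let make_int n : kobj = Obj.magic { int_desc = kint_desc; value = n }

(* Unwrapping checks the id before the unsafe cast back. *)
let as_int (o : kobj) : int option =
  if o.desc.type_id = 1 then Some (Obj.magic o : kint).value else None
```

The key property is the one described above: applying an object goes directly through `o.desc.apply`, with no pattern matching over a closed sum.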

If this was implemented in .NET, I would have two choices:

- Have my interface or superclass which declares pure virtual
application methods. Let each concrete type implement this
interface. Declare all parameters using this interface.

- Since many primitive types don't have any interesting reaction to
application to arguments, have that interface only for objects which
need it, and otherwise pass native .NET objects in the Object type
without additional wrapping. Downcast for performing an application.

They would be both safe, i.e. they wouldn't rely on constructs which
have undefined behavior when misused like Obj.magic.

Joachim Durchholz

unread,
Sep 30, 2005, 8:44:06 AM9/30/05
to
Panu Kalliokoski schrieb:

> Well, I haven't yet understood how "static typing + RTTI" is not
> really dynamic typing, totally.

In my book, "static typing + RTTI" means: I can rely on the guarantees
of static typing in those parts of the program that don't use RTTI.

I'm not sure that I'm answering your question with that one though...

Regards,
Jo

Joachim Durchholz

unread,
Sep 30, 2005, 2:26:26 PM9/30/05
to
Marcin 'Qrczak' Kowalczyk schrieb:

> Joachim Durchholz <j...@durchholz.org> writes:
>
>>>Most languages support distinguishing the kind of failures, which is
>>>used for other things besides printing it to the user and exiting,
>>>and I think it's useful enough to be worth supporting.
>>
>>I disagree about that.
>
> In my compiler (13k lines) there are three places of catching specific
> exceptions:

Let me classify the examples.

> - On the toplevel of the compiler, to report all compile errors found
> (sorted by source location)

Catching and reporting all exceptions in the toplevel doesn't require an
extendible type hierarchy.

> even if one of errors was fatal and caused the compiler to abort by
> throwing an exception.

Again, to log an exception and ignore it otherwise, no extendible type
hierarchy is needed.

> - When checking whether the interface file would change, an absent or
> malformed interface file is treated as one to be changed (catching
> IO_ERROR and CODING_ERROR).

These exceptions need to be caught because the underlying API throws
them. These things could equally well be handled using discriminated
unions (sum types).

> - For working around a bug when it's compiled using an old version of
> itself which did not define hashing of rational numbers, the method
> redefinition exception is caught there and ignored.

Can't comment on that - working around a bug often requires all sorts of
hacks. Catching and ignoring the exception might be the most elegant way
out.
OTOH this means you may not be informed about legitimate method
redefinition problems in the entire call graph below that exception
handler. In other words, there's a potentially nonlocal property that
might have to be verified a year later - marginally problematic in a
13kloc project, but a /huge/ problem in a 100kloc project. (The problem
is that 13kloc projects tend to grow to 100kloc if they are successful.
I hope you could remove that exception-eating code after the successful
bootstrap, and am sympathetic if it's been impossible.)

> In my Scheme interpreter (2800 lines) specific exceptions are caught
> on the toplevel. The toplevel distinguishes an exception translated
> from the SIGINT handler (to return to the toplevel) and PROGRAM_EXIT
> (if a Scheme function which exits is called, the interpreter exits).
> Other exceptions are printed for the user and the REPL continues.

Hmm... can't really comment on that. I've been using the Windows model
where all event handling is synchronous by default.
I wouldn't want to do it that way, but that may be too biased by
personal experience to be interesting to anybody.

> In my Tetris game (520 lines) there are some specific exceptions being
> caught:
>
> - Attempting to write a message about the terminal being too small
> may fail with CURSES_ERROR if it's too small even for that message.
> This error is ignored instead of causing the program to abort,
> the user only sees the beginning of the message.
>
> - When an attempt to hide the cursor fails, the error is ignored.

These are classical cases of managing software failure. In both cases, a
low-level routine fails to render the promised service, so the
higher-level code chooses an alternate strategy (in these cases, simply
by doing nothing).

> - An exception is used internally by the construct of providing a
> non-local exit. The construct is used for exiting the main loop
> when the user presses "q".

Now that's the one case where I think the exception should be avoided.

Sure, the code is a bit more convoluted. You have to carry the knowledge
about the exit through several call levels.
On the other hand, the code at intermediate call levels must be aware
that there's a shortcut running straight through it, that things that
are begun may not be finished. In other words, the additional hoops that
intermediate-level code has to jump through are justified, since they
serve as a reminder to any future maintainer that things are indeed that
complicated.
(Not a real issue in a 520loc program, of course, I'm extrapolating to
larger software here. For 520loc, even gotos are acceptable *g*.)

> I claim that they are legitimate uses of exceptions.

I hope I have made clear where and why I disagree.

> It would be
> inconvenient and inefficient to make error propagation explicit

Inconvenient - yes. Though, as said above, that's in places where things
*should* be inconvenient, IMO.

Inefficient? I don't think so. Discriminated unions of that kind can be
returned in processor register pairs, so if a language uses that as a
standard pattern, efficiency will almost automatically follow.
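The kind of explicit propagation at issue here can be sketched with an ordinary sum type, OCaml's built-in `result` (the function names below are illustrative):

```ocaml
(* Sketch: failure as an ordinary sum type instead of an exception.
   Nothing propagates implicitly; each call site must inspect the
   result and forward the error. *)

let divide a b : (int, string) result =
  if b = 0 then Error "division by zero" else Ok (a / b)

(* An intermediate caller forwards the error explicitly. *)
let halve_quotient a b =
  match divide a b with
  | Error _ as e -> e
  | Ok q -> Ok (q / 2)
```

This is the trade-off under discussion: every intermediate level sees (and must thread through) the possibility of failure, which is either valuable documentation or tedious noise depending on which side of the debate you are on.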

> (or even impossible in the case of the asynchronous SIGINT),

As I said, asynchronous signals aren't my area of expertise.

In the code that I've been maintaining, such signals were queued up and
processed in order, so the signal was processed just as any keyboard
input would have been. That has worked well enough for me. (It also
seems to work well enough for the Erlang guys, or at least so I infer
from the architectural choices they are presenting to the world.)

> and it
> would be fragile if reasons for exceptions were encoded as strings.

Not too much. It's really a question of what you use exceptions for.

If you use them as a replacement for a non-local goto, then yes you need
all the information you can get, and string information isn't good
enough for that.
If you use them as a way to manage failure handling, you don't even
really need a failure code. Either the called code succeeds, in which
case nothing special need be done, or it fails, in which case the
calling code should retry, change strategy, or fail - and it doesn't
need specifics of the point of failure for either choice, and in fact it
*shouldn't* (have to) know specifics to keep the software decoupled.

>>Isn't FPL programming about making dependencies explicit?
>
> I don't religiously adhere to general principles without looking at
> consequences at a specific case. Yes, usually it's better to make
> dependencies more explicit, up to a point. Too much and it will do
> more harm than good.

Sure.

It's just that I found that the downsides of *not* using exceptions are
often overestimated.
I have programmed under the an-exception-is-failure paradigm for
several years, and I *never* felt the need to extend the error type
hierarchy. In fact there was only a single occasion where I had to write
code that differentiated between error classes, and that was fairly
low-level code, very near to the execution engine (when transferring
exception information from the host language to the 4GL interpreter on
top of it, IIRC).

> In particular many people claim that Java checked exceptions were
> a mistake. The essence of exceptions is that they transparently
> propagate through code which is executing between the point of
> throwing and catching. Making the propagation explicit in types makes
> hard to write reusable code which propagates outside different kinds
> of exceptions depending on what is executed inside.

That's a problem indeed.

My knee-jerk reaction was to say that I don't need to distinguish
exception classes anyway, so it's OK to say "throws Exception"
everywhere :-)

> It could be
> improved by making it possible to parametrize methods and classes by sets
> of possible exceptions, e.g. expressing that this method throws the
> union of exception types thrown by parameters of these two generic
> classes, but then the language would get more complex. I'm happy with
> leaving this implicit, as in almost all languages besides Java.

Indeed :-)

If I were to design a language, my stance would be more simplistic: no
exception classes, exception information tailored towards debugging and
post-mortem analysis, and an exception is a contract failure. And be
done with it - everything else can be fully handled using the normal
function call interfaces.

I know your mileage does vary ;-)

Regards,
Jo

Dirk Thierbach

unread,
Sep 30, 2005, 2:56:19 PM9/30/05
to
Marcin 'Qrczak' Kowalczyk <qrc...@knm.org.pl> wrote:
> I was implementing an interpreter of a dynamically typed language
> in OCaml, and the problem was in representing runtime values of the
> embedded language in the host language. Requirements:

> - Additional modules can wrap new types from the host language without
> having to update a central type definition.

Why do you want to wrap a type of the host language into the
interpreted language in the first place? The straightforward approach
is to implement dynamic types as they should be implemented, i.e. base
types and then generic type constructors like Lisp lists, with the
option to attach additional type identifiers. Maybe add some extra
construct to keep a wrapped dynamic type opaque in the interpreted
language, if it is important.

Any additional modules have to dispatch on the type anyway, so
you just add one pattern matching, and the conversion becomes
trivial. Static typing points out when you have to check, and
the compiler will tell you when you forgot a catch-all to throw
the dynamic error.
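This approach can be sketched as follows, assuming a hypothetical `dyn` type with a tag constructor for user-defined types (all names are illustrative):

```ocaml
(* Sketch: a closed set of base types and generic constructors, plus
   a tagged case for user-defined types, as suggested above. *)

type dyn =
  | DInt of int
  | DString of string
  | DList of dyn list
  | DTagged of string * dyn   (* user type identifier + payload *)

exception Type_error of string

(* Modules dispatch by pattern matching; the compiler's exhaustiveness
   check reminds you to add the catch-all that raises the dynamic error. *)
let as_int = function
  | DInt n -> n
  | _ -> raise (Type_error "expected an integer")

let sum = function
  | DList xs -> List.fold_left (fun acc x -> acc + as_int x) 0 xs
  | _ -> raise (Type_error "expected a list")
```

The conversion in and out of `dyn` is trivial, at the cost of a closed central type definition, which is exactly what Marcin's requirements ruled out.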

Maybe wrapping types directly gives a tiny bit of performance,
but that shouldn't be enough to justify a somewhat contorted
implementation.

- Dirk

George Neuner

unread,
Sep 30, 2005, 3:13:25 PM9/30/05
to
On Thu, 29 Sep 2005 15:24:06 GMT, David Hopwood
<david.nosp...@blueyonder.co.uk> wrote:

>For true believers of this religion, the point that there are ways
>to do metaprogramming just as easily without having the source code
>correspond as literally to an AST as s-expressions do, seems to fall
>on deaf ears.

Certainly it is possible, however I think the "true believers" would
question the claim that it can be done "just as easily".

I am not a "true believer" btw - I see utility in Lisp's syntax but
not necessarily beauty. And I don't care how metaprogramming
features are implemented as long as they can be used without
unnecessary gyration ... spinning makes me dizzy.

GW

Marcin 'Qrczak' Kowalczyk

unread,
Sep 30, 2005, 5:31:48 PM9/30/05
to
Joachim Durchholz <j...@durchholz.org> writes:

>> - On the toplevel of the compiler, to report all compile errors found
>> (sorted by source location)
>
> Catching and reporting all exceptions in the toplevel doesn't require
> an extendible type hierarchy.

I needed one new kind of exception here, specific to this program,
which means "the compiler found a fatal error and it cannot continue
compiling this module, so we return to compiler's toplevel code and
print errors found so far (only one of them was fatal) before exiting".

Of course the similar behavior could be obtained differently:

- Print all errors when the fatal error is found and exit the process
in a hard way.

(Exiting the process is itself performed by throwing an exception,
so cleanup code of active "brackets" of explicit finalization is run,
and thus this wouldn't really change anything except demodularizing
the error dumping code and perhaps failing to free some resources.)

- Design the compiler such that all errors are non-fatal, attempting
to continue in any case.

(This is indeed what happens most of the time. I still use fatal
errors for improbable cases where I didn't want to bother with
inventing an error result: when the compiler has detected violation
of certain invariants of the data it is manipulating, i.e. found a
bug in itself, or when the interface file - normally written by a
previous compilation, not by a human - could not be parsed.)

I claim that a distinguished exception used for jumping from the point
of detection of an unusual error to the point where compilation errors
are reported is cleaner.

Note that I don't use exceptions much. An average Java program of the
same size probably has more 'try' statements. But I definitely consider
exceptions important enough to have a dedicated language mechanism.

>> - When checking whether the interface file would change, an absent or
>> malformed interface file is treated as one to be changed (catching
>> IO_ERROR and CODING_ERROR).
>
> These exceptions need to be caught because the underlying API throw
> them. These things could equally well be handled using discriminated
> unions (sum types).

And let each I/O operation return a discriminated union of the
expected result and an error? No way, very inconvenient. Even Haskell
uses exceptions for that, and even they are hardwired in the IO monad
instead of being syntactic sugar over explicit status checking.

My language Kogut does use a distinguished result instead of an
exception for reporting an end of file though.

>> - For working around a bug when it's compiled using an old version of
>> itself which did not define hashing of rational numbers, the method
>> redefinition exception is caught there and ignored.
>
> Can't comment on that - working around a bug often requires all sorts
> of hacks. Catching and ignoring the exception might be the most
> elegant way out.
> OTOH this means you may not be informed about legitimate method
> redefinition problems in the entire call graph below that exception
> handler.

It includes only that single method definition, no problem here.

> Hmm... can't really comment on that. I've been using the Windows
> model where all event handling is synchronous by default.

No matter what the OS is, when the program performs a mathematical
computation programmed by the user or on data provided by the user,
the user might want to abort it when the computation seems to take
too much time, perhaps by pressing some "Cancel" button. This includes
e.g. image processing.

> Inefficient? I don't think so. Discriminated unions of that kind can
> be returned in processor register pairs, so if a language uses that as
> a standard pattern, efficiency will almost automatically follow.

Inefficient because the result would have to be checked after each
function call. Most implementations of exceptions have zero overhead
for normal expression evaluation when exceptions don't occur.

Matthew D Swank

unread,
Sep 30, 2005, 6:18:19 PM9/30/05
to
On Fri, 30 Sep 2005 11:48:02 +0000, Panu Kalliokoski wrote:


> Well, I haven't yet understood how "static typing + RTTI" is not really
> dynamic typing, totally.
>
> Panu

Because you _choose_ syntactically to subvert the type system. Static
type checking is based on reasoning about syntax. If you don't downcast
or dynamically dispatch on types, etc., it's statically typed. You can act
as if the RTTI isn't there.

In Lisp* all the type information about values is resolved at runtime
by default. You _have_ to use the runtime to detect type errors.

Matt

*excluding Common Lisp compilers that treat declarations as compile-time
assertions

--
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.

Joachim Durchholz

unread,
Sep 30, 2005, 6:29:30 PM9/30/05