
Hot code replacement


HP

Mar 1, 2003, 5:58:47 PM
By reading Erlang's website,
I (being a C++ person) notice that
one feature in Erlang is 'hot code replacement' ----
i.e. change the code while the application is running.
This is one feature that I always wanted in some situations
at work.

Does any functional language support this feature ?
(e.g. Does Mozart/Oz or Lisp support it ?)

HP

Faust

Mar 1, 2003, 6:02:03 PM
h...@rentec.com (HP) writes:

> By reading Erlang's website,
> I (being a C++ person) notice that
> one feature in Erlang is 'hot code replacement' ----
> i.e. change the code while the application is running.
> This is one feature that I always wanted in some situations
> at work.
>
> Does any functional language support this feature ?

Erlang is a functional language.
Is there any reason to not use erlang ?

--

natsu-gusa ya / tsuwamono-domo-ga / yume no ato
summer grasses / strong ones / dreams site

Summer grasses,
All that remains
Of soldier's dreams
(Basho trans. Stryk)

Torbjorn Lager

Mar 2, 2003, 6:16:02 AM
h...@rentec.com (HP) wrote in message news:<10caf2e2.03030...@posting.google.com>...

Yes, Mozart/Oz supports it. Check out:

http://www.mozart-oz.org/documentation/dstutorial/node3.html#label199

Cheers,
Torbjörn

Joachim Durchholz

Mar 2, 2003, 9:33:44 AM
Faust wrote:
>
> Erlang is a functional language.
> Is there any reason to not use erlang ?

I have never programmed in Erlang, just skimmed a few chapters of
written material, so take this as an account of how good or bad the
Erlang PR is, not whether Erlang is good or bad.
So, here's what speaks against Erlang (there are also a lot of things
that speak *for* it, I'm omitting that because it would be off-topic):

1. No static typing.

2. Bytecode interpreted (i.e. not compiled to native code).

3. Possibly not "really functional", i.e. mutable stuff (message sends
and state-dependent responses) play an important part when it comes to
structuring and maintaining large programs. (Of course, you *can* do
large parts of purely functional programming in Erlang.) (This is just
an impression that I gained, something that somebody with personal
experience in Erlang should confirm or deny. Hence the "possibly" in the
first sentence of this paragraph.)

4. Some roots in Prolog, which I find... er... disturbing (my personal
encounters with Prolog were, say, unlucky... its strengths were less
useful than I had expected, and definitely not worth the weaknesses that
were the payment for its strengths - IMHO (I never became a Prolog guru)).


Note that all of the above points might be considered irrelevant, a
showstopper, or even a plus, depending on circumstances, available
expertise, and personal religion.

Regards,
Joachim
--
This is not an official statement from my employer.

Stefan Axelsson

Mar 2, 2003, 10:02:02 AM
In article <3E621648...@gmx.de>, Joachim Durchholz wrote:
> 2. Bytecode interpreted (i.e. not compiled to native code).

Well, that's really a property of the implementation, not the
language. There's a native code compiler (HiPE) available. But that
unfortunately clashes with moving code, which presupposes byte code
(if I'm not completely mistaken).

> 4. Some roots in Prolog, which I find... er... disturbing (my personal
> encounters with Prolog were, say, unlucky...

Well, it's not that bad. There's the syntax, which isn't worse than
O'Caml's IMHO, and the very limited "unification" in pattern matching
(you can match 'X' twice and the compiler enforces equality for the
match to succeed), which I think is kind of neat and fairly useful in
practice.

I'm sure the "real" Erlang guys will chime in. ;-)

Stefan,
--
Stefan Axelsson (email at http://www.ce.chalmers.se/staff/sax)

HP

Mar 2, 2003, 10:08:42 AM
Faust <urf...@optushome.com.au> wrote in message news:<n0ker8...@optushome.com.au>...

> h...@rentec.com (HP) writes:
>
> > By reading Erlang's website,
> > I (being a C++ person) notice that
> > one feature in Erlang is 'hot code replacement' ----
> > i.e. change the code while the application is running.
> > This is one feature that I always wanted in some situations
> > at work.
> >
> > Does any functional language support this feature ?
>
> Erlang is a functional language.
> Is there any reason to not use erlang ?
>
I was trying to understand whether the hot code replacement feature
is a natural product of a 'functional language', or whether it is
purely dependent upon the 'implementation of a language
system'. [e.g. is it feasible to implement such a feature
in a C++ interpreter? Or is it out of the question?]

HP

Christian Szegedy

Mar 2, 2003, 12:48:41 PM
HP wrote:

If you use DLLs extensively, you can achieve "hot code replacement"
in most (even in compiled C and C++) programming environments.
Of course, in most cases, you can't change single functions,
but you will be able to exchange modules which is sufficient
for most applications. Of course it is highly language and
implementation dependent, how comfortable it is.

Best Regards, Christian
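[Editor's note: the module-swap approach described above can be sketched in
any language with a dynamic module loader. Here is a minimal Python
illustration (the module name `hotmod` and its contents are invented for the
example), using importlib.reload where a C program would use DLLs:]

```python
import importlib
import os
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True     # always recompile from source on reload

# Create a throwaway "module" on disk, standing in for a DLL.
tmp = tempfile.mkdtemp()
path = pathlib.Path(tmp) / "hotmod.py"
path.write_text("def greet():\n    return 'v1'\n")
sys.path.insert(0, tmp)

import hotmod
before = hotmod.greet()            # the old behaviour

# "Hot replace" the module: rewrite the source, then reload in place.
path.write_text("def greet():\n    return 'v2'\n")
os.utime(path)                     # bump mtime so the change is noticed
importlib.reload(hotmod)
after = hotmod.greet()             # the new behaviour
```

As in the DLL case, only whole-module granularity is available, and any
caller that grabbed a direct reference to an old function keeps pointing at
the old code.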


Eli Barzilay

Mar 3, 2003, 2:46:19 AM
Christian Szegedy <sze...@t-online.de> writes:

> If you use DLLs extensively, you can achieve "hot code replacement"
> in most (even in compiled C and C++) programming environments. Of
> course, in most cases, you can't change single functions, but you
> will be able to exchange modules which is sufficient for most
> applications. Of course it is highly language and implementation
> dependent, how comfortable it is.

Lisp has had this for decades, and without buzzwording it... The
experience of debugging a "real" application (usually some GUI or
server stuff) while it is running is really wonderful.

Going back from that to some DLL based thing makes it very clear why
it's so different.

Joachim Durchholz

Mar 3, 2003, 3:20:14 AM
Stefan Axelsson wrote:
> In article <3E621648...@gmx.de>, Joachim Durchholz wrote:
>
>>2. Bytecode interpreted (i.e. not compiled to native code).
>
> Well, that's really a property of the implementation, not the
> language. There's a native code compiler (HiPE) available. BUT that
> clashes with moving code unfortunately, that presuposes byte code (if
> I'm not completely mistaken).

Sorry, I forgot about Hipe.

Regards,
Joachim
[readjusting his reality sensors right now *g*]

Joachim Durchholz

Mar 3, 2003, 3:33:10 AM
HP wrote:
>
> I was trying to understand if hot code replacement feature
> is a natural product of 'functional language' or
> it is purly dependent upon the 'implementation of a language
> system'. [e.g. is it feasible to implement such a feature
> in a C++ interpreter ? Or is it out of the question ? ]

It's orthogonal to functionalness.

If I understood various postings right, the actual code replacement
process is based on an imperative feature of the language: you take down
a process and fire up a new process with new code to take its place.

(As far as I can tell from a distance, Erlang is a very pragmatic mix of
functional and imperative features, probably with some logic programming
thrown in.)

Regards,
Joachim

Louis Frayser

Mar 3, 2003, 3:37:47 AM
HP wrote:

> I was trying to understand if hot code replacement feature
> is a natural product of 'functional language' or
> it is purly dependent upon the 'implementation of a language
> system'. [e.g. is it feasible to implement such a feature
> in a C++ interpreter ? Or is it out of the question ? ]
>
> HP

I think this feature is mostly the result of implementation.
Interpreters and incremental compilers are two very common ways of
providing this feature.

See http://root.cern.ch for such a C++ system. The ROOT System, based
on the CINT C++ interpreter, allows redefinition of functions at
runtime. If a function's source is changed and its source file
reloaded, the function is redefined in ROOT.

For C++ applications this feature is unusual - C++ interpreters are
unusual (discounting Java ;-). But for LISP, Prolog, Forth, Smalltalk,
and even SQL, on-the-fly redefinition of functions (predicates, or
procedures) is commonplace. For newer language implementations this
'hot-code-replacement' feature, or the ability to implement dynamic
programs, is pretty much the norm.

-Louis

Joe Armstrong

Mar 3, 2003, 4:31:27 AM
Joachim Durchholz <joac...@gmx.de> writes:

> Faust wrote:
> > Erlang is a functional language.

Actually it's a concurrent language with a functional core.

That's why the Erlang book was entitled "Concurrent Programming in
Erlang."

Erlang's strong point is that it handles large numbers of parallel
processes very efficiently (i.e. *much* more efficiently than OS
processes or threads). The "functional core" is the bit where you
perform sequential computations.

To me the interesting thing is how to structure applications in
terms of concurrent processes, not the paradigm used for sequential
computations *within* a process. As long as things are
"observationally equivalent" I'm not interested in how they were
computed.

Whether you perform a sequential computation in a functional,
logical, OO, or imperative style seems to be a matter of pragmatics;
certain problems favor an imperative style, others a functional style,
etc.

Erlang was *designed* for building large scale, distributed
fault-tolerant systems so it has a lot of goodies for doing just this.

Erlang was never designed to be a functional language, indeed it
started off as a "concurrent language with a logical core" and as time
passed the logical core became more and more functional.

> > Is there any reason to not use Erlang ?

I guess I should ask "for what" - the bare question "should I use
Erlang" doesn't seem to make sense.

A question like "Should I use Erlang for building an XYZ" does make
sense. Technologies have to be related to the problem they are
intended to solve, they are not generally applicable.

If XYZ is what Erlang was designed for making (i.e. a large-scale
fault-tolerant distributed thingy), then the answer may well be yes,
but it would, of course, depend upon the problem.

>
> I have never programmed in Erlang, just skimmed a few chapters of
> written material, so take this as an account of how good or bad the
> Erlang PR is, not whether Erlang is good or bad.
> So, here's what speaks against Erlang (there are also a lot of things
> that speak *for* it, I'm omitting that because it would be off-topic):
>
> 1. No static typing.

Why is this bad? - seems like the jury is out here:

On the (-) side the lack of a static type system means that certain
errors that could have been detected at compile time will occur at run
time.

On the (+) side with dynamic typing code is shorter and you can
write powerful generic routines for common tasks.

In any case, any realistic code will still have to handle things
like "divide-by-zero" exceptions, irrespective of the type system.


>
> 2. Bytecode interpreted (i.e. not compiled to native code).

So why does being bytecode interpreted speak against Erlang?

How a language is implemented is surely irrelevant - It's either
"fast enough" or it's not - Erlang is fast enough for a large number
of heavy grade industrial applications.

<< just to spoil your fun, the current system ships with two
different run-time systems; the default is a "word threaded"
interpreter - and there is also a native code compiler, which you can
enable if you want more efficiency >>

> 3. Possibly not "really functional", i.e. mutable stuff (message sends
> and state-dependent responses) play an important part when it comes to
> structuring and maintaining large programs. (Of course, you *can* do
> large parts of purely functional programming in Erlang.) (This is just
> an impression that I gained, something that somebody with personal
> experience in Erlang should confirm or deny. Hence the "possibly" in
> the first sentence of this paragraph.)

How "functional or not" a program is is entirely up to the
programmer - a programmer can write pure functional code or code with
side effects - Both styles of programming are beneficial, depending
upon the context.


>
> 4. Some roots in Prolog, which I find... er... disturbing (my personal
> encounters with Prolog were, say, unlucky... its strengths were less
> useful than I had expected, and definitely not worth the weaknesses
> that were the payment for its strengths - IMHO (I never became a
> Prolog guru)).
>

Erlang started as a meta-interpreter which added processes to Prolog
- as time passed it lost all resemblance to Prolog - virtually all
that remains from the Prolog days is the syntax of variables (they
start with capital letters) and the unification of variables in
function clause heads. I can't imagine why this is "disturbing".



>
> Note that all of the above points might be considered irrelevant, a
> showstopper, or even a plus, depending on circumstances, available
> expertise, and personal religion.

None of them are really showstoppers.

The biggest single plus point is Erlang has been successfully used
in extremely large industrial-scale projects.

The Ericsson AXD301 has over 1.7 million lines of Erlang - it
controls a large proportion of British Telecom's ATM network.

This (I think) makes it the largest functional program ever
written. It was also not written in an academic environment, but by
regular Ericsson programmers, most of whom had no previous experience
with functional languages.

/Joe

Joachim Durchholz

Mar 3, 2003, 4:47:31 AM
Joe Armstrong wrote:

> Joachim Durchholz <joac...@gmx.de> writes:
>>Note that all of the above points might be considered irrelevant, a
>>showstopper, or even a plus, depending on circumstances, available
>>expertise, and personal religion.
>
> None of them are really showstoppers.

I'm pretty sure that lack of static typing is in the "showstopper" class
for some people. I used to be one of them, I know this mindset from the
inside :-)

Otherwise, agreeing to all you said. It's just that the OP asked what
reasons might speak against Erlang, it's difficult to answer such a
question without sounding negative...

gr...@cs.uwa.edu.au

Mar 3, 2003, 5:06:05 AM
Joe Armstrong <j...@enfield.sics.se> wrote:

: Joachim Durchholz <joac...@gmx.de> writes:
:> Faust wrote:
:> > Erlang is a functional language.
:> > Is there any reason to not use Erlang ?
:> here's what speaks against Erlang

:> 1. No static typing.


: Why is this bad? - seems like the jury is out here:

But I think it would be fair to say that most of the jurors on the
functional side are in favour, so lack of static typing does seem
to be a negative for functional programming.

: On the (+) side with dynamic typing code is shorter

How so? Or perhaps you mean implicit typing?

: and you can write powerful generic routines for common tasks.

Which you can also do in a sufficiently powerful static type system.

: In any case, any realistic code will still have to handle things
: like "divide-by-zero" exceptions, irrespective of the type system.

I'm not sure how that is a benefit of dynamic typing over static?

:> 3. Possibly not "really functional"

: How "functional or not" a program is is entirely up to the
: programmer - a programmer can write pure functional code or code with
: side effects - Both styles of programming are beneficial, depending
: upon the context.

But in the context of Erlang-as-a-functional-language...

-Greg

William Lovas

Mar 3, 2003, 5:09:54 AM
In article <tyf8yvw...@enfield.sics.se>, Joe Armstrong wrote:
> Joachim Durchholz <joac...@gmx.de> writes:
>> 1. No static typing.
>
> Why is this bad? - seems like the jury is out here:

For a class i'm taking this semester, i've been writing a lot of Scheme
code that i'd much rather be writing in ML. There are various reasons,
like Scheme having ugly syntax (let's see if i can't provoke a flamewar
here *grin*), but by *far* the thing i miss the most is a nice, static
type system.

> On the (-) side the lack of a static type system means that certain
> errors that could have been detected at compile time will occur at run
> time.

You say this as if it's not a problem. Sometimes run-time is too late
to find out about such errors -- consider safety-critical applications,
like rocket-navigation software or nuclear power plant controllers. In
less life-threatening situations, a lack of static typing is `merely' an
annoyance, but such annoyances are the very things that make software a
pain in the neck to write.

> On the (+) side with dynamic typing code is shorter and you can
> write powerful generic routines for common tasks.

With type inference, statically typed code can be plenty short -- but
arguably, for large-scale software, type annotations are critical for
maintainability. And even then, single-line top-level function types
like those found in idiomatic Haskell are very unobtrusive, and don't
contribute a great deal to code length.

What do you mean by "write powerful generic routines for common tasks"?
Surely i'm reading it with some bias, but it sounds practically like an
advertisement for parametric polymorphism.

Are there really problems out there for which a lack of static typing
leads to a more elegant solution? I have yet to find one myself, and
it's not because i haven't looked.

William

Thomas Lindgren

Mar 3, 2003, 5:18:03 AM

Christian Szegedy <sze...@t-online.de> writes:

> If you use DLLs extensively, you can achieve "hot code replacement"
> in most (even in compiled C and C++) programming environments.
> Of course, in most cases, you can't change single functions,
> but you will be able to exchange modules which is sufficient
> for most applications. Of course it is highly language and
> implementation dependent, how comfortable it is.

Yes (if your target OS supports DLLs). There are a couple of further
requirements that commonly come up.

For concurrent programs, you also want to know that all processes
or threads have stopped using the code that will be removed. (Not
sure if DLLs handle that?)

For high-availability programs, you also might want to keep the
old code around while the new is installed. (Or keep multiple versions
around.) (I assume DLLs support this.)

Erlang provides support for the above. OTP (the surrounding software
platform) also provides support for changing code in a structured
fashion, e.g., if you want to upgrade many modules at the same time,
and/or do it in a distributed system.

Mozart/Oz seems to be using another mechanism, from what the tutorial
someone posted showed me: the code/module to use is found in a value
cell and changing code is an atomic exchange of value cell contents (a
programming technique, as far as I can see). GC reclaims unreachable
code. Any number of code versions can coexist, for good or bad.
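[Editor's note: the value-cell exchange described above can be sketched as
follows. Python is used purely as an illustration and the names are
invented; this is not Mozart/Oz's actual mechanism, just the programming
technique it resembles:]

```python
import threading

class CodeCell:
    """Holds 'the current code'; replacing it is a single atomic exchange."""

    def __init__(self, fn):
        self._lock = threading.Lock()
        self._fn = fn

    def get(self):
        with self._lock:
            return self._fn

    def exchange(self, new_fn):
        """Atomically install new code, returning the old version."""
        with self._lock:
            old, self._fn = self._fn, new_fn
            return old

cell = CodeCell(lambda x: x + 1)   # version 1 of "the code"
held = cell.get()                  # a caller grabs the current version
cell.exchange(lambda x: x * 2)     # hot-swap to version 2

old_result = held(10)              # the held old version still works
new_result = cell.get()(10)        # new callers see version 2
```

Both versions coexist until the last reference to the old one is dropped,
at which point garbage collection reclaims it, matching the behaviour
described above.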

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Ulf Wiger

Mar 3, 2003, 5:17:37 AM
Joachim Durchholz <joac...@gmx.de> writes:

> If I understood various postings right, the actual code replacement
> process is based on an imperative feature of the language: you take
> down a process and fire up a new process with new code to take its
> place.

It's a little bit more advanced than that.

When you load a new module into the Erlang VM, it keeps the
old version of the module in memory. Processes executing code
in the old version of the module will continue executing there
until the function returns. The next call will go to the new
module. Thus, Erlang processes are normally not restarted when
code is replaced.

If a module implements an infinite loop, e.g. an event loop,

event_loop(State) ->
    receive
        Event ->
            NewState = handle_event(Event, State),
            event_loop(NewState)
    end.

then the function will obviously never return. Such a process
will be killed when the old code is purged.

A simplified model of what is normally done then, is this:

event_loop(State) ->
    receive
        {From, change_code, FromVsn} ->
            NewState = ?MODULE:change_code(FromVsn, State),
            ?MODULE:event_loop(NewState);
        Event ->
            NewState = handle_event(Event, State),
            event_loop(NewState)
    end.

Fully qualified calls (i.e. when the module name is given) always
go to the newest available version of the module. This way, we
can jump into the new module and transform our state, and then
jump into the new event loop. After this, the old code can be
safely purged from memory.
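[Editor's note: the "fully qualified calls go to the newest version" rule
can be caricatured in a few lines of Python. This is a toy registry for
illustration only, with invented names, and nothing like the real Erlang VM
loader: a qualified call looks the module up on every invocation, so it
always reaches the most recently loaded version.]

```python
# Registry of loaded modules: name -> dict of functions (newest version).
registry = {}

def load(name, funcs):
    """Install a new version of module `name`, replacing the old one."""
    registry[name] = funcs

def qcall(name, func, *args):
    """A 'fully qualified call': dispatch through the registry every time."""
    return registry[name][func](*args)

# Version 1: the event handler sums event values into the state.
load("counter", {"handle": lambda state, ev: state + ev})
s1 = qcall("counter", "handle", 0, 5)

# Hot-load version 2: the handler now just counts events.
load("counter", {"handle": lambda state, ev: state + 1})
s2 = qcall("counter", "handle", s1, 5)
```

A plain (unqualified) call in real Erlang would instead keep running in the
version of the module it started in until it returns.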

The functional nature of Erlang and the absence of global variables
makes the above procedure fairly clean. Live upgrade of a huge
telecoms control system is still (and probably always will be)
a very difficult task.


> (As far as I can tell from a distance, Erlang is a very pragmatic mix
> of functional and imperative features, probably with some logic
> programming thrown in.)

Not really any logic programming. The heritage from Prolog lies
mainly in the pattern matching and some syntax choices.
There is, for example, no backtracking, which would make it
difficult to reason about real-time characteristics. I've also
heard the comment that it's difficult to backtrack over hardware,
and Erlang was primarily designed to control telephony hardware.

/Uffe
--
Ulf Wiger, Senior Specialist,
/ / / Architecture & Design of Carrier-Class Software
/ / / Strategic Product & System Management
/ / / Ericsson AB, Connectivity and Control Nodes

Jerzy Karczmarczuk

Mar 3, 2003, 5:38:33 AM
William Lovas wrote:


> Are there really problems out there for which a lack of static typing
> leads to a more elegant solution? I have yet to find one myself, and
> it's not because i haven't looked.


Elegance is a personal matter.
In absence of static typing you can define the Y combinator, which
may be considered quite elegant.
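[Editor's note: as an illustration of this claim, here is the call-by-value
Y combinator written directly in a dynamically typed language. Python is
used purely as an example:]

```python
# Call-by-value Y combinator: the eta-expansion (lambda v: ...) delays the
# self-application so it doesn't loop forever under eager evaluation.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion without any built-in recursion or self-reference:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
result = fact(5)   # 120
```

In a language with plain Hindley-Milner types this definition is rejected,
because `x x` forces `x` to have an infinite type; hence the later posts
about `-rectypes` and recursive data types.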


Jerzy Karczmarczuk

Joachim Durchholz

Mar 3, 2003, 5:44:45 AM
Jerzy Karczmarczuk wrote:
>
> Elegance is a personal matter.
> In absence of static typing you can define the Y combinator, which
> may be considered quite elegant.

I was under the impression that the need for the Y combinator is just an
artifact of the way that lambda calculus binds function names to
functions. IOW not elegant, just a hack to get around problems (an
elegant hack maybe, but still a hack).
Did I miss something?

Jerzy Karczmarczuk

Mar 3, 2003, 5:57:44 AM
Joachim Durchholz wrote:
> Jerzy Karczmarczuk wrote:
>
>>
>> Elegance is a personal matter.
>> In absence of static typing you can define the Y combinator, which
>> may be considered quite elegant.
>
>
> I was under the impression that the need for the Y combinator is just an
> artifact of the way that lambda calculus binds function names to
> functions. IOW not elegant, just a hack to get around problems (an
> elegant hack maybe, but still a hack).
> Did I miss something?

I don't know who misses what. But I think that Y is the inverse of
a hack. It is a formal, elegant way to define recursion. It is
as far from an artifact as possible; actually, from a hacker's point
of view it is practically useless...

Regards
Jerzy Karczmarczuk

Faust

Mar 3, 2003, 5:57:29 AM
Joe Armstrong <j...@enfield.sics.se> writes:


> To me the interesting thing is how to structure applications in
> terms of concurrent processes, not the paradigm used for sequential
> computations *within* a process. As long as things are
> "observationally equivalent" I'm not interested in how they were
> computed.

Aha !
Thanks for that insight into the philosophy of Erlang.

I am working my way through your book and some of your "spitting in
the sawdust" tutorials.

Faust

Mar 3, 2003, 6:00:10 AM
William Lovas <wlo...@force.stwing.upenn.edu> writes:

>> On the (-) side the lack of a static type system means that certain
>> errors that could have been detected at compile time will occur at run
>> time.
>
> You say this as if it's not a problem. Sometimes run-time is too late
> to find out about such errors -- consider safety-critical applications,
> like rocket-navigation software or nuclear power plant controllers.

Since Joe Armstrong and Ulf Wiger have been responsible for producing
mission critical realtime embedded code using Erlang, you will have
to forgive me if I tend to believe Joe.

Lauri Alanko

Mar 3, 2003, 6:02:18 AM
Jerzy Karczmarczuk <kar...@info.unicaen.fr> quoth:

> Elegance is a personal matter.
> In absence of static typing you can define the Y combinator, which
> may be considered quite elegant.

Static typing doesn't prevent it either:

$ ocaml -rectypes
Objective Caml version 3.06

# let y_cbn = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
val y_cbn : ('a -> 'a) -> 'a = <fun>
# let y_cbv = fun f -> (fun x -> x x) (fun x y -> f (x x) y);;
val y_cbv : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b = <fun>
# let fact = y_cbv (fun f n -> if n = 0 then 1 else n * f (n - 1));;
val fact : int -> int = <fun>
# fact 4;;
- : int = 24


Lauri Alanko
l...@iki.fi

Ulf Wiger

Mar 3, 2003, 6:13:12 AM
Ulf Wiger <ulf....@CUT-ericsson.com> writes:

> The system also has to be able to withstand tremendous load without
> caving in. We've submitted the AXD 301 to up to 10x the engineered
> capacity, and the performance degradation was basically linear
> (and the system did not break.)

This should of course have been "We've submitted the AXD 301
to traffic load equal to up to 10x the engineered capacity."

Sorry about that.

Ulf Wiger

Mar 3, 2003, 6:10:08 AM
William Lovas <wlo...@force.stwing.upenn.edu> writes:

> In article <tyf8yvw...@enfield.sics.se>, Joe Armstrong wrote:
>
> > On the (-) side the lack of a static type system means that
> > certain errors that could have been detected at compile time
> > will occur at run time.
>
> You say this as if it's not a problem. Sometimes run-time is too
> late to find out about such errors -- consider safety-critical
> applications, like rocket-navigation software or nuclear power
> plant controllers. In less life-threatening situations, a lack
> of static typing is `merely' an annoyance, but such annoyances
> are the very things that make software a pain in the neck to write.

There are different categories of safety-critical application.
Rocket navigation software, anti-lock braking systems and nuclear power
plant controllers are some examples; telecoms systems software
is another. The requirements on these systems are tremendously
different, but they share the property that it's indeed a Bad Thing
when they break (people can indeed die if telecoms systems stop
working, and the operator is certain to lose tons of money -- also
a Bad Thing, esp. if your product is to blame.)

One of the biggest challenges in making a telecoms system fit
for mission-critical operation is the enormous amount of
functionality required. The AXD 301 system mentioned has around
1000 concurrent threads running on each processor (typically 2-10
control processors in one system, although there could be more.)
Not just the number of concurrent activities, but also the number
of different _types_ of activity is staggering. With sufficiently
lightweight processes, you can decomposition your system into
lots of smaller state machines, each with its own world, local
variable scope and automatic garbage collection. This alone
solves a large number of problems.

The system also has to be able to withstand tremendous load without
caving in. We've submitted the AXD 301 to up to 10x the engineered
capacity, and the performance degradation was basically linear

(and the system did not break.) This is very much due to Erlang's
outstanding concurrency support, in my opinion. Giving up the
strong concurrency support just to get static typing would be
unthinkable, at least for me.

I understand the theoretical arguments in favour of static typing
in functional programs, but am still waiting for real experiences
from large-scale industrial projects. I've looked at the Concurrent
Haskell paper, and think it looks fairly promising. It also
indicates that one can combine static typing with an elegant
concurrency model. All you have to do now to convince us sceptics
is to make your tools industrial-strength, find a sufficiently
large problem to solve, and then show that you were indeed able
to tackle complexity and quality issues with "normal" industry
programmers. ;-)

Jón Fairbairn

Mar 3, 2003, 6:22:22 AM
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

That's not a question about static typing, it just depends
on the type system.

y:: forall t . (t -> t) -> t

y f = y2 y2 f
  where y2 y2' f = f (y2' y2' f)
        y2 :: mu r . r -> (t -> t) -> t

Or something. It's ages since I worked on this stuff.
(mu is a fixpoint binder for types).


You can even do it in Haskell, with a little contortion

-- define the Y combinator without using built in recursion

data Y2 t = Recur (Y2 t -> (t -> t) -> t)

y f = y2 (Recur y2) f
  where y2 :: Y2 t -> (t -> t) -> t
        y2 (Recur y2') f = f (y2' (Recur y2') f)

--
Jón Fairbairn Jon.Fa...@cl.cam.ac.uk

Jerzy Karczmarczuk

Mar 3, 2003, 6:39:11 AM
Lauri Alanko cites me

>>In absence of static typing you can define the Y combinator, which
>>may be considered quite elegant.

> Static typing doesn't prevent it either:
>
> $ ocaml -rectypes
> Objective Caml version 3.06
>
> # let y_cbn = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
> val y_cbn : ('a -> 'a) -> 'a = <fun>

...

Oh, don't cheat, please. Call ocaml "normally":


>ocaml
Objective Caml version 3.06

# let yy = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
This expression has type 'a -> 'b but is here used with type 'a
#

(This expression means "x", underlined by Ocaml).
Now, of course you may protest, saying that recursive types
are legal, elegant, and there is no cheating involved. Hm.
I might *one day* accept that.

Jón Fairbairn goes further, showing a nice emulation of Y
in Haskell. It works indeed. But it uses a recursive data type

data Y2 t = Recur (Y2 t -> (t -> t) -> t)

which I am reluctant to buy.

Look, I adore recursive types, I would need them myself. But
as for today, despite my profound respect for both of you,
I remain unconvinced... Of course I should weaken my statement
that 'static types preclude Y'.


Jerzy Karczmarczuk

Jón Fairbairn

Mar 3, 2003, 6:54:27 AM
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

> Jón Fairbairn goes further, showing a nice emulation of Y
> in Haskell. It works indeed. But it uses a recursive data type
>
> data Y2 t = Recur (Y2 t -> (t -> t) -> t)
>
> which I am reluctant to buy.
>
> Look, I adore recursive types, I would need them myself. But
> as for today, despite my profound respect for both of you,
> I remain unconvinced... Of course I should weaken my statement
> that 'static types preclude Y'.

Well, you can do it without built-in recursive types,
provided that you have the ability to define Mu.

typefunction Mu t = Mu2 Mu2 t
typefunction Mu2 m2 t = t (m2 m2 t)

then use that "Y on types" to define the types needed to
define Y. Of course, you have trouble making sure that the
typechecker terminates...

--
Jón Fairbairn Jon.Fa...@cl.cam.ac.uk

Joe Armstrong

Mar 3, 2003, 7:47:27 AM
Faust <urf...@optushome.com.au> writes:

> William Lovas <wlo...@force.stwing.upenn.edu> writes:
>
> >> On the (-) side the lack of a static type system means that certain
> >> errors that could have been detected at compile time will occur at run
> >> time.
> >
> > You say this as if it's not a problem.

I never said that - my notation "(-)" was meant to convey the idea
that I thought that this was bad - and that it would be nice if you
could detect type errors at compile time.

> Sometimes run-time is too late
> > to find out about such errors -- consider safety-critical applications,
> > like rocket-navigation software or nuclear power plant controllers.

It is *precisely* such systems I am interested in, and which I've
spent the last 25 years building. It *is* at run-time that such errors
occur for the first time despite careful analysis and testing.

I for one would not feel happy sitting in an aircraft if the *only*
thing I knew about the system was that the code had passed a static
type checking program. I have seen a large number of such programs.

<In the ICFP functional programming competition two years ago,
roughly half the Haskell entries were rejected because they were
wrong, our single Erlang entry won the Judges prize and was correct -
this despite the lack of a static type system - a type system is no
substitute for careful thought and design>

Think of a type system like a spelling checker - If I run an English
document through a spelling checker and find no spelling errors does
it mean that there are no spelling errors? - of course not. That's
like a type checker. When two arguments to a function are both
integers I can easily accidentally swap the argument order and make a
type correct but totally incorrect program.
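Joe's swapped-arguments point is easy to make concrete. Here is a minimal sketch in Python with type annotations (the function and names are hypothetical, chosen only for illustration): a static checker accepts both calls below, since the types line up either way, yet one of them is simply the wrong program.

```python
# Both calls type-check as (int, int) -> int, yet only one is what the
# programmer meant -- no type checker can tell the two apart.

def apply_discount(price_cents: int, discount_cents: int) -> int:
    """Subtract a discount from a price; both arguments are plain ints."""
    return price_cents - discount_cents

intended = apply_discount(1000, 250)  # 750: the call the programmer meant
swapped = apply_discount(250, 1000)   # -750: same types, wrong answer

print(intended, swapped)
```

This is exactly the spell-checker analogy: the "spelling" (types) is fine in both calls; the "meaning" (which argument is which) is not.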

Here are a few of the things we do in the design and test of a
system.

1) Coverage testing
Run a test suite that checks that every line of code gets executed at
least once

2) Regression testing
Test that every bug ever reported is no longer a bug. Every time a bug is
reported, a test case reproducing the bug is added to the test suite.

3) Nightly build
Test and re-build the code every night

4) Code reviews
Get other people to read and explain the code

5) Design reviews
ditto

6) Dynamic testing of invariants
The system dynamically checks invariants that cannot be checked statically

7) Dynamic fallback to simpler cases.
If the system detects that an invariant has been broken - meaning
that it cannot achieve a goal, it tries to achieve a simpler goal.

8) Dynamic restarting and monitoring of all critical activities.

9) Hot stand-by and failover. Meaning that computations can be monitored
and automatically resumed on a physically separated processor.

Static type checking checks for one particular type of error - but
is not a universal panacea - a program that has got through type
checking can easily manifest run-time errors (like divide by zero) -
which if you're talking about fault-tolerant systems must be handled
at run-time. Functional code might be referentially transparent, but
the hardware which is controlled by the software is not.

To make fault-tolerant systems you have to throw every idea that
you've ever had at the problem - type checking alone is not sufficient.

Please don't misrepresent me - type checking is great to have - one of
many good techniques that are needed to make reliable systems.

/Joe

>
> Since Joe Armstrong and Ulf Wiger have been responsible for producing
> mission critical realtime embedded code using Erlang, you will have
> to forgive me if I tend to believe Joe.

Thanks! We even sold such systems for real money against competition
from real programs written in C and Java :-)

Daniel C. Wang

Mar 3, 2003, 9:40:02 AM

Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:


If you're a mathematician/logician the inability to define the Y combinator
in the typed lambda calculus is a deliberate and important feature. The fact
that you *can* define the Y combinator in the untyped lambda calculus is
the reason types even exist! Types were "invented" to forbid Y.

The lambda calculus was "invented" as a formal logical system. The fact it
has a nice computational interpretation is kind of a lucky coincidence. If
you care about the logical properties of the lambda calculus the Y
combinator is a rather ugly thing you want to forbid. You forbid it by using
types.

Yoann Padioleau

Mar 3, 2003, 10:00:55 AM
nospam...@agere.com (Daniel C. Wang) writes:

> Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:
>
> > Joachim Durchholz wrote:
> > > Jerzy Karczmarczuk wrote:
> > >
> > >>
> > >> Elegance is a personal matter.
> > >> In absence of static typing you can define the Y combinator, which
> > >> may be considered quite elegant.

even with static typing :)

let fact f = function
| 0 -> 1
| n -> n * f (n-1)

let rec y f = fun x -> f (y f) x

let factY = y fact

(*
factY 3;;
- : int = 6
*)


> > > I was under the impression that the need for the Y combinator is
> > > just an artifact of the way that lambda calculus binds function
> > > names to functions. IOW not elegant, just a hack to get around
> > > problems (an elegant hack maybe, but still a hack).
> > > Did I miss something?
> >
> > I don't know who misses what. But I think that Y is the inverse of
> > a hack. This is a formal, elegant way to define recursivity. This is
> > as far from an artifact as possible, actually from a hacker point
> > of view it is practically useless...

it can be used to choose between a normal function call and a memoized function

let fib f = function
| 0 -> 1
| 1 -> 1
| n -> f (n-1) + f (n-2)

let tabulate fib =
let myref = ref [] in
let rec aux fib = fun x ->
try List.assoc x !myref
with Not_found ->
let v = fib (aux fib) x in
(myref := (x,v)::!myref;
v)
in
aux fib

let normalfib = y fib
let memofib = tabulate fib


> >
> > Regards
> > Jerzy Karczmarczuk
>
>
> If you're a mathematician/logician the inability to define the Y combinator
> in the typed lambda calculus is a deliberate and important feature. The fact
> that you *can* define the Y combinator in the untyped lambda calculus is
> the reason types even exist! Types were "invented" to forbid Y.

i am not sure of this :) your statement seems to be too strong.

> The lambda calculus was "invented" as a formal logical system. The fact it
> has a nice computational interpretation is kind of a lucky coincidence. If
> you care about the logical properties of the lambda calculus the Y
> combinator is a rather ugly thing you want to forbid. You forbid it by using
> types.

Indeed, that's funny.

--
Yoann Padioleau, INSA de Rennes, France,
Opinions expressed here are only mine. I write only in a personal capacity.
**____ Get Free. Be Smart. Simply use Linux and Free Software. ____**

Matthias Blume

Mar 3, 2003, 10:52:31 AM
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

Indeed. Elegance is in the eye of the beholder. And since
non-trivial static type systems have a tendency (:-) to exclude
certain programs from the set of well-typed programs, chances are that
someone somewhere finds one of these excluded programs "elegant".

There's no free lunch.

Matthias

PS: As others have already pointed out: The Y combinator might be
considered "elegant", and it might be considered "ugly". Some static
type systems exclude it, others don't.

Basile STARYNKEVITCH

Mar 3, 2003, 3:56:08 PM
>>>>> "HP" == HP <h...@rentec.com> writes:

HP> Faust <urf...@optushome.com.au> wrote in message
HP> news:<n0ker8...@optushome.com.au>...
>> h...@rentec.com (HP) writes:
>>
>> > By reading Erlang's website, > I (being a C++ person) notice
>> that one feature in Erlang is 'hot code replacement' ----
>> i.e. change the code while the application is running. [...]
>> >
>> > Does any functional language support this feature ?

Many functional languages have this feature. Actually, in an extended
sense, all functional languages have it, because functional values
can be thought of as changing code (even if they are usually only
implemented as closures, mixing a fixed code snippet with variable
values). In a practical sense, many but not all functional languages have
this "hot code replacement" feature (at least as new code addition, in
the form of runtime linking).


>>
>> Erlang is a functional language. Is there any reason to not

>> use erlang ?
>>
HP> I was trying to understand if hot code replacement feature
HP> is a natural product of 'functional language'

It is not stricto sensu a natural product of functional languages, but
functional languages help a lot :

1. because they offer explicit functional values (usually
implemented as closures)

2. because values are garbage collected (a GC is almost needed in
functional languages because values may escape their lexical
scope, notably inside closures).

3. because functional languages offer (by definition) syntactic &
semantic features to conveniently manipulate functions

Of course CommonLisp is a functional language by the above criteria.

HP> or it is purly
HP> dependent upon the 'implementation of a language system'.
HP> [e.g. is it feasible to implement such a feature in a C++
HP> interpreter ? Or is it out of the question ? ]

If you consider not only the C++ compiler (or interpreter) but also
the underlying (Unix...) operating system, then even C or C++ handle
runtime code replacement (through plugins, dynamic loading of shared
libraries, e.g. with dlopen/dlsym/dlclose etc.). A major issue is
when to get rid of functions (i.e. when to garbage collect functional
values).

And some C libraries permit runtime code generation, like GNU
lightning for example.
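For a concrete feel of the idea outside C, here is a minimal Python sketch of in-process code replacement via `importlib.reload` (the module name `hotmod` is hypothetical; this illustrates only the mechanism, not the state-migration and soundness issues the thread discusses).

```python
# Minimal imitation of hot code replacement in a running process:
# write a module, import it, change its source on disk, reload it.
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # always recompile from source
workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
path = os.path.join(workdir, "hotmod.py")

with open(path, "w") as f:
    f.write("def answer():\n    return 'v1'\n")
import hotmod
old = hotmod.answer()                   # 'v1'

with open(path, "w") as f:              # "upgrade" while still running
    f.write("def answer():\n    return 'v2-upgraded'\n")
importlib.reload(hotmod)
new = hotmod.answer()                   # 'v2-upgraded'

print(old, "->", new)
```

Note that this only swaps code; any live data built by the old version is untouched, which is precisely where the hard problems begin.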

Actually hot code replacement involves several important issues not
covered above, in particular

* implementation issues, e.g. cache coherence

* soundness (notably type soundness) = be sure that the new replacing
code matches the signature and invariants of the old replaced
code

* getting rid of the previous replaced code (for example thru garbage
collecting of generated code)

* managing the transition of objects (or data types, or values)
from the old code to the new one. Imagine what should happen if
(in C++ parlance) a class was replaced by another one, with
additional virtual member functions and removed member fields....

[I'm not sure that Erlang deals completely with the last point which
is a very tricky issue]

--

Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basile<at>starynkevitch<dot>net
alias: basile<at>tunes<dot>org
8, rue de la Faïencerie, 92340 Bourg La Reine, France

William Lovas

Mar 3, 2003, 5:01:13 PM
In article <3E633EDF...@info.unicaen.fr>, Jerzy Karczmarczuk wrote:
> Oh, don't cheat, please. Call ocaml "normally":
>
>
> >ocaml
> Objective Caml version 3.06
>
> # let yy = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
> This expression has type 'a -> 'b but is here used with type 'a
> #
>
> (This expression means "x", underlined by Ocaml).
> Now, of course you may protest, saying that recursive types
> are legal, elegant, and there is no cheating involved. Hm.
> I might *one day* accept that.

But... really, there *is* no cheating involved! -rectypes just lets you
use recursive types without explicit folding and unfolding.

> Jón Fairbairn goes further, showing a nice emulation of Y
> in Haskell. It works indeed. But it uses a recursive data type
>
> data Y2 t = Recur (Y2 t -> (t -> t) -> t)
>
> which I am reluctant to buy.

Really, there's nothing magic going on here. The same idea works in O'Caml:

% ocaml
Objective Caml version 3.06

# type 'a d = D of ('a d -> 'a);;
type 'a d = D of ('a d -> 'a)
# let y f = (fun (D x) -> f (fun y -> x (D x) y))
(D (fun (D x) -> f (fun y -> x (D x) y)));;
val y : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b = <fun>
# let fct f n = if n = 0 then 1 else n * f (n-1);;
val fct : (int -> int) -> int -> int = <fun>
# let fact = y fct;;
val fact : int -> int = <fun>

# fact 5;;
- : int = 120

(note: no -rectypes -- all the recursion is hidden behind the D constructor.)

> Look, I adore recursive types, I would need them myself. But
> as for today, despite my profound respect for both of you,
> I remain unconvinced... Of course I should weaken my statement
> that 'static types preclude Y'.

Are you saying that you're unconvinced about recursive types? Again, let
me restate that there's nothing magical about them. Functional programmers
use recursive datatypes all the time. Consider:

type 'a list = Nil | Cons of 'a * 'a list

type 'a tree = Leaf | Node of 'a * 'a tree * 'a tree

type exp = Var of string | Lambda of string * exp | App of exp * exp

While it is true that you can't type Y in the simply-typed lambda calculus,
it's important to recognize that that's only one possible type system, and
researchers are continually questing for richer and more expressive type
systems. The state of the art is not perfect, but it's pretty damn cool,
and dead useful, in my opinion.

cheers,
William

Joachim Durchholz

Mar 3, 2003, 5:02:03 PM
Joe Armstrong wrote:
>
> 7) Dynamic fallback to simpler cases.
> If the system detects that an invariant has been broken - meaning
> that it cannot achieve a goal, it tries to achieve a simpler goal.

Do you program that explicitly, or is it supported by the language?

> Static type checking checks for one particular type of error - but
> is not a universal panacea - a program that has got through type
> checking can easily manifest run-time errors (like divide by zero) -
> which if you're talking about fault-tolerant systems must be handled
> at run-time.

Static type checking can check a whole lot of things.
See it this way: static types are annotations about program properties
that are automatically proven by the compiler. The properties that can
be proven were carefully selected so that an automatic proof is possible
- for older languages, that calculus is very restricted (the equivalent
of propositional logic, i.e. without quantifiers); for Hindley-Milner and
similar type inference systems, it's a (still carefully selected) subset
of predicate logic (i.e. with quantifiers).

> Functional code might be referentially transparent, but
> the hardware which is controlled by the software is not.

Referential transparency is orthogonal to static typing.
Well, conceptually - intransparent programs have more failure modes that
cannot be type-checked (not even with parametric polymorphism).

William Lovas

Mar 3, 2003, 5:51:53 PM
Prelude: The following is a discussion about the relative merits of static
typing versus dynamic typing, and as such, i've left out any practical
considerations with regard to adding static type checking to Erlang or
switching Ericsson over to another statically typed language. I apologize
in advance for this omission and ask you to forgive me for this simplification.

In article <tyf3cm4...@enfield.sics.se>, Joe Armstrong wrote:
> Faust <urf...@optushome.com.au> writes:
>
>> William Lovas <wlo...@force.stwing.upenn.edu> writes:
>>
>> >> On the (-) side the lack of a static type system means that certain
>> >> errors that could have been detected at compile time will occur at run
>> >> time.
>> >
>> > You say this as if it's not a problem.
>
> I never said that - my notation "(-)" was meant to convey the idea
> that I thought that this was bad - and that it would be nice if you
> could detect type errors at compile time.

Sorry, maybe i'm reading too much into your text. In the same breath,
you said "the jury is out" on static typing, and i felt like you were
implying that the (-) wasn't such a big deal in reality. You had also
asked, "Why is this bad?" with regard to "No static typing", which can
easily be seen as a statement that any disadvantages arising from lack
of static typing should not be regarded as "bad":

> 1. No static typing.

Why is this bad? - seems like the jury is out here:

On the (-) side the lack of a static type system [...]

Now, though, you acknowledge that the disadvantage cited *is*, in fact,
bad. Forgive me if i'm confused... did you speak too quickly, or did i
interpret too quickly?

> I for one would not feel happy sitting in an aircraft if the *only*
> thing I knew about the system was that the code had passed a static
> type checking program. I have seen a large number of such programs.

Likewise, but at the same time, i'd probably be wary of sitting in an
aircraft whose software had *not* passed a static type checker. Yes,
static type checking is no substitute for a tightly-screwed-on head,
and careful programmers can produce safe, sound code without a static
type checker, but i have a hard time seeing the advantage in *not*
catching the stupid errors early on...

> <In the ICFP functional programming competition two years ago,
> roughly half the Haskell entries were rejected because they were
> wrong, our single Erlang entry won the Judges prize and was correct -
> this despite the lack of a static type system - a type system is no
> substitute for careful thought and design>

Despite appearances, ICFP manages to emphasize cleverness more than
interesting tools, and no doubt you and your Erlang team are quite
clever programmers :)

> Think of a type system like a spelling checker - If I run an English
> document through a spelling checker and find no spelling errors does
> it mean that there are no spelling errors? - of course not. That's
> like a type checker. When two arguments to a function are both
> integers I can easily accidentally swap the argument order and make a
> type correct but totally incorrect program.

I think a better (but not perfect) analogy would be that even after
running a document successfully through a spell-checker, you cannot
be assured that there are no grammar errors -- the point being that
after passing a static type checker, it is mathematically *provable*
that a program contains no type errors, but that doesn't constitute
a proof of correctness.

> Here are a few of the things we do in the design and test of a
> system.
>
> [impressive regression list snipped]
>
> Static type checking checks for one particular type of error - but
> is not a universal panacea - a program that has got through type
> checking can easily manifest run-time errors (like divide by zero) -
> which if you're talking about fault-tolerant systems must be handled
> at run-time. Functional code might be referentially transparent, but
> the hardware which is controlled by the software is not.

No, static type checking is not a panacea -- as you say, it only checks
for one particular type of error. But as it happens, that particular
type of error is easy enough to check for mechanically, so why make the
poor programmer do it? I say let the compiler handle the type errors,
and let the programmer get on with the business of programming -- an art
which consists of a great deal more than typing out code, you'll agree!

> To make fault-tolerant systems you have to throw every idea that
> you've ever had at the problem - type checking alone is not sufficient.

Agreed, but i still see no advantage to delaying type error detection to
run-time.

> Please don't misrepresent me - type checking is great to have - one of
> many good techniques that are needed to make reliable systems.

Similarly, i recognize that it's possible to build reliable systems without
static type checking. I also recognize that it's possible to write quality
software in C or assembly, but i see no compelling reason to do so ...


You never answered my question about the advantages of dynamic typing.
Specifically, you'd said:

On the (+) side with dynamic typing code is shorter and you can
write powerful generic routines for common tasks.

What did you mean by this? That is, to get back to (what i see as) the
root of the matter, what practical advantages are offered by dynamic type
systems?


>> Since Joe Armstrong and Ulf Wiger have been responsible for producing
>> mission critical realtime embedded code using Erlang, you will have
>> to forgive me if I tend to believe Joe.
>
> Thanks! We even sold such systems for real money against competition
> from real programs written in C and Java :-)

Let me interject also that i'm elated that you and Ulf are producing
industrial strength systems using functional programming techniques.
At the same time, though, i don't believe your operation would be at
all hindered by the introduction of static typing -- and just maybe
it could even be improved a bit ;)

cheers,
William

gr...@cs.uwa.edu.au

Mar 3, 2003, 10:43:55 PM
Ulf Wiger <ulf....@cut-ericsson.com> wrote:
: Giving up the strong concurrency support just to get static
: typing would be unthinkable, at least for me.

Is this just an example of a feature you feel is more
important than static typing, or are you saying that
you've encountered a trade-off between the two?

-Greg

Jerzy Karczmarczuk

Mar 4, 2003, 3:03:09 AM
William Lovas asks me:

> Are you saying that you're unconvinced about recursive types? Again, let
> me restate that there's nothing magical about them. Functional programmers
> use recursive datatypes all the time. Consider:
>
> type 'a list = Nil | Cons of 'a * 'a list
> type 'a tree = Leaf | Node of 'a * 'a tree * 'a tree

etc.

I see that we are pushing this academic discussion quite far, but at least
it is nice, with nice, helpful people. But I continue to play the advocatus
diaboli (well, actually *you are*, not me; optional smiley here). I "protested"
against using recursion to define recursion, that's all. This was the point
of my non-conviction. Shall I buy the definition: y f x = f (y f) x ? Hm...

I don't think you suppose that I have never seen the definition of the
list type... But look again on the Jón's example

data Y2 t = Recur (Y2 t -> (t -> t) -> t)

This is a bit worse, no base clause at all. And this is not even co-recursive,
as would be a formally infinite list (without Nil), usable in a lazy language.
That's all. On the other hand I agree with you about almost everything, the
only point which makes me shake my head is the statement of Daniel C. Wang:

<< Types were "invented" to forbid Y >>

My goodness, this goes quite far. But I understand that we should read it
as we should, not literally...
Thank you. Over.


Jerzy Karczmarczuk

Joachim Durchholz

Mar 4, 2003, 4:06:02 AM
William Lovas wrote:
>
> Let me interject also that i'm elated that you and Ulf are producing
> industrial strength systems using functional programming techniques.
> At the same time, though, i don't believe your operation would be at
> all hindered by the introduction of static typing -- and just maybe
> it could even be improved a bit ;)

That last sentence is the rub. Adding a static type system to a
dynamically-typed language usually makes things worse.

Language design is subject to many conflicting factors. Therefore,
languages are the results of many engineering trade-offs.
If a language is designed for static typing, that's an additional
constraint that must be satisfied. If static typing is added as an
afterthought, you'll get a language where all trade-offs were done
without taking static typing into account; the result is that static
typing will collide with other features of the language. (Sometimes
you're lucky and the language designer unconsciously considered static
typing in all design decisions.)

The next problem is existing code. Code tends to exploit all details of
the language semantics. If a language is dynamically typed, there WILL
be code that uses polymorphism in ways that the type system doesn't
support. If you're extremely unlucky, it will be a central library
routine and making it typesafe would require a redesign of the entire
interface. (In Erlang, I'd take a long and hard look at the routines for
passing messages between processes. They have all the prerequisites for
being critical: message queues are ubiquitous in Erlang software, and
they must be able to handle data of arbitrary type. Any type system
would have to be able to cover all the types that a programmer may throw
at a specific queue. IOW Erlang would need a type system with parametric
polymorphism as a minimum.)

Ulf Wiger

Mar 4, 2003, 8:39:32 AM
Basile STARYNKEVITCH <basile+NO@SPAM+starynkevitch.net.invalid> writes:

I posted a code snippet earlier in this thread that does handle that
last point:

event_loop(State) ->
receive
{From, change_code, FromVsn} ->
NewState = ?MODULE:change_code(FromVsn, State),
?MODULE:event_loop(NewState);
Event ->
NewState = handle_event(Event, State),
event_loop(NewState)
end.

In reality, one needs a little bit more -- a method for suspending
dependent processes while making the transition, for example.

Erlang/OTP has quite a comprehensive support system for in-service
upgrades, allowing you to safely perform data structure conversion,
adding/removing processes, applications, database tables and objects,
etc. all synchronized in a distributed system. Much of this is of
course implemented in the middleware that sits on top of the
programming language.
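The Erlang pattern above - a tail-recursive loop that, on a special control message, converts its state and continues with new code - can be sketched in any language with first-class functions. Here is a hedged Python approximation (all names hypothetical; a real system would also suspend dependent processes during the transition, as Ulf notes):

```python
# Sketch of the Erlang event-loop upgrade pattern: the loop owns its state;
# a 'change_code' message installs a new handler and converts the old state,
# mimicking the ?MODULE:change_code/2 hand-off.
from queue import Queue

def run_loop(inbox: Queue, handler, state):
    while True:
        msg = inbox.get()
        if msg is None:                            # shutdown sentinel
            return state
        if isinstance(msg, tuple) and msg[0] == "change_code":
            _, new_handler, convert = msg
            state = convert(state)                 # migrate state to new layout
            handler = new_handler                  # later events use new code
        else:
            state = handler(msg, state)

inbox = Queue()
inbox.put(("add", 1))                              # old handler: state + 1
inbox.put(("change_code",
           lambda m, s: s + 2 * m[1],              # new handler: state + 2
           lambda s: s * 10))                      # state conversion
inbox.put(("add", 1))                              # handled by new code
inbox.put(None)

final = run_loop(inbox, lambda m, s: s + m[1], 0)
print(final)                                       # 12: (0+1)*10 + 2
```

The key property mirrored from Erlang is that the upgrade is just another message in the ordinary event stream, so it is serialized with normal traffic.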

Ulf Wiger

Mar 4, 2003, 9:03:07 AM
gr...@cs.uwa.edu.au writes:

I feel that strong concurrency support is more important than
static typing, yes. I also wrote:

: I've looked at the Concurrent Haskell paper, and think it
: looks fairly promising. It also indicates that one can
: combine static typing with an elegant concurrency model.

I'm utterly convinced that you need strong concurrency
support, including distribution and process monitoring
in order to build e.g. telecoms systems. I'm not yet
convinced that static typing would provide a net benefit
for such systems, but that obviously depends on the
entire programming environment and how the tools are
applied in the project.

I would like to think that I'm open to convincing
real-world examples demonstrating the power of static
typing(*), but then again, 10 years of programming in
a dynamically typed language probably induces some bias...

/Uffe

(*) I've seen many practical examples, but from other
domains, different problems, and from programmers with
a decidedly different level of CS education than you'll
find in most industrial projects. They have convinced me
that static typing is great for some programmers when
dealing with some problems.

Joe Armstrong

Mar 4, 2003, 9:41:31 AM
William Lovas <wlo...@force.stwing.upenn.edu> writes:

By saying the jury was out I meant that there were a number of
arguments in the static vs. dynamic typing debate and that there was
no overwhelming consensus that either approach had significant
advantages over the other.


>
> > I for one would not feel happy sitting in an aircraft if the *only*
> > thing I knew about the system was that the code had passed a static
> > type checking program. I have seen a large number of such programs.
>
> Likewise, but at the same time, i'd probably be wary of sitting in an
> aircraft whose software had *not* passed a static type checker. Yes,
> static type checking is no substitute for a tightly-screwed-on head,
> and careful programmers can produce safe, sound code without a static
> type checker, but i have a hard time seeing the advantage in *not*
> catching the stupid errors early on...
>

This reminds me of the story of an aircraft that crashed due to a
failure in the navigation system. The plane tried to flip over as it
crossed the equator, due to a sign error in the code.

I imagine that if you have some code that has passed a static type
checker then changing an integer N somewhere in the program to -N
probably won't have any effect on the types.

In a navigation system getting the types right is the least of your
problems - getting the answers right is the tricky bit.

Here are two problems:

Problem 1: Getting the types right = a soluble problem
Problem 2: Getting the values right = a much more difficult problem

IMHO there seem to be too many people doing (1) and not enough doing
(2) - there also seems to be a vociferous sub-set of people working on
(1) who seem to think that they have also solved (2).

Me, I try to build fault-tolerant systems - so I'll happily use any
and all techniques I know of to assist me.


> > <In the ICFP functional programming competition two years ago,
> > roughly half the Haskell entries were rejected because they were
> > wrong, our single Erlang entry won the Judges prize and was correct -
> > this despite the lack of a static type system - a type system is no
> > substitute for careful thought and design>
>
> Despite appearances, ICFP manages to emphasize cleverness more than
> interesting tools, and no doubt you and your Erlang team are quite
> clever programmers :)
>
> > Think of a type system like a spelling checker - If I run an English
> > document through a spelling checker and find no spelling errors does
> > it mean that there are no spelling errors? - of course not. That's
> > like a type checker. When two arguments to a function are both
> > integers I can easily accidentally swap the argument order and make a
> > type correct but totally incorrect program.
>
> I think a better (but not perfect) analogy would be that even after
> running a document successfully through a spell-checker, you cannot
> be assured that there are no grammar errors -- the point being that
> after passing a static type checker, it is mathematically *provable*
> that a program contains no type errors, but that doesn't constitute
> a proof of correctness.

Yes^100 - please tell this to your students - I seem to have heard
the statement "If it goes through the type checker the program will
be *correct*" more than once (I guess correct is an overloaded word,
meaning either "type correct" or "correct according to some
specification")

>
> > Here are a few of the things we do in the design and test of a
> > system.
> >
> > [impressive regression list snipped]
> >
> > Static type checking checks for one particular type of error - but
> > is not a universal panacea - a program that has got through type
> > checking can easily manifest run-time errors (like divide by zero) -
> > which if you're talking about fault-tolerant systems must be handled
> > at run-time. Functional code might be referentially transparent, but
> > the hardware which is controlled by the software is not.
>
> No, static type checking is not a panacea -- as you say, it only checks
> for one particular type of error. But as it happens, that particular
> type of error is easy enough to check for mechanically, so why make the
> poor programmer do it? I say let the compiler handle the type errors,
> and let the programmer get on with the business of programming -- an art
> which consists of a great deal more than typing out code, you'll agree!
>
> > To make fault-tolerant systems you have to throw every idea that
> > you've ever had at the problem - type checking alone is not sufficient.
>
> Agreed, but i still see no advantage to delaying type error detection to
> run-time.

Unless the advantages of a dynamic type system outweigh the
advantages of a static type system. For example, Unix pipes and
sockets are just uninterpreted streams of characters (i.e. untyped)
but are still *very* useful (compare this with say XML/WSDL/SOAP
streams which are strongly (I use the word guardedly) typed) - each
has advantages in different contexts.

> > Please don't misrepresent me - type checking is great to have - one of
> > many good techniques that are needed to make reliable systems.
>
> Similarly, i recognize that it's possible to build reliable systems without
> static type checking. I also recognize that it's possible to write quality
> software in C or assembly, but i see no compelling reason to do so ...

Employment - there would be mass unemployment among programmers if
people used decent languages - this would have undesirable knock-on
effects for the economy.

> You never answered my question about the advantages of dynamic typing.
> Specifically, you'd said:
>
> On the (+) side with dynamic typing code is shorter and you can
> write powerful generic routines for common tasks.
>
> What did you mean by this? That is, to get back to (what i see as) the
> root of the matter, what practical advantages are offered by dynamic type
> systems?

Generic operations - the code to marshal/demarshal an Erlang term,
save/restore a term, garbage collect an Erlang heap, or
migrate an Erlang process from one machine to another is not type
dependent - and generic debugging is easy ...

The code for the generic operations is small and easy to implement
directly. Compiled code volume in large systems is a big problem - so
it's nice to have generic libraries, which are a lot smaller than
specialized encoding routines for every specific data type.
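The generic operations Joe describes can be sketched concretely. Below is a toy, type-independent marshaller written in Python for illustration (the function names and wire format are invented here; Erlang's real equivalents are the built-in `term_to_binary`/`binary_to_term`). The point is that one pair of routines handles every term shape, with no per-type encoding code:

```python
def marshal(term):
    """Encode nested ints, strings, lists and tuples into one string.
    A single generic routine covers every shape of term."""
    if isinstance(term, int):
        return "i%d;" % term
    if isinstance(term, str):
        return "s%d;%s" % (len(term), term)          # length-prefixed
    if isinstance(term, (list, tuple)):
        tag = "l" if isinstance(term, list) else "t"
        return "%s%d;%s" % (tag, len(term),
                            "".join(marshal(x) for x in term))
    raise TypeError("unsupported term: %r" % (term,))

def unmarshal(s, i=0):
    """Decode one term starting at position i; return (term, next position)."""
    tag, j = s[i], s.index(";", i)
    n = int(s[i + 1:j])
    if tag == "i":
        return n, j + 1
    if tag == "s":
        return s[j + 1:j + 1 + n], j + 1 + n
    items, k = [], j + 1
    for _ in range(n):                    # "l" or "t": decode n sub-terms
        item, k = unmarshal(s, k)
        items.append(item)
    return (items if tag == "l" else tuple(items)), k
```

Any mix of these shapes round-trips through the same two functions, e.g. `unmarshal(marshal([1, "hi", (2, 3)]))[0]`.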

> >> Since Joe Armstrong and Ulf Wiger have been responsible for producing
> >> mission critical realtime embedded code using Erlang, you will have
> >> to forgive me if I tend to believe Joe.
> >
> > Thanks! We even sold such systems for real money against competition
> > from real programs written in C and Java :-)
>
> Let me interject also that i'm elated that you and Ulf are producing
> industrial strength systems using functional programming techniques.
> At the same time, though, i don't believe your operation would be at
> all hindered by the introduction of static typing -- and just maybe
> it could even be improved a bit ;)

It's not for want of trying - Phil Wadler,
Simon Marlow and Thomas Arts have all had a good try at adding static
typing to Erlang. The main conclusion is that a type system is so
fundamental that it has to be designed in from the beginning, not
bolted on later as an afterthought - by the same reasoning, bolting
concurrency on top of essentially sequential languages also seems doomed
to failure (which is why concurrency in Java, C++ etc. is such a
mess).

I think if you wanted concurrency + strong typing you'd have to
start all over again and re-think most things - sigh ;-)

/Joe

> cheers,
> William

Adrian Hey

unread,
Mar 4, 2003, 11:11:10 AM3/4/03
to
Ulf Wiger wrote:

> gr...@cs.uwa.edu.au writes:
>
>> Ulf Wiger <ulf....@cut-ericsson.com> wrote:
>> : Giving up the strong concurrency support just to get static
>> : typing would be unthinkable, at least for me.
>>
>> Is this just an example of a feature you feel is more
>> important than static typing, or are you saying that
>> you've encountered a trade-off between the two?
>
> I feel that strong concurrency support is more important than
> static typing, yes. I also wrote:
>
> : I've looked at the Concurrent Haskell paper, and think it
> : looks fairly promising. It also indicates that one can
> : combine static typing with an elegant concurrency model.
>
> I'm utterly convinced that you need strong concurrency
> support, including distribution and process monitoring
> in order to build e.g. telecoms systems. I'm not yet
> convinced that static typing would provide a net benefit
> for such systems, but that obviously depends on the
> entire programming environment and how the tools are
> applied in the project.

I've been following this thread and trying to figure out why you regard
strong static typing, as implemented in modern FPL's (not static typing
as in Pascal say) as a feature which is detrimental to concurrency.
I don't know any Erlang, so what is it about Erlang concurrency that
you think can't be handled by static type systems? The two issues seem
quite unrelated to me.

Static typing is arguably detrimental to other aspects of dynamic
programming, concurrent or otherwise. Is that what you're really
worried about? There are well known problems with static typing
which the Clean folk are trying to address, but this seems like a
poor argument in favour of not providing any form of static type
system.

> I would like to think that I'm open to convincing
> real-world examples demonstrating the power of static
> typing(*), but then again, 10 years of programming in
> a dynamically typed language probably induces some bias...

Well my experience is that static typing is so incredibly useful
I'm surprised anybody doubts the benefits.

Just about any Haskell code of any complexity I've written
contains type errors which are detected for me by the compiler.
Presumably the same would be true if I was using Erlang, but
I wouldn't get to find out about them until I tested the code,
(assuming the relevant code got exercised by my tests). Even
then it seems to me that I would still have to locate the source
of the error (which, I suspect, typically wouldn't be the same
place as where it was detected).

I'm also sceptical that even thorough testing can reliably detect
type errors. Compile time static analysis (including type
checking) seems much more convenient and reliable. It should
also enable compilers to generate more efficient code (if for no
other reason than it eliminates the need for run time checks).

Of course there are both practical and theoretical limits to what
kinds of errors can be detected at compile time, but it all helps
improve code quality and programmer productivity IMHO.

For me, the potential to do this sort of thing is a significant
advantage of FPL's, so it seems strange that an FPL advocate should
appear to doubt the benefits.

As for finding "real-world examples demonstrating the power of static
typing", I suspect you will find none. I don't think static type systems
do increase the power of non-broken programs. They just make it far
less likely that broken programs will be created.

Regards
--
Adrian Hey


Matthias Blume

unread,
Mar 4, 2003, 11:52:02 AM3/4/03
to
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

> I "protested" against using recursion to define recursion, that's
> all. This was the point of my non-conviction.

As far as I understand the core of this discussion, nobody has tried
to "define recursion". The question was whether or not it is possible
to statically type a program that looks like the Y combinator (or,
more generally, contains self-application). Now, in type systems with
equi-recursive types one can, indeed, do that for certain such
programs.

(In other words, the fact that one needs recursion at the type level
to "define" recursion at the value level is an entirely different --
albeit interesting -- matter.)
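To make the self-application concrete: the term below (a strict variant of the Y combinator, written in Python purely for illustration) evaluates fine under dynamic typing, yet the `x(x)` inside it is untypable in a simply typed system unless the type system has (equi-)recursive types:

```python
# Strict (call-by-value) fixed-point combinator. The self-application
# x(x) is exactly what requires a recursive type to type-check statically.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(
    lambda x: f(lambda v: x(x)(v)))

# Tie the knot: factorial with no explicit recursion anywhere.
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

The names `Y` and `fact` are just labels for the sketch; the point is only that the same program a dynamic language runs directly needs recursion at the type level to be accepted statically.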

Matthias

Adrian Hey

unread,
Mar 4, 2003, 12:38:20 PM3/4/03
to
Joe Armstrong wrote:
> Problem 1: Getting the types right = a soluble problem
> Problem 2: Getting the values right = a much more difficult problem
>
> IMHO there seem to be two many people doing (1) and not enough doing
> (2) - there also seems to be a vociferous sub-set of people working on
> (1) who seems to think that they have also solved (2).

AFAICS the people you are attacking exist only in your imagination.
I don't know of any advocate of static type systems who has made such
a claim. Would you care to provide references? Links? c.l.f threads?

Your view seems to be that static typing should not be used at all
because it fails to eliminate all program bugs. If so, this seems pretty
illogical and unconvincing to me.

The argument that static typing should not be used in Erlang because
it would be incompatible with a lot of existing code is much more
convincing to me. But that's a problem with Erlang, not with static
typing or other statically typed FPL's.

Regards
--
Adrian Hey

Dirk Thierbach

unread,
Mar 4, 2003, 1:07:30 PM3/4/03
to
Joe Armstrong <j...@enfield.sics.se> wrote:
> William Lovas <wlo...@force.stwing.upenn.edu> writes:

>> I think a better (but not perfect) analogy would be that even after
>> running a document successfully through a spell-checker, you cannot
>> be assured that there are no grammar errors -- the point being that
>> after passing a static type checker, it is mathematically *provable*
>> that a program contains no type errors, but that doesn't constitute
>> a proof of correctness.

> Yes^100 - please tell this to your students - I seem to have heard
> the statement "If it goes through the type checker the program will
> be *correct*" more than once

This experience is normally phrased more like "If it goes through the
type checker, the program will just work." This is because type
checking is good enough to catch all those stupid little bugs most of
us make. And moreover, you don't need all the tests you would have to
write in a dynamically typed language to catch them, while still not
beeing able to be *sure* that you really have caught at least all the
stupid bugs.

To continue the above analogy, spelling mistakes happen frequently,
but grammar errors happen a lot less often (unless you are a non-native
speaker like me :-) If you don't have a spell checker, it's quite
tedious to find all those spelling mistakes, because it is easy
to overlook them while proofreading. Grammar errors, on the other
hand, stick out, and are usually spotted easily.

> (I guess correct is an overloaded word, meaning either "type correct"
> or "correct according to some specification")

It is only proven to be type-correct (of course), but in practice it
is 98% of the time correct as in 'has no bugs'. Don't confuse the two.
Of course static type checking can never *prove* that the program is
completely bug free, and no sane person would claim this.

- Dirk

Benjamin Ylvisaker

unread,
Mar 4, 2003, 4:58:13 PM3/4/03
to
On Tue, 4 Mar 2003 19:07:30 +0100
Dirk Thierbach <dthie...@gmx.de> wrote:

> > (I guess correct is a overloaded word, meaning either "type
> > correct" or "correct according to some specification")
>
> It is only proven to be type-correct (of course), but in practice it
> is 98% of the time correct as in 'has no bugs'. Don't confuse the
> two.

This figure seems pretty meaningless to me. Static type checking fans
(including myself) like to trot out the very small examples that show
that given some type signature it is nearly impossible to write a
buggy function that passes type checking. However, I am currently
writing my first somewhat significant program in SML (on the order of
a few thousand lines of code) and I assure you that it is very easy to
write a large SML program that passes the type checker, but doesn't do
what one would like it to do.

I think of static type checking as a language feature that saves
programmer/tester time more than something that significantly impacts
the reliability of large applications.

Benjamin

(I can post to Usenet with my real email address without fear of a
flood of SPAM, because I use fastmail.fm and they have awesome SPAM
filtering. You should check out fastmail, too.)

Neelakantan Krishnaswami

unread,
Mar 4, 2003, 5:02:21 PM3/4/03
to
Joachim Durchholz <joac...@gmx.de> wrote:
>
> The properties that can be proven were carefully selected so that an
> automatic proof is possible - for older languages, that calculus is
> very restricted (the equivalent of "Aussagenlogik", i.e. without
> quantors), for Hindley-Milner and similar type inference systems,
> it's a (still carefully selected) subset of "Prädikatenlogik"
> (i.e. with quantors over non-predicates). (Sorry for not knowing the
> proper terms in English. Maybe somebody can translate the terms...)

The translation is unusually straightforward; enough so that I am able
to do it despite being a decade from my last German class. :)

"Aussagenlogik" is usually called propositional logic in English.

"Prädikatenlogik" is predicate logic, and the subset in which
quantifiers cannot range over predicates is called "first-order
predicate logic".

"quantors" are usually called "quantifiers".

--
Neel Krishnaswami
ne...@alum.mit.edu

Lauri Alanko

unread,
Mar 4, 2003, 5:14:58 PM3/4/03
to
Benjamin Ylvisaker <benj...@alumni.cmu.edu> quoth:

> I think of static type checking as a language feature that saves
> programmer/tester time more than something that significantly impacts
> the reliability of large applications.

I think of types mostly as a tool for designing and structuring the
program: it's much easier to write the types of all the functions than
their implementations. And a module signature with identifiers and their
types is much more informative than just a bunch of identifiers.

Of course one can use types as a commenting/design tool even in untyped
languages, but it just gives such a warm fuzzy feeling to know that the
compiler will instantly complain if you get the types wrong.


Lauri Alanko
l...@iki.fi

Joachim Durchholz

unread,
Mar 4, 2003, 5:42:05 PM3/4/03
to
Neelakantan Krishnaswami wrote:
> "Aussagenlogik" is usually called propositional logic in English.
>
> "Prädikatenlogik" is predicate logic, and the subset in which
> quantifiers cannot range over predicates is called "first-order
> predicate logic".

Thanks. I was somewhat out of practice with these terms.

> "quantors" are usually called "quantifiers".

Oops... right.

Jeffrey M. Vinocur

unread,
Mar 4, 2003, 6:12:35 PM3/4/03
to
In article <20030304165813.2...@alumni.cmu.edu>,
Benjamin Ylvisaker <benj...@alumni.cmu.edu> wrote:

>Dirk Thierbach <dthie...@gmx.de> wrote:
>
>> 98% of the time correct as in 'has no bugs'.
>
>This figure seems pretty meaningless to me. Static type checking fans
>(including myself) like to trot out the very small examples that show
>that given some type signature it is nearly impossible to write a
>buggy function that passes type checking. However, I am currently
>writing my first somewhat significant program in SML (on the order of
>a few thousand lines of code) and I assure that it is very easy to
>write a large SML program that passes the type checker, but doesn't do
>what one would like it to do.

Yes, certainly. And I think anyone who's written a big program
in SML would agree. But there is certainly an advantage to
knowing that there won't be any runtime errors (other than
exceptions you've raised yourself). Bugs in SML tend to have a
lot more to do with the problem at hand than the details of
whether that line of code really does what you think it does, if
that makes any sense.


--
Jeffrey M. Vinocur * jm...@cornell.edu
http://www.people.cornell.edu/pages/jmv16/

Jens Axel Søgaard

unread,
Mar 4, 2003, 7:01:17 PM3/4/03
to
Torbjorn Lager wrote:
> h...@rentec.com (HP) wrote in message
> news:<10caf2e2.03030...@posting.google.com>...

>> By reading Erlang's website,
>> I (being a C++ person) notice that
>> one feature in Erlang is 'hot code replacement' ----
>> i.e. change the code while the application is running.
>> This is one feature that I always wanted in some situations
>> at work.

>>
>> Does any functional language support this feature ?
>> (e.g. Does Mozart/Oz or Lisp support it ?)

Correct me if I'm wrong, but I believe that Common Lisp
has had this for years.

Hmm. Better check, google, google. Yep:

http://www.lispworks.com/products/myths_and_legends.html

See under the heading
"Do the conditions of the problem change frequently, even dynamically?"

--
Jens Axel Søgaard


Luke Gorrie

unread,
Mar 4, 2003, 7:00:22 PM3/4/03
to
jm...@cornell.edu (Jeffrey M. Vinocur) writes:

> Bugs in SML tend to have a lot more to do with the problem at hand
> than the details of whether that line of code really does what you
> think it does, if that makes any sense.

Indeed it does. Now I must share a trivial anecdote about a nasty bug
during my complete-newbie experiments with Haskell. It was exactly one
of these "doesn't do what I think" problems, and I'm still scarred
from it :-)

I was writing a Haskell program to solve a combinatorial puzzle. I had
an algorithm that I knew was correct, coded it up, got it past the
type checker, and ran it. It gave me an answer, and I patted myself on
the back.

A little while later I was looking over it again, and discovered that
the answer, though plausible, was wrong. I reread the program many
times but was convinced it was correct. So I started running only
part-way and then manually checking its state, and did this for a long
time because the program got a long way before entering an incorrect
state.

Finally I narrowed it down to one tiny function, and saw the problem:
an operator precedence bug. IIRC, I had written:

if a then x else [] ++ if b then y else []

When I meant:

(if a then x else []) ++ (if b then y else [])

Or something very similar. Not fun at all!

So, nothing profound, and I certainly don't mean to suggest that
Haskell is bad - I've shown the code to other people, who spotted the
problem instantly. But, my own earnest attempts to get this "programs
that just work" glory have been a bit bumpy in these early days.
Maybe someone else can commiserate :-)

Cheers,
Luke

Daniel C. Wang

unread,
Mar 4, 2003, 7:21:35 PM3/4/03
to

Dirk Thierbach <dthie...@gmx.de> writes:

{stuff deleted}


> It is only proven to be type-correct (of course), but in practice it
> is 98% of the time correct as in 'has no bugs'. Don't confuse the two.
> Of course static type checking can never *prove* that the program is
> completely bug free, and no sane person would claim this.

Static type checking guarantees 100% that a program faithfully implements a
specification.

val f: int -> int

is a specification in ML stating that when "f" is given an integer
value, if it returns a value at all, that value is an integer.

In other more sophisticated languages (Coq for example)
you can write the type specification

val f: \forall i:int. i -> fact(i)

where fact is a specification of the factorial function. If "f" does indeed
have the type \forall i:int. i -> fact(i) then it meets the specification.

So types allow you to verify many non-trivial properties. The properties you
enforce are limited only by the sophistication of the specification
language, and the patience of the programmer, who basically has to force his
program through the type-checker by constructing what amounts to a formal
proof.

So if you have a clearly defined specification that is expressible in your
type system, your program type checks, and your type system is sound, then
you can be 100% guaranteed your program is "bug free" - if we define
"bug freeness" as faithfully implementing the
specification.
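By contrast, the best a dynamically typed language can do with the same specification `val f : int -> int` is to check it per call, at run time. A hedged Python sketch (the decorator below is my own illustration, not a standard library facility):

```python
def int_to_int(f):
    """Runtime stand-in for the ML specification `val f : int -> int`:
    each individual call is checked, instead of all calls being
    proved safe once, at compile time."""
    def checked(x):
        if not isinstance(x, int):
            raise TypeError("argument must be an int, got %r" % (x,))
        result = f(x)
        if not isinstance(result, int):
            raise TypeError("result must be an int, got %r" % (result,))
        return result
    return checked

@int_to_int
def double(x):
    return 2 * x
```

`double(3)` passes both checks; `double("3")` fails at the call site - at run time, which is exactly the difference under discussion.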

Of course the *specification* may have bugs, but a type system can't do
anything about that. In fact there's not much you can do about any design
level bugs, other than build in fault tolerance.

As a practical matter, the ability to specify "interesting" non-trivial
properties conveniently in a type system is a big area of open
research. However, it's just a matter of time before every program comes
with a proof of correctness and we can spend our time debugging design-level
bugs rather than program-level bugs. All the tools are there; we're just
missing the right abstractions and appropriate proof engineering skills to
pull this off. I'll be surprised if this doesn't happen within the next 100
years.

Daniel C. Wang

unread,
Mar 4, 2003, 7:31:38 PM3/4/03
to

jm...@cornell.edu (Jeffrey M. Vinocur) writes:
{stuff deleted}

> Yes, certainly. And I think anyone who's written a big program
> in SML would agree. But there is certainly an advantage to
> knowing that there won't be any runtime errors (other than
> exceptions you've raised yourself). Bugs in SML tend to have a
> lot more to do with the problem at hand than the details of
> whether that line of code really does what you think it does, if
> that makes any sense.

You don't really appreciate the power of types, until you write non-trivial
programs in Twelf, Coq, or NuPRL.

Dependent Types Ensure Partial Correctness of Theorem Provers, by Andrew
W. Appel and Amy P. Felty. Accepted for publication, Journal of Functional
Programming, 2002.
(http://www.cs.princeton.edu/~appel/papers/prover/prover.pdf)

I think it's the case that the newest version of Twelf could actually verify
total correctness of the theorem prover, if the prover was complete for the
logic it was trying to prove.

http://www.cs.cmu.edu/~twelf/
http://coq.inria.fr/
http://www.cs.cornell.edu/Info/Projects/NuPrl/nuprl.html

People should be careful and not confuse theorem proving with checking the
validity of a proof. Theorem proving in general is impossible. Checking the
validity of a proof is so simple a computer can do it!

Lauri Alanko

unread,
Mar 4, 2003, 7:39:26 PM3/4/03
to
nospam...@agere.com (Daniel C. Wang) quoth:

> People should be careful and not confuse theorem proving with checking the
> validity of a proof. Theorem proving in general is impossible. Checking the
> validity of a proof is so simple a computer can do it!

Then again, writing the proof in such a form that the computer can check
it is often more complicated than checking it by hand...


Lauri Alanko
l...@iki.fi

Christopher Richards

unread,
Mar 4, 2003, 8:26:16 PM3/4/03
to

I haven't found it so. Writing (machine-checkable) proofs is exactly
like writing programs -- lemmas are just functions on proofs, after
all. After having written a couple of BIGNUM machine-checkable proofs, I
actually prefer them. There are no hidden assumptions or hand-waves
to trip you up.

--
Chris

Eli Barzilay

unread,
Mar 4, 2003, 8:42:53 PM3/4/03
to
nospam...@agere.com (Daniel C. Wang) writes:

> You don't really appreciate the power of types, until you write
> non-trivial programs in Twelf, Coq, or NuPRL.

This is one way you can get to appreciate the power, but why isn't it
obvious that this does not have to come at the price of reduced
appreciation for dynamic typing? ML would be wonderful for some big
project involving many people etc, but I would still prefer my pet
dynamically typed language with heavy abuse of dynamic typing for
scripts and personal hacks. And even if you do like static typing for
scripts (at least one person here uses Nuprl for scripting), you would
probably not go as far as proving them correct: given the job you want
to get done it is usually not worth the extra time. In the same way,
given the jobs I want to get done with my scripts and relying on my
programming skills more, I find it quicker to not use static typing.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!

gr...@cs.uwa.edu.au

unread,
Mar 4, 2003, 11:56:54 PM3/4/03
to
Joe Armstrong <j...@enfield.sics.se> wrote:
: This reminds me of the story of an aircraft that crashed due to a
: failure in the navigation system. The plane tried to flip over as it
: crossed the equator, due to a sign error in the code.
: I imagine that if you have some code that has passed a static type
: checker then changing an integer N somewhere in the program to -N
: probably won't have any effect on the types.

Depends on whether your type system differentiates between an integer
and a positive integer. But even just knowing that it is an integer
might be a nice step toward finding this sort of bug.
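Greg's refinement idea in miniature: a type that carries a value restriction would reject the stray `-N` at the boundary. Checking that statically needs a richer type system than most mainstream ones; the Python sketch below (class and method names invented for illustration) can only enforce the restriction when a value is constructed, but even that localizes the sign error:

```python
class Altitude:
    """A sign-restricted value, standing in for the 'positive integer'
    type mentioned above. A refinement type system could reject a
    negated value at compile time; here the check fires as soon as the
    bad value is built."""
    def __init__(self, metres):
        if metres < 0:
            raise ValueError("altitude cannot be negative: %r" % (metres,))
        self.metres = metres

    def minus(self, delta):
        # Arithmetic returns a checked value, so a sign error surfaces
        # at the point of the mistake, not much later in flight.
        return Altitude(self.metres - delta)
```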

: Problem 1: Getting the types right = a soluble problem
: Problem 2: Getting the values right = a much more difficult problem
: IMHO there seem to be too many people doing (1) and not enough doing (2)

One of these can be done automatically, the other requires work.
It's not surprising which one is more commonly done. However, you
should have more time to spend on 2 if 1 is done for you.

: - there also seems to be a vociferous sub-set of people working on
: (1) who seem to think that they have also solved (2).

It would be nice to have a type system that sophisticated, but I don't
see it anywhere yet. (In fact, in the full sense of what you're saying,
it's undecidable anyway) The thing is, though, that there is a sliding
scale from 1 to 2, depending on the strength/scope of the type system,
and the more we can incorporate into 1, the less effort is required
to do 2, so hopefully the more programmers will bother to do it.

:> i still see no advantage to delaying type error detection to run-time.

: Unless the advantages of a dynamic type system outweigh the
: advantages of a static type system. For example, Unix pipes and
: sockets are just uninterpreted streams of characters (i.e. untyped)
: but are still *very* useful (compare this with say XML/WSDL/SOAP
: streams which are strongly (I use the word guardedly) typed) - each
: have advantages in different contexts.

Uninterpreted streams gain an advantage from not making you interpret
them. Isn't that orthogonal to whether they are statically typed?

:> On the (+) side with dynamic typing code is shorter and you can
:> write powerful generic routines for common tasks.
:> What did you mean by this? That is, to get back to (what i see as) the
:> root of the matter, what practical advantages are offered by dynamic type
:> systems?

: The code for the generic operations is small and easy to implement
: directly. It's nice to have generic libraries, which are a lot smaller
: than specialized encoding routines for every specific data type.

Surely this is an argument for polymorphic typing, not dynamic?


-Greg

gr...@cs.uwa.edu.au

unread,
Mar 5, 2003, 12:14:13 AM3/5/03
to
Eli Barzilay <e...@barzilay.org> wrote:
: given the jobs I want to get done with my scripts and relying on my
: programming skills more, I find it quicker to not use static typing.

Could you give an example of where static typing slows you down?

-Greg

Eli Barzilay

unread,
Mar 5, 2003, 12:29:39 AM3/5/03
to
gr...@cs.uwa.edu.au writes:

Yes -- there are many common examples, like not needing the extra
syntax involved in sum types like option, or the well-chewed printf thing,
or the basic philosophical issue of being forced to do a lot of
designing before you start coding, etc. But I don't think that going
through explicit examples will do any good -- everyone knows them and
most have prepared counter-speeches (and counter-counter-speeches
and...). No wonder so many threads here end up with some version of
the current subject line.

Joachim Durchholz

unread,
Mar 5, 2003, 3:49:01 AM3/5/03
to
Lauri Alanko wrote:
>
> Then again, writing the proof in such a form that the computer can check
> it is often more complicated than checking it by hand...

Actually it's simple.
You just write down your assumptions on what the code does.
If the checker is good, it should be enough to write down loop
invariants (or their equivalent for recursion - is there a beast called
"recursion invariant"?). The computer should be able to fill the gaps
between the invariants; I think that's just propositional logic (which
has nasty worst-case complexities but is usually straightforward).

Just my personal theory - could anybody confirm or disprove it?
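A tiny concrete instance of the scheme above (my own example, in Python): the programmer states the invariant and postcondition as assertions, which here are merely tested per run; a checker of the kind described would discharge them once, for all inputs.

```python
def isqrt(n):
    """Integer square root by linear search, with its invariant spelled out."""
    assert n >= 0                          # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        assert r * r <= n                  # loop invariant, checked each pass
    assert r * r <= n < (r + 1) * (r + 1)  # postcondition
    return r
```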

Markus Mottl

unread,
Mar 5, 2003, 4:45:12 AM3/5/03
to
Eli Barzilay <e...@barzilay.org> wrote:
>> Could you give an example of where static typing slows you down?

> Yes -- there are many common examples, like not needing the extra
> syntax involved in sum types option,

What extra syntax? Do you never specify what kinds of values your
functions accept? How are others supposed to know how to use them? Type
definitions are a tiny one-time effort (< 5% of code) and, besides
allowing rigorous automated checks, provide unambiguous documentation,
too.

Btw., you can also use polymorphic variants (e.g. as implemented in
OCaml). Then you don't even need to define the type: the compiler infers
automatically what kinds of values your functions accept.

Sure, when mixing various types (strings, floats, etc.) in one
datastructure (e.g. a list), you'll have to tag them, but... - when
accessing them, you'll need to explicitly check their type in dynamically
typed languages anyway to handle them as required! No win here...
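The tagging point in miniature: consuming a heterogeneous list in a dynamically typed language forces the same per-element case analysis that a sum type makes explicit, just with runtime type checks in place of pattern-matched constructors. A small Python illustration (the function is invented for this sketch):

```python
def describe(values):
    """Dispatch on each element's runtime type - the dynamic analogue
    of pattern matching over a sum type's constructors."""
    out = []
    for v in values:
        if isinstance(v, bool):            # bool first: bool subclasses int
            out.append("bool %s" % v)
        elif isinstance(v, float):
            out.append("float %g" % v)
        elif isinstance(v, str):
            out.append("string %r" % v)
        elif isinstance(v, int):
            out.append("int %d" % v)
        else:
            out.append("other %r" % v)     # the case a checker can't rule out
    return out
```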

> or the well chewed printf thing,

printf is also available in OCaml...

> or the basic philosophical issue of being forced to do a lot of
> designing before you start coding etc.

Really? When writing large programs, you'd be well advised to do a lot
of designing anyway. And for short programs: no, you are not forced to
do this. I also write scripts using OCaml and never felt like having to
design more or less than, say, in Perl. Indeed, the static type system
almost always finds some silly bugs in them. Languages with modern type
systems greatly _encourage_ good design, but they won't point a gun at you
if you don't do it. They'll just shoot you when you cause type errors -
for good reason.

Regards,
Markus Mottl

--
Markus Mottl mar...@oefai.at
Austrian Research Institute
for Artificial Intelligence http://www.oefai.at/~markus

Eli Barzilay

unread,
Mar 5, 2003, 5:33:25 AM3/5/03
to
"Markus Mottl" <mar...@oefai.at> writes:
>
> [...nothing new...]

This is exactly the response I was trying to avoid. So the question
is: are you really expecting an unexpected reply -- something that
will have any un-re-re-re-hashed content? I'm not taking the easy way
out -- if you want, I will reply; I just think it will go down a known
path.

Ulf Wiger

unread,
Mar 5, 2003, 6:32:00 AM3/5/03
to
Adrian Hey <ah...@NoSpicedHam.iee.org> writes:

> Ulf Wiger wrote:
> >
> > : I've looked at the Concurrent Haskell paper, and think it
> > : looks fairly promising. It also indicates that one can
> > : combine static typing with an elegant concurrency model.
> >
> > I'm utterly convinced that you need strong concurrency
> > support, including distribution and process monitoring
> > in order to build e.g. telecoms systems. I'm not yet
> > convinced that static typing would provide a net benefit
> > for such systems, but that obviously depends on the
> > entire programming environment and how the tools are
> > applied in the project.
>
> I've been following this thread and trying to figure out why
> you regard strong static typing, as implemented in modern FPL's
> (not static typing as in Pascal say) as a feature which is
> detrimental to concurrency.

I'm obviously not expressing myself clearly. I apologize.
I was trying to explain that I've not seen a compelling reason why you
couldn't make a statically typed language implementation that also
has strong concurrency support.

Perhaps I should also clarify this: I don't invent programming
languages; I only use them. As long as I can't find a statically
typed programming environment that has concurrency support to
rival that of Erlang, and preferably also a comparable assortment
of middleware components, my choice is easy.

I cannot give you an explanation for why such an environment
(to my knowledge) doesn't exist today, other than that it's not
been sufficiently interesting to the community of language
designers who'd be capable of pulling it off.

> I don't know any Erlang, so what is it about Erlang concurrency
> that you think can't be handled by static type systems? The two
> issues seem quite unrelated to me.

If you really want to find a feature of Erlang that's difficult
for a statically typed language, then it would be hot code
replacement. I've read some papers about how it could be done,
and they were complicated enough that I didn't bother to
understand them in detail. I did grasp that it was mainly a
theoretical argument at this point.

I'm willing to concede that there is no law of nature that
says it can't be done. From a practical point of view, there
are certainly other ways of achieving in-service upgrade of
an entire system, esp. if that system has redundancy (which
it always has, or the in-service upgrade requirement is moot.)
In short, you don't have to do it the Erlang way, if that's
too difficult.


> Static typing is arguably detrimental to other aspects of
> dynamic programming, concurrent or otherwise. Is that what
> you're really worried about?

No, I'm worried about pushing products to the market that I
can sell for real money, knowing that 80% of the life-cycle
cost of these products lies in maintenance, and I have to
accept the fact that I often have to hire programmers fresh
out of college to perform maintenance on other people's code.
I need strong support for whatever programming language
I decide to use, and I need good libraries. While it's
interesting to engage in debates about whether one could
do things differently, the alternatives need to be viable
enough that I could do some relevant prototyping in the
near future; otherwise, it's not really on my map.

(Although the fact that I'm participating in this discussion
might be seen as an indicator that I'm not really quite
that dogmatic... at least not all the time.)

> > I would like to think that I'm open to convincing
> > real-world examples demonstrating the power of static
> > typing(*), but then again, 10 years of programming in
> > a dynamically typed language probably induces some bias...
>
> Well my experience is that static typing is so incredibly
> useful I'm surprised anybody doubts the benefits.

Is static typing so useful that no other considerations are
needed? Should I go to Corporate Management and ask them to
bet their next billion-dollar project on a statically typed
language because... static typing is obviously useful?

Or are you saying there are no drawbacks?

I'm saying that I consider strong concurrency support to be
more critical in my domain than static typing. This doesn't
mean that the two couldn't be combined -- it just means that
if I have to choose (and at the moment it seems I do), between
static typing and strong concurrency support, I would choose
the latter any day and twice on a Sunday.

> I'm also sceptical that even thorough testing can reliably
> detect type errors.

Well, what matters in our world is how often our customers
suffer service interruption. We measure those things, and
the target for us is 99.999% availability or better. Currently,
we are better than that.

> As for finding "real-world examples demonstrating the power of
> static typing", I suspect you will find none. I don't think
> static type systems do increase the power of non-broken
> programs. They just make it far less likely that broken
> programs will be created.

I was referring to experiences from product development using
such languages. Did the product come out in time, on budget,
and with sufficient quality? How much (tongue in cheek) did
static typing contribute? How about maintenance? Programmer
training (learning curve)? Stuff like that.

(Anyone who doesn't want to be convinced will always find
ways to question any such arguments, but some will listen;
without them, you will have a hard time convincing people
in industry, even those who would want to be convinced.
Then there are always people who are willing to be guinea
pigs. We were for Erlang.)

Regards,
UW

Joe Armstrong

Mar 5, 2003, 7:54:31 AM
Adrian Hey <ah...@NoSpicedHam.iee.org> writes:

> Joe Armstrong wrote:
> > Problem 1: Getting the types right = a soluble problem
> > Problem 2: Getting the values right = a much more difficult problem
> >
> > IMHO there seem to be too many people doing (1) and not enough doing
> > (2) - there also seems to be a vociferous sub-set of people working on
> > (1) who seems to think that they have also solved (2).
>
> AFAICS the people you are attacking exist only in your imagination.
> I don't know of any advocate of static type systems who has made such
> a claim. Would you care to provide references? Links? c.l.f threads?

I have often heard this claim - 20 seconds on Google provides statements like:

In article <1992Jul14.2...@eng.umd.edu> cl...@eng.umd.edu (Charles Lin) writes:

--- quote ----
- In ML most of the errors are discovered at compile time. As a rule
of thumb one can say that an ML program usually works after it is
compiled whereas in LISP you have to test your functions on some
trivial data first.

---- end ----

I assume "works" means what I call "getting the values right".

I can *easily* write incorrect programs with correct types

In Erlang I guess that we could prove that:

factorial(0) -> 1;
factorial(N) -> N * factorial(N-2).

is of type int -> int

But it's still wrong.
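
The same point renders in any typed setting; here is a hypothetical Python version carrying the same deliberate bug, where the int -> int annotation is satisfied and a static checker would accept the definition, yet the computed values are wrong:

```python
# Type-correct but value-wrong (the same deliberate N-2 bug as above):
# a static checker sees int -> int and is satisfied.

def factorial(n: int) -> int:
    if n == 0:
        return 1
    return n * factorial(n - 2)   # bug: should be n - 1

print(factorial(4))   # prints 8, though 4! is 24 (and odd inputs
                      # never reach the base case at all)
```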

> Your view seems to be that static typing should not be used at all
> because it fails to eliminate all program bugs. If so, this seems pretty
> illogical and unconvincing to me.

Well, if I'd said that, it would indeed be "illogical and
unconvincing" - but this is not my view, and I have never said
that. What I wrote in my last posting was:

> > Please don't mis-represent me - type checking is great to have - one of
> > many good techniques that is needed to make reliable systems.

Which means exactly what it says and nothing else.

> The argument that static typing should not be used in Erlang because
> it would be incompatible with a lot of existing code is much more
> convincing to me. But that's a problem with Erlang, not with static
> typing or other statically typed FPL's.

But I have never argued this - since nobody has been able to make a
static type system for all of Erlang, the question of what to do with
legacy code has never arisen (type systems for subsets of Erlang
have been made, but never for the full language).

My view is that if we could add a post-hoc type system to Erlang I
would encourage people to use it.

I assume by "problem" you mean academic problem (i.e. nobody knows
how to do it). It's certainly not a practical problem or anything that
is inhibiting the spread of the language -

/Joe

Fergus Henderson

Mar 5, 2003, 8:07:57 AM
I have something new to report in this long-running debate.

Last time we had this debate, one of the advantages pointed out
by dynamic typing advocates was that dynamically typed languages
let you execute code which was not yet complete, without needing
to manually write stubs for the unimplemented procedures.

My news is that the latest release-of-the-day versions of Mercury
include support for this. If you compile with the new `--allow-stubs'
option, the compiler will only report a warning (rather than an error)
about procedures for which there are no clauses. Any calls to such
procedures will of course throw an exception at run-time.

So now at least one statically typed language gives programmers the
ability to execute code that is not yet complete, without needing to
manually write stubs.
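
A rough analogue of that behaviour in plain Python (illustrative only; all names here are invented): an unimplemented procedure is just a stub that fails when called, so every path that avoids it can already be exercised.

```python
# Illustrative stub pattern (hypothetical names): the missing piece
# only costs you an exception if a call actually reaches it.

def parse(text):
    return text.split()

def optimize(tree):
    raise NotImplementedError("optimize not written yet")   # the stub

def compile_source(text, with_opt=False):
    tree = parse(text)
    if with_opt:
        tree = optimize(tree)     # only this path hits the stub
    return tree

print(compile_source("a b c"))    # works despite the missing piece
```

The point of `--allow-stubs` is that Mercury generates such stubs for you; the sketch above is what you would otherwise write by hand.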

--
Fergus Henderson <f...@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.

Mark Carroll

Mar 5, 2003, 8:39:23 AM
In article <tyf4r6i...@enfield.sics.se>,
Joe Armstrong <j...@enfield.sics.se> wrote:
(snip)

>In article <1992Jul14.2...@eng.umd.edu> cl...@eng.umd.edu (Charles Lin) writes:
>
>--- quote ----
>- In ML most of the errors are discovered at compile time. As a rule
>of thumb one can say that an ML program usually works after it is
>compiled whereas in LISP you have to test your functions on some
>trivial data first.
>
>---- end ----
>
>I assume "works" means what I call "getting the values right".
>
>I can *easily* write incorrect programs with correct types
(snip)

There might be a big "YMMV" here: the types of bugs that I introduce
tend to be such that they cause a type error. However, I know that
this is not true for a number of other good programmers with whom I've
been acquainted: their bugs seem to be type safe! So, I find it quite
plausible that Charles Lin's statement will be correct for some
people, like me, but not at all for others. That is, it may not be so
much what you can do, as what you tend to do.

-- Mark

Ketil Malde

Mar 5, 2003, 8:42:16 AM
Mark Carroll <ma...@chiark.greenend.org.uk> writes:

> In article <tyf4r6i...@enfield.sics.se>, Joe Armstrong <j...@enfield.sics.se> wrote:
>>In article <1992Jul14.2...@eng.umd.edu> cl...@eng.umd.edu (Charles Lin) writes:

>>--- quote ----


>> In ML most of the errors are discovered at compile time. As a rule
>> of thumb one can say that an ML program usually works after it is
>> compiled whereas in LISP you have to test your functions on some
>> trivial data first.
>>---- end ----

> There might be a big "YMMV" here: the types of bugs that I introduce


> tend to be such that they cause a type error.

I would say that I *definitely* catch most errors at compile time. It's
possible that the errors caught there would be trivial to find and fix
at run time too -- many of the errors that do get through the type
checker are, IME, notoriously hard.

Perhaps Joe intended to emphasize the 'rule of thumb'? Which,
incidentally, I also think is a rather weak statement -- and one which
I have no trouble agreeing with: More often than not, when I get my
functions type correct, they are also semantically correct.

>> I assume "works" means what I call "getting the values right".
>> I can *easily* write incorrect programs with correct types

So can I, but I usually don't. :-)

It seems there is violent agreement that:

a) strong types are generally beneficial
b) other features may be better, depending on the application
c) type correct /= semantically correct

OTOH, I'm very curious about why the concurrent-static language
attempts floundered; can anybody shed some light on that?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

Joachim Durchholz

Mar 5, 2003, 9:14:08 AM
Ulf Wiger wrote:
>
> Perhaps I should also clarify this: I don't invent programming
> languages; I only use them. As long as I can't find a statically
> typed programming environment that has concurrency support to
> rival that of Erlang, and preferably also a comparable assortment
> of middleware components, my choice is easy.
>
> I can not give you an explanation for why such an environment
> (to my knowledge) doesn't exist today, other than that it's not
> been sufficiently interesting to the community of language
> designers who'd be capable of pulling it off.

You might be interested in Alice then. It's an ongoing project to add
static typing to Mozart/Oz (which carries claims on concurrency support
that sound very similar to those on Erlang).
(I say "claim" not because I want to imply scepticism, but because I
have never looked too deeply into the concurrency stuff of either
language and can't confirm it.)

Adrian Hey

Mar 5, 2003, 9:13:18 AM
Ulf Wiger wrote:

> Adrian Hey <ah...@NoSpicedHam.iee.org> writes:
>> Well my experience is that static typing is so incredibly
>> useful I'm surprised anybody doubts the benefits.
>
> Is static typing so useful that no other considerations are
> needed?

No, of course not. But there's no reason why other considerations
should prevent the use of static type systems in principle. In
practice the other considerations (such as concurrency support) may
cause people to choose Erlang. But IMO this will be despite, not
because of, its lack of static type security.

> Should I go to Corporate Management and ask them to
> bet their next billion-dollar project on a statically typed
> language because... static typing is obviously useful?

Why not? I'm not asserting that any other language suitable for your
needs currently exists. But then again, I can see no reason to dismiss
any future candidates on the grounds that they provide a static type
system.



> Or are you saying there are no drawbacks?

Yes, IMO there are no drawbacks with using a strong static type
system. AFAICS the use of static (compile-time) type checking
does not prevent the use of dynamic typing where needed (or maybe
even unsafe, unchecked type cast hackery). Both Clean &
Haskell (GHC) provide some support for dynamic typing. Whether
either is currently adequate for whatever purpose I don't know.


> I'm saying that I consider strong concurrency support to be
> more critical in my domain than static typing. This doesn't
> mean that the two couldn't be combined -- it just means that
> if I have to choose (and at the moment it seems I do), between
> static typing and strong concurrency support, I would choose
> the latter any day and twice on a Sunday.

Fair enough, we all have to make pragmatic choices. For my
professional needs all currently available FPL implementations
suck. But maybe things won't always be that way.

It seems like the gist of what you're saying is that you might prefer
an Erlang with a modern static type system and the only thing stopping
you using such a thing is that nobody has produced a sufficiently
polished implementation. So maybe the right thing to do is to beg
Erlang implementors (whoever they are) for a better Erlang, not to
advertise this particular wart as a feature :-)

Regards
--
Adrian Hey

Adrian Hey

Mar 5, 2003, 10:03:00 AM
Joe Armstrong wrote:

> In article <1992Jul14.2...@eng.umd.edu> cl...@eng.umd.edu (Charles
> Lin) writes:
>
> --- quote ----
> - In ML most of the errors are discovered at compile time. As a rule
> of thumb one can say that an ML program usually works after it is
> compiled whereas in LISP you have to test your functions on some
> trivial data first.
>
> ---- end ----
>
> I assume "works" means what I call "getting the values right".

This is a perfectly reasonable statement about ML, though it might
not be entirely fair to Lisp. It is clearly not an assertion that
problem (2) has been solved by the use of a static type system.

But from my own experience using Haskell (and I think other
Haskellers will concur) I can say that my programs rarely
fail because of some fundamental flaw in the program design or
logic. If they fail it's usually silly "finger trouble" errors
like supplying arguments in the wrong order or using a list when
a list of lists is required. These errors usually (not *always*)
manifest themselves as type errors detected by the compiler.
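
To make the "list vs. list of lists" slip concrete, here is an illustrative Python sketch (names hypothetical): the annotations are what a static checker would use to reject the bad call before the program runs, while dynamically the mistake surfaces only when the code is executed and len() is applied to an int.

```python
# Finger trouble, dynamically checked: the wrong nesting is only
# discovered at run time, on the call that actually executes it.

from typing import List

def total_length(rows: List[List[int]]) -> int:
    return sum(len(row) for row in rows)

ok = total_length([[1, 2], [3]])     # intended use

try:
    total_length([1, 2, 3])          # wrong nesting: a flat list
except TypeError as err:
    print("caught only at run time:", err)
```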

Regards
--
Adrian Hey


Jochen Schmidt

Mar 5, 2003, 10:44:58 AM
Fergus Henderson wrote:

> I have something new to report in this long-running debate.
>
> Last time we had this debate, one of the advantages pointed out
> by dynamic typing advocates was that dynamically typed languages
> let you execute code which was not yet complete, without needing
> to manually write stubs for the unimplemented procedures.
>
> My news is that the latest release-of-the-day versions of Mercury
> include support for this. If you compile with the new `--allow-stubs'
> option, the compiler will only report a warning (rather than an error)
> about procedures for which there are no clauses. Any calls to such
> procedures will of course throw an exception at run-time.
>
> So now at least one statically typed language gives programmers the
> ability to execute code that is not yet complete, without needing to
> manually write stubs.

Yes, but this is still a long way from the dynamic coding process that is common
in dynamically typed languages.

If I start coding I run my environment (Xanalys LispWorks in this case) and
incrementally add functionality. The feature of testing incomplete programs
allows me to verify my code from the first line on. If I accidentally hit a
place where my code is not complete enough I get help from the environment
to resolve the issues. If I change my design the environment helps me
migrate to the new design while staying in the live image.

When in production I can fix things in the running application that I did
not even plan to make "dynamically fixable".

I think dynamic systems can make finding unexpected bugs *a lot* easier
because they tend to have more information available at runtime.

So while this new feature of the Mercury compiler may indeed be a good idea
it is far from being comparable to what the same feature means for a
dynamic language.

ciao,
Jochen

Ulf Wiger

Mar 5, 2003, 10:43:36 AM
f...@students.cs.mu.OZ.AU (Fergus Henderson) writes:

> Last time we had this debate, one of the advantages pointed out
> by dynamic typing advocates was that dynamically typed languages
> let you execute code which was not yet complete, without needing
> to manually write stubs for the unimplemented procedures.

True. I do that all the time. Very useful feature when designing
large systems.

> My news is that the latest release-of-the-day versions of Mercury
> include support for this. If you compile with the new `--allow-stubs'
> option, the compiler will only report a warning (rather than an error)
> about procedures for which there are no clauses. Any calls to such
> procedures will of course throw an exception at run-time.
>
> So now at least one statically typed language gives programmers the
> ability to execute code that is not yet complete, without needing to
> manually write stubs.

Nice. (:

/Uffe

Ulf Wiger

Mar 5, 2003, 10:40:25 AM
Adrian Hey <ah...@NoSpicedHam.iee.org> writes:

> It seems like the gist of what you're saying is that you
> might prefer an Erlang with a modern static type system and
> the only thing stopping you using such a thing is that nobody
> has produced a sufficiently polished implementation. So maybe
> the right thing to do is to beg Erlang implementors (whoever
> they are) for a better Erlang, not to advertise this
> particular wart as a feature :-)

I have not (at least not this time ;) advertised the lack of
static typing in Erlang as a feature.

I have on several occasions voiced the opinion that the lack
of static typing in Erlang is not, in practice, a big problem
(basing this opinion on my experience of working in a project
for 6 years with 50-100 Average Programmers(tm) and a code base
of more than a million lines of Erlang.) You are welcome to
prove me wrong, but I think it will take a little bit more
than purely hypothetical arguments without having studied
our product and project.

This is not to be read as a dismissal. ;) We have on several
occasions invited CS researchers to help us improve our ways
of working and our delivery precision. Some of the work has
been type related, but the results from those activities have
been far from conclusive. Those who have actually studied our
system and participated in that work are of course welcome to
agree or disagree with me.

/Uffe

Benjamin Ylvisaker

Mar 5, 2003, 11:15:48 AM
On Wed, 5 Mar 2003 15:03:00 +0000
Adrian Hey <ah...@NoSpicedHam.iee.org> wrote:

> Joe Armstrong wrote:
>
> > In article <1992Jul14.2...@eng.umd.edu> cl...@eng.umd.edu
> > (Charles Lin) writes:
> >
> > --- quote ----
> > - In ML most of the errors are discovered at compile time. As a
> > rule of thumb one can say that an ML program usually works after
> > it is compiled whereas in LISP you have to test your functions on
> > some trivial data first.
> >
> > ---- end ----
> >
> > I assume "works" means what I call "getting the values right".
>
> This is a perfectly reasonable statement about ML, though it might
> not be entirely fair to Lisp. It is clearly not an assertion that
> problem (2) has been solved by the use of a static type system.

I think for many problems, this rule of thumb is too strong. Imagine,
for example, that one has created a large, highly concurrent
application with John Reppy's excellent CML library. Probably, most
of the tricky debugging is going to be chasing down spurious deadlock
issues, and the like. The type checker can only tell you that the
right kind of data is being communicated over the channels.

> But from my own experience using Haskell (and I think other
> Haskeller's will concur) I can say that my programs rarely fail
> because of some fundamental flaw in the program design or logic. If
> they fail it's usually silly "finger trouble" errors like supplying
> arguments in the wrong order or using a list when a list of lists is
> required. These errors usually (not *always*) manifest themselves as
> type errors detected by the compiler.

Perhaps one of the reasons why people end up talking past each other
so often in these typing discussions is that the value of static
typing varies with what kind of application one is writing. I imagine
that heavily symbolic programs benefit much more from static typing
than heavily numerical programs. That is, if almost all of your data
are just integers or floating point numbers, where is the big benefit
of static typing?

Just to be clear, I think that a well designed static type system is
almost always (always?) a Good Thing(tm). However, I believe people
who say that for their problem it doesn't really make a big
difference.

Benjamin

(I can post to Usenet with my real email address without fear of a
flood of SPAM, because I use fastmail.fm and they have awesome SPAM
filtering. You should check out fastmail, too.)

Ulf Wiger

Mar 5, 2003, 11:18:55 AM
Adrian Hey <ah...@NoSpicedHam.iee.org> writes:

> But from my own experience using Haskell (and I think other
> Haskellers will concur) I can say that my programs rarely
> fail because of some fundamental flaw in the program design or
> logic. If they fail it's usually silly "finger trouble" errors
> like supplying arguments in the wrong order or using a list when
> a list of lists is required. These errors usually (not *always*)
> manifest themselves as type errors detected by the compiler.

This may be where domains differ. In our domain, we can never
assume to understand the problem fully in the beginning of
a design project. Most of the difficult errors we encounter
are related to misunderstanding of requirements, or failure
to see how things relate to each other.

A lot of time is spent submitting the system to varying types
of load and then injecting hardware failures, pulling out
boards, etc. Quite often when things fail, it's the complex
interworking of different features, and programmers having chosen
the wrong invariants. In other cases it can be that programmers
have not understood the cost of things, and cause memory
allocation problems, CPU overload etc. Generally, concurrency
is difficult, and distribution even more so. This will of
course cause some problems, considering the amount of
concurrency and distribution in our system.

Of course we also have silly "finger trouble" errors. Many
such errors _are_ caught at compile time (mis-typing variable
names, syntax errors, etc.) Sometimes, sloppy programming causes
type errors, and we have to weed those out during testing.
With >10,000 automated test cases, most of those errors are
found quickly (although not at compile time.) On rare
occasions do they slip through all the way into system test,
which means that they become very expensive typos. This is
annoying, but doesn't happen often enough to get our
(mostly language-neutral) project managers frightfully
upset about it.

All in all, type-related errors don't seem to even register
as significant problems, compared to the domain-specific
problems we have to deal with regardless of programming
language. We can say unequivocally that declarative
programming, limited side-effects and symbolic error
reporting are extremely helpful. The method
of programming for the correct case and handling errors
in separate monitoring processes is also extremely
helpful. A static type system well implemented would of
course be helpful too.

Let me say that again to avoid misunderstanding:
A static type system well implemented would of course
be helpful too.

Ulf Wiger

Mar 5, 2003, 11:27:43 AM
Joe Armstrong <j...@enfield.sics.se> writes:

> I can *easily* write incorrect programs with correct types

No kidding. I just found a bug that has confounded me all day:
a perfectly type-correct search-and-replace bug, where I had
tried to improve code that _you_ had written. (:

Basically:

Self = self(),
...
start_child(fun() -> ..., driver:loop(..., Self) end)

Replacing self() with Self was valid everywhere except right
there. Since the fun executed in the context of another thread,
self() =/= Self in this case.

This of course proves nothing, but I'm glad to have found the
bug. Sorry Joe, for breaking your code. (:
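
For non-Erlangers, a hedged Python analogue of the same slip, with thread identity standing in for process identity (all names here are hypothetical): capturing the parent's id in a variable and then using that variable inside code that runs on another thread is perfectly type correct, but the captured value is the wrong one in the new context.

```python
# The Self/self() slip, rendered with threads: parent_ident is captured
# in the enclosing scope, but the closure runs on a *different* thread,
# so the captured id is not that thread's own identity.

import threading

results = {}
parent_ident = threading.get_ident()           # like Self = self()

def child():
    results["captured"] = parent_ident         # the slip: parent's id
    results["actual"] = threading.get_ident()  # the child's own id

t = threading.Thread(target=child)
t.start()
t.join()

print(results["captured"] != results["actual"])   # True: they differ
```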

Hoff, Todd

Mar 5, 2003, 12:29:52 PM
Benjamin Ylvisaker <benj...@alumni.cmu.edu> wrote:
> Perhaps one of the reasons why people end up talking past each other
> so often in these typing discussions is that the value of static
> typing varies with what kind of application one is writing.

People also simply value different features differently.
Our tastes differ in art, music, food, creation theories, etc.;
they also differ in typing. Our values determine our axioms, so
unless you can shift someone's values, which is very difficult,
there's not much left to argue about.

Personally, I would like an OCaml that has Erlang's distribution and
concurrency capabilities.

--
Do a caterpillar and butterfly speak the same language?

Mark Alexander Wotton

Mar 5, 2003, 6:57:54 PM
On 05 Mar 2003 12:32:00 +0100, Ulf Wiger posted:

> I'm obviously not expressing myself clearly. I apologize.
> I was trying to explain how I've not seen a compelling reason why you
> couldn't make a statically typed language implementation that also
> has strong concurrency support.

Sorry if this is a bit off-topic, but I was wondering: do you have a link
to the work done on trying to add static typing to Erlang? I'd be
fascinated to know why it was so difficult to do.

mrak

--
realise your life was only bait for a bigger fish
-- aesop rock

Donn Cave

Mar 5, 2003, 11:11:51 PM
Quoth to...@possibility.com (Hoff, Todd):

That's ironic, because I'd be more interested in Erlang if it had
ocaml's concurrency capabilities. Ocaml is the one FP implementation
I am aware of that can run in separate OS level threads. I suppose
there is a significant degree of serialization while interlocking over
the heap and so forth, but for my purposes that isn't a problem.
It's interesting how many different things concurrency can mean.

That's a rhetorical observation, by the way - I don't think Erlang
is in the cards for me, anyway, not just as a matter of taste but
for practical reasons that came up when I tried to build it. This
is another area where ocaml shines.

Donn Cave, do...@drizzle.com

Lauri Alanko

Mar 6, 2003, 2:15:36 AM
"Donn Cave" <do...@drizzle.com> quoth:

> That's ironic, because I'd be more interested in Erlang if it had
> ocaml's concurrency capabilities. Ocaml is the one FP implementation
> I am aware of that can run in separate OS level threads. I suppose
> there is a significant degree of serialization while interlocking over
> the heap and so forth, but for my purposes that isn't a problem.

If that isn't a problem to you, then why do you value OS-level threads
so much? The serialization in ocaml is significant indeed: there is
always only one thread executing at a given time. I don't see how this
is any more useful (from a user's point of view) than user-level
threads.


Lauri Alanko
l...@iki.fi

Ulf Wiger

Mar 6, 2003, 4:32:44 AM
mwo...@cse.unsw.edu.au (Mark Alexander Wotton) writes:

> Sorry if this is a bit off-topic, but I was wondering:
> do you have a link to the work done on trying to add static
> typing to Erlang? I'd be fascinated to know why it was so
> difficult to do.

You may perhaps flip through Phil Wadler's invited lecture at
the 2002 SIGPLAN Erlang Workshop in Pittsburgh -- "The Great
Type Hope" (addressing only sequential Erlang):

http://www.research.avayalabs.com/user/wadler/papers/erlang/erlang-slides.pdf

Then you can read Fabien Dagnat's presentation from the
2002 Erlang User Conference -- "Static Analysis of
Communication in Erlang Programs" (addressing concurrency
but not hot code replacement):

http://www.erlang.se/euc/02/dagnat.pdf


In some of the work, e.g. by Thomas Arts, we've tried to
ease in static typing by allowing APIs to be typed, but
the rest of the code be mostly untyped (e.g relying on type
inferencing.) AFAIR, Thomas decided to leave type inferencing
aside and go only for explicit typing. This contributed to
making it a little bit more difficult to beta-test it in AXD 301,
since we have so much code.

http://www.ericsson.com/cslab/~thomas/types.shtml
(Here, you will also find lots of reference to type checking
and verification, under Publications.)
http://www.ericsson.com/cslab/~thomas/specweb.html

From my own experiences of playing with his type checker, I
must say that it was promising. But in order to introduce
types in Erlang and get wider acceptance within the
community, the threshold must be lowered and the benefit
in terms of productivity and quality must be clearer than
it is now (currently, I'd say typing of Erlang code is
bleeding edge, and both Thomas and Joe have left Ericsson;
this leaves the Type Checker and SpecWeb in the air to some
extent.)

The closest thing to type checking in Erlang that's actually
used at the moment is Richard Carlsson's edoc, which allows
you to embed structured comments in your source code.
The edoc parser supports types (but doesn't check them
against the source.)

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/jungerl/jungerl/lib/edoc/doc/edoc.html?rev=1.1

(This may perhaps be the best way to ease Erlang programmers
into type specifications...)

A more radical view of Erlang and types may be that a select
few insist on pairing the two together, and the larger
Erlang community politely tolerates them and suffers through
at least one typing lecture at each User Conference. ;-)
So the real challenge may well be to introduce static typing
for those who want it without "ruining" Erlang for the rest,
who are not interested in static typing.

/Uffe

Thomas Lindgren

Mar 6, 2003, 4:58:17 AM

"Donn Cave" <do...@drizzle.com> writes:

> That's ironic, because I'd be more interested in Erlang if it had
> ocaml's concurrency capabilities. Ocaml is the one FP implementation
> I am aware of that can run in separate OS level threads. I suppose
> there is a significant degree of serialization while interlocking over
> the heap and so forth, but for my purposes that isn't a problem.
> It's interesting how many different things concurrency can mean.

What _is_ the reason you want that? (That is, multiple, preemptively
scheduled threads are desirable, but performance is unimportant.) At
the moment, I can only think of testing for race conditions?

If you're interested, Pekka Hedqvist wrote a parallel Erlang with Tony
Rogvall in 1988, see:
<ftp://ftp.csd.uu.se/pub/papers/masters-theses/0089-hedqvist.ps.gz>

The conventional Erlang approach to use an SMP is otherwise to run
several distributed Erlang nodes on the same host.

(Pekka later suggested that the right approach might be to speed up
distributed Erlang on a single host, rather than parallelizing each
node. That sounded very reasonable to me, if the goal is good performance,
scalability, etc.)

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin

Hoff, Todd

Mar 6, 2003, 11:03:36 AM
"Donn Cave" <do...@drizzle.com> wrote:
> That's ironic, because I'd be more interested in Erlang if it had
> ocaml's concurrency capabilities. Ocaml is the one FP implementation
> I am aware of that can run in separate OS level threads. I suppose
> there is a significant degree of serialization while interlocking over
> the heap and so forth, but for my purposes that isn't a problem.
> It's interesting how many different things concurrency can mean.

Sorry if I have missed something in OCaml. I didn't see anything like
Erlang's process/message passing model, which is very important
in an embedded environment. I've come to believe that on a big
project with many people, if it's not enforced in the language then
I can't depend on it.

The process model of erlang is quite nice and seems to perform well.
Using real OS threads on a single processor doesn't seem to be a real
win. This has made me feel rather foolish about my previous thoughts
on user space implemented "threads."

Donn Cave
Mar 6, 2003, 12:59:36 PM
Quoth Lauri Alanko <l...@iki.fi>:

For my immediate purposes it's an irreducible requirement imposed by
the environment, an API that creates threads and expects the application
to use them.

More generally, isn't that one of two basic ways for an application to
be responsive to a heterogeneous set of possible external events? The
single threaded alternative, where a function like UNIX's select()
dispatches several event sources, has its virtues, but doesn't ever
seem to cover quite everything - for example, how do you wait for one
of {process exit status, I/O input}? An internal, runtime-managed
thread system has to deal with this problem for every event source
or all threads are wedged as soon as one blocks on an external function -
and what about long external computations?

In applications that I write or am familiar with, if computation per
se occupies a significant percentage of the application's execution
time, one assumes that something is broken. There's no need to
parallelize computations, the I/O is where it's happening.

I am still stubbornly clinging to the notion that this perspective
is not incompatible with FP, but I sure get the feeling sometimes
that I am looking into a parallel universe, speaking of parallelism.

Donn Cave, do...@u.washington.edu

Donn Cave
Mar 6, 2003, 1:12:25 PM
Quoth to...@possibility.com (Hoff, Todd):
...

| Sorry if i have missed something in ocaml. I didn't see anything like
| erlang's process/message passing model, which is very important
| in an embedded environment. I've come to believe that on a big
| project with many people, if it's not enforced in the language then
| i can't depend on it.
|
| The process model of erlang is quite nice and seems to perform well.
| Using real OS threads on a single processor doesn't seem to be a real
| win. This has made me feel rather foolish about my previous thoughts
| on user space implemented "threads."

I'm sure there is nothing like Erlang's process in ocaml. The only
thing I know of to compare that with is the O'Haskell reactive object,
which may look even more comparable in its "Timber" reincarnation
as a time-sensitive language for embedded applications. That's
interesting only in a theoretical way at present.

Interesting point (that you raise by implication) about multiple
processors. If you're interested in truly parallel computation,
that would seem to be the only way to get it.

Donn Cave, do...@u.washington.edu

Stephen J. Bevan
Mar 6, 2003, 4:58:45 PM
"Donn Cave" <do...@u.washington.edu> writes:
> ... The
> single threaded alternative, where a function like UNIX's select()
> dispatches several event sources, has its virtues, but doesn't ever
> seem to cover quite everything - for example, how do you wait for one
> of {process exit status, I/O input}?

1. Install a signal handler for SIGCHLD and issue a wait in the
handler. An example of this is given in Stevens' Advanced
Programming in the Unix Environment on page 605 in the modem example.

2. Install a signal handler for SIGCHLD that flags or counts child
process death. The signal will wake up any blocked
select/poll/kqueue .. etc, which can then check the flag/counter
and issue wait(pid) as appropriate (this won't block since you
know there is at least one process dead).
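The flag-and-reap approach in point 2 can be sketched like this (a minimal, hypothetical illustration in Python's wrappers around the same UNIX calls, not from the original post; the function name and the child's exit status are made up):

```python
import os
import select
import signal

child_died = False

def on_sigchld(signum, frame):
    # Approach 2: only flag the death; reap outside the handler.
    global child_died
    child_died = True

def demo_sigchld_flag():
    """Fork a child that exits with status 7, then notice its death
    from a select() loop via the SIGCHLD flag and reap it with WNOHANG."""
    signal.signal(signal.SIGCHLD, on_sigchld)
    pid = os.fork()
    if pid == 0:
        os._exit(7)                      # child: die immediately
    while True:
        if child_died:
            # WNOHANG cannot block here: we know a child is dead.
            dead, status = os.waitpid(pid, os.WNOHANG)
            if dead == pid:
                return os.WEXITSTATUS(status)
        # Short timeout so a signal delivered before we entered
        # select() is still noticed on the next iteration.
        select.select([], [], [], 0.05)

print(demo_sigchld_flag())
```

The short select() timeout papers over the classic race where the signal arrives between checking the flag and entering select(); a production version would use the self-pipe trick or pselect() instead.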

> An internal, runtime-managed
> thread system has to deal with this problem for every event source
> or all threads are wedged as soon as one blocks on an external function -
> and what about long external computations?

The simplest solution to the problem of the blocking external function
is not to call it and instead use a non-blocking version e.g. instead
of calling gethostbyname, use an asynchronous DNS library. If the
blocking function absolutely has to be called then another possibility
is to fork it off as a separate process and talk to it over a pipe or
shared memory. Clearly tedious if you have lots of blocking functions.
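The fork-and-pipe workaround can be sketched as follows (a hypothetical minimal example; the helper names are invented and the slow lookup is simulated rather than a real DNS call):

```python
import os

def call_in_child(blocking_fn):
    """Run a blocking function in a forked child and return (pid, fd);
    the parent reads the result from fd instead of blocking itself,
    so fd can sit in the main select() loop like any other source."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child: do the slow call
        os.close(r)
        os.write(w, blocking_fn().encode())
        os._exit(0)
    os.close(w)                           # parent keeps only the read end
    return pid, r

def read_result(pid, fd):
    """Drain the pipe (EOF arrives when the child exits), then reap."""
    data = b""
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:
            break
        data += chunk
    os.close(fd)
    os.waitpid(pid, 0)
    return data.decode()

# Stand-in for e.g. a blocking gethostbyname()-style lookup.
print(read_result(*call_in_child(lambda: "93.184.216.34")))
```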

> In applications that I write or am familiar with, if computation per
> se occupies a significant percentage of the application's execution
> time, one assumes that something is broken. There's no need to
> parallelize computations, the I/O is where it's happening.

In that case Erlang's concurrency model would be a very good match if
it were not for the API that your application needs to use which
assumes threads.

Donn Cave
Mar 6, 2003, 6:33:27 PM
Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):

| "Donn Cave" <do...@u.washington.edu> writes:
|> ... The
|> single threaded alternative, where a function like UNIX's select()
|> dispatches several event sources, has its virtues, but doesn't ever
|> seem to cover quite everything - for example, how do you wait for one
|> of {process exit status, I/O input}?
|
| 1. Install a signal handler for SIGCHLD and issue a wait in the
| handler. An example of this is given in Stevens' Advanced
| Programming in the Unix Environment on page 605 in the modem example.
|
| 2. Install a signal handler for SIGCHLD that flags or counts child
| process death. The signal will wake up any blocked
| select/poll/kqueue .. etc, which can then check the flag/counter
| and issue wait(pid) as appropriate (this won't block since you
| know there is at least one process dead).

Pardon me if this is drifting away from anything interesting to most
people following the thread, but SIGCHLD is a sore point with me.
There is at least one other way to notice a child exit from select
(a pipe), and as lame and unreliable as it may be, it's better than
deliberately subjecting the whole application to a SIGCHLD delivery
that will interrupt some blocking I/O at some unpredictable point.
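The pipe technique alluded to here is usually done by giving the child a pipe it never writes to: when the child exits, the write end closes and the read end becomes readable (at EOF) in select(). A minimal sketch, hypothetical and not from the post:

```python
import os
import select

def spawn_with_exit_fd(child_work):
    """Fork a child; return (pid, fd) where fd turns readable (EOF)
    in select() as soon as the child exits."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        os.close(r)
        child_work()          # the write end closes automatically on exit
        os._exit(0)
    os.close(w)               # parent must drop its copy of the write end
    return pid, r

pid, fd = spawn_with_exit_fd(lambda: None)
readable, _, _ = select.select([fd], [], [])   # wakes when the child dies
print(fd in readable)
os.waitpid(pid, 0)
os.close(fd)
```

This keeps child-exit notification inside the ordinary select() dispatch, with no asynchronous SIGCHLD delivery interrupting unrelated blocking I/O.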

But my point about threads really doesn't have so much to do with UNIX,
anyway, particularly since even today not all UNIX platforms support
OS threads. It's just this: if you want a generic way to deal with
multiple event sources simultaneously, OS threads looks like it.
Unless the OS makes an effort to provide some generic dispatching
capability, which none that I've used do (save maybe DEC VMS.)

|> An internal, runtime-managed
|> thread system has to deal with this problem for every event source
|> or all threads are wedged as soon as one blocks on an external function -
|> and what about long external computations?
|
| The simplest solution to the problem of the blocking external function
| is not to call it and instead use a non-blocking version e.g. instead
| of calling gethostbyname, use an asynchronous DNS library. If the
| blocking function absolutely has to be called then another possibility
| is to fork it off as as separate process and talk to it over a pipe or
| shared memory. Clearly tedious if you have lots of blocking functions.
|
|> In applications that I write or am familiar with, if computation per
|> se occupies a significant percentage of the application's execution
|> time, one assumes that something is broken. There's no need to
|> parallelize computations, the I/O is where it's happening.
|
| In that case Erlang's concurrency model would be a very good match if
| it were not for the API that your application needs to use which
| assumes threads.

Note that the API uses threads to dispatch on I/O events (from
the user interface), and that is about my only requirement for
concurrency. I think I do agree with you in some sense, but note
that it is only true in principle (because the API is not actually
substitutable), and that when we ask what API could be substituted
to make Erlang (or O'Haskell or whatever) work, you could expect
that API would still have to be engineered into the runtime.

Erlang is out of the question anyway because of build issues, but
O'Haskell has what I guess may be a similar approach at a conceptual
level. I do find O'Haskell's reactive objects concept compelling,
but it comes at that cost: the runtime has to be re-implemented in
terms of every different I/O (and other long call) dispatching solution.
It's worth it, in my opinion, but it's certainly not as adaptable as
a runtime that supports OS threads and lets the user use them as
required.

Donn Cave, do...@u.washington.edu

Stephen J. Bevan
Mar 6, 2003, 9:26:09 PM
"Donn Cave" <do...@u.washington.edu> writes:
> Pardon me if this is drifting away from anything interesting to most
> people following the thread, but SIGCHLD is a sore point with me.
> There is at least one other way to notice a child exit from select
> (a pipe), and as lame and unreliable as it may be, it's better than
> deliberately subjecting the whole application to a SIGCHLD delivery
> that will interrupt some blocking I/O at some unpredictable point.

I was assuming a single OS thread which uses an event system
(e.g. libevent, liboop, ... etc.) with or without a thread abstraction
built on top of that. In that context all I/O is non-blocking. I/O
functions such as read(2)/write(2)/... etc. need wrappers that take
care of checking for EINTR, EAGAIN, short reads, short writes and then
do the appropriate thing. Various event systems or user-level thread
systems take care of these details to a greater or lesser extent.
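A wrapper of the kind described might look like this (a hypothetical sketch in Python; Python itself retries EINTR automatically since 3.5, but the short-read handling is the same concern as at the C level):

```python
import os

def read_exactly(fd, n):
    """Read exactly n bytes from fd, retrying on EINTR and on short
    reads; returns fewer bytes only at end-of-file."""
    buf = b""
    while len(buf) < n:
        try:
            chunk = os.read(fd, n - len(buf))
        except InterruptedError:   # EINTR: a signal arrived, just retry
            continue
        if not chunk:              # EOF before n bytes were available
            break
        buf += chunk
    return buf

r, w = os.pipe()
os.write(w, b"hello")
os.close(w)
print(read_exactly(r, 5))
```

A real event system would also handle EAGAIN on non-blocking descriptors by parking the reader until select/poll reports the fd readable again.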


> But my point about threads really doesn't have so much to do with UNIX,
> anyway, particularly since even today not all UNIX platforms support
> OS threads. It's just this: if you want a generic way to deal with
> multiple event sources simultaneously, OS threads looks like it.

I agree that OS threads can be used for that. How well it works
depends on how many threads you create and how well the thread
implementation scales. A canonical example is writing a server in
which there is one OS thread per client connection. That's a nice
simple way to write it, but it is unlikely to work well on most thread
systems once you get (tens of) thousands of connections. There are
exceptions (Linux 2.5 with the new scheduler changes is way better
than Linux 2.4) but it does mean that OS threads as currently
implemented aren't necessarily a practical generic solution for
listening on multiple event sources.


> Unless the OS makes an effort to provide some generic dispatching
> capability, which none that I've used do (save maybe DEC VMS.)

I thought Windows had something close to that with
WaitForMultipleObjects (or something close to that name).
The *BSDs have kqueue(2) which covers a lot more ground than select/poll.
Linux 2.5 has epoll(2) which again covers more than select/poll.


> | In that case Erlang's concurrency model would be a very good match if
> | it were not for the API that your application needs to use which
> | assumes threads.
>
> Note that the API uses threads to dispatch on I/O events (from
> the user interface), and that is about my only requirement for
> concurrency.

Ah, would it be possible to separate the UI from the application and
connect them via shared memory, pipe or socket? I mainly write
servers (daemons) and by their nature I have to separate the UI from
the application. However, I understand that for some applications
this isn't possible/desirable e.g. for performance reasons.


> Erlang is out of the question anyway because of build issues, but
> O'Haskell has what I guess may be a similar approach at a conceptual
> level.

Agreed.

> I do find O'Haskell's reactive objects concept compelling,
> but it comes at that cost: the runtime has to be re-implemented in
> terms of every different I/O (and other long call) dispatching solution.

I don't know what the "runtime" is in this case and hence why it has
to be re-implemented or what "every different I/O" refers to.

> It's worth it, in my opinion, but it's certainly not as adaptable as
> a runtime that supports OS threads and lets the user use them as
> required.

OS threads, scripting languages, dynamic typing and global variables
are all adaptable in one way or another, they also all have their
downsides. Where to draw the line with regard to using each of them
is a matter of (repeated) debate.

Donn Cave
Mar 7, 2003, 12:58:25 AM
Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):
| "Donn Cave" <do...@u.washington.edu> writes:
...
|> But my point about threads really doesn't have so much to do with UNIX,
|> anyway, particularly since even today not all UNIX platforms support
|> OS threads. It's just this: if you want a generic way to deal with
|> multiple event sources simultaneously, OS threads looks like it.
|
| I agree that OS threads can be used for that. How well it works
| depends on how many threads you create and how well the thread
| implementation scales. A canonical example is writing a server in
| which there is one OS thread per client connection. That's a nice
| simple way to write it, but it is unlikely to work well on most thread
| systems once you get (tens of) thousands of connections. There are
| exceptions (Linux 2.5 with the new scheduler changes is way better
| than Linux 2.4) but it does mean that OS threads as currently
| implemented aren't necessarily a pratical generic solution for
| listening on multiple event sources.

I wouldn't draw that conclusion just because they don't scale to
four or five digits. Tens of thousands of connections can be a
challenge all by itself, and yet we don't say connections aren't
a practical solution and everyone should be using datagrams.

Honestly, I wouldn't care to be known as an advocate of multithreading
as a generic programming solution, especially for applications like
that where as a matter of fact I would avoid it like the plague, scaling
problems or not. But it is a powerful tool, and that's one answer to
``why on earth would anyone want OS threads?'', which did seem to be
a reaction that people other than yourself had.

...


|>| In that case Erlang's concurrency model would be a very good match if
|>| it were not for the API that your application needs to use which
|>| assumes threads.
|>
|> Note that the API uses threads to dispatch on I/O events (from
|> the user interface), and that is about my only requirement for
|> concurrency.
|
| Ah, would it be possible to separate the UI from the application and
| connect them via shared memory, pipe or socket? I mainly write
| servers (daemons) and by their nature I have to separate the UI from
| the application. However, I understand that for some applications
| this isn't possible/desirable e.g. for performance reasons.

It wouldn't need that much separation, really - for Haskell or Erlang
or whatever we only need to constrain the interpreter data structures
to a single OS thread, not necessarily a separate process. It still
would not be fun, though. The main problem is that the interpreter
is in the middle of the call hierarchy - called by an API dispatching
function, it will in turn call other API functions, and the latter
must occur in the thread dispatched by the former. That would take
some horrible kind of cross-thread co-routines that would surely be much
worse than just writing everything in C++.

|> I do find O'Haskell's reactive objects concept compelling,
|> but it comes at that cost: the runtime has to be re-implemented in
|> terms of every different I/O (and other long call) dispatching solution.
|
| I don't know what the "runtime" is in this case and hence why it has
| to be re-implemented or what "every different I/O" refers to.

I'm probably misusing these terms, I'm just an old computer center
hacker, but the situation is not hypothetical. If you look at the
source, sockets and the Tk X11 interface are integrated into the
heart of O'Haskell's reactive object system, leaving it with two
alternative I/O dispatching solutions - handle sockets etc. with
select, or handle X11 and sockets etc. with Tk (which no doubt uses
select internally, but it doesn't matter as long as it works.) If
I want to add Qt or something to O'Haskell, the same thing would
have to be done, accounting for ttys and sockets and whatnot in
terms of however Qt handles those things. Where in an ocaml port,
that can be left to the programmer/user, because ocaml doesn't support
dispatching on I/O as a language feature.

Donn Cave, do...@drizzle.com

Joachim Durchholz
Mar 7, 2003, 3:59:33 AM
Donn Cave wrote:
> Tens of thousands of connections can be a
> challenge all by itself, and yet we don't say connections aren't
> a practical solution and everyone should be using datagrams.

*grin* actually there is some experimentation in that direction.
I have seen an announcement that Mozart/Oz is moving towards UDP (with
no mention of the reasons, unfortunately - I'm trying to find out more
right now).

Ulf Wiger
Mar 7, 2003, 4:02:38 AM
"Donn Cave" <do...@u.washington.edu> writes:

> More generally, isn't that one of two basic ways for an application to
> be responsive to a heterogeneous set of possible external events? The
> single threaded alternative, where a function like UNIX's select()
> dispatches several event sources, has its virtues, but doesn't ever
> seem to cover quite everything - for example, how do you wait for one
> of {process exit status, I/O input}? An internal, runtime-managed
> thread system has to deal with this problem for every event source
> or all threads are wedged as soon as one blocks on an external function -
> and what about long external computations?

Erlang programmers can handle this by configuring a thread pool,
and writing small "linked-in drivers" that use POSIX threads
for such blocking events. Erlang has an asynchronous driver API
to assist the programmer.

A typical (and the most common) example is the file driver,
which uses threads to avoid having the runtime system blocked
by delays in the file system.
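The idea behind such a driver, sketched outside Erlang: hand the blocking call to a worker thread and have it wake the event loop through a pipe when the result is ready. This is only a hypothetical illustration of the pattern, not Erlang's actual driver API:

```python
import os
import select
import threading

def run_async(blocking_fn, results):
    """Run blocking_fn on a worker thread; when it finishes, store the
    result and write a byte to a pipe so select() wakes up."""
    r, w = os.pipe()
    def worker():
        results.append(blocking_fn())  # e.g. a slow file-system call
        os.write(w, b"x")              # wake the event loop
    threading.Thread(target=worker).start()
    return r

results = []
fd = run_async(lambda: "file contents", results)
select.select([fd], [], [])            # event loop blocks here, not in the call
os.read(fd, 1)
print(results[0])
```

The scheduler (here, the select() loop) never blocks in the file operation itself; it just sees one more readable descriptor when the worker is done.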

Stephen J. Bevan
Mar 7, 2003, 10:59:16 AM
"Donn Cave" <do...@drizzle.com> writes:
> Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):
> | "Donn Cave" <do...@u.washington.edu> writes:
> ...
> |> But my point about threads really doesn't have so much to do with UNIX,
> |> anyway, particularly since even today not all UNIX platforms support
> |> OS threads. It's just this: if you want a generic way to deal with
> |> multiple event sources simultaneously, OS threads looks like it.
> |
> | I agree that OS threads can be used for that. How well it works
> | depends on how many threads you create and how well the thread
> | implementation scales. A canonical example is writing a server in
> | which there is one OS thread per client connection. That's a nice
> | simple way to write it, but it is unlikely to work well on most thread
> | systems once you get (tens of) thousands of connections. There are
> | exceptions (Linux 2.5 with the new scheduler changes is way better
> | than Linux 2.4) but it does mean that OS threads as currently
> | implemented aren't necessarily a practical generic solution for
> | listening on multiple event sources.
>
> I wouldn't draw that conclusion just because they don't scale to
> four or five digits.

My conclusion was really at the start :-

> | ... How well it works
> | depends on how many threads you create and how well the thread
> | implementation scales.

The example I gave was one where the nice abstraction breaks down.
That doesn't mean the abstraction can't be used up to that point if
one so wishes, hence the word "necessarily". The same argument goes
for :-

> Tens of thousands of connections can be a
> challenge all by itself, and yet we don't say connections aren't
> a practical solution and everyone should be using datagrams.

i.e. TCP connections are fine to a point but if you need more
connections than that then you have to consider switching to something
else e.g. UDP.


> But it is a powerful tool, and that's one answer to
> ``why on earth would anyone want OS threads?'', which did seem to be
> a reaction that people other than yourself had.

If you'd asked the question in comp.lang.ada or comp.lang.limbo I'd
have expected similar responses i.e. why do you want (raw) OS threads
when the language has concurrency built in? In response to Lauri you
described why you need them (they are forced on you by the UI API)
which gave the necessary context for your request.


> | Ah, would it be possible to separate the UI from the application and
> | connect them via shared memory, pipe or socket? I mainly write
> | servers (daemons) and by their nature I have to separate the UI from
> | the application. However, I understand that for some applications
> | this isn't possible/desirable e.g. for performance reasons.
>
> It wouldn't need that much separation, really - for Haskell or Erlang
> or whatever we only need to constrain the interpreter data structures
> to a single OS thread, not necessarily a separate process. It still
> would not be fun, though. The main problem is that the interpreter
> is in the middle of the call hierarchy - called by an API dispatching
> function, it will in turn call other API functions, and the latter
> must occur in the thread dispatched by the former. That would take
> some horrible kind of cross-thread co-routines that would surely be much
> worse than just writing everything in C++.

It would. Which is why I would separate them completely if possible
so that there would be no cross-thread issues: the application program
would run in its process and the UI would run in its process and use
as many threads as it wants. This breaks down if the application
fundamentally must use some (threaded) API and there is a lot of data
being shared, at which point I'd want to re-evaluate the programming
language choice :-)


> |> I do find O'Haskell's reactive objects concept compelling,
> |> but it comes at that cost: the runtime has to be re-implemented in
> |> terms of every different I/O (and other long call) dispatching solution.
> |
> | I don't know what the "runtime" is in this case and hence why it has
> | to be re-implemented or what "every different I/O" refers to.
>
> I'm probably misusing these terms, I'm just an old computer center
> hacker, but the situation is not hypothetical. If you look at the
> source, sockets and the Tk X11 interface are integrated into the
> heart of O'Haskell's reactive object system, leaving it with two
> alternative I/O dispatching solutions - handle sockets etc. with
> select, or handle X11 and sockets etc. with Tk (which no doubt uses
> select internally, but it doesn't matter as long as it works.) If
> I want to add Qt or something to O'Haskell, the same thing would
> have to be done, accounting for ttys and sockets and whatnot in
> terms of however Qt handles those things. Where in an ocaml port,
> that can be left to the programmer/user, because ocaml doesn't support
> dispatching on I/O as a language feature.

I understand what you mean now. Your problem is a common one and
occurs in a variety of guises. For example, some still consider DOS
the best realtime system available for PCs: it boots on almost any PC
and since it doesn't have a scheduler it doesn't get in the way of you
adding whatever scheduler you want. Less is more as they say :-)

Donn Cave
Mar 7, 2003, 12:30:27 PM
Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):
| "Donn Cave" <do...@drizzle.com> writes:
...

|> But it is a powerful tool, and that's one answer to
|> ``why on earth would anyone want OS threads?'', which did seem to be
|> a reaction that people other than yourself had.
|
| If you'd asked the question in comp.lang.ada or comp.lang.limbo I'd
| have expected similar responses i.e. why do you want (raw) OS threads
| when the language has concurrency built in? In response to Lauri you
| desribed why you need them (they are forced on you by the UI API)
| which gave the necessary context for your request.

And then I went on to mention a couple of reasons why APIs like that
exist - to handle disparate types of events in the absence of a unified
event handling dispatch function, and also for the sake of real
concurrency on multiprocessor platforms. As long as we're summarizing.

I wish I knew more about Limbo, but I'm surprised to hear it doesn't
support OS threads - I understand that it does support the kind of
internal pseudo-thread, but I thought Inferno had OS threads (that
share memory and other resources even if they may be called "processes"),
and you'd think Limbo would have to be up to working with them.

Donn Cave, do...@u.washington.edu

Stephen J. Bevan
Mar 7, 2003, 4:45:48 PM
"Donn Cave" <do...@u.washington.edu> writes:
> Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):
> | If you'd asked the question in comp.lang.ada or comp.lang.limbo I'd
> | have expected similar responses i.e. why do you want (raw) OS threads
> | when the language has concurrency built in? In response to Lauri you
> | desribed why you need them (they are forced on you by the UI API)
> | which gave the necessary context for your request.
[snip]

> I wish I knew more about Limbo, but I'm surprised to hear it doesn't
> support OS threads ...

The Ada and Limbo examples were only meant to show why an unqualified
query as to how to manipulate OS threads in languages with built in
concurrency might provoke a certain type of (negative) response. I
wasn't implying anything about how a particular Ada or Limbo
implementation might or might not use OS threads internally to support
its concurrency primitives. Clearly if they did it would make
interfacing to third party libraries which require the use of OS
threads much easier.

Donn Cave
Mar 7, 2003, 6:24:44 PM

I don't care what they do internally, either. I'm saying that on
comp.lang.limbo, they would say "you can do that" - Limbo's concurrency
primitives create OS threads and run in them in a way that is exposed
to the programmer. Of course it appears to have the luxury of a platform
that supplies some other useful features (channels) for dealing with OS
threads, so you get the best of both - its concurrency is real and also
practically useful. But one would expect any system programming language
to support threads, unless for a platform that doesn't have them.

(Don't know if it has "hot code replacement", or really anything in
detail about Inferno or Limbo. Here's a URL for the benefit of anyone
who has the least interest -
http://www.vitanuova.com/inferno/papers/descent.html

Donn Cave, do...@u.washington.edu

Stephen J. Bevan
Mar 7, 2003, 10:28:03 PM
"Donn Cave" <do...@u.washington.edu> writes:
> | The Ada and Limbo examples were only meant to show why an unqualified
> | query as to how to manipulate OS threads in languages with built in
> | concurrency might provoke a certain type of (negative) response. I
> | wasn't implying anything about how a particular Ada or Limbo
> | implementation might or might not use OS threads internally to support
> | its concurrency primitives. Clearly if they did it would make
> | interfacing to third party libraries which require the use of OS
> | threads much easier.
>
> I don't care what they do internally, either.

Then I think we are talking past each other. What they do internally
is of prime importance since if they don't use OS threads it is a fair
bet that any attempt to link in a library that does use OS threads
will not interact very well.


> I'm saying that on comp.lang.limbo, they would say "you can do that"
> - Limbo's concurrency primitives create OS threads and run in them
> in a way that is exposed to the programmer. Of course it appears to
> have the luxury of a platform that supplies some other useful
> features (channels) for dealing with OS threads, so you get the best
> of both - its concurrency is real and also practically useful.

Limbo is available under Inferno, which in turn is available as a
native OS or hosted on a variety of OSes. I've no idea if they all
use OS threads or not. If they do that solves one set of problems.
However, for some hosted platforms it rules out using threads for
other problems e.g. the thread per connection approach which would
work fine under Erlang (or at least to the point where file-descriptor
limits are the problem not threads) or other systems which do not use
an OS thread per language thread. Note I'm not arguing against OS
threads here, just pointing out that they are not an automatic win and
there are good reasons for some languages/implementations not using
them if those languages/implementations are targeting a certain class
of problems.


> But one would expect any system programming language
> to support threads, unless for a platform that doesn't have them.
>
> (Don't know if it has "hot code replacement", or really anything in
> detail about Inferno or Limbo.

Limbo has a "load" expression which attempts to dynamically load a
module of a given name. You are allowed to load modules multiple
times and they are considered to be distinct instances. That provides
at least a base on which to build some forms of hot code replacement.

Donn Cave
Mar 7, 2003, 11:40:56 PM
Quoth ste...@dino.dnsalias.com (Stephen J. Bevan):
...

| Then I think we are talking past each other.

I thought I noticed that a couple rounds back.

| ... What they do internally
| is of prime importance since if they don't use OS threads it is a fair
| bet that any attempt to link in a library that does use OS threads
| will not interact very well.

What is important in the end is whether the implementation
supports execution in multiple separate threads. If it does
(e.g., ocaml), the rest can be worked out. Language level
concurrency features are not directly relevant, and it's
conceivable that a language might offer an internal pseudo
concurrency feature and also be viable in a threaded application.
(I've seen this with "stackless" Python, and there is apparently
a potential development in this direction for GHC.)

|> I'm saying that on comp.lang.limbo, they would say "you can do that"
|> - Limbo's concurrency primitives create OS threads and run in them
|> in a way that is exposed to the programmer. Of course it appears to
|> have the luxury of a platform that supplies some other useful
|> features (channels) for dealing with OS threads, so you get the best
|> of both - its concurrency is real and also practically useful.
|
| Limbo is available under Inferno, which in turn is available as a
| native OS or hosted on a variety of OSes. I've no idea if they all
| use OS threads or not. If they do that solves one set of problems.
| However, for some hosted platforms it rules out using threads for
| other problems e.g. the thread per connection approach which would
| work fine under Erlang (or at least to the point where file-descriptor
| limits are the problem not threads) or other systems which do not use
| an OS thread per language thread. Note I'm not arguing against OS
| threads here, just pointing out that they are not an automatic win and
| there are good reasons for some languages/implementations not using
| them if those languages/implementations are targeting a certain class
| of problems.

I think I can go out on a limb here and say that Inferno certainly
does use OS threads (where Inferno is the OS in question.) On the
other hand, I could only guess that it (always) implements threading
internally, so in some sense Erlang might be compared to Inferno plus
Limbo.

Donn Cave, do...@drizzle.com
