ML's one-arg/one-ret-value scheme is indeed elegant and useful. I like
it better than Scheme's system. But you can't, as you suggested, just
go add types to Scheme. Then it wouldn't be Scheme. It would be ML
with a better syntax, or Infer. This is good. Probably better than Scheme.
But not Scheme.
I think you could add a lot of missing, important things to Scheme and keep
it Scheme: modules, records, exceptions. But not Hindley-Milner, required
types.
But who cares? James Clerk Maxwell was once lecturing on theories of light.
There are two general models for light, he said: particle and wave. People
used to think of light as a stream of tiny particles. But they are all dead.
If you see where I'm headed...
-Olin
> OK, I'll bite. Why would you want something to remain Scheme if
> something better were available? Isn't there enough experience to say
> that static typing, like static scoping, is the Right Thing?
>
Nope. Dynamic typing gives programs better modularity, ease of authoring,
cuts down on multiple inclusions of a copy of a particular routine for each
type it applies to (which still happens in C++, even with templates), and
has numerous other benefits. Static typing has ease of compilation and a
very tiny edge in speed. So whether it's The Right Thing just depends
on what you're trying to do. In general, in environments where functions
are passed as data, dynamic typing is agreed to be superior. The smarter
compilers get (and the more complex programs get) the more I expect to see
dynamic typing in production languages.
Bear
In article <qijn30o...@lambda.ai.mit.edu>,
Olin Shivers <shi...@ai.mit.edu> wrote:
> But you can't, as you suggested, just
>go add types to Scheme. Then it wouldn't be Scheme. It would be ML
>with a better syntax, or Infer. This is good. Probably better than Scheme.
>But not Scheme.
...
Why didn't you do scsh in ML, instead of Scheme?
Because the name ``mlsh'' is really ugly, probably.
Cliff
--
Clifford Beshers Computer Graphics and User Interfaces Lab
bes...@cs.columbia.edu Department of Computer Science
http://www.cs.columbia.edu/~beshers Columbia University
In article <qijn30o...@lambda.ai.mit.edu>,
Olin Shivers <shi...@ai.mit.edu> wrote:
> But you can't, as you suggested, just
>go add types to Scheme. Then it wouldn't be Scheme. It would be ML
>with a better syntax, or Infer. This is good. Probably better than Scheme.
>But not Scheme.
OK, I'll bite. Why would you want something to remain Scheme if
something better were available? Isn't there enough experience to say
that static typing, like static scoping, is the Right Thing?
I'm not trying to start a flamewar (honest). I'd just like to hear
the technical and software engineering arguments in favor of Scheme as
a development language, vs. ML or an ML-like successor to Scheme. I'm
a Scheme user with an interest in AI. For some time, I've been
wondering whether I should switch to ML, for things that don't require
Common Lisp.
Why didn't you do scsh in ML, instead of Scheme?
>I think you could add a lot of missing, important things to Scheme and
> keep it Scheme: modules, records, exceptions.
However, portable Scheme language extensions for these features seem
even more remote than the global optimizations for containers you
mentioned earlier. Even the feature that started this thread,
multiple return values, isn't "really" portable Scheme yet, because
the R5RS report hasn't yet appeared.
Scheme language development seems to have stalled. Why is Scheme
worth developing in, despite the absence of the features you list? Or
do you expect that Scheme will eventually include such features?
> But who cares? James Clerk Maxwell was once lecturing on theories of light.
> There are two general models for light, he said: particle and wave. People
> used to think of light as a stream of tiny particles. But they are all dead.
> If you see where I'm headed...
Sorry, I don't. Are you suggesting Scheme v. ML == particle v. wave
theory? My impression is that both models are still used intuitively
under specific circumstances, even by physicists who know better.
Under what circumstances, in your view, should one prefer Scheme over
ML or an ML-like successor?
Rob Helm
In article <31F841...@sonic.net>, "Ray S. Dillinger" <be...@sonic.net> wrote:
> B. Robert Helm wrote:
>
> > OK, I'll bite. Why would you want something to remain Scheme if
> > something better were available? Isn't there enough experience to say
> > that static typing, like static scoping, is the Right Thing?
> >
>
> Nope. Dynamic typing gives programs better modularity, ease of authoring,
This MIGHT be true if you compare dynamic typing to a statically typed
language with a really BAD type system. SML's type system is good and it
promotes modularity and manages complexity better than Scheme.
> cuts down on multiple inclusions of a copy of a particular routine for each
> type it applies to (which still happens in c++, even with templates),
SML does not produce a different implementation of a polymorphic routine
on a per-type basis.
> Static typing has ease of compilation and a
> very tiny edge in speed.
With the note that the performance of a good Scheme implementation is
much better than most 'mainstream' programmers would guess, the
difference between a statically-typed language and a latently-typed
language is not "tiny". Even with straight Scheme a lot of energy has
gone into improving performance via type inference.
But SML is faster. For a raw iterative loop, C/C++ will always beat SML
(and Scheme). However, it has been my experience that if any dynamic
memory management is involved, SML will significantly OUTPERFORM C/C++.
> In general, in environments where functions
> are passed as data, dynamic typing is agreed to be superior.
You can't statically type "eval", but other than that, I don't believe
this statement at all.
> The smarter
> compilers get (and the more complex programs get) the more I expect to see
> dynamic typing in production languages.
You're probably right, but the real reason dynamically-typed languages
have an advantage over statically-typed languages is because there is no
one right type system. Safety and speed are very good things, but the
most important thing about a type system is the semantic framework it
provides for managing complexity. The biggest advantage Scheme has over
SML is that it is possible to build new type systems, or type systems more
appropriate for a given problem.
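To make that concrete, here is a toy sketch (an editorial illustration, not
Thant's actual system; every name in it is made up) of rolling a small,
problem-specific "type system" in plain Scheme: a dozen lines of syntax-rules
attach predicate checks to a procedure's arguments.

;; DEFINE-CHECKED defines a procedure whose arguments are verified
;; against arbitrary predicates at run time.  ERROR is assumed to be
;; the usual implementation-provided error procedure.
(define-syntax define-checked
  (syntax-rules ()
    ((_ (name (arg pred) ...) body ...)
     (define (name arg ...)
       (if (not (pred arg))
           (error "argument failed its check:" 'arg arg)) ...
       body ...))))

;; The "types" are just predicates, so they can encode whatever
;; invariants a particular problem needs.
(define (probability? x) (and (real? x) (<= 0 x 1)))

(define-checked (mix (p probability?) (a number?) (b number?))
  (+ (* p a) (* (- 1 p) b)))

(mix 0.25 10 20)   ; => 17.5
(mix 2 10 20)      ; signals an error: p fails probability?

Nothing here is checked statically, of course; the point is only that the
checking discipline itself is ordinary user code.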
> So whether it's The Right Thing just depends
> on what you're trying to do.
Yup, yup.
Also, Scheme is more fun to program in because Scheme tends to let you
'feel' your way to a working program, whereas SML forces you to think your
way there. But we have to remember that they're both so far beyond C++
that it's silly to argue about them.
> Bear
-thant
From: shi...@ai.mit.edu (Olin Shivers)
Newsgroups: comp.lang.scheme
Date: 25 Jul 1996 09:15:17 -0400
Organization: Artificial Intelligence Lab, MIT
Reply-To: shi...@ai.mit.edu
Hey, no fair, Greg. The ML hackers got type systems; the Scheme hackers don't.
ML's one-arg/one-ret-value scheme is indeed elegant and useful. I like
it better than Scheme's system. But you can't, as you suggested, just
go add types to Scheme. Then it wouldn't be Scheme. It would be ML
with a better syntax, or Infer. This is good. Probably better than Scheme.
But not Scheme.
I think you could add a lot of missing, important things to Scheme and keep
it Scheme: modules, records, exceptions. But not Hindley-Milner, required
types.
But who cares? James Clerk Maxwell was once lecturing on theories of light.
There are two general models for light, he said: particle and wave. People
used to think of light as a stream of tiny particles. But they are all dead.
If you see where I'm headed...
-Olin
| From: shi...@ai.mit.edu (Olin Shivers)
| Date: 25 Jul 1996 09:15:17 -0400
|
| Hey, no fair, Greg. The ML hackers got type systems; the Scheme hackers don't.
|
| ML's one-arg/one-ret-value scheme is indeed elegant and useful. I like
| it better than Scheme's system. But you can't, as you suggested, just
| go add types to Scheme. Then it wouldn't be Scheme. It would be ML
| with a better syntax, or Infer. This is good. Probably better than Scheme.
| But not Scheme.
You don't need types to do this in Scheme or Lisp. You just need
multiple entry points for a higher-performance implementation. In
another post I mention a rough outline of how to do this.
The only thing you have to give up is EQ?-ness of the
multiple-value/argument aggregates. Of course, since ML doesn't
provide it either, you are no worse off.
In many cases, a fair amount of the efficiency to be gained from static
type systems can be regained by compiling procedures in two modes:
- assumptions satisfied
- general (default) mode
For first-order code, the assumptions can be checked by the linker,
which can generate interface stubs on demand to avoid an exponential
blowup of code, and there is no overhead for the calls when the assumptions
match.
Higher-order calls do incur some overhead, but the overhead is not
significantly greater than what is already in place (arity checking).
The alternate code can be generated on demand and cached for reuse.
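As a purely illustrative sketch of the two-mode idea (an editorial toy, not
the poster's actual mechanism), a procedure can be given an unchecked entry
point for the case where the assumptions are known to hold and a checking
entry point for everything else:

;; SUM-VECTOR in two modes.  The fast entry assumes its argument is a
;; vector of exact integers and performs no checks; the general entry
;; verifies the assumption and then delegates.  ERROR is assumed to be
;; the implementation-provided error procedure.
(define (sum-vector/fast v)              ; assumptions satisfied
  (let loop ((i 0) (acc 0))
    (if (= i (vector-length v))
        acc
        (loop (+ i 1) (+ acc (vector-ref v i))))))

(define (sum-vector/general v)           ; general (default) mode
  (if (and (vector? v)
           (let loop ((i 0))
             (or (= i (vector-length v))
                 (and (integer? (vector-ref v i))
                      (loop (+ i 1))))))
      (sum-vector/fast v)
      (error "sum-vector: expected a vector of integers" v)))

(sum-vector/general (vector 1 2 3))   ; => 6, checked
(sum-vector/fast    (vector 1 2 3))   ; => 6, no checks performed

A linker (or a caller with enough information) that can prove the assumptions
hold links calls directly to the fast entry; all other calls go through the
general one.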
| I think you could add a lot of missing, important things to Scheme and keep
| it Scheme: modules, records, exceptions. But not Hindley-Milner, required
| types.
|
| But who cares? James Clerk Maxwell was once lecturing on theories of light.
| There are two general models for light, he said: particle and wave. People
| used to think of light as a stream of tiny particles. But they are
| all dead.
I think this applies equally well to both languages. There may just
be a slight phase difference.
After all, things like VB, C, C++, and Java are where the action
really is, unfortunately.
p> Be on the lookout for a growing flurry of Scheme activity in the
p> SGML (Standard Generalized Markup Language - ISO 8879) community in
p> connection with DSSSL (Document Style Semantics and Specification
p> Language - ISO 10179), a standard composition specification
p> formalism which is based on R4RS Scheme (with a few tweaks).
This "growing flurry of activity" has already been happening for over
a year. Unfortunately (or something), DSSSL isn't going to take the
world over and, as a result, it's not going to be a great big deal for
Scheme programmers looking for work. In addition, the annoyance of
dealing with SGML can reasonably be expected to put off a lot of
otherwise-enthusiastic Schemers.
Apart from gratuitous overuse of special forms and special read syntax
and a small number of other nits that annoy the Scheme purist within
me, I think DSSSL is pretty decent as ISO standards go.
<b
--
Let us pray:
What a Great System. b...@eng.sun.com
Please Do Not Crash. b...@serpentine.com
^G^IP@P6 http://www.serpentine.com/~bos
>I think the Scheme community may yet recover momentum. The folks down at
>Rice are doing really interesting work with Scheme, Danvy has a draft R5RS,
>and Indiana, as always, holds the torch high.
> -Olin
Don't forget the folks at Texas..
As I recall, Alan Kay said that the reason Smalltalk was so named
was precisely to keep people's expectations low.
What could you expect from a little language for kids named Smalltalk?
(If it's not true, I don't want to know.)
--
| Paul R. Wilson, Comp. Sci. Dept., U of Texas @ Austin (wil...@cs.utexas.edu)
| Papers on memory allocators, garbage collection, memory hierarchies,
| persistence and Scheme interpreters and compilers available via ftp from
| ftp.cs.utexas.edu, in pub/garbage (or http://www.cs.utexas.edu/users/wilson/)
Actually, the "flurry of activity" has been "happening" for much longer
than a year -- but now that the ISO standard is (finally) out, you
will see this flurry increase.
I never made the claim that DSSSL was going to employ the large
majority of the Scheme programming masses -- only that those Scheme
programmers who were looking for some *very* interesting work might
find some to their liking in the SGML/DSSSL community.
Having tracked the DSSSL standard from its inception, and having
worked in the SGML field for going on 8 years, I'm here to tell you
that DSSSL will certainly take the commercial publishing world over
(eventually), and probably the Web as well. As for the "annoyance"
of dealing with SGML, I'm willing to wager that either (a) you've
not done much work with SGML or (b) you were applying it to a problem
for which it was either ill suited or overkill. I personally find
SGML to make my life *much* easier and to remove orders of magnitude
more annoyances than it adds.
I'll be the first to admit, however, that SGML is neither simple, nor
perfect.
I agree, though, that DSSSL is a good example of an ISO standard "done
right".
========================================================================
Patrick Stickler KC4YYY
Information Technologies International Inc. stic...@itii.com
9018 Notchwood Court http://www.itii.com
Orlando, Florida 32825 USA (+1 407) 380-9841
------------------------------------------------------------------------
SGML Consulting & Engineering, Legacy Data Conversion, WWW/HTML
------------------------------------------------------------------------
An affiliate of the Martin Hensel Corporation http://www.hensel.com
========================================================================
Because the name ``mlsh'' is really ugly, probably.
Ah, but if I can count it as an OS, I could call it "mlshos" with somewhat
unsettling, but probably all-too-accurate implications for users.
It has occurred to me on multiple occasions that we would be better served
by acronyms and names for things that had a certain "truth in advertising"
quality. I have considered, for example, naming an OS "Molasses," so that
should anyone give me any grief about its speed, I could just laugh at them
and say, "What did you expect?" Or perhaps naming a commercial tool, "Vapor,"
or one of those systems architecture description languages "Vague."
-Olin
I'm not trying to start a flamewar (honest). I'd just like to hear
the technical and software engineering arguments in favor of Scheme as
a development language, vs. ML or an ML-like successor to Scheme. I'm
a Scheme user with an interest in AI. For some time, I've been
wondering whether I should switch to ML, for things that don't require
Common Lisp.
Why didn't you do scsh in ML, instead of Scheme?
Well, the general issues have been hashed out many, many times. People get
pretty religious about it. My current religion is that I am not interested
in listening to anyone flame on these issues who hasn't written > 1,000 lines
of code in both Scheme and ML. I believe most of the angst about "fascist
static type systems" would evaporate if he who holds the opinions would just
write a program instead of flaming.
However, to address your specific question. I chose Scheme mostly because it
had macros. The name of the game when I did scsh was syntax
extension. Scheme's macros allow me to embed specialised notations inside
my general purpose programming language. Lisp & Scheme are the only languages
I know of that let you do this so well.
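As a small, hedged illustration of the kind of syntax extension meant here
(a generic toy, not scsh's actual process notation), a few lines of
syntax-rules are enough to embed a little pipeline notation in the language:

;; (~> x f g h) expands to (h (g (f x))), so a chain of calls reads
;; left to right instead of inside out.
(define-syntax ~>
  (syntax-rules ()
    ((_ x)            x)
    ((_ x f rest ...) (~> (f x) rest ...))))

(~> 5
    (lambda (n) (* n n))   ; 25
    number->string         ; "25"
    string-length)         ; => 2

scsh's own notation is considerably richer, but the mechanism is the same:
the macro expander turns the specialised notation into ordinary Scheme.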
Scheme language development seems to have stalled. Why is Scheme
worth developing in, despite the absence of the features you list? Or
do you expect that Scheme will eventually include such features?
I think the Scheme community may yet recover momentum. The folks down at
Rice are doing really interesting work with Scheme, Danvy has a draft R5RS,
and Indiana, as always, holds the torch high.
    -Olin
Yes and no. (I don't think we really disagree... see below.)
Scheme lets you invent your own modularity mechanism, and your own
abstraction-layering techniques. I don't know how to write a metaobject
protocol for ML in ML, but I know how to write one for Scheme, in Scheme.
Until ML has a nice object system and a really, really nice recursive
module system, I'm not buying it. (I know, people are working on those
things, and have been for years, and they're making progress. In the
meantime, we can easily add macros, metaobjects, and funky templates
to Scheme without batting an eye. I think ML's type system would be
a pain in the ass for what we want to do.)
I'm one of those people who believes he benefits hugely from Scheme's
libertarianism. Admittedly, it's not for everybody. If I had a bunch
of programmers grinding out Just Code, I might saddle them with a static
type system. Then again, I might not.
Which raises some interesting philosophical questions about bondage
and discipline.
In the old days, it used to be easy to make fun of static type systems,
because there were things it was reasonable to do All The Time that
couldn't be reasonably expressed within the constraints of the static
type systems that people had then.
With parameterized types, things got a lot better---90% of the things
you do with dynamic typing all of a sudden were possible with type safety.
With implementation inheritance and subtyping (separated, please) things
get better still. 95% of the things I want to do can be done easily
within the type system.
The only problem is that the other 5% is where all the fun is. When I
solve a hard modularity problem, I usually do something that requires
a few lines of code, maybe a few pages, but takes a few pages of
explanation, or maybe a technical paper, to justify why it's the Right Thing.
It seems to me that there's an unfortunate split between the strong typing
camp and the dynamic typing camp. Anybody who knows The Truth knows that
strong typing can be great 95% of the time, but is a major obstacle
5% of the time. (Pick your own probabilities---it might be 80/20 or
99/1, but the point stands.)
The problem with Type Safety fanatics is that they keep saying "Type Safety
is the only way to go. Wait a few years until we can type what you're
doing, and then we'll support it in our language." Well, I'm not going
to wait, because it's a never-ending process. Type theorists are always
behind the curve, and for fundamental reasons always will be.
Real, useful languages with strong type systems provide escapes, of course.
Modula-3 has an "any" type, where all bets are (statically) off. C has
unsafe casts, God help us, so that if we aren't happy with a Pascalish
brain-dead type system, we can skip all the way down to an assembler-
or FORTH-like completely untyped paradigm.
While I hate C casts, an "any" type is damned useful. If I want to write
a Scheme-like macro and take responsibility for what it emits, I should
be able to do so. I should not have to write everything as higher-order
functions and hope that the compiler is smart enough to flatten it out
into something reasonable. I should especially not have to write a
separate preprocessor that manipulates strings, and replicate half
my compiler front end, because I don't have a decent macro system.
Too many type system designers just don't like to admit that their wonderful
type systems are only a 95% solution, and that 5% of the time you're
simply hamstrung.
>> cuts down on multiple inclusions of a copy of a particular routine for each
>> type it applies to (which still happens in c++, even with templates),
>
>SML does not produce a different implementation of a polymorphic routine
>on a per-type basis.
If it doesn't, at least sometimes, it's not getting the performance benefits
it should from static typing.
>> Static typing has ease of compilation and a
>> very tiny edge in speed.
>
>With the note that the performance of a good Scheme implementation is
>much better than most 'main-stream' programmers would guess, the
>difference between a statically-typed language and a latently-typed
>language is not "tiny". Even with straight Scheme a lot of energy has
>gone into improving performance via type inference.
Agreed. But doesn't this suggest that you ought to have optional
strong typing, like Dylan? (Not that I'm a big fan of the details
of Dylan's type system---not enough separation of interfaces from
implementation for my taste---but the general idea of having a strong
type system that you can turn on or off seems the only sane thing to
me.)
>But SML is faster. For a raw iterative loop, C/C++ will always beat SML
>(and Scheme). However, it has been my experience that if any dynamic
>memory management is involved, SML will significantly OUTPERFORM C/C++.
Well, that's partly because existing implementations of malloc are
mostly brain-dead, but a big part of it is lack of GC. So ML gets
it right to have GC and available strong typing. What it gets wrong
is having mandatory strong typing. I'm smarter than a Hindley-Milner
type system, when I try, even if I'm not as smart as Hindley or Milner.
When I don't want to try to be that smart, having strong types would be
nice.
>> In general, in environments where functions
>> are passed as data, dynamic typing is agreed to be superior.
>
>You can't statically type "eval", but other than that, I don't believe
>this statement at all.
If you can't use eval, or *macros*, you're toast.
(Note: I think eval is often abused. Usually macros and lambda are
plenty powerful, and even they should be used very judiciously. But
without macros, a language just isn't worth using.)
>> The smarter
>> compilers get (and the more complex programs get) the more I expect to see
>> dynamic typing in production languages.
>
>You're probably right, but the real reason dynamically-typed languages
>have an advantage over statically-typed languages is because there is no
>one right type system.
Precisely. You should be able to build your own type system for your
own domain-specific programming environment, which is what all Serious
Programs evolve into.
>Safety and speed are very good things, but the
>most important thing about a type system is the semantic framework it
>provides for managing complexity. The biggest advantage Scheme has over
>SML is that it is possible to build new type systems, or type systems more
>appropriate for a given problem.
Right. The problem with Scheme for Real Work is that it doesn't have
the right framework for building your own type system. (It's a lot of
work to erect a flexible, extensible type system framework, and *then*
erect your own domain-specific type system.) The problem
with ML for Real Work is that it doesn't have the right type system,
and won't let you build your own.
>> So whether it's The Right Thing just depends
>> on what you're trying to do.
>
>Yup, yup.
Yup, but nope. There's a language missing somewhere. Scheme and ML
have a whole lot of Right Things in common, but they're at opposite
ends of a spectrum that needs to be synthesized, not compromised.
Reflection is the key thing. The ML camp is too much into the "we'll
dictate the semantic framework" mindset, and the Scheme camp is too
much into the "We won't dictate anything at all---heck we won't even
provide anything--you're on your own" mindset.
(Yes, I know this is unfair---there are Schemers working on just
this sort of thing. They just aren't going to make it into Scheme
anytime soon, and for lots of good reasons---they're too cutting-edge
and controversial. Until then, Scheme will be too low-level because
it doesn't provide a semantic framework, and ML will be too high-level
because it dictates one that's limited.)
The real work is in between---how do you design a language that has
the right defaults for most purposes, but the right escapes to let
you do what needs to be done?
I think this requires some serious thought about levels of abstraction
and how software really gets built. The CLOS MOP is a good start,
but it addresses mostly implementation, not interfaces, and interfaces
are the key to providing easily composable abstractions.
C++ templates are also worth a look, oddly enough. They're really macros,
you know... they're dynamically-typed at compile time, but what they
spit out has to pass the C++ type checker. An interesting tradeoff.
(You can do surprisingly hip things with C++ templates. Unfortunately,
C++ has enough warts that most of them turn out to be inconvenient
for uninteresting reasons.)
>Also, Scheme is more fun to program in because Scheme tends to let you
>'feel' your way to a working program, whereas SML forces you to think your
>way there. But we have to remember that they're both so far beyond C++
>that it's silly to argue about them.
Agreed.
>
>> Bear
>
>-thant
Be on the lookout for a growing flurry of Scheme activity in the SGML
(Standard Generalized Markup Language - ISO 8879) community in
connection with DSSSL (Document Style Semantics and Specification
Language - ISO 10179), a standard composition specification formalism
which is based on R4RS Scheme (with a few tweaks).
Any Scheme programmers with any clue about publishing technology (or
the willingness to learn) who are looking for a new job would do well
to look at DSSSL for opportunities...
For the truly interested:
http://www.sil.org/sgml/
http://www.jclark.com/dsssl/
Yeah. Walking into a computer bookshop recently made me think that the
authors of Java should have named the language "Bandwagon".
mathew
--
me...@pobox.com home page with *content* at http://www.pobox.com/~meta/
Help prevent economic censorship on the net - support the Open Text Boycott
See http://www.pobox.com/~meta/rs/ot/
Excuse me? I have reams and reams of Scheme source code for untyped
routines, and when I develop a completely new type, I define a few
primitives like comparison operators, and then all my higher-level
functions work just fine. My impression of statically typed languages
is that you have to actually touch each and every routine you want to
work with a new type -- the better the system, the lighter the touch,
but you have to touch them *all*. I'd say Scheme gives me better modularity,
ease of authoring, and complexity management than anything I've ever met
in *any* other language.
> > cuts down on multiple inclusions of a copy of a particular routine
> > for each type it applies to (which still happens in c++, even
> > with templates),
>
> SML does not produce a different implementation of a polymorphic routine
> on a per-type basis.
Well then what's the point of you having to specify types in the first
place? And how does it get any speed edge if its polymorphic routines
are in fact untyped?
> > Static typing has ease of compilation and a
> > very tiny edge in speed.
>
> With the note that the performance of a good Scheme implementation is
> much better than most 'main-stream' programmers would guess, the
> difference between a statically-typed language and a latently-typed
> language is not "tiny". Even with straight Scheme a lot of energy has
> gone into improving performance via type inference.
If I stick to things that I can *do* in C++ without implementing my own
number system, nonlocal exits, continuations, etc., from scratch, the
compiled C++ code (using the g++ compiler) runs about 5-10% faster than
compiled Scheme code (using MIT Scheme with a few 'tweaks'). You may
not consider this 'tiny.' I do.
> > In general, in environments where functions
> > are passed as data, dynamic typing is agreed to be superior.
>
> You can't statically type "eval", but other than that, I don't believe
> this statement at all.
I have a function called "compose". It takes a list of functions as
arguments and returns a function which is their composition. I use it
to string together any arbitrary sequence of functions I care to, as
long as each one takes an argument which is the return type of the last.
The functions I get back can be defined as taking *any* argument type,
and producing *any* type result. How are you going to do that in a
statically typed system?
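The post doesn't show the definition, but a minimal sketch of such a compose
(taking its functions as rest arguments and applying them left to right, with
no argument or result types declared anywhere) might look like this:

(define (compose . fns)
  (lambda (x)
    (let loop ((fns fns) (x x))
      (if (null? fns)
          x
          (loop (cdr fns) ((car fns) x))))))

;; Adjacent stages only have to agree at run time:
;; a number goes in, a boolean comes out.
(define odd-digit-count? (compose number->string string-length odd?))
(odd-digit-count? 12345)   ; 12345 -> "12345" -> 5 -> #t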
Most of my library takes items of unknown types, plus functions that
provide primitive services (like equality tests) on that type. Some of
them return functions defined to work on that type without the primitive-
service functions provided in the calling frame (which makes functions of
a form better suited to "compose" for example). D'you want to write
*those* in a statically typed system?
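A minimal sketch of that pattern (the names are hypothetical, not taken from
the library being described): a constructor receives the primitive-service
procedure -- here an equality test -- and returns a procedure closed over it,
so callers never have to supply the primitive again.

(define (make-member same?)
  (lambda (x lst)
    (let loop ((lst lst))
      (cond ((null? lst) #f)
            ((same? x (car lst)) lst)
            (else (loop (cdr lst)))))))

;; Specialised membership tests, with no per-type source changes.
(define member-ci  (make-member string-ci=?))   ; case-insensitive strings
(define member-num (make-member =))             ; numbers

(member-ci "Foo" '("bar" "FOO" "baz"))   ; => ("FOO" "baz")
(member-num 3 '(1 2 3 4))                ; => (3 4)

The returned procedures never mention the equality test, which is the
property that makes them compose cleanly with something like the compose
above.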
> <snip> the real reason dynamically-typed languages
> have an advantage over statically-typed languages is because there is no
> one right type system.
Agreed.
> Safety and speed are very good things, but the
> most important thing about a type system is the semantic framework it
> provides for managing complexity. The biggest advantage Scheme has over
> SML is that it is possible to build new type systems, or type systems more
> appropriate for a given problem.
Also agreed. My programming style capitalizes heavily on that strength,
and on the ability to have nonlocal exits and closures. I suspect yours
capitalizes heavily on strengths of ML. So, both of us think we have
the "best" language. And I guess both of us do, for our particular styles.
> Also, Scheme is more fun to program in because Scheme tends to let you
> 'feel' your way to a working program, whereas SML forces you to think your
> way there.
Hm. I've always considered Scheme's real strength to be the ability to
define new primitives and have them just "work" with whatever types you
throw at them. It gives me *fantastic* code reusability. Often I just
size up a problem, then write a routine to solve it which calls a couple
dozen things from my library, and it's done. I haven't been able to do
that with any other language.
Bear
r> My impression of statically typed languages is that you have to
r> actually touch each and every routine you want to work with a new
r> type -- The better the system, the lighter the touch, but you have
r> to touch them *all*.
Yours is an incorrect impression. For example, in Haskell, I can
define type classes that may be extended by declaring certain types as
instances of those classes. I can extend the Num class (which
contains numerical types such as arbitrary-precision Integers,
fixed-precision Ints, and Floats by default) thus:
type PointG = (Int, Int)

instance Eq PointG where
  -- theta (not shown here) compares points by angle about a reference point
  p1 == p2 = theta p1 (0,0) == theta p2 (0,0)

instance Num PointG where
  (x1, y1) - (x2, y2) = (x1 - x2, y1 - y2)
  -- ... and so on ...
This code gives me a basis for implementing Graham's scanning
algorithm for finding the convex hull, and I can manipulate PointG
objects using the standard numeric operators +, - and so on.
>> SML does not produce a different implementation of a polymorphic
>> routine on a per-type basis.
r> Well then what's the point of you having to specify types in the
r> first place?
This is orthogonal to the issue of specialising for speed. I specify
types in Haskell, even though I usually don't need to because the
correct types are inferred for me, to help me reason about internal
and external consistency in my code. If I had some kind of bolt-on
type inference program that checked Scheme code in a similar manner
(as far as was possible), that would make me very happy.
r> And how does it get any speed edge if its polymorphic routines are
r> in fact untyped?
I find it hard to believe that no individual SML implementation might
specialise polymorphic functions for some types. Performance-oriented
Haskell compilers certainly do.
OK, I'll bite. Why would you want something to remain Scheme if
something better were available? Isn't there enough experience to say
that static typing, like static scoping, is the Right Thing?
There are some things that you can't typecheck statically. There are
some things that can be typechecked statically but that people haven't
figured out yet how to. Defining, implementing, and explaining a
modern static type system is a lot harder than defining, implementing,
and explaining a more powerful dynamic type system. And getting
separate compilation, runtime upgrades, and incremental development to
work with a static type checker is hard. On the other hand, static
type checking clearly is very useful during code maintenance and on
projects involving many programmers. So, it really isn't an either/or
situation: both kinds of type systems have their uses. Where language
complexity is less of an issue, modern general purpose languages seem
to move towards offering both.
Thomas.
> SML does not produce a different implementation of a polymorphic routine
> on a per-type basis.
It's worth looking at the TIL compiler being produced at CMU. The
polymorphism-by-boxing technology built into SML/NJ has been in the
process of revision for some years now, and these revisions are now
being seen in new compilers. TIL's use of intensional polymorphism
helps them do several clever things. The details are on-line:
http://foxnet.cs.cmu.edu/rwh/papers.html
'shriram
This solution, at least, doesn't beg a `sufficiently smart compiler,'
and rather falls in with Scheme's iterative programming techniques.
--
Alexander Williams {zan...@photobooks.com ||"Quiet, Mal! Can't you
tha...@alf.dec.com} || see I'm Creating!?"
============================================// -- Dr Blight
Disclaimer: I am a Scheme and Lisp lover. I simply believe that ML is
a good language (although I don't like the environments I know), and
that bashing ML from a Scheme perspective is as unproductive as
bashing Scheme or Lisp from an ML (or C++) perspective.
In article <31FA4C...@sonic.net>,
Ray S. Dillinger <be...@sonic.net> argued against ML:
RSD> My impression of statically typed languages
RSD> is that you have to actually touch each and every routine you want to
RSD> work with a new type -- The better the system, the lighter the touch,
RSD> but you have to touch them *all*.
Nope. That's the work of the compiler. You have to *recompile* them
all, but not actually change the source code. Or, in simple cases,
you can use polymorphism, and not even recompile any code.
An example of polymorphism: you have some type T, an ordering
predicate lessp:T*T->bool and a function
sort:'a list->('a*'a->bool)->'a list (where 'a is the ML notation for a
type variable). Then, if some code does:
sort l lessp
then this code will not need to be changed if you change the
representation of T. You only need to make sure that lessp is
consistent with T (and the compiler will often barf when it is not the
case).
You don't always want to be polymorphic. Suppose that you have a
complicated function munge:T->T which never accesses directly a value
of type T except through a well defined (albeit large) set of
low-level functions (such as lessp). Then, if you change the
representation of T, you will only need to change the low level
functions. Of course, unlike in Scheme, all of the code will need to
be recompiled (that's one of the reasons why I program in Lisp).
The module system (both in SML and recent versions of Caml) will help
you make sure that this encapsulation is respected (as far as I know,
there is no way to guarantee in Scheme that there are no hidden
dependencies on a particular representation). It will also allow you
to write the code in such a manner that you will not need to recompile
it, but only relink it (to reapply the functors, in ML parlance).
Would a module system for Scheme give you the same guarantees? (I
think it could be done, but not without introducing types; I seem to
remember that there was a proposal for such a beast once (Blume?)).
>> SML does not produce a different implementation of a polymorphic routine
>> on a per-type basis.
RSD> Well then what's the point of you having to specify types in the first
RSD> place? And how does it get any speed edge if its polymorphic routines
RSD> are in fact untyped?
See the example above. munge is not polymorphic, but it could easily
be recompiled when the types changed. On the other hand, sort is
polymorphic, but the type system guaranteed that applying the
functional argument to a couple of members of the list is safe, so that
no runtime checks were needed (it is not clear to me whether the
compiler needs to tag the types anyway).
(RSD then goes on to refute the myth that dynamic typing must be slow;
I absolutely agree with him)
>> You can't statically type "eval", but other than that, I don't believe
>> [that dynamic typing is any more powerful than static typing].
Dynamic typing allows you to write many programs that can't be written
in a static type system. However, usually there is another way to
express the same behaviour.
RSD> I have a function called "compose." it takes a list of functions as
RSD> arguments and returns a function which is their composition [the
RSD> functions are of arbitrary types].
This seems impossible to do directly in the SML type algebra.
However, if the types of the arguments/results are in a small finite
set, then you can take a union of the types used (by the way, that's a
simple example of monadic style; it would be clumsy to write in SML,
but quite natural in Haskell). (By the way, have you had a look at
Objective Caml yet? It's somewhere on www.inria.fr.)
>> <snip> the real reason dynamically-typed languages
>> have an advantage over statically-typed languages is because there is no
>> one right type system.
That's true from a practical point of view, but wrong from a
theoretical one. The One Right Type System for functional systems
without first class control operators is second order intuitionistic
predicate calculus.
My opinion is that static type systems provide a worse development
environment. It is more difficult to debug your code (what's the type
of that print statement you introduced for debugging?), you can't
change a function on the fly without recompiling all the functions
that use it, it is next to impossible to have a `format' function
(although the original Caml had `printf', hacked in by raping the type
system in an unsafe way), and I've never seen a good debugger for ML
(sorry, Jerome, if you're reading this). But systems such as
ML/Haskell are in their infancy. Lisp, on the other hand, has a
mature implementation technology.
This message does not represent the opinions of anyone, not even mine.
J. Chroboczek
Hey, no fair, Greg. The ML hackers got type systems; the Scheme hackers
don't.
Who says? There are at least three implemented type systems for Scheme:
Wright & Cartwright's Soft Scheme, Flanagan & Felleisen's SBA, and Siskind's
Stalin. All of them type full Scheme with no restrictions.
But you can't, as you suggested, just go add types to Scheme.
Then it wouldn't be Scheme.
Why not? Pray tell why Stalin is not Scheme?
--
Jeff (home page http://tochna.technion.ac.il/~qobi)
| Having tracked the DSSSL standard from its inception, and having worked
| in the SGML field for going on 8 years, I'm here to tell you that DSSSL
| will certainly take the commercial publishing world over (eventually),
| and probably the Web as well. As for the "annoyance" of dealing with
| SGML, I'm willing to wager that either (a) you've not done much work
| with SGML or (b) you were applying it to a problem for which it was
| either ill suited or overkill. I personally find SGML to make my life
| *much* easier and to remove orders of magnitude more annoyances than it
| adds.
you seem to need creds: I have been working with SGML since 1989, I
maintained the SGML Repository at the University of Oslo for nearly 6
years, I was a member of the ISO working group for nearly 5 years, and I
have contributed some 1500 articles to comp.text.sgml over the same years.
my main goal in working with SGML was to find ways to represent information
for arbitrary reuse (_one_ of which is printing) -- some means to let us
humans encode information such that programs could deal with some of the
complexity of the intuitive structures in our communication. my goal was
most emphatically _not_ to put ink on paper or pixels on screens, and that
is the only place where SGML has been widely used. over these years, my
annoyances with SGML grew, and when I had gotten three chapters into a book
on SGML (working title: "A Conceptual Introduction to SGML"), I realized
that I had been willing to accept such a staggeringly large number of
annoyances and flaws in this language that I could neither write further on
my book nor tell people in any other way that SGML is the Right Thing,
because it isn't.
SGML does have some good ideas, but they are neither exploited nor explored
by the user community, and they drown in useless garbage that makes the
reuse of information coded in SGML harder than in any other language that
could be used. SGML's only actual forte is that it is a standard. the
standard itself is horrendously complex and amazingly lacking in precision,
making it an ordeal to study it and understand it _completely_, which about
a dozen people did last time I checked (at the beginning of this year).
after having invested nearly 5 years of my life in SGML, I decided that it
had no future outside of print shops and government contracts, and I could
either waste my time and money not getting any of the necessary changes
through the ISO "review" process, help people encode a structure in their
information where they had even less clue and incentive to understand their
own information than they do with databases, or do something fun and
exciting and rewarding. I chose the latter.
I have an unfinished article in http://www.naggum.no/~erik/sgml/. it is
not particularly pedagogical, nor has it been polished at all. three
posters I presented at SGML '94 are under ftp://ftp.naggum.no/pub/SGML94.
| I agree, though, that DSSSL is a good example of an ISO standard "done
| right".
DSSSL is perhaps the single best standard I have seen come out of
ISO/JTC 1/SC 18 or SC 22, and I have seen a lot of them. it is unfortunate
that it was wasted on SGML. however, one may hope that DSSSL spawns some
serious redesign in SGML over the next decade. until then, read DSSSL to
find out about a very elegant solution to very complicated problems, and
design your own "input language". I have my own ideas, of course, and will
write them down during this fall.
#\Erik
| Are you saying that it is harder to encode information, even for the
| single purpose of reuse alone, in SGML as opposed to e.g. TeX,
| t/nroff, Scribe, Postscript, etc?
I don't know if I understand what Erik means, but it sure is much easier
to write a LaTeX document class than to write an SGML DTD.
--
Rolf Lindgren | "The opinions expressed above are
Sofienberggt. 13b | not necessarily those of anyone"
N-0551 OSLO | rolf.l...@psykologi.uio.no
Others have already addressed many of the points I wanted to. I just want
to pick a nit or two and then make more subjective remarks.
In article <31FA4C...@sonic.net>, "Ray S. Dillinger" <be...@sonic.net> wrote:
> [...] My programming style capitalizes heavily on [dynamic types],
> and on the ability to have nonlocal exits and closures.
SML supports closures. And although it isn't part of the standard proper,
SML/NJ supports a call/cc mechanism (and a very fancy threading
mechanism), and it is rumored that even MLWorks supports non-local exits
(one-shot continuations).
> Hm. I've always considered scheme's real strength to be the ability to
> define new primitives and have them just "work" with whatever types you
> throw at them.
As others have pointed out, SML's generic polymorphism and functors
provide you with this capability (and in a type-safe manner).
> I suspect yours
> capitalizes heavily on strengths of ML. So, both of us think we have
> the "best" language. And I guess both of us do, for our particular styles.
You've got me wrong. I think Scheme is great. I've done a lot of work
with Scheme and it is still my all-around favorite language. However, SML
does have some strengths over Scheme that may make it more appropriate
for some of the projects I'm going to be working on and so I've been
looking into it. When working with SML, yeah I really miss macros, and
yeah, I really miss subtyping and all that that implies. My 'beef' is
with what are (IMHO) unfair criticisms of SML like those above.
As Paul Wilson alluded to, the interesting discussion lies in the
middle-ground between the dynamic (latent) type camp and the strong type
camp. Strong type systems will always be behind dynamic type systems in
their ability to elegantly express new conceptual structure. The converse
of this is that if this conceptual structure isn't incorporated into the
compiler, it cannot be used for optimization, and in fact is usually a
performance burden.
Yes, one can play all sorts of code transformation tricks in macro-land to
avoid as much of this burden as possible, and you personally may be able
to make your Scheme code approach within 5-10% of the performance of g++
code with much tweaking (and yes, I would call this difference tiny), but
the fundamental problem doesn't go away. An application is not well-typed
merely because it runs.
When I talk about building a type system in Scheme, I'm talking about
imposing a structure on the system in order to manage the complexity of an
ever-changing application. I've got to build a system that multiple
programmers can extend safely and efficiently. SML has a real module
system and Scheme doesn't. Therefore, I stand by my claim that SML
promotes modularity and manages complexity better than Scheme.
***
But remember where I (and most other developers) are coming from. I've
recently come up with ways to use C++ templates to implement tagged types
(before RTTI) and even a goofy form of closure and curry. I liberally use
envelope classes. I've even got an extendible system of data structures
with built-in mark-and-sweep GC. My ego swells at my own cleverness until
I think: What the fuck am I doing? These are supposed to be non-issues!
And now the world has embraced Java, mainly because it bears a cosmetic
resemblance to C++.
I want to build real, high-performance systems and I want to use a modern,
commercially-supported programming language. As far as I can tell, my
options are: C++ (ha ha), Scheme (Chez is excellent), Common Lisp (worth
serious consideration except that Scheme has spoiled me and every time I
try to read a book on CL I feel like puking), DylanWorks (eventually),
MLWorks (eventually, but sooner than DylanWorks), Erlang (?), ...
Any others?
Scheme and SML float to the top. Given my performance requirements, SML
may have an edge.
-thant
I find that, although no guarantee of the validity or "correctness" of
a person's views, it helps if folks know that one's comments stem from
years of experience rather than days or hours of playing around with a
particular technology. I am, of course, quite familiar with your work
Erik, as would anyone having an even limited exposure to the SGML field,
and it is because of your credentials that I take extra time to respond
to your comments.
>my main goal in working with SGML was to find ways to represent
>information for arbitrary reuse (_one_ of which is printing) -- some
>means to let us humans encode information such that programs could
>deal with some of the complexity of the intuitive structures in our
>communication. my goal was most emphatically _not_ to put ink on paper
>or pixels on screens, and that is the only place where SGML has been
>widely used. over these years, my annoyances with SGML grew, and when
>I had gotten three chapters into a book on SGML (working title: "A
>Conceptual Introduction to SGML"), I realized that I had been willing
>to accept such a staggeringly large number of annoyances and flaws in
>this language that I could neither write further on my book nor tell
>people in any other way that SGML is the Right Thing, because it isn't.
I will certainly concede that SGML may not be the "ideal" technology
for describing and encoding *all* forms of human communication -- and
that it very well may not have met your ultimate needs as a technology,
but perhaps the fact that it *is* widely used in the publishing world
is a clear indication that it is a very good technology for that
subset of human communication. Whether SGML is "the Right Thing"
depends on what one wants to do with it.
>SGML does have some good ideas, but they are neither exploited nor
>explored by the user community, and they drown in useless garbage that
>make the reuse of information coded in SGML harder than any other
>language that could be used.
I find this comment very difficult to swallow. Are you saying that it
is harder to encode information, even for the single purpose of reuse
alone, in SGML as opposed to e.g. TeX, t/nroff, Scribe, Postscript,
etc? Or that information in SGML is more difficult to use than any
other encoding scheme? Again, this depends on your needs and goals.
I am concerned that your criticisms may be appropriate from the perspective
of your higher goals, but may not be particularly appropriate within
the scope of publishing technology -- and, despite it's shortcomings,
SGML is the "best" thing going at the moment for lessening the cost
and complexity of publishing and providing a significant increase in
reuse and flexibility in the use of information in published titles.
>SGML's only actual forte is that it is a standard. the standard itself
>is horrendously complex and amazingly lacking in precision, making it
>an ordeal to study it and understand it _completely_, which about a
>dozen people did last time I checked (at the beginning of this year).
True. SGML *is* very complex, and I wish it were much less complex. There
are certainly many parts of SGML for which I lack complete understanding;
however, the parts that I do understand provide significant returns on
investment when applied to the problems I have on hand to solve. Just
because a technology is not perfect does not mean it is not useful, nor
in fact "the Right Thing" for solving *particular* problems in the
here and now.
>after having invested nearly 5 years of my life in SGML, I decided
>that it had no future outside of print shops and government contracts,
*Precisely*, by your own admission, SGML *has* a future in the printing
world -- and in fact *is* the future of the printing world. And it is
this future that DSSSL will fuel.
>and I could either waste my time and money not getting any of the
>necessary changes through the ISO "review" process, help people
>encode a structure in their information where they had even less clue
>and incentive to understand their own information than they do with
>databases, or do something fun and exciting and rewarding. I chose the
>latter.
>
>I have an unfinished article in http://www.naggum.no/~erik/sgml/.
>it is not particularly pedagogical, nor has it been polished
>at all. three posters I presented at SGML '94 are under
>ftp://ftp.naggum.no/pub/SGML94.
I hope to read these articles in the near future to get a better
understanding of where you are trying to go.
>| I agree, though, that DSSSL is a good example of an ISO standard
>| "done right".
>
>DSSSL is perhaps the single best standard I have seen come out of
>ISO/JTC 1/SC 18 or SC 22, and I have seen a lot of them. it is
>unfortunate that it was wasted on SGML. however, one may hope that
>DSSSL spawns some serious redesign in SGML over the next decade. until
>then, read DSSSL to find out about a very elegant solution to very
>complicated problems, and design your own "input language". I have my
>own ideas, of course, and will write them down during this fall.
I'm sure I'm not the only one waiting to see what the new dawn of
information technology will give birth to, but the majority of us
must use what is here now, and despite its shortcomings, SGML (and
DSSSL) is *it*.
I eagerly await your vision for the successor to SGML. I'm
sure it will be an improvement over the present technologies we must
slog onwards with.
My motivation for answering your post at length is that, given your
credentials, you have the potential to direct and influence individuals
coming into contact with SGML and DSSSL, and I would hope that, before
filling their ears with your higher vision of information modelling
(however needed and valid), you would ask yourself whether doing so
would do those individuals an injustice by steering them
away from a technology that would best address their *immediate*
needs.
One can only move towards perfection on a foundation of imperfection,
otherwise, we'd already be there. SGML and DSSSL are valid and useful
steps in that direction, and offer needed solutions now until we get
further down the path.
Regards,
Patrick Stickler
] > But who cares? James Clerk Maxwell was once lecturing on theories of light.
] > There are two general models for light, he said: particle and wave. People
] > used to think of light as a stream of tiny particles. But they are all dead.
] > If you see where I'm headed...
]
] Sorry, I don't. Are you suggesting Scheme v. ML == particle v. wave
] theory? My impression is that both models are still used intuitively
] under specific circumstances, even by physicists who know better.
] Under what circumstances, in your view, should one prefer Scheme over
] ML or an ML-like successor?
I think he means you don't try to change Scheme into ML in
comp.lang.scheme, you just stop reading comp.lang.scheme and start
reading comp.lang.ml. In a year or two you check the volume in
comp.lang.scheme, and if it is zero you did the right thing.
--
David Fox http://found.cs.nyu.edu/fox xoF divaD
NYU Media Research Lab f...@cs.nyu.edu baL hcraeseR aideM UYN
First, it makes no sense to state that a language does or does not
implement something in a certain way. Second, the TIL/ML compiler (PLDI
'96) does generate type-specific code using a technique called
intensional type analysis (POPL '95).
>
> You're probably right, but the real reason dynamically-typed languages
> have an advantage over statically-typed languages is because there is no
> one right type system. Safety and speed are very good things, but the
> most important thing about a type system is the semantic framework it
> provides for managing complexity. The biggest advantage Scheme has over
> SML is that it is possible to build new type systems, or type systems more
> appropriate for a given problem.
>
This discussion is confused. Statically typed languages completely
dominate dynamically typed languages in every respect. Why? Because
dynamically typed languages are statically typed languages that insist
on using precisely one (static) type (eg, the recursive type of Scheme
values), ignoring the richness of the rest of the type system. Put
another way, Scheme is but one tiny little corner of ML; if you want,
you can stay in that corner (face in), but most people prefer to take
advantage of the additional expressiveness available to them in ML.
(But you can choose to ignore it.)
> Also, Scheme is more fun to program in because Scheme tends to let you
> 'feel' your way to a working program, whereas SML forces you to think your
> way there. But we have to remember that they're both so far beyond C++
> that it's silly to argue about them.
>
There's truth to both statements. As to the former, the Scheme parser /
front end is biased towards embedding what is typed into the tiny Scheme
corner of the world, so it makes it simpler to use (there's only one
type, so no possibility of type errors) but terrible to modify or
maintain (there's only one type, so no information about what's going on
is recorded in the code). As to the latter, no kidding!
Bob Harper
> In article <qijwwzrt2...@lambda.ai.mit.edu>,
> Olin Shivers <shi...@ai.mit.edu> wrote:
> > Scheme language development seems to have stalled. Why is Scheme
> > worth developing in, despite the absence of the features you list? Or
> > do you expect that Scheme will eventually include such features?
> >
> >I think the Scheme community may yet recover momentum. The folks down at
> >Rice are doing really interesting work with Scheme, Danvy has a draft R5RS,
> >and Indiana, as always, holds the torch high.
> > -Olin
I missed the first part of this discussion and will have to jump in like this. My apologies.
Scheme is a fine language for programming in the small, for concept communication, and for
embedded development. I personally think it is stupid for people to create VRML, HTML,
PostScript, Forth, TCL, or the like. (Now, how many people are left that I have not insulted
yet? :-) Every one of them should simply switch to Scheme, or some extension of it.
That said, Scheme is not a good language for large commercial applications. There are still
no standard libraries for Scheme that cover GUI, graphics, networking, databases, etc. The
performance is next to impossible to tune, or predict. Most importantly, too many
programmers (some very talented ones included) loathe the Lisp-like syntax ("those parentheses
make me dizzy").
But this is fine. I like Scheme to stay in academia and be pure. It is the place I
frequently come back to, after months of kludges and hacks. I think there are enough like me
to keep Scheme alive, well, and kicking.
--
Sin-Yaw Wang, sin...@cris.com
In article <4t8ag1$r...@ix.cs.uoregon.edu> bh...@ix.cs.uoregon.edu (B. Robert Helm) writes:
] > But who cares? James Clerk Maxwell was once lecturing on theories of light.
] > There are two general models for light, he said: particle and wave. People
] > used to think of light as a stream of tiny particles. But they are all dead.
] > If you see where I'm headed...
]
] Sorry, I don't. Are you suggesting Scheme v. ML == particle v. wave
] theory? My impression is that both models are still used intuitively
] under specific circumstances, even by physicists who know better.
] Under what circumstances, in your view, should one prefer Scheme over
] ML or an ML-like successor?
I think he means you don't try to change Scheme into ML in
comp.lang.scheme, you just stop reading comp.lang.scheme and start
reading comp.lang.ml. In a year or two you check the volume in
comp.lang.scheme, and if it is zero you did the right thing.
You can also do what I do now: keep reading c.l.s. for its entertainment
value...
:-)
--
-Matthias
I'd guess that 90%+ of LaTeX users don't write new document styles; they
just write documents using the pre-defined styles available to them.
It ought to be the same for SGML, surely? Why would the average person
want to write a DTD?
mathew
--
http://www.pobox.com/~meta/
> Scheme lets you invent your own modularity mechanism, and your own
> abstraction-layering techniques. I don't know how to write a metaobject
> protocol for ML in ML, but I know how to write one for Scheme, in Scheme.
Really? I was under the impression that Scheme is not OO. Any MOP
you can write for Scheme you can also write for ML.
> Until ML has a nice object system and a really, really nice recursive
> module system, I'm not buying it.
None of those are available in Scheme either.
> (I know, people are working on those
> things, and have been for years, and they're making progress. In the
> meantime, we can easily add macros, metaobjects, and funky templates
> to Scheme without batting an eye. I think ML's type system would be
> a pain in the ass for what we want to do.)
Well, I don't know what you want to do, but I found that whenever I
thought the ML type system to be a pain in any one of my body parts I
was doing something stupid, something that shouldn't be done in the
first place. (Of course, this might just be a lack of imagination on
my part that I can't think of smart things to do which don't work with
ML.)
> It seems to me that there's an unfortunate split between the strong typing
> camp and the dynamic typing camp. Anybody who knows The Truth knows that
> strong typing can be great 95% of the time, but is a major obstacle
> 5% of the time. (Pick your own probabilities---it might be 80/20 or
> 99/1, but the point stands.)
It seems to be quite close to 100/0 for me.
> Type theorists are always behind the curve, and for fundamental
> reasons always will be.
This depends on which direction one thinks progress is being made in.
Type theorists might very well be ahead of the curve. I don't know
any fundamental reasons why *that* should always be, though.
> Too many type system designers just don't like to admit that their wonderful
> type systems are only a 95% solution, and that 5% of the time you're
> simply hamstrung.
Maybe this is because it is simply not true for them. Why admit
something that is wrong according to your own experience?
> >SML does not produce a different implementation of a polymorphic routine
> >on a per-type basis.
> If you can't use eval, or *macros* you're toast.
Oops. So you say: when you use Scheme, you are toast. Neither eval
nor macros are part of Scheme (yet). In the case of eval I wish it
would stay that way. Macros (the hygienic variety) are kinda
interesting, though.
> >You're probably right, but the real reason dynamically-typed languages
> >have an advantage over statically-typed languages is because there is no
> >one right type system.
> Precisely. You should be able to build your own type system for your
> own domain-specific programming environment, which is what all Serious
> Programs evolve into.
Whatever type system you think you can build in Scheme -- the same can
easily be done in ML. As Bob Harper pointed out in a different reply
-- Scheme is a *sub*set of ML, after all -- it is ML restricted to a
pre-defined recursive datatype!
> Right. The problem with Scheme for Real Work is that it doesn't have
> the right framework for building your own type system. (It's a lot of
> work to erect a flexible, extensible type system framework, and *then*
> erect your own domain-specific type system.) The problem
> with ML for Real Work is that it doesn't have the right type system,
> and won't let you build your own.
Why such cheap shots? ML has the right type system, at least for some
people, and they even get work done. And anything that can be done in
Scheme can obviously also be done in ML.
--
-Matthias
> Why would the average person want to write a DTD?
He or she wouldn't normally. However, when there is a need to express
an information complex or idea structure for which there is no common
standard DTD, it should be possible -- even natural -- to communicate
the structure as well as the content of the message clearly.
-tih
--
Tom Ivar Helbekkmo
t...@Hamartun.Priv.NO
> Why such cheap shots? ML has the right type system, at least for some
> people, and they even get work done. And anything that can be done in
> Scheme can obviously also be done in ML.
Why such cheap shots? Scheme has the right type system, at least for
some people, and they even get work done. And anything that can be
done in ML can obviously also be done in Scheme.
(No? I'll await your expressiveness argument in my mailbox. TeX,
LaTeX and PS are all welcome.)
Before you accuse me of quoting you out of context, here's what Paul
Wilson said that you responded to:
> Right. The problem with Scheme for Real Work is that it doesn't have
> the right framework for building your own type system. (It's a lot of
> work to erect a flexible, extensible type system framework, and *then*
> erect your own domain-specific type system.) The problem
> with ML for Real Work is that it doesn't have the right type system,
> and won't let you build your own.
Sounds a lot like, "Scheme doesn't have enough of a framework for a
type system, and ML doesn't have the right one for me". My own
experiences with using H-M variants and other "type" systems have been
similar; I like the information in "types", but don't like that of
H-M, so I use SBA variants. Cheap shot?
Contrary to what your newsreader probably says, this _is_
comp.lang.scheme. I'd be as happy as anyone here to hear out any
_new_ arguments you've got. What are they?
'shriram
> bl...@zayin.cs.princeton.edu (Matthias Blume) wrote:
> > Why such cheap shots? ML has the right type system, at least for some
> > people, and they even get work done. And anything that can be done in
> > Scheme can obviously also be done in ML.
> Why such cheap shots? Scheme has the right type system, at least for
> some people, and they even get work done. And anything that can be
> done in ML can obviously also be done in Scheme.
I didn't say anything to the contrary. And I wasn't making a cheap
shot either.
> Contrary to what your newsreader probably says, this _is_
> comp.lang.scheme. I'd be as happy as anyone here to hear out any
> _new_ arguments you've got. What are they?
I didn't claim to have any _new_ arguments. The old ones I've got are
sufficient for me. I can certainly say I know both languages fairly
well, and I have come, over a long time, to like one more than the other. And
it is precisely because of its type system. If this isn't convincing
to you -- so be it.
And `Cheap shot?' you ask? Maybe I should have said: `Unfounded
assertion' as in `The problem with ML for Real Work is that it doesn't
have the right type system, ... ', which is certainly not true,
because Real Work does get done in ML.
--
-Matthias
>This discussion is confused. Statically typed languages completely
>dominate dynamically typed languages in every respect. Why? Because
>dynamically typed languages are statically typed languages that insist
>on using precisely one (static) type (eg, the recursive type of Scheme
>values), ignoring the richness of the rest of the type system. Put
>another way, Scheme is but one tiny little corner of ML; if you want,
>you can stay in that corner (face in), but most people prefer to take
>advantage of the additional expressiveness available to them in ML.
>(But you can choose to ignore it.)
This argument is true only in a very strange sense. If it were true in
a straightforward sense, all Scheme programs would transliterate
trivially into (natural) ML programs, but they don't. Transliterating
Scheme programs into ML programs with one type involves making type
tags explicit in the source program, which is not pretty and definitely
not useful.
A wide misconception among (or misstatement by) static typing
proponents is that Scheme is not strongly typed. In fact, it is, but
the types are checked at run time, allowing fewer programs to be
rejected, which makes Scheme more rather than less general, and makes
ML "but one tiny little corner" of Scheme, if we want to resort to such
pejorative statements. Static typing is a useful tool for structuring
code and facilitating early detection of some program errors. It does
make a language richer in the sense that it allows expression of more
static types (of course), but not in the more obvious sense that it
allows more algorithms to be expressed easily.
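A minimal illustration of that trade-off, as a sketch in SML (the names
here are mine): a dynamically checked language will run a function whose
branches produce different kinds of values, so long as no bad combination
ever reaches a primitive; SML rejects the program at compile time unless
the union is named and tagged explicitly.

(* Rejected by SML's checker even if the string branch is never taken:
     fun f b = if b then 1 else "one"
   The statically typed version names the union and tags each case: *)
datatype num_or_str = N of int | S of string
fun f b = if b then N 1 else S "one"

(* clients then check the tag explicitly, which is the check Scheme
   performs implicitly at run time *)
fun show (N n) = Int.toString n
  | show (S s) = s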
Kent
Well, Scheme is strongly typed in the sense that it detects all
violations of its _own_ type system. However, there is no convenient,
standard way to keep it from violating _my_ type system. To do that
in R4RS/IEEE Scheme, I have to write code to store and check type
tags, just as I would in the hypothetical "one-type" ML translation of
my Scheme program.
Is there some language in which that isn't true? I mean, if you say
(+ 3 'foo) in any language, surely an error must result, right?
[This is an ingenuous request for information, not a disguised argument
on either side of the Scheme/ML debate.]
% perl
print 3 + 'foo';
3
Yeah, right... :-)
All Scheme programs *do* transliterate directly into ML. It is a very
simple matter to write a parser that translates Scheme code into ML
code. Every Scheme program will have type sv, where sv is a recursive
datatype of the form
datatype sv = Nil | Cons of sv * sv | Func of sv list -> sv | ...
The translation will make all tag checking and tag creation explicit,
precisely as in a Scheme compiler. For example,
exception RuntimeTypeError
fun car (Cons (a, _)) = a
  | car _ = raise RuntimeTypeError
and so forth. It is entirely faithful to the Scheme semantics, and can
be optimized precisely the same way as a Scheme compiler might.
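To see what making "tag creation explicit" means concretely, here is my own
cut-down sketch (a Num constructor standing in for one of the elided cases
above), together with the fully tagged translation of the Scheme expression
(car (cons 1 2)):

(* minimal stand-in for the sv datatype; every value is built and
   taken apart via explicit tags *)
datatype sv = Nil | Num of int | Cons of sv * sv
exception RuntimeTypeError
fun car (Cons (a, _)) = a
  | car _             = raise RuntimeTypeError
val example = car (Cons (Num 1, Num 2))   (* ==> Num 1 *)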
As long as you stick with writing Scheme code, you'll never know the
difference. As soon as you try to step outside the strictures of Scheme
(as you certainly will), you will notice the overhead of all the tag
manipulations, and move instead to programming with many types, rather
than one type. In truth few people actually believe in the doctrine of
one true type. As soon as you question this article of faith, you
quickly realize that you prefer to have a much more refined type system
as exemplified by ML. And you realize that all this tag hacking is
utterly pointless.
> A wide misconception among (or misstatement by) static typing
> proponents is that Scheme is not strongly typed. In fact, it is, but
> the types are checked at run time, allowing fewer programs to be
> rejected, which makes Scheme more rather than less general, and makes
> ML "but one tiny little corner" of Scheme, if we want to resort to such
> pejorative statements. Static typing is a useful tool for structuring
> code and facilitating early detection of some program errors. It does
> make a language richer in the sense that it allows expression of more
> static types (of course), but not in the more obvious sense that it
> allows more algorithms to be expressed easily.
>
I have always maintained that Scheme is a typed language. In fact,
Scheme is a statically typed language. But since it has only one type,
the static checking is trivialized.
Tag checking is not type checking. (Sorry to quibble about terminology,
but we have to make distinctions if we are to get anywhere.) Tags
cannot possibly supplant static types. For example, you cannot achieve
proper data abstraction in Scheme precisely because data abstraction is
a matter of static typing, and no amount of tag hacking can replace it.
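To make "data abstraction is a matter of static typing" concrete, here is a
small sketch of my own (the module names are invented): the opaque signature
ascription below hides the representation of the type, and the compiler,
statically, prevents every client from forging or inspecting a value of it.
No amount of run-time tag checking gives you that guarantee.

signature COUNTER =
sig
  type t
  val zero : t
  val inc  : t -> t
  val get  : t -> int
end

(* ":>" is opaque ascription: outside this structure, Counter.t is
   abstract, even though it is represented as an int inside *)
structure Counter :> COUNTER =
struct
  type t = int
  val zero = 0
  fun inc c = c + 1
  fun get c = c
end

(* Counter.get (Counter.inc Counter.zero)  ==> 1, well typed;
   Counter.inc 17 is rejected at compile time: an int is not a Counter.t *)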
Besides the fact that there is only a fixed set of tags in Scheme,
there is a further weakness of tag-based pseudo-replacements for a type
system: enforcement *must* be computable. This is a very serious
limitation not shared by static typing disciplines. Even if we stay
within decidable conditions, the run-time overheads of repeated tag
checks on values (not to mention the very association of tags with
values) are preposterous.
In anticipation of the obvious remark, let me say also that languages
with the ability to generate "new" tags at run-time are ALSO static type
disciplines, and perfectly transliteratable into ML along precisely the
same lines.
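Here is a sketch of my own of how such run-time tag generation already lives
inside ML's static discipline: the built-in exn type is an extensible
datatype whose constructors are generated freshly each time their
declaration is evaluated, so fresh tags can be minted and checked without
abandoning static types.

(* each call of newTag evaluates a local exception declaration and so
   mints a distinct tag; only the matching project recognizes it *)
fun newTag () =
  let
    exception Tag of int
    fun inject n = Tag n
    fun project (Tag n) = SOME n
      | project _       = NONE
  in
    (inject, project)
  end

(* val (inj1, prj1) = newTag ()
   val (inj2, prj2) = newTag ()
   prj1 (inj1 3)  ==>  SOME 3
   prj1 (inj2 3)  ==>  NONE      -- a different, freshly generated tag *)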
Summary: ML is strictly more expressive than Scheme because Scheme can
be faithfully transliterated into ML, whereas the reverse is impossible.
> Kent
Bob
The successor to the 'B' language allows this: for example,
vincent% cat foo.c
#include <math.h>
#include <stdio.h>
main() {printf("%d\n", sin("foo"));}
vincent% cc foo.c -lm
vincent% ./a.out
2147465140
And some people even use this language (or so I hear).