Whither now, Oh Scheme.


R Racine

Oct 26, 2003, 9:45:33 PM
We are amidst a computer programming language renaissance. New languages
and lots of new users to play with them. Daily it seems.

12-18 months ago the #scheme IRC channel on freenode.net was essentially
abandoned. Often it was just myself or one other, on occasion a crowd of 3.
As I type, on a Sunday evening, there are 40 users.

Discussions on #scheme are generally threefold in nature: SRFIs, as
several SRFI authors are frequently on; a bit of homework or Scheme-newbie
assistance; and, as one might imagine, a good deal of comparative
implementation discussion.

There seems to be a consensus Scheme (or Schemes) for every situation except
one: serious application development. And a number of Schemers are interested
in doing just that. A case can be made that a SIFSAD (Scheme Intended For
Serious Application Development) does not exist today; what is worse, it
is doubtful one will exist tomorrow.

But suppose there was a plan for SIFSAD, a roadmap for a Scheme Intended
For Serious Application Development. What would it look like? You would
have to start from somewhere and have a destination in mind; the path
becomes just a bit of machete work.

Assuming the best opportunity for Sifsad is an evolutionary one from the
core of an existing Scheme implementation, here are some hypothetical
Sifsad bios.

Scheme 48 / PreScheme compiler - The PreScheme compiler is resurrected,
initially emitting C code. Later, native emitters for Itanium/AMD 64-bit
systems are added.

PLT/MzScheme - mzc compiler is enhanced with aggressive optimizations.
MzScheme becomes not only one of the functionally richest implementations
but the fastest as well.

Chicken/Bigloo/Gambit/Larceny/Scheme->C et al. - Consensus is reached on one
code base; the remaining authors, recognizing the will of the Scheme
community, work to add the best features of each into the common code base.
The resulting Scheme->C compiler is widely regarded as the best HLL compiler
available.

Chez Scheme - Individual licenses are made available at reasonable cost.
Source is GPL'd for non-commercial use.

MIT Scheme - A port to new 64-bit systems is successfully achieved. A module
system and syntax-case support are added. With memory constraints lifted,
development of lightning-fast, large-memory-footprint applications is
possible in an incremental compilation environment.


Whither now, Scheme.

felix

Oct 27, 2003, 2:25:47 AM
"R Racine" <r...@adelphia.net> wrote in message news:<pan.2003.10.27....@adelphia.net>...

>
> There seems to be a consensus Scheme(s) for every situation except one,
> serious application development. And a number of Schemers are interested
> in doing just that. A case can be made that a SIFSAD (Scheme Intended For
> Serious Application Development) does not exist today, what is worse, it
> is doubtful one will exist tomorrow.

Could you elaborate on that? Why do you think (say) Bigloo or PLT
might not be suitable for serious app development?

>
> But suppose there was a plan for SIFSAD, a roadmap for a Scheme Intended
> For Serious Application Development, what would it look like. You would
> have to start from somewhere, have a destination in mind, the path
> becomes just a bit of machete work.
>
> Assuming the best opportunity for Sifsad is an evolutionary one from the
> core of an existing Scheme implementation, here are some hypothetical
> Sifsad bios.
>
> Scheme 48 / PreScheme compiler - PreScheme compiler is resurrected,
> initially emitting C code. Later native emitters Itanium/AMD 64 bit
> systems were added.

PreScheme might not be anyone's favorite Scheme dialect.

>
> PLT/MzScheme - mzc compiler is enhanced with aggressive optimizations.
> MzScheme becomes not only one of the functionally richest implementations
> but the fastest as well.

Interesting alternative. But Mzc still has to provide clean interfacing
to the MzScheme runtime system, which is not really tuned for
maximum performance, but for other things (debuggability, ease of use,
robustness, etc.)

>
> Chicken/Bigloo/Gambit/Larceny/Scheme->C et al. Consensus is reached on one
> code base, remaining authors, recognizing the will of the Scheme community
> work to add the best features of each into the common code base. The
> resulting Scheme->C compiler is widely regarded as the best HLL compiler
> available.

(BTW, Larceny is not a Scheme->C compiler)

So that would mean we reduce all Scheme->C compilation strategies down
to the lowest common denominator:

- drop Chicken's fast continuations
- drop Gambit's (forthcoming) very efficient multithreading system
- drop Bigloo's/Scheme->C's direct compilations style and make it a CPS compiler
(you want 1st class continuations and TCO, right?)

What you will get is a Scheme implementation that is either unusable,
incomplete or inefficient.

>
> Chez Scheme - Individual licenses are made available at reasonable cost.
> Source is GPL'd for non-commercial use.

Hm. Can't say much about that...

>
> MITScheme - Port to new 64 bit systems is successfully achieved. Module
> system, syntax-case support is added. With memory constraints lifted,
> development of lightning fast, large memory footprint application are
> possible in an incremental compilation environment.

What many people don't realize is that there CAN BE NO SINGLE ALL-POWERFUL
SCHEME implementation. Tradeoffs have to be made, unless you want to
produce a mediocre one. Chicken (for example) will never beat Bigloo, in
terms of raw performance, yet Bigloo's (or PLT's) continuations are
awfully inefficient. Damn, it's even impossible to pin down a single
perfect implementation strategy (Cheney-on-the-MTA? Direct style?
Trampoline style? Bytecode VM? Threaded VM?). What GC? Conservative?
Ref. counting? Stop-and-copy? Mark-and-sweep? Which is best? Or,
more importantly, which is best for *all* applications? None, I'd say.
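Of the strategies listed, trampoline style is the easiest to sketch (again a
hypothetical illustration in Python): compiled tail calls return thunks
instead of calling directly, and a driver loop bounces until a final value
appears, giving proper tail calls without growing the host stack.

```python
# Trampoline style: a tail call is represented as a zero-argument thunk;
# the driver loop keeps invoking thunks until it gets a non-callable value.
def trampoline(bounce):
    while callable(bounce):
        bounce = bounce()
    return bounce

def even_p(n):
    return True if n == 0 else (lambda: odd_p(n - 1))

def odd_p(n):
    return False if n == 0 else (lambda: even_p(n - 1))

# 100000 mutual tail calls would overflow Python's stack if made
# directly; via the trampoline they run in constant stack space.
assert trampoline(lambda: even_p(100000)) is True
```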

Several Scheme implementations are more than adequate for serious
development, and people use them for that. In fact, Schemes generally
provide better performance and often have better foreign function interfaces
than languages like Python, Ruby or Perl, which seem to be well accepted for
serious stuff. Scheme is more rigorously defined, is better suited to
compilation and provides incredibly powerful syntactic abstractions.

It *is* easy to get lost in the number of implementations, and many
of those are somewhat half-finished, partly because it's so easy
to whip up a simple Scheme, yet this has absolutely nothing to do
with Scheme not being ready for development of real-world code.


cheers,
felix

felix

Oct 27, 2003, 4:49:14 AM
fe...@proxima-mt.de (felix) wrote in message news:<e36dad49.03102...@posting.google.com>...

>
> (BTW, Larceny is not a Scheme->C compiler)
>

Or is petite larceny already available?
It seems it isn't, but I may be wrong.


cheers,
felix

R Racine

Oct 27, 2003, 8:28:37 AM

On Sun, 26 Oct 2003 23:25:47 -0800, felix wrote:

> What many people don't realize is that there CAN BE NO SINGLE
> ALL-POWERFUL SCHEME implementation. Tradeoffs have to be made, unless
> you want to produce a mediocre one. Chicken (for example) will never
> beat Bigloo, in terms of raw performance, yet Bigloo's (or PLT's)
> continuations are awfully inefficient. Damn, it's even impossible to pin
> down a single perfect implementation strategy (Cheney-on-the-MTA? Direct
> style? Trampoline style? Bytecode VM? Threaded VM?). What GC?
> Conservative? Ref. counting? Stop-and-copy? Mark-and-sweep? Which is
> best? Or, more importantly, which is best for *all* applications? None,
> I'd say.
>
> Several Scheme implementations are more than adequate for serious
> development and people use it for that. In fact, Schemes generally
> provide better performance and often have better foreign function
> interfaces than languages like Python, Ruby or Perl, which seem to be
> well accepted for serious stuff. Scheme is more rigorously defined, is
> better suited to compilation and provides incredibly powerful syntactic
> abstractions.
>
> It *is* easy to get lost in the number of implementations, and many of
> those are somewhat half-finished, partly because it's so easy to whip up
> a simple Scheme, yet this has absolutely nothing to do with Scheme not
> being ready for development of real-world code.
>
>
>

In my previous post I mentioned a threefold path to Nirvana: determine a
starting point, define an endpoint, get the machete ready. To properly
select an implementation to evolve into Sifsad, it only makes sense to
select the implementation that is best to build off of. There is a
distinct chance that the "best" implementation to move forward with is not
even one of the top 2 or 3 implementations used today.

So a priori, agreed, no debate: compromises must and will occur. However,
I will debate whether a) it is possible to effectively determine which
tradeoffs to select if the end goal is adequately defined, b) compromises
cannot be ameliorated by modular code design, and c) such tradeoffs
inevitably result in mediocrity.

For example, the end goal is defined:
- Speed of application. Very important.
- Efficient use of large amounts of memory. Very important.
- Full debugging. Continuation restarts ???
- Core fullblown MOP. Highly optimized dispatch.
- Modules, standalone compilation, interfaces/signatures (also parametric
interfaces/signatures) and runtime-determinable implementations. [Imagine
the SRFI-44 debate on the definition of a collections library in the light
of SIG/UNITs or Scheme48/Chez interfaces or SML sigs....]
- Standalone, static exe capability.
- Real multithreading capable of utilizing multiple processors.
- So on and so forth...

The point is, define the goal and tradeoffs become a debate in the context
of what is necessary to achieve the goal.

Another point: Larceny [as you correctly pointed out, it is not just a
Scheme->C system; later tonight I intend to post on why proposing Larceny
makes sense] has 5-6 different GC systems. The Larceny core is very
well designed and supports pluggable GC systems. What is the penalty for
this flexibility? I doubt the efficiency of the Twobit-compiled code is
impacted. PLT also has 2 GC/VM systems. Such things can be abstracted in
the code base to support multiple solutions and pluggability with minimal
impact.

Bottom line: I believe it IS possible to allow for flexible, pluggable
strategies for many of the issues you raised, such as the various VM
strategies.

Couldn't you, being well versed in the Cheney-on-the-MTA approach, either
show that this approach is decidedly superior to the MzScheme approach,
or that it is a must-have option in Sifsad, and then assist in adding it to
MzScheme? (Assuming MzScheme makes sense as the base system.)

Must two or more(!) Scheme distributions exist, complete with different
runtimes and libraries, predicated on the single point of bifurcation of
how continuation capture occurs??!!

In the context of doing comparative analysis via two small experimental
systems, yes. In the world of the application developer, where the method
of continuation capture is invisible, it is decidedly not justification
for forking two full-blown Scheme systems. Just capture the damn things,
make it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
much.


Regards,


Ray

Scott G. Miller

Oct 27, 2003, 9:21:52 AM

> In the context of doing comparative analysis via two small experimental
> systems yes. In the world of the application developer where the method
> of continuation capture is invisible, it is decidedly not justification
> for forking two full-blown Scheme systems. Just capture the damn things, make
> it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> much.

I hope I'm wrong, but it seems you have a simplified view of Scheme
architectures. Continuation capture is probably *the* fundamental
feature that drives selecting the implementation strategy. One cannot
have a modular continuation-capture implementation. That's why systems
with slow call/cc are unlikely to get much better without rearchitecting
themselves at a low level.

If I'm using continuations heavily, I'm going to want to choose an
implementation with that property. If I'm not using them at all, but I
demand high performance otherwise, then I'm likely to make a completely
different choice. It's these sorts of tradeoffs which make Sifsad a bad
idea. You should ask yourself what the real problem is that prevents
serious application development. I would argue that it's the lack of a
large (standard? maybe) library. This means covering things such as
usable GUI toolkits, extensive database connectivity, mature threading,
networking, data structures... the sort of things career programmers take
for granted from the platform libraries of C++ or Java.

The fallacy is believing this is only possible if we standardize on one
Scheme.

Scott

Ray Dillinger

Oct 27, 2003, 12:30:59 PM
"Scott G. Miller" wrote:
>
> > In the context of doing comparative analysis via two small experimental
> > systems yes. In the world of the application developer where the method
> > of continuation capture is invisible, it is decidedly not justification
> > for forking two full-blown Scheme systems. Just capture the damn things, make
> > it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> > much.
>
> I hope I'm wrong, but it seems you have a simplified view of Scheme
> architectures. Continuation capture is probably *the* fundamental
> feature that drives selecting the implementation strategy. One cannot
> have a modular continuation capture implementation. Thats why systems
> with slow call/cc are unlikely to get much better without rearchitecting
> themselves at a low level.

Partly.... A scheme that compiles to a well-designed intermediate
form could have two back-ends; one that heap-allocates and garbage
collects call frames, and one that uses the hardware stack. These
back-ends would generate code that obeyed two different runtime
models, but there's also a "tail" end -- peephole optimization of
machine code -- that could be shared between them. The runtime
symbol table and associated code could also be shared between the
two models.

So you'd wind up duplicating maybe half of a simple compiler to
accommodate the fundamentally different designs. And effort spent
on the crankiest and most bottomless, nonportable areas -- machine
code and cache optimization -- would be sharable. By the time
you'd done aggressive optimizations and ported to a half-dozen
different hardware/OS combinations, the duplicated effort might
be a tenth or less of the compiler.

From a compilation point of view, it's easy to scan scheme code and
see if you can find places where call/cc is ever used. You could
make a first-order choice of which backend to invoke just by
checking for it. But the right thing to do would be to profile
it at the intermediate-code level and make a hard assessment of
which model is a "win" for the given program.
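The first-order check described above, scanning the program for uses of
call/cc before picking a backend, is simple to sketch. Here is a toy version
over parsed s-expressions (Python used purely for illustration; the names
are hypothetical, not from any real compiler):

```python
# Toy first-order backend selection: walk an s-expression tree
# (represented as nested lists of symbol strings) and report whether
# call/cc ever appears anywhere in the program.
def uses_call_cc(sexp):
    if isinstance(sexp, list):
        return any(uses_call_cc(sub) for sub in sexp)
    return sexp in ("call/cc", "call-with-current-continuation")

program = ["define", ["f", "x"],
           ["call/cc", ["lambda", ["k"], ["k", "x"]]]]

# Programs that never touch call/cc could get the stack-based backend;
# this one would be routed to the heap-allocating one.
assert uses_call_cc(program)
assert not uses_call_cc(["+", "1", "2"])
```

As the post notes, this is only a first-order choice; profiling at the
intermediate-code level would give a sounder basis for the decision.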

Much of what would need to be done can only be done in scheme as
a result of whole-program optimization. And that means getting
program code away from the REPL, because as long as you have the
REPL in the system, you absolutely cannot prove that something
isn't going to be redefined or mutated. It also means very
serious support for optional declarations to eliminate unnecessary
typechecks and very serious support for memory and CPU profiling.

Finally, we really *really* need a linkable object file format
that we don't have to go through an FFI for. FFIs distort or
contort the meaning of scheme code; they introduce special cases,
cause wraparound or length errors in integers, truncate complex
numbers, create exceptions to garbage collection handling, and
wreak all kinds of misfits with the runtime model. We paper over
the problems reasonably well, but still they never quite work
right. When scheme programs link to scheme libraries they shouldn't
need to use braindead C calling conventions.


> The fallacy is believing this is only possible if we standardize on one
> Scheme.

I think maybe there needs to be a 'SIFSAD' standard, above and
beyond R5RS, that specifies a lot of things R5RS doesn't specify.
I'd like to see a bunch of people implement it, much as a bunch
of people have implemented R5RS.

A SIFSAD standard would expressly forbid some of the things that
make some schemes unusable for serious application development,
like limits on the memory size (guile and MIT Scheme have this
problem particularly badly) and failure to support the full
numeric tower. It would specify a format for libraries portable
across all implementations of SIFSAD, define which R5RS and other
functions are found in what libraries, define a set of OS calls
accessible through libraries, and straighten out a few things
like binary I/O primitives for pipes, sockets and files.

It would specify the syntax of performance declarations, but the
only requirement of implementations should be that they must not
barf on the syntax -- actually using it for performance enhancement
is a plus, but not barfing on it is crucial.

Bear

Anton van Straaten

Oct 27, 2003, 1:03:16 PM
Ray Dillinger wrote:

> "Scott G. Miller" wrote:
> > The fallacy is believing this is only possible if we standardize
> > on one Scheme.
>
> I think maybe there needs to be a 'SIFSAD' standard, above and
> beyond R5RS, that specifies a lot of things R5RS doesn't specify.
> I'd like to see a bunch of people implement it, much as a bunch
> of people have implemented R5RS.

This makes much more sense to me than "standardizing on one Scheme".

Of course, the first thing to be standardized has to be a better acronym
than SIFSAD!!

Anton

Scott G. Miller

Oct 27, 2003, 2:19:04 PM

I've been speaking deliberately abstractly, but many of these topics
were covered at Matthias Radestock's ILC presentation, and will likely
come up again in some detail around the Scheme Workshop and LL3. See
you there!

Scott

Bruce Stephens

Oct 27, 2003, 2:56:23 PM
"Anton van Straaten" <an...@appsolutions.com> writes:

> Ray Dillinger wrote:
>> "Scott G. Miller" wrote:
>> > The fallacy is believing this is only possible if we standardize
>> > on one Scheme.
>>
>> I think maybe there needs to be a 'SIFSAD' standard, above and
>> beyond R5RS, that specifies a lot of things R5RS doesn't specify.
>> I'd like to see a bunch of people implement it, much as a bunch
>> of people have implemented R5RS.
>
> This makes much more sense to me than "standardizing on one Scheme".

As far as I understand it, that's what SRFIs are about. The existing
ones don't seem to me to go nearly far enough, though.

Part of what makes (for example) Perl good is CPAN, and all the
conventions (and the resulting community) that make CPAN possible. So
I can download a tarball, unpack it, run Makefile.PL using my chosen
Perl interpreter, and then "make; make test; make install" will work
(with high probability).

That's all made easier because Perl has a single implementation (give
or take), of course. Even so, if there were a common FFI (even a
restricted one), and a few extra things (a common module and/or
package system, perhaps a common object system) something similar
could be built for Scheme.

I'm guessing it won't happen, though. I'm not sure quite what it is,
but something seems to prevent such cooperation.

And that seems to mean that there isn't a scheme community in the same
way that there's a Perl community---so I can be confident of getting
Perl's LDAP package and being able to use it, but Bigloo's equivalent
<http://sourceforge.net/projects/bigloo-lib/> doesn't even build with
the current bigloo, presumably because bigloo's community is simply
too small. (I found much the same with some RScheme libraries, and
doubtless the same is true of most scheme implementations.)

Taylor Campbell

Oct 27, 2003, 3:22:46 PM
Would you like a pony, too?

felix

Oct 27, 2003, 5:42:24 PM
On Mon, 27 Oct 2003 13:28:37 GMT, R Racine <r...@adelphia.net> wrote:

> In my previous post I mentioned a threefold path to Nirvana. Determine a
> starting point, define an endpoint, get the machete ready. To properly
> select an implementation to evolve into Sifsad, it only makes sense to
> select an implementation that is best to build off of. There is a
> distinct chance that the "best" implementation to move forward with is
> not
> even one of the top 2 or 3 implementations used today.

Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible
and practical, which I seriously doubt...

> For example,
> The end goal is defined.

> -Speed of application. Very important. - Efficient use of large amounts
> of memory. Very important.

No disagreement here.

> -Full debugging. Continuation restarts ???

But you want speed too, right? Ok, so have several optimization settings.

> -Core fullblown MOP. Highly optimized dispatch.

Oh, how about speed? I assume a simple procedure call is more efficient
(whatever tricks your dynamic dispatch plays, it will not beat the
direct procedure call, naturally). Here you have your first tradeoff.
Why do you want OO baggage in the core, when you want speed at the same
time?
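The tradeoff felix points at can be seen in miniature (a hypothetical Python
sketch, nothing like a real MOP): a generic call has to look up the
applicable method at run time before doing any work, where a direct
procedure call goes straight to the body.

```python
# Direct call: no per-call dispatch cost beyond the call itself.
def area_direct(w, h):
    return w * h

# A toy single-dispatch table standing in for MOP-style generic dispatch.
methods = {}

def defmethod(kind, fn):
    methods[kind] = fn

def area_generic(shape):
    # Every call pays a table lookup plus an indirect call before
    # the method body runs -- the dispatch overhead in question.
    return methods[shape["kind"]](shape)

defmethod("rect", lambda s: s["w"] * s["h"])
defmethod("circle", lambda s: 3.14159 * s["r"] ** 2)

assert area_direct(3, 4) == 12
assert area_generic({"kind": "rect", "w": 3, "h": 4}) == 12
```

Real MOPs optimize dispatch heavily, but the lookup work never goes to
zero, which is the tradeoff being debated.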

> - Modules, standalone
> compilation, interfaces/signatures (also parametric
> interfaces/signatures) and runtime determinable implementations.
> [Imagine
> the SRFI-44 debate on the definition of a collections library in the
> light
> of a SIG/UNITs or Scheme48/Chez interfaces or SML sigs....

What kind of modules? How easy to use should they be? Should they
allow interactive use? Man, do you realize how much work has gone into
Scheme module systems, yet none really satisfies everybody!

> The point is, define the goal and tradeoffs become a debate in the
> context
> of what is necessary to achieve the goal.

Yes, this is not new. People on c.l.s (and elsewhere) have debated these
things for decades now. Have they reached even the slightest bit of
consensus? No, they haven't. Why, I ask you.

>
> Another point, Larceny [as you correctly pointed out is not just a
> Scheme->C system, later tonight I intend to post on why proposing Larceny
> makes sense] has 5 - 6 different GC systems. The Larceny core is very
> well designed and supports plugable GC systems. What is the penalty for
> this flexibility? I doubt the efficiency of the Twobit compiled code is
> impacted. PLT also has 2 GC/VM systems. Such things can be abstracted
> in
> the code base to support multiple solutions and pluggability with minimal
> impact.

Absolutely. Yet, there are implementation strategies that are very tightly
coupled with their collectors. One example is Cheney-on-the-MTA; another
is "traditional" direct-style compilers that target C, which mostly use
conservative GC.

>
> Bottom line, I believe it IS possible to allow for flexible pluggable
> strategies to many of the issues you raised such as various VM
> strategies.

Possible, yes. But not always adequate. I claim that the ideal Scheme
implementation you have in mind will be completely unusable for others.

>
> Couldn't you, being well versed on the Cheney-on-the-MTA approach either
> show that this approach is decidedly superior then the MzScheme approach
> or is a must have option in Sifsad and then assist in adding it to
> MzScheme? (Assuming MzScheme makes sense as the base system.)

It doesn't (if I may say so). I wouldn't touch the MzScheme sources
unless physically forced to do so. That Cheney-on-the-MTA is superior
(to direct style, like Bigloo) is something that I'm firmly convinced
of. And? That doesn't matter to someone who isn't interested in anything
but raw speed of straight-line code. Tradeoffs, again.

>
> Must two or more! Scheme distributions exist, complete with different
> runtimes and libraries, predicated on the single point of bifurcation as
> to how continuation capture is occuring??!!

If you look carefully, you'll find many more differences than only
continuation capture. And capture is only *one* issue with continuations.
How about safe-for-space complexity? Reification? Storage consumption?

>
> In the context of doing comparative analysis via two small experimental
> systems yes. In the world of the application developer where the method
> of continuation capture is invisible, it is decidedly not justification
> for forking two full-blown Scheme systems. Just capture the damn things, make
> it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> much.
>

Many people have tried to do so. Yet, the ideal Scheme system hasn't been
done yet. If the unification of all Scheme implementation efforts is really
the important issue for you, then you effectively strive for mediocrity,
unless you happen to be a Scheme implementation wizard, vastly ahead
of all the others. Mind you, that would be nice!


cheers,
felix

Bruce Stephens

Oct 27, 2003, 6:23:55 PM
felix <fe...@call-with-current-continuation.org> writes:

[...]

> Many people have tried to do so. Yet, the ideal Scheme system hasn't been
> done yet.
> If the unification of all Scheme implementation efforts is the really
> important issue for you, then you effectively strive for mediocrity,
> unless you happen to be a Scheme implementation wizard, vastly ahead
> of all the others. Mind you, that would be nice!

Probably true. In that sense, Perl, Python, etc., are mediocre---some
reasonable uses of the languages are inefficient.

On the other hand, if you've got a one-day sort of problem to solve
that requires access to LDAP, SSL, PostgreSQL, and gtk, then the
mediocre solutions win.

Heck, people have been writing reasonable size applications in Tcl for
years, largely because it had a very convenient binding to Tk. tkman
(a *really* nice manpage reader) was first written (about 10 years
ago, apparently) when Tcl was a strongly string-based interpreter; the
author even wrote a paper about the various hackery he used to make it
fast enough (the files had non-essential spaces removed and ghastly
things like that).

Even then, there were presumably choices that ought to have been
better (Tcl's far from a perfect language, and it was much worse in
1993); but Tcl had a convenient binding to Tk and an easy to use FFI,
and that was enough for it to be more usable for a large class of
applications.

For a big application, the work necessary to bind a few libraries is
dwarfed by the work necessary to attack the real problem. However,
that leaves lots of little applications where you're naturally going
to choose a language which has lots of convenient packages. Perhaps
more importantly, I suspect big applications often start off as small
ones---something like Perl makes it easier to start work on a problem.

Bradd W. Szonye

Oct 27, 2003, 7:06:08 PM
Bruce Stephens <bruce+...@cenderis.demon.co.uk> wrote:
> For a big application, the work necessary to bind a few libraries is
> dwarfed by the work necessary to attack the real problem. However,
> that leaves lots of little applications where you're naturally going
> to choose a language which has lots of convenient packages. Perhaps
> more importantly, I suspect big applications often start off as small
> ones---something like Perl makes it easier to start work on a problem.

Heck yeah. More than a few times, I've started a big project by writing
a prototype in Perl. More precisely, I try to hack it up in Perl, and if
that doesn't work, I do a better implementation in a more appropriate
language. As a bonus, the initial hack-job implementation gives me
enough experience with the problem domain that I can do a better design
for the "real" version.
--
Bradd W. Szonye
http://www.szonye.com/bradd
My Usenet e-mail address is temporarily disabled.
Please visit my website to obtain an alternate address.

R Racine

Oct 27, 2003, 7:12:31 PM
On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:

> I hope I'm wrong, but it seems you have a simplified view of Scheme
> architectures.

I do. I represent the pitchfork-wielding, torch-waving, unwashed masses
of frustrated Scheme application developers. And yes, maybe I am a mass of
one. (shades of a "silent" majority here)

I am not saying that Sifsad will have some trivial property flag and will
then suddenly manifest 3 modes of continuation capture.

I'm just saying that, after a decade or two, is it unreasonable to suggest
that there have been enough experimental versions and multiple approaches
to reach a "reasonable" conclusion (not a perfect conclusion) with
regard to implementing continuation capture, if one were to design Sifsad?

As one of the unwashed, I don't care how it's done; I am sure I wouldn't
understand the internals if I tried. I can't slam-dunk a basketball
either. So be it.

SML/NJ is fast (not the fastest, but commercially fast) and supports
continuations. And no, I am not saying, do it just like SML/NJ.

Ray

Jens Axel Søgaard

Oct 27, 2003, 7:27:42 PM
R Racine wrote:
> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:

>>I hope I'm wrong, but it seems you have a simplified view of Scheme
>>architectures.

> I do. I represent the pitch fork wielding, torch waving, unwashed masses
> of frustrated Scheme application developers. And yes, maybe I am a mass of
> one. (shades of a "silent" majority here)

What is missing in DrScheme?

> I am not saying that Sifsad will have some trival property flag and will
> then suddenly manifest 3 modes of continuation capture.

> I'm just saying that after a decade or two, is it unreasonable to suggest
> that there has been enough experimental versions, and multiple approaches
> to the reach a "reasonable" conclusion (not a perfect conclusion) with
> regard to implementing continuation capture if one were to design Sifsad.

The Grand Unified Scheme is nothing but a dream. You will always need
to make compromises in implementations. That's why you ought to be
thrilled about the wide range of Scheme implementations in existence.
In other languages (e.g. Python/Perl) you are pretty much stuck with
one implementation.

> As one of the unwashed, I don't care how its done, ...

That's a bold statement in these parts of the wood.

See the last discussion on the Grand Unified Scheme:

<http://groups.google.com/groups?hl=da&lr=&ie=UTF-8&th=5f1ec978a3e333dc&rnum=2>


Perhaps a better idea would be to begin making an FFI SRFI?


--
Jens Axel Søgaard

Bruce Stephens

Oct 27, 2003, 7:42:23 PM
"R Racine" <r...@adelphia.net> writes:

> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
>> I hope I'm wrong, but it seems you have a simplified view of Scheme
>> architectures.
>
> I do. I represent the pitch fork wielding, torch waving, unwashed
> masses of frustrated Scheme application developers. And yes, maybe I
> am a mass of one. (shades of a "silent" majority here)

I'm sure you're not alone.

That's part of the problem: gathering a community of users seems much
easier when there's only one implementation.

But scheme (even if you add in slib and a selection of SRFIs) is small
enough that it's reasonably straightforward to produce an
implementation. Certainly not *that* easy, but easy enough that there
seem to be about half a dozen implementations that aren't quite dead
yet.

[...]

> I'm just saying that after a decade or two, is it unreasonable to
> suggest that there has been enough experimental versions, and
> multiple approaches to the reach a "reasonable" conclusion (not a
> perfect conclusion) with regard to implementing continuation capture
> if one were to design Sifsad.

I'd say that STklos and guile are probably acceptable interpreters
(STklos is a byte-coding interpreter; I forget the details of guile),
and that bigloo and rscheme are probably pretty good compilers. (I'm
judging implementations in terms of speed, popularity, whether I've
heard of them, etc.)

So it seems to me that not only do we have reasonable conclusions about
acceptable solutions, we have several. Tom Lord's working on another,
and presumably there are other new ones being worked on, too. (And
there are the other interpreters, native code/C compilers, and JVM and
.Net implementations, too.)

We don't lack choice.

> As one of the unwashed, I don't care how it's done; I am sure I
> wouldn't understand the internals if I tried. I can't slam-dunk a
> basketball either. So be it.
>
> SML/NJ is fast (not the fastest, but commercially fast) and supports
> continuations. And no, I am not saying, do it just like SML/NJ.

Perhaps the best approach is to accept what's there, build prototypes
and so on in Perl (or Python, Ruby, etc.), and then (once you know
what you're trying to do) build the real thing in your preferred
scheme (or lisp).

That feels wrong, though. I'd welcome a single (even if mediocre)
implementation of scheme that was generally regarded as the one to use
rather than Tcl, Perl, or Python. (I guess guile is it, or perhaps
Scheme48, with the nice scsh, but I'd really like it to be a compiler;
I think the GNU project messed up there---I think they ought to have
chosen RScheme, or at least cooperated sufficiently that RScheme could
have been substituted later, but perhaps it wouldn't have made a
difference.)

R Racine

unread,
Oct 27, 2003, 7:50:57 PM10/27/03
to
On Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
> Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible and
> practical, which I seriously doubt...

The Sifsad name was chosen with the intent of it never seeing the light of
day in a real implementation. But you have to admit googling Sifsad would
minimize the irrelevant.



> If the unification of all Scheme implementation efforts is the really
> important issue for you, then you effectively strive for mediocrity,
> unless you happen to be a Scheme implementation wizard, vastly ahead of
> all the others. Mind you, that would be nice!

The sad fact of Scheme life is that if I were a Scheme implementation
wizard, and we all know very well I am not, I would have already announced
Yet Another Scheme Implementation. Math profs are anointed to generate new
Math profs. Scheme implementation wizards seem destined to create
endless streams of Scheme implementations. They are the
Sisyphuses of language implementors, doomed by the gods to endlessly
create half-finished implementations in isolation from one another.

I am not proposing a GUS (Grand Unified Scheme).

Just a useful one.

Bruce Stephens

unread,
Oct 27, 2003, 7:57:28 PM10/27/03
to
Jens Axel Søgaard <use...@jasoegaard.dk> writes:

> R Racine wrote:
>> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
>>>I hope I'm wrong, but it seems you have a simplified view of Scheme
>>>architectures.
>
>> I do. I represent the pitchfork-wielding, torch-waving, unwashed masses
>> of frustrated Scheme application developers. And yes, maybe I am a mass of
>> one. (shades of a "silent" majority here)
>
> What is missing in DrScheme?

Bindings to Gtk/GNOME and other random useful libraries? Speed?

Perhaps there are such bindings, and I just don't know where to look
for them. Speed isn't the main priority for the DrScheme family,
though, is it?

[...]

> The Grand Unified Scheme is nothing but a dream. You will always
> need to make compromises in implementations. That's why you ought to
> be thrilled about the wide range of Scheme implementations in
> existence.

Except that some implementations are virtually dead, and none have
quite the extensions that I want for this particular application...

> In other languages (e.g. Python/Perl) you are pretty much stuck with
> one implementation.

But that's OK, because although it is a compromise, it's a reasonable
one, and because there's only the one, there's an enormous library of
extensions and code that I can use. There's lots of scheme code, too,
but each blob of code that I find will take a few hours of work to
massage to work with the implementation that I've chosen to use (with
its particular combination of module system and so on).

[...]

> Perhaps a better idea was to begin making an FFI-SRFI?

Probably. On the other hand, if it were that easy, someone would
already have done it.

Anton van Straaten

unread,
Oct 27, 2003, 8:02:18 PM10/27/03
to
Jens Axel Søgaard write:

> R Racine wrote:
> > I'm just saying that after a decade or two, is it unreasonable to
> > suggest that there have been enough experimental versions, and
> > multiple approaches, to reach a "reasonable" conclusion (not a
> > perfect conclusion) with regard to implementing continuation capture
> > if one were to design Sifsad.
>
> The Grand Unified Scheme is nothing but a dream. You will always need
> to make compromises in implementations. That's why you ought to be
> thrilled about the wide range of Scheme implementations in existence.
> In other languages (e.g. Python/Perl) you are pretty much stuck with
> one implementation.

I think it's interesting & relevant to look at the ways in which this is
*not* true. First, there's Jython, which is a well-established
implementation of Python on the Java platform. There's also the Psyco
compiler for Python, which is a kind of JIT compiler. Then there are
implementations of both Python and Perl under way for .NET.

So I think it's possible that the much-vaunted single implementations of
some languages are merely an artifact of their youth. Implementations will
multiply over time, because of the need to support significantly different
platforms, if nothing else. The fact that Scheme has an amazing family of
implementations is an asset - but it also needs to do better at supporting
*reasonable* portability between at least some of those implementations.

Anton

felix

unread,
Oct 27, 2003, 8:19:02 PM10/27/03
to
On Tue, 28 Oct 2003 00:50:57 GMT, R Racine <r...@adelphia.net> wrote:

> On Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>> Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible and
>> practical, which I seriously doubt...
>
> The Sifsad name was chosen with the intent of it never seeing the
> light of day in a real implementation. But you have to admit googling
> Sifsad would minimize the irrelevant.

Absolutely.

>
>> If the unification of all Scheme implementation efforts is the really
>> important issue for you, then you effectively strive for mediocrity,
>> unless you happen to be a Scheme implementation wizard, vastly ahead of
>> all the others. Mind you, that would be nice!
>
> The sad fact of Scheme life is that if I were a Scheme implementation
> wizard, and we all know very well I am not, I would have already
> announced Yet Another Scheme Implementation. Math profs are anointed
> to generate new Math profs. Scheme implementation wizards seem
> destined to create endless streams of Scheme implementations. They
> are the Sisyphuses of language implementors, doomed by the gods to
> endlessly create half-finished implementations in isolation from one
> another.

I wouldn't consider PLT (for example) half-finished.

>
> I am not proposing a GUS (Grand Unified Scheme).
>
> Just a useful one.
>

I can name several useful Scheme implementations. Just ask.
Many of those are used commercially and provide splendid FFIs and/or
extension libraries.
If Scheme implementations are insufficient for you, do it yourself.
But I don't think you will do any better than what is currently available,
since the major implementations take most known implementation strategies
pretty far.

Here's an idea: pick an implementation (unimportant which one), sit down,
and start writing libraries for it (doesn't matter for what).
Then (and only then) will you really help to make Scheme more usable for
real-world development.

cheers,
felix

R Racine

unread,
Oct 27, 2003, 9:05:01 PM10/27/03
to
On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:


> What is missing in DrScheme?
>
>

Not too much, AFAIAC. On a personal level, here are the top three things
that have blown me away in the Scheme impl world:

MIT Scheme: The ground breaking work done here. You see MITScheme code,
concepts and ideas in many of the current Scheme implementations. It
is/was the fountainhead.

PLT Scheme: An almost endless stream of what Scheme is capable of.
Unit/Sigs, Languages , inheritable Structures, Contracts, the Syntax
concept, opaque types, module system ... You can just randomly click
about the help system and almost stumble into whole new concepts.

Another example from MzScheme. From Eli's Swindle. I saw that Swindle
had somehow added support for self evaluating symbols which start with a
colon. When I installed Swindle, I didn't recall any patching or
recompiling. So hey, how'd he do that? So I looked.

(module base mzscheme

  (provide (all-from-except mzscheme
             #%module-begin #%top #%app define let let* letrec lambda))

  ;; ... stuff ...

  ;;>> (#%top . id)
  ;;>  This special syntax is redefined to make keywords (symbols whose
  ;;>  names begin with a ":") evaluate to themselves. Note that this
  ;;>  does not interfere with using such symbols for local bindings.
  (provide (rename top~ #%top))
  (define-syntax (top~ stx)
    (syntax-case stx ()
      ((_ . x)
       (let ((x (syntax-object->datum #'x)))
         (and (symbol? x) (not (eq? x '||))
              (eq? #\: (string-ref (symbol->string x) 0))))
       (syntax/loc stx (#%datum . x)))
      ((_ . x) (syntax/loc stx (#%top . x)))))

  ;; ... stuff ...
  )

That was it! No special compiler hacking, reader hacking, any hacking at
all. Just suck in the MzScheme language, extend what it means to be a
datum or a top-level symbol with a 7-line macro, and export a new
"extended" Scheme language with self-evaluating colon-prefixed symbols.
Not only that, I could use this extended Scheme, regular MzScheme, or yet
another variant on a controlled, module-by-module basis. WOW
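To make the effect concrete, here is a hypothetical usage sketch. The module name and the library path are assumptions for illustration, not Swindle's documented layout; the point is only that a module written in the extended language gets self-evaluating keywords with no reader or compiler changes.

```scheme
;; Hypothetical sketch: a module opting into the extended language.
;; The path (lib "base.ss" "swindle") is an assumption for this example.
(module demo (lib "base.ss" "swindle")
  ;; :foo is unbound, yet it evaluates to the symbol :foo, because the
  ;; redefined #%top intercepts the unbound top-level reference.
  (display :foo)
  (newline)
  ;; Local bindings still work as usual and shadow the keyword behavior.
  (let ((:bar 42))
    (display :bar)))
```

Any other module in the same program can still say `(module other mzscheme ...)` and get the stock semantics, which is the "module by module" control mentioned above.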

The Larceny Twobit Compiler: IMHO the finest bits of Scheme code I have
ever beheld. I have seen Scheme code that tackles far less lofty
targets than a highly optimizing, pluggable-emitter, native compiler,
yet is not a tenth as readable or elegant. [BTW, Sifsad should be based
on the Twobit compiler :)]

I digress. What is missing in DrScheme? Overall I love it. Mainly a
Sifsad focus. The system, DrScheme, has an intentional pedagogical focus.
My concerns (efficient memory usage, an optimized VM, speed, debugging)
are not their focus. The mzc compiler is not on par with some of the
other Scheme->C systems out there. Is there an inherent architectural
tradeoff that prevents mzc from approaching Chicken or Bigloo in speed?
I don't know. If two or three Scheme wizards announced this very night
that they were going to join the PLT team with a Sifsad-prioritized
feature list, I would do a handspring and take up organized religion.

What I find more troubling is some of the other Scheme wizards' disdain
for MzScheme as a production-quality Scheme. What is it that THEY find
missing in PLT? Do they know something that we simple Joes do not
regarding the inner workings of MzScheme?

What is it that they see that prevents two major groups from focusing on
the PLT code base and providing two releases/versions of PLT: DrScheme
and Sifsad?


Ray

Alex Shinn

unread,
Oct 27, 2003, 9:29:28 PM10/27/03
to
At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>
> What kind of modules? How easy to use should they be? Should they
> allow interactive use? Man, do you realize how much work has gone into
> Scheme module systems, yet none really satisfies everybody!

Would it be too much to ask for a standard *syntax* to the module
system, without specifying the semantics? No matter how many SRFIs or
libraries we write, if we can consistently load them into a program then
the same program can never run unmodified on two different Schemes.

Suppose we use a syntax encompassing all of the module-system concepts
in use now. Something like

(define-module <module-A>
  (use-module <module-B> [<procedure> ...])
  (use-syntax <module-C> [<syntax> ...])
  (autoload <module-D> [<procedure> ...])
  (export <procedure> ...)
  [(export-all)])

... module code ...

as a preamble in a module file. <procedure> may either be a symbol name
or a list of a symbol followed by optional type declarations, which a
Scheme that doesn't use type declarations can ignore. If your Scheme
doesn't differentiate between importing syntax and importing procedures
then the use-module and use-syntax forms are the same. Likewise if your
Scheme doesn't support autoloading then that too is equivalent to
use-module. export-all means export all top-level definitions in the
module, and this could probably be optional (since it's handy for
prototyping but when your module is "finished" and ready for use it's
better style to explicitly declare your exports).
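As a concrete illustration of the proposed preamble, here is a sketch of how a module might look under it. Every module name, binding name, and type declaration below is invented for the example; only the shape of the declaration matters.

```scheme
;; Hypothetical module using the proposed portable preamble.
;; Names (string-utils, srfi module names, string-repeat) are invented.
(define-module string-utils
  ;; Plain import; the optional list narrows what is pulled in.
  (use-module srfi-1 (make-list))
  ;; Syntax import; identical to use-module on Schemes that do not
  ;; distinguish importing syntax from importing procedures.
  (use-syntax srfi-8 (receive))
  ;; An export may carry an optional type declaration, which a Scheme
  ;; without type declarations simply ignores.
  (export (string-repeat (string integer -> string))))

(define (string-repeat s n)
  (apply string-append (make-list n s)))
```

A Scheme that only supports one flavor of import would treat `use-module`, `use-syntax`, and `autoload` identically, which is exactly the degenerate reading the proposal allows.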

There are issues to be resolved but I don't believe it's impossible to
at least make the syntax work for all the major module systems out
there. The question is, if a SRFI were to be created that specified a
syntax like the above, would Scheme implementations support it?

--
Alex

Shriram Krishnamurthi

unread,
Oct 27, 2003, 10:24:26 PM10/27/03
to
"Anton van Straaten" <an...@appsolutions.com> writes:

> I think it's interesting & relevant to look at the ways in which this is
> *not* true. First, there's Jython, which is a well-established
> implementation of Python on the Java platform. There's also the Psyco
> compiler for Python, which is a kind of JIT compiler. Then there are
> implementations of both Python and Perl under way for .NET.
>
> So I think it's possible that the much-vaunted single implementations of
> some languages are merely an artifact of their youth.

Indeed, isn't that what happened with Stackless Python? My
understanding is that for a while, Stackless created an Avignon vs
Rome situation in the Python community. The noise over Stackless
seems to have subsided, but it seems likely Parrot will have
continuations, which means the debate will have to reopen. And I
believe Tismer and others are now working on something called PyPy,
which means yet another implementation...

Shriram

R Racine

unread,
Oct 27, 2003, 10:37:23 PM10/27/03
to
On Tue, 28 Oct 2003 02:19:02 +0100, felix wrote:


> I can name several useful Scheme implementations. Just ask.

Few. Very few have had success writing substantive applications in
Scheme. Of those few, the majority have been, or still are, on some
endless merry-go-round of trying it on this impl and then that. I
expect most give up and use C#, Java, SML, CL or Haskell.

To not recognize that there is an implementation "issue" with Scheme,
one that is impacting its adoption in the real world, its retention of
the few application-level coders it has, and that is constraining a
substantial and broad library code base from forming, is ... I don't
know. A shame.

> Many of those are used commercially and provide splendid FFIs and/or
> extension libraries.
> If Scheme implementations are insufficient for you, do it yourself. But
> I don't think you will do any better than what is currently available,
> since the major implementations take most known implementation
> strategies pretty far.
>
> Here's an idea: pick an implementation (unimportant which one), sit down

Therein lies the crux: I have been claiming that it does matter which
one. Ever try using the Bigloo-libs GTK bindings in XYZ impl? Or
grabbing Schematics' SchemeUnit for use in ABC impl? Non-starters.
Sure, you can spend a couple of days porting it to whatever your
current impl of choice is. Then you get to do it again when a new
version of the library code is released.

> and start
> writing libraries for it (doesn't matter for what).

My efforts are diluted. Because 49 other library writers are writing
libraries for some other impl.

> Then (and only then) you really will help making Scheme more usable for
> real-world development.

<sigh>Knew this one was coming eventually. No comment.</sigh>


Ray

Shriram Krishnamurthi

unread,
Oct 27, 2003, 10:31:08 PM10/27/03
to
Alex Shinn <fo...@synthcode.com> writes:

> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics?

This is a troll, right? I'd expect more from a regular like Alex...

> No matter how many SRFIs or
> libraries we write, if we can consistently load them into a program then
> the same program can never run unmodified on two different Schemes.

I think you mean "...if we cannot consistently...". What does it mean
to load consistently in the absence of a semantics?

> Suppose we use a syntax encompassing all of the module-system concepts

> in use now. [...]

Doesn't encompass units.

Shriram

Anton van Straaten

unread,
Oct 27, 2003, 11:12:00 PM10/27/03
to
Shriram Krishnamurthi wrote:
> Alex Shinn <fo...@synthcode.com> writes:
>
> > Would it be too much to ask for a standard *syntax* to the module
> > system, without specifying the semantics?
>
> This is a troll, right? I'd expect more from a regular like Alex...

Maybe Alex means something like a standard module declaration syntax which
maps to a minimal set of sufficiently similar semantics on different
Schemes. Which seems like it could be a workable idea, to me.

> > No matter how many SRFIs or
> > libraries we write, if we can consistently load them into a program then
> > the same program can never run unmodified on two different Schemes.
>
> I think you mean "...if we cannot consistently...". What does it mean
> to load consistently in the absence of a semantics?

I dunno, Perl seems to manage! ;)

> > Suppose we use a syntax encompassing all of the module-system concepts
> > in use now. [...]
>
> Doesn't encompass units.

Standardizing something on the level of units isn't going to happen, I'm
sure. But I think a lowest-common-denominator module system, which would
support writing portable modular code and publishing portable libraries,
would be helpful.

Sure, that won't allow taking an arbitrary whiz-bang library from
implementation A and plugging it in to implementation B, but that's not the
point. The point, I think, would be to build up the base a bit further, in
a direction that supports some of these pragmatic issues that we're all
aware of - so that there's a plausible portable base for application and
library developers to develop to, if they choose.

Anton

Bradd W. Szonye

unread,
Oct 27, 2003, 11:40:43 PM10/27/03
to
> Jens Axel Søgaard <use...@jasoegaard.dk> writes:
>> What is missing in DrScheme?

Bruce Stephens <bruce+...@cenderis.demon.co.uk> wrote:
> Bindings to Gtk/GNOME and other random useful libraries? Speed?

It used to have a Gtk binding, and supposedly there's a new one in the
works. I'm not too worried about that, though; the wxWindows binding is
pretty good and probably more portable. A GNOME binding would be a dead
end, portability-wise. The ability to write GUI apps for Windows and X
(without paying a ton of money or relying on Cygnus) was actually *the*
major selling point for PLT, for me.

> Perhaps there are such bindings, and I just don't know where to look
> for them. It's true that speed isn't the main priority for the
> DrScheme family, though, isn't it?

Apparently not, but that's not necessarily a bad thing. Portability,
robustness, ease of use, and a killer development environment seem to be
the main goals, and those things sell. And it's not like PLT is *slow*
-- it just isn't C, that's all. It compares favorably with other
interpreted languages.

BTW, the development environment was actually a drawback for me -- I'm a
hardcore vim & Makefiles kinda guy. (In fact, I wrote comprehensive vim
syntax-highlighting rules for PLT Scheme. I was originally supposed to
take over maintenance/development from the original author, but I never
got around to finishing and publishing my rules, because there were some
performance issues that I never quite worked out.)

Bradd W. Szonye

unread,
Oct 28, 2003, 12:43:26 AM10/28/03
to
Anton van Straaten <an...@appsolutions.com> wrote:
> Maybe Alex means something like a standard module declaration syntax
> which maps to a minimal set of sufficiently similar semantics on
> different Schemes. Which seems like it could be a workable idea, to
> me.

Agreed. Some folks might rankle at some of the necessary restrictions,
though. For example, you couldn't count on shadowing/redefining imported
identifiers like you can at the top level; some Schemes (like Scheme-48)
support that, but others (like PLT) don't, and for good reasons.

I was actually toying with the idea of implementing modules as FEATURE,
based on the requirements syntax of SRFI-7. However, I decided that
wasn't quite the right way to do it. More on this later if I actually
find time to implement something useful.
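For readers who have not seen it, SRFI-7's requirements syntax wraps a program in a declaration of the features it needs. The sketch below is from memory, so treat the details as approximate; the feature list and file name are invented for the example.

```scheme
;; SRFI-7 style program description: requirements are declared up
;; front so an implementation (or a preprocessor) can check and
;; provide them before the program body runs.
(program
  (requires srfi-1 srfi-13)   ; list and string libraries
  (files "helpers.scm")       ; hypothetical supporting file
  (code
    (define (shout s)
      (string-upcase s))      ; string-upcase comes from SRFI-13
    (display (shout "hello"))))
```

One can see the appeal of building modules on top of this: the `requires` clause already names dependencies declaratively, which is most of what a portable module preamble needs.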

Anton van Straaten

unread,
Oct 28, 2003, 1:48:52 AM10/28/03
to
Bradd W. Szonye wrote:
> Anton van Straaten <an...@appsolutions.com> wrote:
> > Maybe Alex means something like a standard module declaration syntax
> > which maps to a minimal set of sufficiently similar semantics on
> > different Schemes. Which seems like it could be a workable idea, to
> > me.
>
> Agreed. Some folks might rankle at some of the necessary restrictions,
> though. For example, you couldn't count on shadowing/redefining imported
> identifiers like you can at the top level; some Schemes (like Scheme-48)
> support that, but others (like PLT) don't, and for good reasons.

It would still be better than the restrictions imposed by coding to R5RS, or
some mixture of R5RS+SRFIs+SLIB. Sure, you can use SLIB's modules, or
Taylor Campbell's lexmod, or roll your own modules, but all of these have
disadvantages which could (I believe) be addressed by some relatively
minimal implementation support for a standard "simple" module system.

Anton

Ray Dillinger

unread,
Oct 28, 2003, 1:49:52 AM10/28/03
to
"Bradd W. Szonye" wrote:
>
> Anton van Straaten <an...@appsolutions.com> wrote:
> > Maybe Alex means something like a standard module declaration syntax
> > which maps to a minimal set of sufficiently similar semantics on
> > different Schemes. Which seems like it could be a workable idea, to
> > me.
>
> Agreed. Some folks might rankle at some of the necessary restrictions,
> though. For example, you couldn't count on shadowing/redefining imported
> identifiers like you can at the top level; some Schemes (like Scheme-48)
> support that, but others (like PLT) don't, and for good reasons.
>
> I was actually toying with the idea of implementing modules as FEATURE,
> based on the requirements syntax of SRFI-7. However, I decided that
> wasn't quite the right way to do it. More on this later if I actually
> find time to implement something useful.

I've been thinking about writing a portable "module mangler."

It would read from disk a bunch of scheme files with some kind
of standard module syntax, and output a single honkin-large
scheme file (maybe in a temporary directory) that puts them
all together with separate namespaces kept separate, and
strictly-controlled scope for macros, and so on.

So you could do development in a bunch of different files and
be confident of putting them all together in one program with
a well-defined semantics, regardless of implementation.

It would answer namespace and macrology-scope issues, but it
would never answer the separate-compilation issue. Even so,
it might attract enough of a following to standardize a
module syntax, especially if distributed with a bunch of
good libraries.
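A minimal sketch of what the mangler's output might look like. The input module syntax and the prefix-renaming scheme are invented for illustration; the idea is just that two module files are flattened into one plain R5RS file, with top-level names rewritten so namespaces stay separate.

```scheme
;; Input file a.scm:                 Input file b.scm:
;;   (define-module alpha             (define-module beta
;;     (export make-counter))           (use-module alpha))
;;   (define (make-counter) ...)      (define c (make-counter))

;; Mangled output (one file, names prefixed to avoid collisions):
(define (alpha:make-counter)
  (let ((n 0))
    (lambda () (set! n (+ n 1)) n)))

(define beta:c (alpha:make-counter))

;; References inside beta to imported bindings were rewritten to the
;; prefixed names during mangling; the result runs on any R5RS Scheme.
(display (beta:c))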

What do people think of the idea?

Bear

Alex Shinn

unread,
Oct 28, 2003, 1:48:19 AM10/28/03
to
At 27 Oct 2003 22:31:08 -0500, Shriram Krishnamurthi wrote:
>
> Alex Shinn <fo...@synthcode.com> writes:
>
> > Would it be too much to ask for a standard *syntax* to the module
> > system, without specifying the semantics?
>
> This is a troll, right?

It's not a troll, though perhaps it's not expressed clearly and
certainly isn't completely thought out.

> > No matter how many SRFIs or
> > libraries we write, if we can consistently load them into a program then
> > the same program can never run unmodified on two different Schemes.
>
> I think you mean "...if we cannot consistently...".

Yes, sorry.

> What does it mean to load consistently in the absence of a semantics?

Not complete absence but a sort of minimal assumption. Consider every
SRFI that has a reference implementation, every module I see browsing
/usr/lib/plt/collects/mzlib/, the C-parser just posted to c.l.s., and
countless utility modules from all the Scheme implementations. Many of
them are written in highly portable Scheme, which can be made more
portable with further SRFIs and standardization. However, at the
beginning of every one is a little incantation that says "this is a
module" with some extra information about what modules it uses and what
procedures it provides. If we just standardize the syntax of that
incantation, then there is suddenly a chance that a module written for
one Scheme would work out of the box on another. More complicated
semantics, module introspection, etc. would still not be portable.

> > Suppose we use a syntax encompassing all of the module-system concepts
> > in use now. [...]
>
> Doesn't encompass units.

From the MzScheme manual:

In some ways, a unit resembles a module (see Chapter 5 in PLT
MzScheme: Language Manual), but units and modules serve different
purposes overall.

I would only suggest this for modules, not units.

--
Alex

felix

unread,
Oct 28, 2003, 2:38:58 AM10/28/03
to
Alex Shinn <fo...@synthcode.com> wrote in message news:<87vfqa...@strelka.synthcode.com>...

> At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
> >
> > What kind of modules? How easy to use should they be? Should they
> > allow interactive use? Man, do you realize how much work has gone into
> > Scheme module systems, yet none really satisfies everybody!
>
> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics? No matter how many SRFIs or
> libraries we write, if we can consistently load them into a program then
> the same program can never run unmodified on two different Schemes.
>
>[...]

>
> There are issues to be resolved but I don't believe it's impossible to
> at least make the syntax work for all the major module systems out
> there. The question is, if a SRFI were to be created that specified a
> syntax like the above, would Scheme implementations support it?

It's easy: submit a SRFI, and you'll have a good chance of being
able to discuss the relevant questions with the relevant people
(or those who are interested in solving these issues).


cheers,
felix

Michele Simionato

unread,
Oct 28, 2003, 3:43:02 AM10/28/03
to
Shriram Krishnamurthi <s...@cs.brown.edu> wrote in message news:<w7dn0bm...@cs.brown.edu>...

I think you have got the wrong impression. The concept of "different
implementation" in the Python world is completely different from the
concept of "different implementation" in the Scheme world.

Somebody saying "Python has only one implementation" wouldn't be far from
the truth. There is only ONE implementation that matters, which is
CPython. All the other implementors strive to get as close as
possible to CPython. The minimal compatibility is 99%.
Different implementations provide something more and are intended to
be used in specific situations (you want to script Java, use Jython;
you want to skip the C-stack restriction, use Stackless), but they are
in no sense competitors of CPython. If the PyPy project succeeds
(and everybody in the community hopes so, including Guido van Rossum)
we will have a faster Python, but it will still be 99.99% compatible
with CPython. At least this is the ideal goal of the developers, as I
understand their claims (and I think I do understand them).

I do think Perl/Python/Ruby succeed because they are basically one-man
projects. Of course, there are hundreds of Python developers, but only one
has the last word when essential decisions for the language have to be
made: Guido van Rossum.
It is also interesting to notice that Guido's ideas are *really* respected in
the community, more respected than you could imagine. Also, a lot of people in
the Python community are practical programmers and not language designers
or academics: this makes a big difference. Let me give a trivial
example: a large minority in the community regularly rants about the fact
that the list .sort() method returns None and not the sorted list. Now, nobody
will *ever* think about making a new implementation correcting this "wart"
(personally, I don't think it is a wart, by the way, at least in the context
of Python). It would be considered foolish to make an implementation that
does the things Python already does in a different way. Implementations
are free to add, NOT to change. Essentially the idea is: "okay, this is a
wart in my opinion, but I will live with it, because forking the community
would be much worse than correcting the wart".
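For readers who don't know Python, the "wart" in question is easy to demonstrate: `list.sort()` sorts in place and returns None, while the builtin `sorted()` returns a new sorted list.

```python
# list.sort() mutates the list and returns None by design,
# signalling that it works by side effect.
nums = [3, 1, 2]
result = nums.sort()
print(result)         # None
print(nums)           # [1, 2, 3]

# sorted() is the non-mutating counterpart; it accepts any iterable
# and returns a fresh list.
print(sorted("cba"))  # ['a', 'b', 'c']
```

The design choice is deliberate: returning None makes it harder to accidentally treat an in-place sort as if it produced a new list.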

My postings here have made me realize that the Scheme community is very
different from the Perl/Python/Ruby communities: a Pythonista has
no difficulty in accepting a BDFL (Benevolent Dictator For Life),
no difficulty in trading performance for ease of use, no difficulty
in accepting a bondage-and-discipline syntax (actually a rather large
minority would appreciate an even stricter bondage-and-discipline
syntax!). I could give other examples, but you get the idea.

Notice that I am not saying that one approach is better than another:
there are trade-offs. If you choose the one-implementation way you get
advantages (even big advantages); if you choose the way of freedom you
get other advantages (which some may consider even bigger).

I've got the impression that there is no way the Perl/Python/Ruby
model will ever work in the Scheme community, for historical and
sociological reasons. This can be considered good (for some reasons) or
bad (for other reasons).

What I (as an outsider to the community) would appreciate is:

1. make a stricter R5RS (not very strict, but stricter than now);

2. make more SRFIs (many more);

3. make them available on every implementation.

These points are (maybe/maybe not) within the range of realizable things; I
don't think I will ever see a unique (unique in the Python sense)
implementation of Scheme; one could even argue that this is a good thing,
BTW.

For the time being, you Schemers are stuck with Perl/Python/Ruby; if
it is of any consolation, think that it could have been worse (i.e.
Java/C++ ;)


Michele Simionato

felix

unread,
Oct 28, 2003, 4:01:13 AM10/28/03
to
"R Racine" <r...@adelphia.net> wrote in message news:<pan.2003.10.28....@adelphia.net>...

>
> Few. Very few have had success writing substantive applications in
> Scheme. Of those few, the majority have been, or still are, on some
> endless merry-go-round of trying it on this impl and then that. I
> expect most give up and use C#, Java, SML, CL or Haskell.

Any numbers? You seem to be quite convinced of that. Is Haskell
really used more heavily for substantive applications than Scheme?
Or are you just guessing, since the respective communities appear
more unified?

If C#, Java or CL give you what you want, go ahead, use it.
Personally C#, Java, SML or Haskell don't give me the stuff I need. Neither
does CL, actually.

>
> To not recognize that there is an implementation "issue" with Scheme,
> one that is impacting its adoption in the real world, its retention of
> the few application-level coders it has, and that is constraining a
> substantial and broad library code base from forming, is ... I don't
> know. A shame.

Stop whining. You are trying to blame the wrong people. It's a shame
that you think you're entitled to make any demands. If Scheme (or rather,
the available implementations) doesn't serve your needs, fine.
Fix it or try alternatives. Have you tried Common LISP? This might
be exactly what you need. I'm serious.

>
> > Many of those are used commercially and provide splendid FFIs and/or
> > extension libraries.
> > If Scheme implementations are insufficient for you, do it yourself. But
> > I don't think you will do any better than what is currently available,
> > since the major implementations take most known implementation
> > strategies pretty far.
> >
> > Here's an idea: pick an implementation (unimportant which one), sit down
>
> Therein lies the crux. I have been claiming it is. Ever try using
> Bigloo-libs' GTK bindings in XYZ impl? Or grabbing Schematics' SchemeUnit
> for use in ABC impl? Non-starters. Sure, you can spend a couple of days
> porting it to whatever your current impl of choice is. Then you get to do it
> again when a new version of the library code is released.
>
> > and start
> > writing libraries for it (doesn't matter for what).
>
> My efforts are diluted. Because 49 other library writers are writing
> libraries for some other impl.

There are not; you are wildly exaggerating. It *is* possible to write
cross-implementation libraries (see srfi.schemers.org for a couple
of examples), and it is even possible to write libraries for things
like GTK, with a little bit of pre-/post-processing, macros, careful use
of lexical scope and clean design.

(Now it's your turn to start whining why nobody did this for you already)

This discussion painfully reminds me of the ever-popular cl-is-great-but-
if-it-just-had-this-extension drivel that comes up regularly on
comp.lang.lisp. Yet it hasn't changed anything.

But we probably won't come to any useful conclusion here.

I will now go to comp.lang.python and complain about the fact that
there is no extension that provides macros, precise space- and time-efficient
GC and tail-call optimization, all requirements that I find very
important for serious application development.
I wonder what they will tell me...?


cheers,
felix

Grzegorz Chrupala

unread,
Oct 28, 2003, 4:39:30 AM10/28/03
to
Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<3f9db83e$0$70001$edfa...@dread12.news.tele.dk>...

> R Racine wrote:
> > On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
> >>I hope I'm wrong, but it seems you have a simplified view of Scheme
> >>architectures.
>
> I do. I represent the pitchfork-wielding, torch-waving, unwashed masses
> of frustrated Scheme application developers. And yes, maybe I am a mass of
> one. (shades of a "silent" majority here)
>
> What is missing in DrScheme?

For me, a major gap is Unicode and multibyte character support. This
is by now standard in implementations of most other widely used
programming languages but surprisingly few Schemes have it.

--
Grzegorz

Bruce Stephens

unread,
Oct 28, 2003, 5:21:31 AM10/28/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

[...]

> Apparently not, but that's not necessarily a bad thing. Portability,
> robustness, ease of use, and a killer development environment seem
> to be the main goals, and those things sell. And it's not like PLT
> is *slow* -- it just isn't C, that's all. It compares favorably with
> other interpreted languages.

Yes, I agree with all that. I'd just like some language which had
reasonable portability, ease of use, etc., and had the option of
blinding speed, at least on common platforms. And that doesn't seem
to me to be impossible---there are various very fast scheme
implementations around. It's just that the various scheme
implementations seem to stay just far enough apart in various respects
(FFI, mostly) that using more than one of them is inconvenient.

[...]

Scott G. Miller

unread,
Oct 28, 2003, 11:27:25 AM10/28/03
to

There is a reason for that. The R5RS character operators cannot be made
to work reliably with Unicode characters. SISC, for example, supports
Unicode characters and arbitrary character maps, but makes no effort to
contort the standard operators to behave properly. There was a Usenet
discussion about this in the past which you could probably find by googling.
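The core of the problem is easy to demonstrate outside Scheme: full Unicode case mapping is not one-to-one, so a faithful character-level char-upcase cannot exist for every character. A minimal Python sketch (using Python's str.upper as a stand-in for full Unicode case mapping):

```python
# R5RS's char-upcase must map one character to one character, but
# full Unicode case mapping is not 1:1: German sharp s ("ß")
# uppercases to the two-character sequence "SS", so no faithful
# character-level char-upcase exists for it.
s = "ß"
upper = s.upper()          # full Unicode case mapping
print(upper)               # SS
print(len(s), len(upper))  # 1 2 -- the mapping changed the length
```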

Scott

Bruce Stephens

unread,
Oct 28, 2003, 12:26:24 PM10/28/03
to
"Scott G. Miller" <scgm...@freenetproject.org> writes:

> Grzegorz Chrupala wrote:
>> Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<3f9db83e$0$70001$edfa...@dread12.news.tele.dk>...

[...]

>>>What is missing in DrScheme?
>> For me, a major gap is Unicode and multibyte character support. This
>> is by now standard in implementations of most other widely used
>> programming languages but surprisingly few Schemes have it.
>
> There is a reason for that. The R5RS character operators cannot be
> made to work reliably with unicode characters. SISC for example
> supports unicode characters and arbitrary character maps, but makes no
> effort to contort the standard operators to behave properly. There
> was a usenet discussion about this in the past which you could
> probably find by googling.

I couldn't find it. I did searches under comp.lang.scheme for
unicode, utf8, utf-8, and most of the threads seemed positive (giving
implementations that support unicode in some form). I didn't see any
threads showing fundamental problems.

Anton van Straaten

unread,
Oct 28, 2003, 12:55:05 PM10/28/03
to

Perhaps you didn't make the proper offerings to the Great God Google...

Dunno if it's what Scott was thinking of, but here's a post in which Bear
describes some issues with Unicode & R5RS:
http://groups.google.com/groups?selm=3D753365.6BE29F0E%40sonic.net
Some of the earlier and later posts in that thread are also relevant.

Anton

Scott G. Miller

unread,
Oct 28, 2003, 1:11:11 PM10/28/03
to

Nah, it's not his fault; I couldn't find it either (the above is not what
I recall). I'll try to dig up the reference; it may not have been on
Usenet.

Scott

David Rush

unread,
Oct 28, 2003, 1:53:55 PM10/28/03
to
On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:
>> What is missing in DrScheme?

> What I find more troubling is some of the other Scheme wiz's disdain for


> MzScheme from the aspect of a production quality Scheme. What is it that
> THEY find missing in PLT? Do they know something that we simple Joes do
> not regarding the inner workings of MzScheme?

Well

1) I'm not a Scheme 'wiz' for any value of 'wiz'
2) I like PLT

but I don't use it. And haven't for quite a while (like since early v200).
There are a few reasons for this, some rational and some less so:

1) it's just not fast enough. I do Data Mining and IR applications in
Scheme and I'm starving for CPU cycles, even on my 2GHz+ machines

2) it was a pain to make fast. The notion of 'standalone executable', while
ostensibly supported, involved a complete rebuild of the PLT core

3) I write daemons and command-line programs and don't need GUI bells and
whistles; if I did, PLT would be right up there. Although I'm pretty
excited about SCX/Scsh, and I found programming raw XLIB under Stalin
to have a perverse attraction as well...

4) The unit system was impressive ... and intimidating. And I hated all
the extra punctuation I saw floating around inside of PLT's naming
conventions

5) MrSpidey can't handle big enough programs - and I *really* wish it did.
In fact, if MrSpidey could handle 15KLOC+ programs I would probably
start to make the effort to move back to PLT for pre-production
development. But did I mention that it's not fast enough for my crippled
486/133 at home?

6) the v200 release b0rk3d my PLT code base and the performance wasn't good
enough for me to abandon Gambit & Larceny (which my code also ran on
since I have put a lot of effort into a portable Scheme programming
infrastructure)

7) I'm really attached to Scsh's adaptation of Posix to Scheme. Where PLT
has diverged, I haven't actually found it any better.

8) PLT's library is very big...and very inbred so I can't easily chop off
parts of it to use under other, faster, Scheme implementations. So
programming in PLT becomes a painful exercise in figuring out how to
implement the PLT signatures for my production platforms.

9) PLT is a pain to install. I'm sure that the PLT folks don't think so,
but I haven't been able to get a fully-working install for quite a while
now. It doesn't use configure/make to build and it is very finicky about
file locations. Given that I *usually* need to have a multi-platform
environment, I find the lack of flexibility in PLT's installation very
irritating.

10) Very good alternatives to PLT also exist...specifically Gambit (gets
my vote for best all-round), Larceny (if only all the world was SPARC),
Bigloo (great for speed assuming you can live with its limits). And
Stalin, which is fast fast fast, but slow slow slow to compile.

Even though I am obsessed with performance, please understand that PLT is,
I think, the second-fastest interpreter out there (Petite Chez is #1). And
remember that I *do* like many things about PLT, even if it doesn't come
out when I'm whingeing. In fact, I am planning to use PLT to teach my kids
programming.

david rush
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/

David Rush

unread,
Oct 28, 2003, 2:08:48 PM10/28/03
to
On Tue, 28 Oct 2003 11:29:28 +0900, Alex Shinn <fo...@synthcode.com> wrote:
> At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>>
>> What kind of modules? How easy to use should they be? Should they
>> allow interactive use? Man, do you realize how much work has gone into
>> Scheme module systems, yet none really satisfies everybody!
>
> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics?

I don't think you realize just how outrageous this statement is.

Nevertheless I have been writing a GUMS (Grand Unified Module System) for
several *years* now, based on the theory that all module systems can be
modelled as source-to-source compilers which produce a single source module.
It works, for certain values of 'work', and if I were an academic I could
probably find the time to finish and polish it and add the major missing
module languages. If you want to help, please contact me privately
(this is a serious offer). The project is on SourceForge at

http://mangler.sourceforge.net

but be warned: building it is only straightforward for me (even with the
instructions page, I imagine), and since I have no users, I tend to get
a bit sloppy about maintaining pieces of it. This has turned out to be a
rather larger project than I thought it would be when I started, if only
because maintaining the library is a necessity I didn't foresee.

David Rush

unread,
Oct 28, 2003, 2:13:59 PM10/28/03
to
On Tue, 28 Oct 2003 06:49:52 GMT, Ray Dillinger <be...@sonic.net> wrote:
> "Bradd W. Szonye" wrote:
> I've been thinking about writing a portable "module mangler."

Ray - I've been working on this for years. That's what S2 is all
about. It does work, but I just don't have the time to keep the
docs (and libs) up to date.

> It would read from disk a bunch of scheme files with some kind
> of standard module syntax, and output a single honkin-large
> scheme file (maybe in a temporary directory) that puts them
> all together with separate namespaces kept separate, and
> strictly-controlled scope for macros, and so on.

That's exactly what I do. I've got the hooks in for alpha-renaming
top-level symbols, but I've never had the need to fully productize
the code. You want to help? I'll happily help you get your first
builds going (bootstrapping the animal is a bit tricky).

> What do people think of the idea?

Obviously I think it's brilliant. I just have a day job, so my version
seems doomed to live in the twilight of my needs...

http://mangler.sourceforge.net
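The mangling step described above can be caricatured in a few lines. This toy Python sketch is not S2 or the mangler itself; the `flatten` helper, the module-naming convention and the `name:ident` prefixing scheme are all invented here purely for illustration of the alpha-renaming idea:

```python
import re

def flatten(modules):
    """Concatenate several 'module' sources into one file, alpha-renaming
    top-level definitions with a module prefix so namespaces stay apart."""
    out = []
    for name, src in modules.items():
        # Collect top-level definition names (a toy regex, not a real parser).
        for ident in set(re.findall(r"\(define\s+\(?([\w!?-]+)", src)):
            # Alpha-rename: prefix every occurrence with the module name.
            src = re.sub(r"\b%s\b" % re.escape(ident),
                         "%s:%s" % (name, ident), src)
        out.append(src)
    return "\n".join(out)

print(flatten({
    "m1": "(define (hello) 1) (hello)",
    "m2": "(define (hello) 2) (hello)",
}))
```

A real implementation of course needs hygienic handling of macros and scope, which is exactly where the hard work in a project like this lies.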

David Rush

unread,
Oct 28, 2003, 2:26:11 PM10/28/03
to

What I do about that is plonk the FFI-specific parts of the code
into cond-expand blocks. It seems to work pretty well for me, anyway, but
then I'm generally not going much beyond POSIX.

Bradd W. Szonye

unread,
Oct 28, 2003, 3:17:47 PM10/28/03
to
Anton van Straaten <an...@appsolutions.com> wrote:
> Dunno if it's what Scott was thinking of, but here's a post in which Bear
> describes some issues with Unicode & R5RS:
> http://groups.google.com/groups?selm=3D753365.6BE29F0E%40sonic.net
> Some of the earlier and later posts in that thread are also relevant.

That article deals with Unicode support in Scheme code. There's also the
issue of Unicode support for data. The former problem is thornier than
the latter, because supporting Unicode in Scheme code includes all the
problems of Unicode in data *plus* the special considerations necessary
for a case-insensitive programming language.

Bear's overview is good, but he missed an alternative:

Use the Unicode algorithms for case-folding equivalence. When the result
is ambiguous, signal an error. Give the programmer a way to resolve
ambiguities. Example:

A program written in German contains the identifiers "masse" and
"maße." If only one of the two identifiers is in scope, "MASSE"
refers to the one that's in scope. If both are in scope, "MASSE" is
ambiguous.

How does a programmer resolve the ambiguity? The simpler method is to
simply disallow ambiguous uses. The programmer must not use "MASSE" when
both "masse" and "maße" are in scope. A more sophisticated method could
allow a way to specify which identifier "MASSE" is supposed to be
equivalent to.
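The masse/maße collision can be made concrete with Unicode case folding. A small Python sketch (Python's str.casefold implements Unicode full case folding; the identifiers are the ones from the example above):

```python
# Unicode full case folding maps "ß" to "ss", so these two distinct
# identifiers fold to the same key -- exactly the ambiguity that
# "MASSE" triggers when both are in scope.
idents = ["masse", "maße"]
folded = [i.casefold() for i in idents]
print(folded)                           # ['masse', 'masse']
assert folded[0] == folded[1]           # two spellings, one folded form
assert "MASSE".casefold() == folded[0]  # so "MASSE" matches both
```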

Unfortunately, this can violate the principle of least surprise. Suppose
that only "maße" is in scope. The programmer writes

(lambda (MASSE) ... maße ...)

intending to bind MASSE but not maße. Unfortunately, this method shadows
the free variable "maße" because it's "unambiguous." I don't expect that
this would be a common problem, but it would be nasty when it did
happen. And case-folding isn't the only situation where that comes up.
For example, consider the words "resume" and "rèsumé" under English
collation rules. Depending on context, they may or may not be the same
word. There's a more general problem here: Identifiers that are
ambiguous even without case transformations.

Sometimes, identifiers are ambiguous even when they're spelled
identically. For example, try writing a "resume" (curriculum vitae)
class with "resume" (coroutine yielding) semantics. Oops, there's an
identifier collision! That's a thorny problem all on its own, and
locale-dependent identifiers just make it thornier.

Any identifier clash will tend to violate the principle of least
surprise. Case-folding and other locale-dependent forms of equivalence
just make it more surprising. To the human eye, the addition of accents
sufficiently disambiguates "resume" and "résumé," but to a compiler,
they're just as ambiguous as they are without the accents. That mismatch
between what the human sees and what the machine sees is what adds to
the surprise.

I understand why Bear chose the resolution he did -- simply don't permit
any ambiguous characters -- but unfortunately it doesn't address the
underlying problem.

Of course, even if you can deal with that problem, there's still the
problem of combining code from two different languages, with different
concepts of "equivalent symbols"! It's not too surprising that many
languages just punt on this issue and say, "Different spellings mean
different symbols."

Grzegorz Chrupała

unread,
Oct 28, 2003, 4:18:01 PM10/28/03
to
Bradd W. Szonye wrote:
> Use the Unicode algorithms for case-folding equivalence. When the result
> is ambiguous, signal an error. Give the programmer a way to resolve
> ambiguities. Example:
>
> A program written in German contains the identifiers "masse" and
> "maße." If only one of the two identifiers is in scope, "MASSE"
> refers to the one that's in scope. If both are in scope, "MASSE" is
> ambiguous.

Use of Unicode characters in identifier names is largely irrelevant.
Unicode is essential in many applications such as NLP or XML processing,
but it is needed to deal with *data* mainly (characters, strings, symbols),
not identifier names. The potential ambiguity between maße and MASSE as a
variable name is a non-issue. For variable-name case folding, just use the
standard Unicode case mapping, where (char-upcase #\ß) is just #\ß, and be
done with it.

It is red herrings such as the above that mislead people into thinking that
Unicode support on a basic level is more complicated than it really is.
--
Grzegorz
http://pithekos.net

Bradd W. Szonye

unread,
Oct 28, 2003, 4:52:29 PM10/28/03
to
Grzegorz Chrupała <grze...@pithekos.net> wrote:
> Use of Unicode characters in identifier names is largely irrelevant.

That's why I initially mentioned the difference between Unicode support
for data and Unicode support for program code (e.g., identifiers). The
rest of my article was in response to Bear's earlier discussion of the
latter.

> For variable-name case folding, just use standard Unicode case
> mapping, where (char-upcase #\ß) is just #\ß and be done with it.

Is that actually true? If so, I'd consider that a defect in Unicode,
because the correct spelling of "capital esszed" is "SS." And besides,
case-folding is only part of the problem, because it's only one example
of different but equivalent spellings.

> It is red herrings such as the above that mislead people into thinking
> that Unicode support on a basic level is more complicated than it
> really is.

Unicode support for data is fairly tricky on its own. Many languages
choose not to complicate things by applying the data rules to code. For
example, C++ permits a wide variety of Unicode characters in data and in
code, but it does not attempt locale-dependent equivalence for code --
every different spelling is a different identifier.

However, Schemers like it when the same rules apply to code and data
both. Also, programmers in any case-insensitive language like it when
identifiers "do the right thing" in non-English languages. That's why
any discussion of extended character sets is likely to stray into a
discussion of identifier equivalence.

Grzegorz Chrupała

unread,
Oct 28, 2003, 6:05:58 PM10/28/03
to
Bradd W. Szonye wrote:

> Grzegorz Chrupała <grze...@pithekos.net> wrote:
>> For variable-name case folding, just use standard Unicode case
>> mapping, where (char-upcase #\ß) is just #\ß and be done with it.
>
> Is that actually true? If so, I'd consider that a defect in Unicode,
> because the correct spelling of "capital esszed" is "SS." And besides,
> case-folding is only part of the problem, because it's only one example
> of different but equivalent spellings.

The basic non-locale-dependent case mapping of ß is ß. There is a
SpecialCasing table which deals with characters such as ß where case
mappings are not simple 1-1 character correspondences.
(http://www.unicode.org/Public/UNIDATA/)

>
> Unicode support for data is fairly tricky on its own. Many languages
> choose not to complicate things by applying the data rules to code. For
> example, C++ permits a wide variety of Unicode characters in data and in
> code, but it does not attempt locale-dependent equivalence for code --
> every different spelling is a different identifier.
>
> However, Schemers like it when the same rules apply to code and data
> both. Also, programmers in any case-insensitive language like it when
> identifiers "do the right thing" in non-English languages. That's why
> any discussion of extended character sets is likely to stray into a
> discussion of identifier equivalence.

"Doing the right thing" in the general case, in a fully locale-sensitive
way, is indeed complicated, if possible at all. IMO the rules for identifiers
should be well-defined and simple, as well as consistent with the treatment
of strings on the basic level, i.e. they should use the general,
non-locale-dependent case mappings.
When dealing with data, one could choose to use more refined,
locale-dependent mappings, algorithms, etc. as needed.

As I see it, it is enough if the core language provides core
Unicode-compatible functionality, including a way to read and write
UTF-8- and UTF-16-encoded text, distinguish characters and bytes, get the
length of a string in characters and bytes, and provide standard Unicode
case mappings, sorting, and Unicode-aware standard character predicates
such as char-whitespace?, etc. Anything beyond that can be more or less
easily added in libraries or defined by the user as needed.
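The character/byte distinction in that list is easy to illustrate. A minimal Python sketch (Python's str is a sequence of characters; bytes only appear after an explicit encode):

```python
s = "maße"                     # 4 characters
utf8 = s.encode("utf-8")       # "ß" takes two bytes in UTF-8
utf16 = s.encode("utf-16-le")  # each of these chars is one 16-bit unit
print(len(s))      # 4 -- length in characters
print(len(utf8))   # 5 -- length in bytes under UTF-8
print(len(utf16))  # 8 -- length in bytes under UTF-16
```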

Cheers,
--
Grzegorz
http://pithekos.net

R Racine

unread,
Oct 28, 2003, 6:42:02 PM10/28/03
to

> On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
>> What I find more troubling is some of the other Scheme wiz's disdain
>> for MzScheme from the aspect of a production quality Scheme. What is
>> it that THEY find missing in PLT? Do they know something that we simple
>> Joes do not regarding the inner workings of MzScheme?

[2nd response attempt. 1st Vaporwared, I guess.]

I did not phrase that very well. Sorry.

Here are a couple of ways to get to Sifsad.

Axiom
-----
There is a general consensus amongst the Scheme user/application
developer community regarding the direction in which the PLT group has
expanded Scheme beyond R5RS: units, modules, general libraries, etc. Across
the board, taking everything into account, we (most, not all, general
Scheme users and app developers) are pretty happy with the PLT philosophy of
Scheme.

However, there is also recognition that PLT fails to deliver the speed
necessary for it to become the mainstream implementation for general
application development.


Strawman Proposal #1
--------------------

The Scheme community joins in to assist PLT with aggressively enhancing
mzc performance to be on par with the bulk of the other Scheme -> C
compilers.

- This assumes that mzc can be significantly improved, as the main
priorities of the PLT group are teaching and language research, resulting
in the current overall efficiency of mzc not being what it could be.
- This is what I was attempting to address above. When I have
previously proposed this plan to "wizs", those with the technical chops to
work with compiler optimizations, the response can best be categorized as
"can't be done", without much follow-on detail (beyond continuation-capture
efficiency, which is not a killer for something that is targeting
most general application development needs).

Strawman Proposal #2
--------------------

Assuming that the PLT system cannot be improved to the performance levels
of other Scheme -> C systems because the basic architecture of the PLT
system was based on priorities other than speed, the Scheme community
adopts an existing, fundamentally strong, fast Scheme -> C implementation
with the goal of attaining 100% code compliance with PLT (as far as is
reasonable).

The goal would be to have near identical collections shared between the
two implementations. "Write once, run on both", as it were.


I claim either one of these solutions would be a bit of a godsend to the
majority of bread-and-butter Scheme users who would like to use Scheme for
general application development.

Of course those with "specialized" applications will choose alternative
implementations that emphasize aspects vital to their application.


Ray

Jens Axel Søgaard

unread,
Oct 28, 2003, 6:44:49 PM10/28/03
to
Bruce Stephens wrote:

> Jens Axel Søgaard <use...@jasoegaard.dk> writes:
>>What is missing in DrScheme?

> Bindings to Gtk/GNOME and other random useful libraries?

What are the benefits of Gtk/GNOME over the current portable (Windows,
Unix, Macintosh) GUI already in DrScheme?


Which other useful libraries are you thinking about?

> Speed?

I wanted to hear what mzscheme misses compared to Perl/Python
and as far as I can tell, mzscheme has no problem in the speed
department.

[The existence of the *very* fast Scheme compilers does not imply
that mzscheme is slow]

> Perhaps there are such bindings, and I just don't know where to look
> for them. It's true that speed isn't the main priority for the
> DrScheme family, though, isn't it?

Yes - but that doesn't mean it is slow.

>>In other languages (e.g. Python/Perl) you are pretty much stuck with
>>one implementation.

> But that's OK, because although it is a compromise, it's a reasonable
> one, and because there's only the one, there's an enormous library of
> extensions and code that I can use.

Then find a Scheme that makes the same compromises as in Python/Perl
and use that. Ignore the rest.

> There's lots of scheme code, too,
> but each blob of code that I find will take a few hours of work to
> massage to work with the implementation that I've chosen to use (with
> its particular combination of module system and so on).

My experience is that the authors are often willing to do the porting,
if they are asked.

>>Perhaps a better idea was to begin making an FFI-SRFI?
>
>
> Probably. On the other hand, if it were that easy, someone would
> already have done it.

I didn't say it was easy. Far from it. Lars Hansen has done some legwork,
though.

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Oct 28, 2003, 6:46:11 PM10/28/03
to
Bradd W. Szonye wrote:

> BTW, the development environment was actually a drawback for me -- I'm a
> hardcore vim & Makefiles kinda guy. (In fact, I wrote comprehensive vim
> syntax-highlighting rules for PLT Scheme. I was originally supposed to
> take over maintenance/development from the original author, but I never
> got around to finishing and publishing my rules, because there were some
> performance issues that I never quite worked out.)

?

Why didn't you just ignore DrScheme and use mzscheme?

--
Jens Axel Søgaard


Jens Axel Søgaard

unread,
Oct 28, 2003, 6:55:03 PM10/28/03
to
R Racine wrote:
> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:

>>What is missing in DrScheme?

> Not too much AFAIAC. On a personal level if I list the top 3 things that
> have blown me away in the Scheme impl world:
>
> MIT Scheme: The ground breaking work done here. You see MITScheme code,
> concepts and ideas in many of the current Scheme implementations. It
> is/was the fountainhead.
>
> PLT Scheme: An almost endless stream of what Scheme is capable of.
> Unit/Sigs, Languages , inheritable Structures, Contracts, the Syntax
> concept, opaque types, module system ... You can just randomly click
> about the help system and almost stumble into whole new concepts.
>
> Another example from MzScheme. From Eli's Swindle. I saw that Swindle
> had somehow added support for self-evaluating symbols which start with a
> colon. When I installed Swindle, I didn't recall any patching or
> recompiling. So hey, how'd he do that? So I looked.

[Very clever example]

Yes, I also love the very high level of flexibility.
It is perfect for defining new languages without having
to write a compiler from scratch.

> I digress. What is missing in DrScheme? Overall I love it. Mainly a
> Sifsad focus. The system, DrScheme, has an intentional pedagogical focus.
> My concerns, efficient memory usage, an optimized VM, speed, debugging, are
> not their focus.

I don't agree that debugging is not in focus. Part of a pedagogical
environment is producing precise error messages for the user.

Specifically DrScheme has

- stack traces
- arrows on top of the source to show calling sequence
- syntax coloring of live code
- a tool for building test suites
- an algebraic stepper (mostly for beginners though)

> The mzc compiler is not on par with some of the other
> Scheme->C systems out there. Is there an inherent architectural tradeoff
> which prevents mzc from approaching Chicken or Bigloo in speed? Don't
> know. If two or three Scheme wizs announced this very night that they
> were going to join the PLT team with a Sifsad-prioritized feature list, I
> would do a handspring and take up organized religion.

If you compare the speed of mzc executables to Perl and Python what are
your conclusions?

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Oct 28, 2003, 7:00:15 PM10/28/03
to
David Rush wrote:
> On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
>> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:
>>> What is missing in DrScheme?

> but I don't use it. And haven't for quite a while (like since early v200).


> There are a few reasons for this, some rational and some less so:

[Relevant speed reasons snipped - I am interested in the other reasons]

> 5) MrSpidey can't handle big enough programs - and I *really* wish it did.
> In fact, if MrSpidey could handle 15KLOC+ programs I would probably
> start to make the effort to move back to PLT for pre-production
> development. but did I mention that it's not fast enough for my crippled
> 486/133 at home?

I have actually never tried MrSpidey - but you can't seriously
list that as a reason, since the competing languages don't have
similar tools.

> 7) I'm really attached to Scsh's adaptation of Posix to Scheme. Where PLT
> has diverged, I haven't actually found it any better.

POSIX. That would indeed be a good thing to have better support for.

> 8) PLT's library is very big...and very inbred so I can't easily chop off
> parts of it to use under other, faster, Scheme implementations. So
> programming in PLT becomes a painful exercise in figuring out how to
> implement the PLT signatures for my production platforms.

Again, I am narrow-mindedly comparing to Perl/Python today, so that
doesn't apply.

> 9) PLT is a pain to install. I'm sure that the PLT folks don't think so,
> but
> but I haven't been able to get a fully-working install for quite a while
> now. It doesn't use configure/make to build and it is very finicky about
> file locations. Given that I *usually* need to have a multi-platform
> environment I find the lack of flexibility in PLT's installation very
> irritating.

Hm. A valid concern.

> Even though I am obsessed with performance, please understand that PLT is
> I think the second-fastest interpreter out there (Petite Chez is #1). And
> remember that I *do* like many things about PLT, even if it doesn't come
> out when I'm whingeing. In fact, I am planning to use PLT to teach my kids
> programming.

How old are they? You could start by showing them the turtles in
DrScheme. That's great fun.

--
Jens Axel Søgaard

Bradd W. Szonye

unread,
Oct 28, 2003, 7:09:31 PM10/28/03
to
> Bradd W. Szonye wrote:
>> BTW, the development environment was actually a drawback for me --
>> I'm a hardcore vim & Makefiles kinda guy.

Jens Axel Søgaard <use...@jasoegaard.dk> wrote:
> Why didn't you just ignore DrScheme and used mzscheme?

That's what I do.

Jens Axel Søgaard

unread,
Oct 28, 2003, 7:15:56 PM10/28/03
to
Bradd W. Szonye wrote:
>>Bradd W. Szonye wrote:

>>>BTW, the development environment was actually a drawback for me --
>>>I'm a hardcore vim & Makefiles kinda guy.

> Jens Axel Søgaard <use...@jasoegaard.dk> wrote:
>>Why didn't you just ignore DrScheme and use mzscheme?

> That's what I do.

So what's the drawback?

--
Jens Axel Søgaard

R Racine

unread,
Oct 28, 2003, 7:56:59 PM10/28/03
to
On Wed, 29 Oct 2003 00:55:03 +0100, Jens Axel Søgaard wrote:

> If you compare the speed of mzc executables to Perl and Python what are
> your conclusions?

Hands down, mzc wins. However, I do not consider Python and Perl serious
application development languages. In the arena of scripting and small or
one-off applications, IMHO mzc/mzscheme is clearly superior. No contest.

But I would like to see mzc/mzscheme move from champ of the middleweight
division to heavyweight contender. For me this means aggregate
benchmark suite performance on par (let's say within a factor of 2x)
with SML/NJ, CMUCL, or C++ and most of the other Scheme -> C systems.

Anecdotal story: recently, while the "Coins" discussion was taking place
on c.l.s, the author of one of the major Scheme->C systems was on the
#scheme IRC. I believe both of us were surprised at how competitive mzc was
vs. a well-respected Scheme -> C system. Mzc didn't win, but did well. (I
believe the GMP bindings for large exacts and how cleverly large exacts
are implemented in MzScheme account for its very respectable showing.)

I expect (guessing here) mzc would be less competitive on boyer.scm for
example.

Ray

Matthias Felleisen

Oct 28, 2003, 8:04:48 PM10/28/03
R Racine wrote:

Some large company located near the northwestern corner of the continental US
has sponsored Will Clinger (Larceny) and PLT to create a merger of the two
Scheme systems not unlike a mix of the strawman proposals that you have put up
below.

The specific plan is as follows:
- Will and some others are retargeting Larceny to the intermediate language of
said company's virtual machine. Will has been calling this project Common
Larceny.
- Joe Marshall and some others are porting MrEd to said company's toolbox.
The result could be a MrEd that's almost completely in Scheme.
Will is certainly encouraging us to think of Scheme as a systems language.
Eli's arrival has strengthened this goal even more.
- Once we have a joint Scheme, we are hoping to retarget it to other platforms.

How realistic is the plan? Producing Larceny was a two-man effort. It's a fast,
reliable R5RS implementation with a few extra goodies. It is particularly
well-suited for the research ideas that Will wishes to pursue.

PLT Scheme is a many-person, many-year effort: Matthew (mzscheme), Robby
(drscheme), Shriram (zodiac, server, libs), Cormac (mrspidey), Philippe (mrflow
= mrspidey successor), Paul Steckler (myster, sister, mzcom), John (the foot,
and soon a debugger), Paul Graunke (the server, soon to be managed by Greg),
Scott (parser tools), and countless others who are working and/or have worked on
bits and pieces of the tool suite, not to mention their "day jobs". It is an
expensive product.

Merging the two projects is not an easy task. It won't be done quickly. If
people really want a top-notch product, however, it may be the route to go.
If you have time or money to contribute, or you want to volunteer friends,
please do so. The goal is to produce a good platform for the first Schemers
and the rest of the world, too.

-- Matthias