
Whither now, Oh Scheme.


R Racine

Oct 26, 2003, 9:45:33 PM
We are amidst a computer programming language renaissance: new languages,
and lots of new users to play with them. Daily, it seems.

12-18 months ago the #scheme IRC channel on freenode.net was essentially
abandoned: often just myself or one other, on occasion a crowd of 3. As I
type, on a Sunday evening, there are 40 users.

Discussions on #scheme are generally threefold in nature: SRFIs, as
several SRFI authors are frequently on; a bit of homework or Scheme-newbie
assistance; and, as one might imagine, a good deal of comparative
implementation discussion.

There seems to be a consensus Scheme(s) for every situation except one:
serious application development. And a number of Schemers are interested
in doing just that. A case can be made that a SIFSAD (Scheme Intended For
Serious Application Development) does not exist today; what is worse, it
is doubtful one will exist tomorrow.

But suppose there was a plan for SIFSAD, a roadmap for a Scheme Intended
For Serious Application Development. What would it look like? You would
have to start from somewhere and have a destination in mind; the path then
becomes just a bit of machete work.

Assuming the best opportunity for Sifsad is an evolutionary one from the
core of an existing Scheme implementation, here are some hypothetical
Sifsad bios.

Scheme 48 / PreScheme compiler - The PreScheme compiler is resurrected,
initially emitting C code. Later, native emitters for Itanium/AMD 64-bit
systems are added.

PLT/MzScheme - mzc compiler is enhanced with aggressive optimizations.
MzScheme becomes not only one of the functionally richest implementations
but the fastest as well.

Chicken/Bigloo/Gambit/Larceny/Scheme->C et al. - Consensus is reached on one
code base; the remaining authors, recognizing the will of the Scheme
community, work to add the best features of each into the common code base.
The resulting Scheme->C compiler is widely regarded as the best HLL compiler
available.

Chez Scheme - Individual licenses are made available at reasonable cost.
Source is GPL'd for non-commercial use.

MITScheme - The port to new 64-bit systems is successfully achieved. A
module system and syntax-case support are added. With memory constraints
lifted, development of lightning-fast, large-memory-footprint applications
becomes possible in an incremental compilation environment.


Whither now, Scheme.

felix

Oct 27, 2003, 2:25:47 AM
"R Racine" <r...@adelphia.net> wrote in message news:<pan.2003.10.27....@adelphia.net>...

>
> There seems to be a consensus Scheme(s) for every situation except one,
> serious application development. And a number of Schemers are interested
> in doing just that. A case can be made that a SIFSAD (Scheme Intended For
> Serious Application Development) does not exist today, what is worse, it
> is doubtful one will exist tomorrow.

Could you elaborate on that? Why do you think (say) Bigloo or PLT
might not be suitable for serious app development?

>
> But suppose there was a plan for SIFSAD, a roadmap for a Scheme Intended
> For Serious Application Development, what would it look like. You would
> have to start from somewhere, have a destination in mind, the path
> becomes just a bit of machete work.
>
> Assuming the best opportunity for Sifsad is an evolutionary one from the
> core of an existing Scheme implementation, here are some hypothetical
> Sifsad bios.
>
> Scheme 48 / PreScheme compiler - PreScheme compiler is resurrected,
> initially emitting C code. Later native emitters Itanium/AMD 64 bit
> systems were added.

PreScheme might not be anyone's favorite Scheme dialect.

>
> PLT/MzScheme - mzc compiler is enhanced with aggressive optimizations.
> MzScheme becomes not only one of the functionally richest implementations
> but the fastest as well.

Interesting alternative. But Mzc still has to provide clean interfacing
to the MzScheme runtime system, which is not really tuned for
maximum performance, but for other things (debuggability, ease of use,
robustness, etc.)

>
> Chicken/Bigloo/Gambit/Larceny/Scheme->C et al. Consensus is reached on one
> code base, remaining authors, recognizing the will of the Scheme community
> work to add the best features of each into the common code base. The
> resulting Scheme->C compiler is widely regarded as the best HLL compiler
> available.

(BTW, Larceny is not a Scheme->C compiler)

So that would mean we reduce all Scheme->C compilation strategies down
to the lowest common denominator:

- drop Chicken's fast continuations
- drop Gambit's (forthcoming) very efficient multithreading system
- drop Bigloo's/Scheme->C's direct compilation style and make it a CPS compiler
(you want 1st class continuations and TCO, right?)

What you will get is a Scheme implementation that is either unusable,
incomplete or inefficient.

>
> Chez Scheme - Individual licenses are made available at reasonable cost.
> Source is GPL'd for non-commercial use.

Hm. Can't say much about that...

>
> MITScheme - Port to new 64 bit systems is successfully achieved. Module
> system, syntax-case support is added. With memory constraints lifted,
> development of lightning fast, large memory footprint application are
> possible in an incremental compilation environment.

What many people don't realize is that there CAN BE NO SINGLE ALL-POWERFUL
SCHEME implementation. Tradeoffs have to be made, unless you want to
produce a mediocre one. Chicken (for example) will never beat Bigloo, in
terms of raw performance, yet Bigloo's (or PLT's) continuations are
awfully inefficient. Damn, it's even impossible to pin down a single
perfect implementation strategy (Cheney-on-the-MTA? Direct style?
Trampoline style? Bytecode VM? Threaded VM?). What GC? Conservative?
Ref. counting? Stop-and-copy? Mark-and-sweep? Which is best? Or,
more importantly, which is best for *all* applications? None, I'd say.

Several Scheme implementations are more than adequate for serious development,
and people use them for that. In fact, Schemes generally provide better
performance and often have better foreign function interfaces than
languages like Python, Ruby or Perl, which seem to be well accepted for serious
stuff. Scheme is more rigorously defined, is better suited to
compilation and provides incredibly powerful syntactic abstractions.

It *is* easy to get lost in the number of implementations, and many
of those are somewhat half-finished, partly because it's so easy
to whip up a simple Scheme, yet this has absolutely nothing to do
with Scheme not being ready for development of real-world code.


cheers,
felix

felix

Oct 27, 2003, 4:49:14 AM
fe...@proxima-mt.de (felix) wrote in message news:<e36dad49.03102...@posting.google.com>...

>
> (BTW, Larceny is not a Scheme->C compiler)
>

Or is petite larceny already available?
It seems it isn't, but I may be wrong.


cheers,
felix

R Racine

Oct 27, 2003, 8:28:37 AM

On Sun, 26 Oct 2003 23:25:47 -0800, felix wrote:

> What many people don't realize is that there CAN'T BE NO SINGLE
> ALL-POWERFUL SCHEME implementation. Tradeoffs have to be made, unless
> you want to produce a mediocre one. Chicken (for example) will never
> beat Bigloo, in terms of raw performance, yet Bigloo's (or PLT's)
> continuations are awfully inefficient. Damn, it's even impossible to pin
> down a single perfect implementation strategy (Cheney-on-the-MTA? Direct
> style? Trampoline style? Bytecode VM? Threaded VM?). What GC?
> Conservative? Ref. counting? Stop-and-copy? Mark-and-sweep? Which is
> best? Or, more importantly, which is best for *all* applications? None,
> I'd say.
>
> Several Scheme implementations are more than adequate for serious
> development and people use it for that. In fact, Schemes generally
> provide better performance and often have better foreign function
> interfaces than languages like Python, Ruby or Perl, which seem to be
> well accepted for serious stuff. Scheme is more rigorously defined, is
> better suited to compilation and provides incredibly powerful syntactic
> abstractions.
>
> It *is* easy to get lost in the number of implementations, and many of
> those are somewhat half-finished, partly because it's so easy to whip up
> a simple Scheme, yet this has absolutely nothing to do with Scheme not
> being ready for development of real-world code.
>
>
>

In my previous post I mentioned a threefold path to Nirvana: determine a
starting point, define an endpoint, get the machete ready. To properly
select an implementation to evolve into Sifsad, it only makes sense to
select the implementation that is best to build off of. There is a
distinct chance that the "best" implementation to move forward with is not
even one of the top 2 or 3 implementations used today.

So a priori, agreed, no debate: compromises must and will occur. However,
I will debate a) whether it is possible to effectively determine which
tradeoffs to select IF the end goal is adequately defined, b) whether
compromises can be ameliorated by modular code design, and c) whether such
tradeoffs inevitably result in mediocrity.

For example, the end goal is defined:
- Speed of application. Very important.
- Efficient use of large amounts of memory. Very important.
- Full debugging. Continuation restarts ???
- Core fullblown MOP. Highly optimized dispatch.
- Modules, standalone compilation, interfaces/signatures (also parametric
  interfaces/signatures) and runtime-determinable implementations. [Imagine
  the SRFI-44 debate on the definition of a collections library in the light
  of SIG/UNITs or Scheme48/Chez interfaces or SML sigs....]
- Standalone, static exe capability.
- Real multithreading capable of utilizing multiple processors.
- ... and so on and so forth.

The point is, define the goal and tradeoffs become a debate in the context
of what is necessary to achieve the goal.

Another point: Larceny [which, as you correctly pointed out, is not just a
Scheme->C system; later tonight I intend to post on why proposing Larceny
makes sense] has 5-6 different GC systems. The Larceny core is very well
designed and supports pluggable GC systems. What is the penalty for this
flexibility? I doubt the efficiency of the Twobit-compiled code is
impacted. PLT also has 2 GC/VM systems. Such things can be abstracted in
the code base to support multiple solutions and pluggability with minimal
impact.

Bottom line, I believe it IS possible to allow for flexible pluggable
strategies to many of the issues you raised such as various VM strategies.

Couldn't you, being well versed in the Cheney-on-the-MTA approach, either
show that this approach is decidedly superior to the MzScheme approach, or
that it is a must-have option in Sifsad, and then assist in adding it to
MzScheme? (Assuming MzScheme makes sense as the base system.)

Must two or more(!) Scheme distributions exist, complete with different
runtimes and libraries, predicated on the single point of bifurcation as
to how continuation capture is occurring??!!

In the context of doing comparative analysis via two small experimental
systems, yes. In the world of the application developer, where the method
of continuation capture is invisible, it is decidedly not justification
for forking two full-blown Scheme systems. Just capture the damn things, make
them stable, make them fast and MAKE IT ONE Scheme System. Thank you very
much.


Regards,


Ray

Scott G. Miller

Oct 27, 2003, 9:21:52 AM

> In the context of doing comparative analysis via two small experimental
> systems yes. In the world of the application developer where the method
> of continuation capture is invisible, it is decidedly not justification
> for forking two blown Scheme systems. Just capture the damn things, make
> it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> much.

I hope I'm wrong, but it seems you have a simplified view of Scheme
architectures. Continuation capture is probably *the* fundamental
feature that drives selecting the implementation strategy. One cannot
have a modular continuation-capture implementation. That's why systems
with slow call/cc are unlikely to get much better without rearchitecting
themselves at a low level.

If I'm using continuations heavily, I'm going to want to choose an
implementation with that property. If I'm not using them at all, but I
demand high performance otherwise, then I'm likely to make a completely
different choice. It's these sorts of trade-offs which make Sifsad a bad
idea. You should ask yourself what the real problem is that prevents
serious application development. I would argue that it's the lack of a
large (standard? maybe) library. This means covering things such as
usable GUI toolkits, extensive database connectivity, mature threading,
networking, datastructures... the sort of things career programmers take
for granted from the platform libraries of C++ or Java.

The fallacy is believing this is only possible if we standardize on one
Scheme.

Scott

Ray Dillinger

Oct 27, 2003, 12:30:59 PM
"Scott G. Miller" wrote:
>
> > In the context of doing comparative analysis via two small experimental
> > systems yes. In the world of the application developer where the method
> > of continuation capture is invisible, it is decidedly not justification
> > for forking two blown Scheme systems. Just capture the damn things, make
> > it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> > much.
>
> I hope I'm wrong, but it seems you have a simplified view of Scheme
> architectures. Continuation capture is probably *the* fundamental
> feature that drives selecting the implementation strategy. One cannot
> have a modular continuation capture implementation. Thats why systems
> with slow call/cc are unlikely to get much better without rearchitecting
> themselves at a low level.

Partly.... A scheme that compiles to a well-designed intermediate
form could have two back-ends: one that heap-allocates and garbage
collects call frames, and one that uses the hardware stack. These
back-ends would generate code that obeyed two different runtime
models, but there's also a "tail" end -- peephole optimization of
machine code -- that could be shared between them. The runtime
symbol table and associated code could also be shared between the
two models.

So you'd wind up duplicating maybe half of a simple compiler to
accommodate the fundamentally different designs. And effort spent
on the crankiest and most bottomless, nonportable areas -- machine
code and cache optimization -- would be sharable. By the time
you'd done aggressive optimizations and ported to a half-dozen
different hardware/OS combinations, the duplicated effort might
be a tenth or less of the compiler.

From a compilation point of view, it's easy to scan scheme code and
see if you can find places where call/cc is ever used. You could
make a first-order choice of which backend to invoke just by
checking for it. But the right thing to do would be to profile
it at the intermediate-code level and make a hard assessment of
which model is a "win" for the given program.
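
To make that first-order check concrete, here's a naive sketch (purely my
own illustration, not from any existing compiler): walk a program's
s-expressions and report whether call/cc ever appears. It's purely
syntactic -- it misses rebindings and uses hidden behind macros, which is
exactly why the profiling approach above is the better answer.

(define (uses-call/cc? x)
  ;; truthy if call/cc or its long name occurs anywhere in the form x
  (cond ((symbol? x)
         (memq x '(call/cc call-with-current-continuation)))
        ((pair? x)
         (or (uses-call/cc? (car x))
             (uses-call/cc? (cdr x))))
        (else #f)))

(define (file-uses-call/cc? filename)
  ;; read every top-level form in the file and check each one
  (call-with-input-file filename
    (lambda (port)
      (let loop ((form (read port)))
        (cond ((eof-object? form) #f)
              ((uses-call/cc? form) #t)
              (else (loop (read port))))))))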

Much of what would need to be done can only be done in scheme as
a result of whole-program optimization. And that means getting
program code away from the REPL, because as long as you have the
REPL in the system, you absolutely cannot prove that something
isn't going to be redefined or mutated. It also means very
serious support for optional declarations to eliminate unnecessary
typechecks and very serious support for memory and CPU profiling.

Finally, we really *really* need a linkable object file format
that we don't have to go through an FFI for. FFI's distort or
contort the meaning of scheme code; they introduce special cases,
cause wraparound or length errors in integers, truncate complex
numbers, create exceptions to garbage collection handling, and
wreak all kinds of misfits with the runtime model. We paper over
the problems reasonably well, but still they never quite work
right. When scheme programs link to scheme libraries they shouldn't
need to use braindead C calling conventions.


> The fallacy is believing this is only possible if we standardize on one
> Scheme.

I think maybe there needs to be a 'SISFAD' standard, above and
beyond R5RS, that specifies a lot of things R5RS doesn't specify.
I'd like to see a bunch of people implement it, much as a bunch
of people have implemented R5RS.

A SISFAD standard would expressly forbid some of the things that
make some schemes unusable for serious application development,
like limits on the memory size (guile and MIT scheme have this
problem particularly badly) and failure to support the full
numeric tower. It would specify a format for libraries portable
across all implementations of SISFAD, define which R5RS and other
functions are found in what libraries, define a set of OS calls
accessible through libraries, and straighten out a few things
like binary I/O primitives for pipes, sockets and files.

It would specify the syntax of performance declarations, but the
only requirement of implementations should be that they must not
barf on the syntax -- actually using it for performance enhancement
is a plus, but not barfing on it is crucial.
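
As a throwaway illustration of the "must not barf" requirement (the
declaration names below are made up, not from any standard), a
non-optimizing implementation could satisfy it with a do-nothing macro,
while an optimizing one would give the same form real meaning:

;; non-optimizing implementations just swallow the declarations
(define-syntax declare
  (syntax-rules ()
    ((_ property ...) (if #f #f))))    ; expands to an unspecified no-op

;; hypothetical usage:
(declare (fixnum-arithmetic) (no-interrupts))
(define (sum-vector v)
  (let loop ((i 0) (acc 0))
    (if (= i (vector-length v))
        acc
        (loop (+ i 1) (+ acc (vector-ref v i))))))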

Bear

Anton van Straaten

Oct 27, 2003, 1:03:16 PM
Ray Dillinger wrote:

> "Scott G. Miller" wrote:
> > The fallacy is believing this is only possible if we standardize
> > on one Scheme.
>
> I think maybe there needs to be a 'SISFAD' standard, above and
> beyond R5RS, that specifies a lot of things R5RS doesn't specify.
> I'd like to see a bunch of people implement it, much as a bunch
> of people have implemented R5RS.

This makes much more sense to me than "standardizing on one Scheme".

Of course, the first thing to be standardized has to be a better acronym
than SIFSAD!!

Anton

Scott G. Miller

Oct 27, 2003, 2:19:04 PM

I've been speaking deliberately abstractly, but many of these topics
were covered at Matthias Radestock's ILC presentation, and will likely
come up again in some detail around the Scheme Workshop and LL3. See
you there!

Scott

Bruce Stephens

Oct 27, 2003, 2:56:23 PM
"Anton van Straaten" <an...@appsolutions.com> writes:

> Ray Dillinger wrote:
>> "Scott G. Miller" wrote:
>> > The fallacy is believing this is only possible if we standardize
>> > on one Scheme.
>>
>> I think maybe there needs to be a 'SISFAD' standard, above and
>> beyond R5RS, that specifies a lot of things R5RS doesn't specify.
>> I'd like to see a bunch of people implement it, much as a bunch
>> of people have implemented R5RS.
>
> This makes much more sense to me than "standardizing on one Scheme".

As far as I understand it, that's what SRFIs are about. The existing
ones don't seem to me to go nearly far enough, though.

Part of what makes (for example) Perl good is CPAN, and all the
conventions (and the resulting community) that make CPAN possible. So
I can download a tarball, unpack it, run Makefile.PL using my chosen
Perl interpreter, and then "make; make test; make install" will work
(with high probability).

That's all made easier because Perl has a single implementation (give
or take), of course. Even so, if there were a common FFI (even a
restricted one), and a few extra things (a common module and/or
package system, perhaps a common object system) something similar
could be built for Scheme.

I'm guessing it won't happen, though. I'm not sure quite what it is,
but something seems to prevent such cooperation.

And that seems to mean that there isn't a scheme community in the same
way that there's a Perl community---so I can be confident of getting
Perl's LDAP package and being able to use it, but Bigloo's equivalent
<http://sourceforge.net/projects/bigloo-lib/> doesn't even build with
the current bigloo, presumably because bigloo's community is simply
too small. (I found much the same with some RScheme libraries, and
doubtless the same is true of most scheme implementations.)

Taylor Campbell

Oct 27, 2003, 3:22:46 PM
Would you like a pony, too?

felix

Oct 27, 2003, 5:42:24 PM
On Mon, 27 Oct 2003 13:28:37 GMT, R Racine <r...@adelphia.net> wrote:

> In my previous post I mentioned a threefold path to Nirvana. Determine a
> starting point, define an endpoint, get the mechete ready. To properly
> select an implementation to evolve into Sifsad, it only makes sense to
> select an implementation that is best to build of off. There is a
> distinct chance that the "best" implementation to move forward with is not
> even one of the top 2 or 3 implementations used today.

Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible
and practical, which I seriously doubt...

> For example,
> The end goal is defined.

> - Speed of application. Very important.
> - Efficient use of large amounts of memory. Very important.

No disagreement here.

> -Full debugging. Continuation restarts ???

But you want speed too, right? Ok, so have several optimization settings.

> -Core fullblown MOP. Highly optimized dispatch.

Oh, how about speed? I assume a simple procedure call is more efficient
(whatever tricks your dynamic dispatch plays, it will not beat the
direct procedure call, naturally). Here you have your first tradeoff.
Why do you want OO baggage in the core, when you want speed at the same
time?

> - Modules, standalone compilation, interfaces/signatures (also parametric
> interfaces/signatures) and runtime determinable implementations. [Imagine
> the SRFI-44 debate on the definition of a collections library in the light
> of a SIG/UNITs or Scheme48/Chez interfaces or SML sigs....

What kind of modules? How easy to use should they be? Should they
allow interactive use? Man, do you realize how much work has gone into
Scheme module systems, yet none really satisfies everybody!

> The point is, define the goal and tradeoffs become a debate in the context
> of what is necessary to achieve the goal.

Yes, this is not new. People on c.l.s (and elsewhere) have debated these
things for decades now. Have they reached even the slightest bit of
consensus? No, they haven't. Why, I ask you?

>
> Another point, Larceny [as you correctly pointed out is not just a
> Scheme->C system, later tonight I intend to post on why proposing Larceny
> makes sense] has 5 - 6 different GC systems. The Larceny core is very
> well designed and supports plugable GC systems. What is the penalty for
> this flexibility? I doubt the efficiency of the Twobit compiled code is
> impacted. PLT also has 2 GC/VM systems. Such things can be abstracted in
> the code base to support multiple solutions and pluggability with minimal
> impact.

Absolutely. Yet, there are implementation strategies that are very tightly
coupled with their collectors. One example is Cheney-on-the-MTA; another is
"traditional" direct-style compilers that target C, which mostly use
conservative GC.

>
> Bottom line, I believe it IS possible to allow for flexible pluggable
> strategies to many of the issues you raised such as various VM
> strategies.

Possible, yes. But not always adequate. I claim that the ideal Scheme
implementation you have in mind will be completely unusable for others.

>
> Couldn't you, being well versed on the Cheney-on-the-MTA approach either
> show that this approach is decidedly superior then the MzScheme approach
> or is a must have option in Sifsad and then assist in adding it to
> MzScheme? (Assuming MzScheme makes sense as the base system.)

It doesn't (if I may say so). I wouldn't touch the MzScheme sources
unless physically forced to do so. That Cheney-on-the-MTA is superior
(to direct style, like Bigloo) is something that I'm firmly convinced
of. And? That doesn't matter to someone who isn't interested in anything
but raw speed of straight-line code. Tradeoffs, again.

>
> Must two or more! Scheme distributions exist, complete with different
> runtimes and libraries, predicated on the single point of bifurcation as
> to how continuation capture is occuring??!!

If you look carefully, you'll find many more differences than only
continuation capture. And capture is only *one* issue with continuations.
How about safe-for-space complexity? Reification? Storage consumption?

>
> In the context of doing comparative analysis via two small experimental
> systems yes. In the world of the application developer where the method
> of continuation capture is invisible, it is decidedly not justification
> for forking two blown Scheme systems. Just capture the damn things, make
> it stable, make it fast and MAKE IT ONE Scheme System. Thank you very
> much.
>

Many people have tried to do so. Yet, the ideal Scheme system hasn't been
done yet.
If the unification of all Scheme implementation efforts is the really
important issue for you, then you effectively strive for mediocrity,
unless you happen to be a Scheme implementation wizard, vastly ahead
of all the others. Mind you, that would be nice!


cheers,
felix

Bruce Stephens

Oct 27, 2003, 6:23:55 PM
felix <fe...@call-with-current-continuation.org> writes:

[...]

> Many people have tried to do so. Yet, the ideal Scheme system hasn't been
> done yet.
> If the unification of all Scheme implementation efforts is the really
> important issue for you, then you effectively strive for mediocrity,
> unless you happen to be a Scheme implementation wizard, vastly ahead
> of all the others. Mind you, that would be nice!

Probably true. In that sense, Perl, Python, etc., are mediocre---some
reasonable uses of the languages are inefficient.

On the other hand, if you've got a one-day sort of problem to solve
that requires access to LDAP, SSL, PostgreSQL, and gtk, then the
mediocre solutions win.

Heck, people have been writing reasonable size applications in Tcl for
years, largely because it had a very convenient binding to Tk. tkman
(a *really* nice manpage reader) was first written (about 10 years
ago, apparently) when Tcl was a strongly string-based interpreter; the
author even wrote a paper about the various hackery he used to make it
fast enough (the files had non-essential spaces removed and ghastly
things like that).

Even then, there were presumably choices that ought to have been
better (Tcl's far from a perfect language, and it was much worse in
1993); but Tcl had a convenient binding to Tk and an easy to use FFI,
and that was enough for it to be more usable for a large class of
applications.

For a big application, the work necessary to bind a few libraries is
dwarfed by the work necessary to attack the real problem. However,
that leaves lots of little applications where you're naturally going
to choose a language which has lots of convenient packages. Perhaps
more importantly, I suspect big applications often start off as small
ones---something like Perl makes it easier to start work on a problem.

Bradd W. Szonye

Oct 27, 2003, 7:06:08 PM
Bruce Stephens <bruce+...@cenderis.demon.co.uk> wrote:
> For a big application, the work necessary to bind a few libraries is
> dwarfed by the work necessary to attack the real problem. However,
> that leaves lots of little applications where you're naturally going
> to choose a language which has lots of convenient packages. Perhaps
> more importantly, I suspect big applications often start off as small
> ones---something like Perl makes it easier to start work on a problem.

Heck yeah. More than a few times, I've started a big project by writing
a prototype in Perl. More precisely, I try to hack it up in Perl, and if
that doesn't work, I do a better implementation in a more appropriate
language. As a bonus, the initial hack-job implementation gives me
enough experience with the problem domain that I can do a better design
for the "real" version.
--
Bradd W. Szonye
http://www.szonye.com/bradd
My Usenet e-mail address is temporarily disabled.
Please visit my website to obtain an alternate address.

R Racine

Oct 27, 2003, 7:12:31 PM
On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:

> I hope I'm wrong, but it seems you have a simplified view of Scheme
> architectures.

I do. I represent the pitchfork-wielding, torch-waving, unwashed masses
of frustrated Scheme application developers. And yes, maybe I am a mass of
one. (shades of a "silent" majority here)

I am not saying that Sifsad will have some trivial property flag and will
then suddenly manifest 3 modes of continuation capture.

I'm just saying that after a decade or two, is it unreasonable to suggest
that there have been enough experimental versions and multiple approaches
to reach a "reasonable" conclusion (not a perfect conclusion) with
regard to implementing continuation capture, if one were to design Sifsad?

As one of the unwashed, I don't care how it's done; I am sure I wouldn't
understand the internals if I tried. I can't slam dunk a basketball
either. So be it.

SML/NJ is fast (not the fastest, but commercially fast) and supports
continuations. And no, I am not saying, do it just like SML/NJ.

Ray

Jens Axel Søgaard

Oct 27, 2003, 7:27:42 PM
R Racine wrote:
> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:

>>I hope I'm wrong, but it seems you have a simplified view of Scheme
>>architectures.

> I do. I represent the pitch fork wielding, torch waving, unwashed masses
> of fustrated Scheme application developers. And yes, maybe I am a mass of
> one. (shades of a "silent" majority here)

What is missing in DrScheme?

> I am not saying that Sifsad will have some trival property flag and will
> then suddenly manifest 3 modes of continuation capture.

> I'm just saying that after a decade or two, is it unreasonable to suggest
> that there has been enough experimental versions, and multiple approaches
> to the reach a "reasonable" conclusion (not a perfect conclusion) with
> regard to implementing continuation capture if one were to design Sifsad.

The Grand Unified Scheme is nothing but a dream. You will always need
to make compromises in implementations. That's why you ought to be
thrilled about the wide range of Scheme implementations in existence.
In other languages (e.g. Python/Perl) you are pretty much stuck with
one implementation.

> As one of the unwashed, I don't care how its done, ...

That's a bold statement in these parts of the wood.

See the last discussion on the Grand Unified Scheme:

<http://groups.google.com/groups?hl=da&lr=&ie=UTF-8&th=5f1ec978a3e333dc&rnum=2>


Perhaps a better idea would be to begin making an FFI-SRFI?


--
Jens Axel Søgaard

Bruce Stephens

Oct 27, 2003, 7:42:23 PM
"R Racine" <r...@adelphia.net> writes:

> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
>> I hope I'm wrong, but it seems you have a simplified view of Scheme
>> architectures.
>
> I do. I represent the pitch fork wielding, torch waving, unwashed
> masses of fustrated Scheme application developers. And yes, maybe I
> am a mass of one. (shades of a "silent" majority here)

I'm sure you're not alone.

That's part of the problem: gathering a community of users seems much
easier when there's only one implementation.

But scheme (even if you add in slib and a selection of SRFIs) is small
enough that it's reasonably straightforward to produce an
implementation. Certainly not *that* easy, but easy enough that there
seem to be about half a dozen implementations that aren't quite dead
yet.

[...]

> I'm just saying that after a decade or two, is it unreasonable to
> suggest that there has been enough experimental versions, and
> multiple approaches to the reach a "reasonable" conclusion (not a
> perfect conclusion) with regard to implementing continuation capture
> if one were to design Sifsad.

I'd say that STklos and guile are probably acceptable interpreters
(STklos is a byte-coding interpreter; I forget the details of guile),
and that bigloo and rscheme are probably pretty good compilers. (I'm
judging implementations in terms of speed, popularity, whether I've
heard of them, etc.)

So it seems to me that not only do we have reasonable conclusions about
acceptable solutions, we have several. Tom Lord's working on another,
and presumably there are other new ones being worked on, too. (And
there are the other interpreters, native code/C compilers, and JVM and
.Net implementations, too.)

We don't lack choice.

> As one of the unwashed, I don't care how its done, I am sure I
> wouldn't understand the internals if I tried. I can't slam dunk a
> basketball either. So be it.
>
> SML/NJ is fast (not the fastest, but commercially fast) and supports
> continuations. And no, I am not saying, do it just like SML/NJ.

Perhaps the best is to accept what's there, and to build prototypes
and so on with Perl (or Python, Ruby, etc.) and then (once you know
what you're trying to do) build it in your preferred scheme (or lisp).

That feels wrong, though. I'd welcome a single (even if mediocre)
implementation of scheme that was generally regarded as the one to use
rather than Tcl, Perl, or Python. (I guess guile is it, or perhaps
Scheme48, with the nice scsh, but I'd really like it to be a compiler;
I think the GNU project messed up there---I think they ought to have
chosen RScheme, or at least cooperated sufficiently that RScheme could
have been substituted later, but perhaps it wouldn't have made a
difference.)

R Racine

Oct 27, 2003, 7:50:57 PM
On Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
> Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible and
> practical, which I seriously doubt...

The Sifsad name was chosen with the intent of it never seeing the light of
day in a real implementation. But you have to admit googling Sifsad would
minimize the irrelevant.



> If the unification of all Scheme implementation efforts is the really
> important issue for you, then you effectively strive for mediocrity,
> unless you happen to be a Scheme implementation wizard, vastly ahead of
> all the others. Mind you, that would be nice!

The sad fact of Scheme life is that if I were a Scheme implementation
wizard, and we all know very well I am not, I would have already announced
Yet Another Scheme Implementation. Math profs are anointed to generate new
Math profs. Scheme implementation wizards seem destined to create
endless streams of Scheme implementations. They are the
Sisyphuses of language implementors, doomed by the gods to endlessly
create half-finished implementations in isolation from one another.

I am not proposing a GUS (Grand Unified Scheme).

Just a useful one.

Bruce Stephens

Oct 27, 2003, 7:57:28 PM
Jens Axel Søgaard <use...@jasoegaard.dk> writes:

> R Racine wrote:
>> On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
>>>I hope I'm wrong, but it seems you have a simplified view of Scheme
>>>architectures.
>
>> I do. I represent the pitch fork wielding, torch waving, unwashed masses
>> of fustrated Scheme application developers. And yes, maybe I am a mass of
>> one. (shades of a "silent" majority here)
>
> What is missing in DrScheme?

Bindings to Gtk/GNOME and other random useful libraries? Speed?

Perhaps there are such bindings, and I just don't know where to look
for them. It's true that speed isn't the main priority for the
DrScheme family, though, isn't it?

[...]

> The Grand Unified Scheme is nothing but a dream. You will always
> need to make compromises in implementations. That's why you ought to
> be thrilled about the wide range of Scheme implementations in
> existence.

Except that some implementations are virtually dead, and none have
quite the extensions that I want for this particular application...

> In other languages (e.g. Python/Perl) you are pretty much stuck with
> one implementation.

But that's OK, because although it is a compromise, it's a reasonable
one, and because there's only the one, there's an enormous library of
extensions and code that I can use. There's lots of scheme code, too,
but each blob of code that I find will take a few hours of work to
massage to work with the implementation that I've chosen to use (with
its particular combination of module system and so on).

[...]

> Perhaps a better idea was to begin making an FFI-SRFI?

Probably. On the other hand, if it were that easy, someone would
already have done it.

Anton van Straaten

Oct 27, 2003, 8:02:18 PM
Jens Axel Søgaard writes:

> R Racine wrote:
> > I'm just saying that after a decade or two, is it unreasonable to suggest
> > that there has been enough experimental versions, and multiple approaches
> > to the reach a "reasonable" conclusion (not a perfect conclusion) with
> > regard to implementing continuation capture if one were to design Sifsad.
>
> The Grand Unified Scheme is nothing but a dream. You will always need
> to make compromises in implementations. That's why you ought to be
> thrilled about the wide range of Scheme implementations in existence.
> In other languages (e.g. Python/Perl) you are pretty much stuck with
> one implementation.

I think it's interesting & relevant to look at the ways in which this is
*not* true. First, there's Jython, which is a well-established
implementation of Python on the Java platform. There's also the Psyco
compiler for Python, which is a kind of JIT compiler. Then there are
implementations of both Python and Perl under way for .NET.

So I think it's possible that the much-vaunted single implementations of
some languages are merely an artifact of their youth. Implementations will
multiply over time, because of the need to support significantly different
platforms, if nothing else. The fact that Scheme has an amazing family of
implementations is an asset - but it also needs to do better at supporting
*reasonable* portability between at least some of those implementations.

Anton

felix

Oct 27, 2003, 8:19:02 PM
On Tue, 28 Oct 2003 00:50:57 GMT, R Racine <r...@adelphia.net> wrote:

> On Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>> Possible, *if* a Sifsad (geez, what an awful name! ;-) is possible and
>> practical, which I seriously doubt...
>
> The Sifsad name was chosen with the intent of it never seeing the light of
> day in a real implementation. But you have to admit googling Sifsad would
> minimize the irrelevant.

Absolutely.

>
>> If the unification of all Scheme implementation efforts is the really
>> important issue for you, then you effectively strive for mediocrity,
>> unless you happen to be a Scheme implementation wizard, vastly ahead of
>> all the others. Mind you, that would be nice!
>
> The sad fact of Scheme life is that if I were a Scheme implementation
> wizard, and we all know very well I am not, I would have already announced
> Yet Another Scheme Implementation. Math profs are annointed to generate new
> Math profs. Scheme implementation wizards seemed destined to create
> endless streams of Scheme implementation. They are the
> Sysiphus' of language implementors. Doomed by the gods to endlessly
> create half finished implementations in isolation from one another.

I wouldn't consider PLT (for example) half-finished.

>
> I am not proposing a GUS (Grand Unified Scheme).
>
> Just a useful one.
>

I can name several useful Scheme implementations. Just ask.
Many of those are used commercially and provide splendid FFIs and/or
extension libraries.
If Scheme implementations are insufficient for you, do it yourself.
But I don't think you will do any better than what is currently available,
since the major implementations take most known implementation strategies
pretty far.

Here's an idea: pick an implementation (unimportant which one), sit down
and start writing libraries for it (doesn't matter for what).
Then (and only then) will you really help make Scheme more usable for
real-world development.

cheers,
felix

R Racine

Oct 27, 2003, 9:05:01 PM
On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:


> What is missing in DrScheme?
>
>

Not too much AFAIAC. On a personal level if I list the top 3 things that
have blown me away in the Scheme impl world:

MIT Scheme: The groundbreaking work done here. You see MITScheme code,
concepts and ideas in many of the current Scheme implementations. It
is/was the fountainhead.

PLT Scheme: An almost endless stream of what Scheme is capable of.
Unit/Sigs, Languages, inheritable Structures, Contracts, the Syntax
concept, opaque types, the module system... You can just randomly click
about the help system and almost stumble into whole new concepts.

Another example, from MzScheme and Eli's Swindle: I saw that Swindle
had somehow added support for self-evaluating symbols which start with a
colon. When I installed Swindle, I didn't recall any patching or
recompiling. So hey, how'd he do that? So I looked.

(module base mzscheme

  (provide (all-from-except mzscheme
             #%module-begin #%top #%app define let let* letrec lambda))

  .... stuff ....

  ;;>> (#%top . id)
  ;;>   This special syntax is redefined to make keywords (symbols whose names
  ;;>   begin with a ":") evaluate to themselves.  Note that this does not
  ;;>   interfere with using such symbols for local bindings.
  (provide (rename top~ #%top))
  (define-syntax (top~ stx)
    (syntax-case stx ()
      ((_ . x)
       (let ((x (syntax-object->datum #'x)))
         (and (symbol? x) (not (eq? x '||))
              (eq? #\: (string-ref (symbol->string x) 0))))
       (syntax/loc stx (#%datum . x)))
      ((_ . x) (syntax/loc stx (#%top . x)))))

  ... stuff ...)

That was it! No special compiler hacking, reader hacking, any hacking at
all. Just suck in the MzScheme language, extend what it means to be a
datum or a top-level symbol with a 7-line macro, and export a new
"extended" Scheme language with self-evaluating colon-prefixed symbols.
Not only that, I could use this extended Scheme, regular MzScheme or yet
another variant on a controlled, module-by-module basis. WOW
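
For illustration only, a hypothetical session in that extended language
might look roughly like this (my sketch; exact printed output may differ):

  > :foo
  :foo
  > (list :name "Ray")
  (:name "Ray")
  > (let ((:x 1)) :x)   ; local bindings still win, per the doc comment above
  1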

The Larceny Twobit Compiler: IMHO the finest bits of Scheme code I have
ever beheld. I have seen Scheme code which tackles far less lofty
targets than a highly optimizing, pluggable-emitter, native compiler and
is not a tenth as readable and elegant. [BTW Sifsad should be based on
the Twobit compiler :)]

I digress. What is missing in DrScheme? Overall I love it. Mainly a
Sifsad focus. The system, DrScheme, has an intentional pedagogical focus.
My concerns (efficient memory usage, an optimized VM, speed, debugging) are
not their focus. The mzc compiler is not on par with some of the other
Scheme->C systems out there. Is there an inherent architectural tradeoff
which prevents mzc from approaching Chicken or Bigloo in speed? I don't
know. If two or three Scheme wizards announced this very night that they
were going to join the PLT team with a Sifsad-prioritized feature list, I
would do a handspring and take up organized religion.

What I find more troubling is some of the other Scheme wizards' disdain for
MzScheme from the aspect of a production-quality Scheme. What is it that
THEY find missing in PLT? Do they know something that we simple Joes do
not regarding the inner workings of MzScheme?

What is it that they see that prevents two major groups from focusing on the
PLT code base and providing two releases/versions of PLT: DrScheme and Sifsad?


Ray

Alex Shinn

Oct 27, 2003, 9:29:28 PM
At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>
> What kind of modules? How easy to use should they be? Should they
> allow interactive use? Man, do you realize how much work has gone into
> Scheme module systems, yet none really satisfies everybody!

Would it be too much to ask for a standard *syntax* to the module
system, without specifying the semantics? No matter how many SRFIs or
libraries we write, if we can consistently load them into a program then
the same program can never run unmodified on two different Schemes.

Suppose we use a syntax encompassing all of the module-system concepts
in use now. Something like

(define-module <module-A>
  (use-module <module-B> [<procedure> ...])
  (use-syntax <module-C> [<syntax> ...])
  (autoload <module-D> [<procedure> ...])
  (export <procedure> ...)
  [(export-all)])

... module code ...

as a preamble in a module file. <procedure> may either be a symbol name
or a list of a symbol followed by optional type declarations, which a
Scheme that doesn't use type declarations can ignore. If your Scheme
doesn't differentiate between importing syntax and importing procedures
then the use-module and use-syntax forms are the same. Likewise if your
Scheme doesn't support autoloading then that too is equivalent to
use-module. export-all means export all top-level definitions in the
module, and this could probably be optional (since it's handy for
prototyping but when your module is "finished" and ready for use it's
better style to explicitly declare your exports).
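
Just to show how cheap supporting this could be, here is one purely
hypothetical sketch for a Scheme with a single flat top-level namespace
and no native module system: imports become loads, exports are ignored.
The module->file mapping and everything else below is an assumption of
mine, only meant to show the syntax costs such a system almost nothing.

(define (module->file name)
  ;; assumption: one module per file, named after the module
  (string-append (symbol->string name) ".scm"))

(define-syntax define-module
  (syntax-rules (use-module use-syntax autoload export export-all)
    ((_ name) (begin #t))
    ((_ name (use-module m . ids) clause ...)
     (begin (load (module->file 'm)) (define-module name clause ...)))
    ((_ name (use-syntax m . ids) clause ...)
     (begin (load (module->file 'm)) (define-module name clause ...)))
    ((_ name (autoload m . ids) clause ...)
     (begin (load (module->file 'm)) (define-module name clause ...)))
    ((_ name (export . ids) clause ...)
     (define-module name clause ...))
    ((_ name (export-all) clause ...)
     (define-module name clause ...))))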

There are issues to be resolved but I don't believe it's impossible to
at least make the syntax work for all the major module systems out
there. The question is, if a SRFI were to be created that specified a
syntax like the above, would Scheme implementations support it?

--
Alex

Shriram Krishnamurthi

Oct 27, 2003, 10:24:26 PM
"Anton van Straaten" <an...@appsolutions.com> writes:

> I think it's interesting & relevant to look at the ways in which this is
> *not* true. First, there's Jython, which is a well-established
> implementation of Python on the Java platform. There's also the Psyco
> compiler for Python, which is a kind of JIT compiler. Then there are
> implementations of both Python and Perl under way for .NET.
>
> So I think it's possible that the much-vaunted single implementations of
> some languages are merely an artifact of their youth.

Indeed, isn't that what happened with Stackless Python? My
understanding is that for a while, Stackless created an Avignon vs
Rome situation in the Python community. The noise over Stackless
seems to have subsided, but it seems likely Parrot will have
continuations, which means the debate will have to reopen. And I
believe Tismer and others are now working on something called PyPy,
which means yet another implementation...

Shriram

R Racine

Oct 27, 2003, 10:37:23 PM
On Tue, 28 Oct 2003 02:19:02 +0100, felix wrote:


> I can name several useful Scheme implementations. Just ask.

Few. Very few have had success writing substantive applications in
Scheme. Of those few, the majority have been or still are on some endless
merry-go-round of trying it on this impl and then that. I expect most
give up and use C#, Java, SML, CL or Haskell.

To not recognize that there is an implementation "issue" with Scheme that
is impacting its adoption in the real world and its retention of the few
application-level coders it has, and that is constraining a substantial and
broad library code base from forming, is ... I don't know. A shame.

> Many of those are used commercially and provide splendid FFIs and/or
> extension libraries.
> If Scheme implementations are insufficient for you, do it yourself. But
> I don't think you will do any better than what is currently available,
> since the major implementations take most known implementation
> strategies pretty far.
>
> Here's an idea: pick an implementation (unimportant which one), sit down

Therein lies the crux. I have been claiming that it does matter. Ever try
using the Bigloo-libs GTK bindings in XYZ impl? Or grabbing Schematics'
SchemeUnit for use in ABC impl? Non-starters. Sure, you can spend a couple
of days porting it to whatever your current impl of choice is. Then you get
to do it again when a new version of the library code is released.

> and start
> writing libraries for it (doesn't matter for what).

My efforts are diluted. Because 49 other library writers are writing
libraries for some other impl.

> Then (and only then) you really will help making Scheme more usable for
> real-world development.

<sigh>Knew this one was coming eventually. No comment.</sigh>


Ray

Shriram Krishnamurthi

Oct 27, 2003, 10:31:08 PM
Alex Shinn <fo...@synthcode.com> writes:

> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics?

This is a troll, right? I'd expect more from a regular like Alex...

> No matter how many SRFIs or
> libraries we write, if we can consistently load them into a program then
> the same program can never run unmodified on two different Schemes.

I think you mean "...if we cannot consistently...". What does it mean
to load consistently in the absence of a semantics?

> Suppose we use a syntax encompassing all of the module-system concepts

> in use now. [...]

Doesn't encompass units.

Shriram

Anton van Straaten

Oct 27, 2003, 11:12:00 PM
Shriram Krishnamurthi wrote:
> Alex Shinn <fo...@synthcode.com> writes:
>
> > Would it be too much to ask for a standard *syntax* to the module
> > system, without specifying the semantics?
>
> This is a troll, right? I'd expect more from a regular like Alex...

Maybe Alex means something like a standard module declaration syntax which
maps to a minimal set of sufficiently similar semantics on different
Schemes. Which seems like it could be a workable idea, to me.

> > No matter how many SRFIs or
> > libraries we write, if we can consistently load them into a program then
> > the same program can never run unmodified on two different Schemes.
>
> I think you mean "...if we cannot consistently...". What does it mean
> to load consistently in the absence of a semantics?

I dunno, Perl seems to manage! ;)

> > Suppose we use a syntax encompassing all of the module-system concepts
> > in use now. [...]
>
> Doesn't encompass units.

Standardizing something on the level of units isn't going to happen, I'm
sure. But I think a lowest-common denominator module system, which would
support writing portable modular code and publishing portable libraries,
would be helpful.

Sure, that won't allow taking an arbitrary whiz-bang library from
implementation A and plugging it in to implementation B, but that's not the
point. The point, I think, would be to build up the base a bit further, in
a direction that supports some of these pragmatic issues that we're all
aware of - so that there's a plausible portable base for application and
library developers to develop to, if they choose.

Anton

Bradd W. Szonye

Oct 27, 2003, 11:40:43 PM
> Jens Axel Søgaard <use...@jasoegaard.dk> writes:
>> What is missing in DrScheme?

Bruce Stephens <bruce+...@cenderis.demon.co.uk> wrote:
> Bindings to Gtk/GNOME and other random useful libraries? Speed?

It used to have a Gtk binding, and supposedly there's a new one in the
works. I'm not too worried about that, though; the wxWindows binding is
pretty good and probably more portable. A GNOME binding would be a dead
end, portability-wise. The ability to write GUI apps for Windows and X
(without paying a ton of money or relying on Cygnus) was actually *the*
major selling point for PLT, for me.

> Perhaps there are such bindings, and I just don't know where to look
> for them. It's true that speed isn't the main priority for the
> DrScheme family, though, isn't it?

Apparently not, but that's not necessarily a bad thing. Portability,
robustness, ease of use, and a killer development environment seem to be
the main goals, and those things sell. And it's not like PLT is *slow*
-- it just isn't C, that's all. It compares favorably with other
interpreted languages.

BTW, the development environment was actually a drawback for me -- I'm a
hardcore vim & Makefiles kinda guy. (In fact, I wrote comprehensive vim
syntax-highlighting rules for PLT Scheme. I was originally supposed to
take over maintenance/development from the original author, but I never
got around to finishing and publishing my rules, because there were some
performance issues that I never quite worked out.)

Bradd W. Szonye

Oct 28, 2003, 12:43:26 AM
Anton van Straaten <an...@appsolutions.com> wrote:
> Maybe Alex means something like a standard module declaration syntax
> which maps to a minimal set of sufficiently similar semantics on
> different Schemes. Which seems like it could be a workable idea, to
> me.

Agreed. Some folks might rankle at some of the necessary restrictions,
though. For example, you couldn't count on shadowing/redefining imported
identifiers like you can at the top level; some Schemes (like Scheme-48)
support that, but others (like PLT) don't, and for good reasons.

I was actually toying with the idea of implementing modules as FEATURE,
based on the requirements syntax of SRFI-7. However, I decided that
wasn't quite the right way to do it. More on this later if I actually
find time to implement something useful.

Anton van Straaten

Oct 28, 2003, 1:48:52 AM
Bradd W. Szonye wrote:
> Anton van Straaten <an...@appsolutions.com> wrote:
> > Maybe Alex means something like a standard module declaration syntax
> > which maps to a minimal set of sufficiently similar semantics on
> > different Schemes. Which seems like it could be a workable idea, to
> > me.
>
> Agreed. Some folks might rankle at some of the necessary restrictions,
> though. For example, you couldn't count on shadowing/redefining imported
> identifiers like you can at the top level; some Schemes (like Scheme-48)
> support that, but others (like PLT) don't, and for good reasons.

It would still be better than the restrictions imposed by coding to R5RS, or
some mixture of R5RS+SRFIs+SLIB. Sure, you can use SLIB's modules, or
Taylor Campbell's lexmod, or roll your own modules, but all of these have
disadvantages which could (I believe) be addressed by some relatively
minimal implementation support for a standard "simple" module system.

Anton

Ray Dillinger

Oct 28, 2003, 1:49:52 AM
"Bradd W. Szonye" wrote:
>
> Anton van Straaten <an...@appsolutions.com> wrote:
> > Maybe Alex means something like a standard module declaration syntax
> > which maps to a minimal set of sufficiently similar semantics on
> > different Schemes. Which seems like it could be a workable idea, to
> > me.
>
> Agreed. Some folks might rankle at some of the necessary restrictions,
> though. For example, you couldn't count on shadowing/redefining imported
> identifiers like you can at the top level; some Schemes (like Scheme-48)
> support that, but others (like PLT) don't, and for good reasons.
>
> I was actually toying with the idea of implementing modules as FEATURE,
> based on the requirements syntax of SRFI-7. However, I decided that
> wasn't quite the right way to do it. More on this later if I actually
> find time to implement something useful.

I've been thinking about writing a portable "module mangler."

It would read from disk a bunch of scheme files with some kind
of standard module syntax, and output a single honkin-large
scheme file (maybe in a temporary directory) that puts them
all together with separate namespaces kept separate, and
strictly-controlled scope for macros, and so on.

So you could do development in a bunch of different files and
be confident of putting them all together in one program with
a well-defined semantics, regardless of implementation.

It would answer namespace and macrology-scope issues, but it
would never answer the separate-compilation issue. Even so,
it might attract enough of a following to standardize a
module syntax, especially if distributed with a bunch of
good libraries.
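
As a very rough sketch of what I mean (the (define-module ...) form, the
prefix-renaming and everything else here are hypothetical, and it ignores
macros, imports, and quoted data entirely):

;; Each input file holds (define-module <name> (export <id> ...) <defn> ...).
;; Every name defined inside a module is rewritten to <name>:<id>, so the
;; single output file keeps the namespaces separate.

(define (module-name form) (cadr form))
(define (module-body form) (cdddr form))   ; skip the name and export clause

(define (defined-names body)
  ;; names bound by (define x ...) or (define (f . args) ...)
  (map (lambda (defn)
         (let ((target (cadr defn)))
           (if (pair? target) (car target) target)))
       body))

(define (qualify sym mod)
  (string->symbol
   (string-append (symbol->string mod) ":" (symbol->string sym))))

(define (rename expr names mod)
  ;; naive: rewrites every occurrence, including inside quoted data
  (cond ((and (symbol? expr) (memq expr names)) (qualify expr mod))
        ((pair? expr) (cons (rename (car expr) names mod)
                            (rename (cdr expr) names mod)))
        (else expr)))

(define (mangle-module form out)
  (let* ((mod   (module-name form))
         (body  (module-body form))
         (names (defined-names body)))
    (for-each (lambda (defn)
                (write (rename defn names mod) out)
                (newline out))
              body)))

(define (mangle-files filenames out)
  (for-each (lambda (file)
              (call-with-input-file file
                (lambda (in) (mangle-module (read in) out))))
            filenames))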

What do people think of the idea?

Bear

Alex Shinn

Oct 28, 2003, 1:48:19 AM
At 27 Oct 2003 22:31:08 -0500, Shriram Krishnamurthi wrote:
>
> Alex Shinn <fo...@synthcode.com> writes:
>
> > Would it be too much to ask for a standard *syntax* to the module
> > system, without specifying the semantics?
>
> This is a troll, right?

It's not a troll, though perhaps it's not expressed clearly and
certainly isn't completely thought out.

> > No matter how many SRFIs or
> > libraries we write, if we can consistently load them into a program then
> > the same program can never run unmodified on two different Schemes.
>
> I think you mean "...if we cannot consistently...".

Yes, sorry.

> What does it mean to load consistently in the absence of a semantics?

Not complete absence but a sort of minimal assumption. Consider every
SRFI that has a reference implementation, every module I see browsing
/usr/lib/plt/collects/mzlib/, the C-parser just posted to c.l.s., and
countless utility modules from all the Scheme implementations. Many of
them are written in highly portable Scheme, which can be made more
portable with further SRFIs and standardization. However, at the
beginning of every one is a little incantation that says "this is a
module" with some extra information about what modules it uses and what
procedures it provides. If we just standardize on the syntax of that
incantation then there is suddenly a chance that a module written
in one Scheme would work out-of-the-box on another Scheme. More
complicated semantics, module introspection, etc. would still not be
portable.
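
For concreteness, a minimal sketch of what such an incantation might
look like (the form and clause names here are invented for
illustration, not taken from any existing system):

    ;; Hypothetical portable module header -- only the *syntax* is
    ;; assumed shared; each implementation maps it onto its own
    ;; module machinery.
    (module-declare (string-extras)
      (uses (srfi 13))            ; modules this one depends on
      (provides string-blank?))   ; names it exports

    ;; Ordinary R5RS code follows; anything not listed under
    ;; `provides' stays private to the module.
    (define (string-blank? s)
      (= 0 (string-length (string-trim-both s))))

An implementation that already has a richer module system would simply
expand this header into its native form.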

> > Suppose we use a syntax encompassing all of the module-system concepts
> > in use now. [...]
>
> Doesn't encompass units.

From the MzScheme manual:

In some ways, a unit resembles a module (see Chapter 5 in PLT
MzScheme: Language Manual), but units and modules serve different
purposes overall.

I would only suggest this for modules, not units.

--
Alex

felix

unread,
Oct 28, 2003, 2:38:58 AM10/28/03
to
Alex Shinn <fo...@synthcode.com> wrote in message news:<87vfqa...@strelka.synthcode.com>...

> At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
> >
> > What kind of modules? How easy to use should they be? Should they
> > allow interactive use? Man, do you realize how much work has gone into
> > Scheme module systems, yet none really satisfies everybody!
>
> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics? No matter how many SRFIs or
> libraries we write, if we can consistently load them into a program then
> the same program can never run unmodified on two different Schemes.
>
>[...]

>
> There are issues to be resolved but I don't believe it's impossible to
> at least make the syntax work for all the major module systems out
> there. The question is, if a SRFI were to be created that specified a
> syntax like the above, would Scheme implementations support it?

It's easy: submit an SRFI, and you'll have a good chance of being
able to discuss the relevant questions with the relevant people
(or those who are interested in solving these issues).


cheers,
felix

Michele Simionato

unread,
Oct 28, 2003, 3:43:02 AM10/28/03
to
Shriram Krishnamurthi <s...@cs.brown.edu> wrote in message news:<w7dn0bm...@cs.brown.edu>...

I think you have got the wrong impression. The concept of "different
implementation" in the Python world is completely different from the
concept of "different implementation" in the Scheme world.

Somebody saying "Python has only one implementation" wouldn't be far from
the truth. There is only ONE implementation that matters, which is
CPython. All the other implementors strive to get as close as
possible to CPython. The minimal compatibility is 99%.
Different implementations provide something more and are intended to
be used in specific situations (you want to script Java, use Jython;
you want to skip the C-stack restriction, use Stackless) but they are
in no sense competitors of CPython. If the PyPy project succeeds
(and everybody in the community hopes so, including Guido van Rossum)
we will have a faster Python, but it will still be 99.99% compatible
with CPython. At least this is the ideal goal of the developers, as I
understand their claims (and I think I do understand them).

I do think Perl/Python/Ruby succeed because they are basically one-man
projects. Of course, there are hundreds of Python developers, but only one
has the last word when essential decisions for the language have to be
taken: Guido van Rossum.
It is also interesting to notice that Guido's ideas are *really* respected in
the community, more respected than you could imagine. Also, a lot of people in
the Python community are practical programmers and not language designers
or academics: this makes a big difference. Let me give a trivial
example: a large minority in the community regularly rants about the fact
that the list .sort() method returns None and not the sorted list. Now, nobody
will *ever* think about making a new implementation correcting this "wart"
(personally, I don't think it is a wart, by the way, at least in the context
of Python). It would be considered foolish to make an implementation which
does the same things Python already does in a different way. Implementations
are free to add, NOT to change. Essentially the idea is "okay, this is a
wart in my opinion, but I will live with it, because forking the community
would be much worse than correcting the wart".

My postings here made me realize that the Scheme community is very
different from the Perl/Python/Ruby communities: a Pythonista has
no difficulties in accepting a BDFL (Benevolent Dictator For Life),
no difficulties in trading performance for ease of use, no difficulties
in accepting a bondage & discipline syntax (actually a rather large
minority would appreciate even a stricter bondage & discipline syntax!).
I could give other examples, but you get the idea.

Notice that I am not saying that one approach is better than the other:
there are trade-offs. If you choose the one-implementation way you have
advantages (even big advantages), if you choose the way of freedom you
have other advantages (which may be considered even bigger by some).

I've got the impression that there is no way that the Perl/Python/Ruby
model will ever work in the Scheme community, for historical and
sociological reasons. This can be considered good (for some reasons) or
bad (for other reasons).

What I (as an outsider to the community) would appreciate is:

1. make a stricter R5RS (not very strict, but stricter than now);

2. make more srfi (much more);

3. make them available on every implementation.

These points are (maybe/maybe not) in the range of realizable things; I don't
think I will ever see a unique (unique in the Python sense) implementation of
Scheme; one could even argue that this is a good thing, BTW.

For the time being, you Schemers are stuck with Perl/Python/Ruby; if
it is of any consolation, think that it could have been worse (i.e.
Java/C++ ;)


Michele Simionato

felix

unread,
Oct 28, 2003, 4:01:13 AM10/28/03
to
"R Racine" <r...@adelphia.net> wrote in message news:<pan.2003.10.28....@adelphia.net>...

>
> Few. Very few, have had success writing substantive applications in
> Scheme. Of those few, the majority, have or still are on some endless
> merry-go-round of trying it on this impl and then that. I expect most
> give up and, use C#, Java, SML, CL or Haskell.

Any numbers? You seem to be quite convinced of that. Is Haskell
really used more heavily for substantive applications than Scheme?
Or are you just guessing, since the respective communities appear
more unified?

If C#, Java or CL give you what you want, go ahead, use it.
Personally C#, Java, SML or Haskell don't give me the stuff I need. Neither
does CL, actually.

>
> To not recognize that there is an implementation "issue" with Scheme that
> is impacting its adoption in the realworld, retention of the few
> application level coders it has and constraining a substantial and broad
> library code base from forming is ... I don't know. A shame.

Stop whining. You are trying to blame the wrong people. It's a shame
that you think you're entitled to make any demands. If Scheme (or better,
the available implementations) don't (doesn't) serve your needs, fine.
Fix it or try alternatives. Have you tried Common LISP? This might
be exactly what you need. I'm serious.

>
> > Many of those are used commercially and provide splendid FFIs and/or
> > extension libraries.
> > If Scheme implementations are insufficient for you, do it yourself. But
> > I don't think you will do any better than what is currently available,
> > since the major implementations take most known implementation
> > strategies pretty far.
> >
> > Here's an idea: pick an implementation (unimportant which one), sit down
>
> Therein lies the crux. I have been claiming it is. Ever try using
> Bigloo-libs GTK bindings in XYZ impl. Or grabbing Schematics SchemeUnit
> for use in ABC impl. Non starters. Sure you can spend a couple of days
> porting it to whatever your current impl of choice. Then you get to do it
> again when the library code has a new version released.
>
> > and start
> > writing libraries for it (doesn't matter for what).
>
> My efforts are diluted. Because 49 other library writers are writing
> libraries for some other impl.

There are not, you are wildly exaggerating. It *is* possible to write
cross-implementation libraries (see srfi.schemers.org for a couple
of examples), and it is even possible to write libraries for things
like GTK, with a little bit of pre-/post-processing, macros, careful use
of lexical scope and clean design.

(Now it's your turn to start whining why nobody did this for you already)

This discussion painfully reminds me of the ever-popular cl-is-great-but-
if-it-just-had-this-extension drivel that comes up regularly on
comp.lang.lisp. Yet, it hasn't changed anything.

But we probably won't come to any useful conclusion here.

I will now go to comp.lang.python and complain about the fact
that there is no extension that provides macros, precise space-and-time
efficient GC and tail-call-optimization, all requirements that I find
very important for serious application development.
I wonder what they will tell me...?


cheers,
felix

Grzegorz Chrupala

unread,
Oct 28, 2003, 4:39:30 AM10/28/03
to
Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<3f9db83e$0$70001$edfa...@dread12.news.tele.dk>...

> R Racine wrote:
> > On Mon, 27 Oct 2003 08:21:52 -0600, Scott G. Miller wrote:
>
> >>I hope I'm wrong, but it seems you have a simplified view of Scheme
> >>architectures.
>
> > I do. I represent the pitchfork-wielding, torch-waving, unwashed masses
> > of frustrated Scheme application developers. And yes, maybe I am a mass of
> > one. (shades of a "silent" majority here)
>
> What is missing in DrScheme?

For me, a major gap is Unicode and multibyte character support. This
is by now standard in implementations of most other widely used
programming languages but surprisingly few Schemes have it.

--
Grzegorz

Bruce Stephens

unread,
Oct 28, 2003, 5:21:31 AM10/28/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

[...]

> Apparently not, but that's not necessarily a bad thing. Portability,
> robustness, ease of use, and a killer development environment seem
> to be the main goals, and those things sell. And it's not like PLT
> is *slow* -- it just isn't C, that's all. It compares favorably with
> other interpreted languages.

Yes, I agree with all that. I'd just like some language which had
reasonable portability, ease of use, etc., and had the option of
blinding speed, at least on common platforms. And that doesn't seem
to me to be impossible---there are various very fast scheme
implementations around. It's just that the various scheme
implementations seem to stay just far enough apart in various respects
(FFI, mostly) that using more than one of them is inconvenient.

[...]

Scott G. Miller

unread,
Oct 28, 2003, 11:27:25 AM10/28/03
to

There is a reason for that. The R5RS character operators cannot be made
to work reliably with unicode characters. SISC for example supports
unicode characters and arbitrary character maps, but makes no effort to
contort the standard operators to behave properly. There was a usenet
discussion about this in the past which you could probably find by googling.

Scott

Bruce Stephens

unread,
Oct 28, 2003, 12:26:24 PM10/28/03
to
"Scott G. Miller" <scgm...@freenetproject.org> writes:

> Grzegorz Chrupala wrote:
>> Jens Axel Søgaard <use...@jasoegaard.dk> wrote in message news:<3f9db83e$0$70001$edfa...@dread12.news.tele.dk>...

[...]

>>>What is missing in DrScheme?
>> For me, a major gap is Unicode and multibyte character support. This
>> is by now standard in implementations of most other widely used
>> programming languages but surprisingly few Schemes have it.
>
> There is a reason for that. The R5RS character operators cannot be
> made to work reliably with unicode characters. SISC for example
> supports unicode characters and arbitrary character maps, but makes no
> effort to contort the standard operators to behave properly. There
> was a usenet discussion about this in the past which you could
> probably find by googling.

I couldn't find it. I did searches under comp.lang.scheme for
unicode, utf8, utf-8, and most of the threads seemed positive (giving
implementations that support unicode in some form). I didn't see any
threads showing fundamental problems.

Anton van Straaten

unread,
Oct 28, 2003, 12:55:05 PM10/28/03
to

Perhaps you didn't make the proper offerings to the Great God Google...

Dunno if it's what Scott was thinking of, but here's a post in which Bear
describes some issues with Unicode & R5RS:
http://groups.google.com/groups?selm=3D753365.6BE29F0E%40sonic.net
Some of the earlier and later posts in that thread are also relevant.

Anton

Scott G. Miller

unread,
Oct 28, 2003, 1:11:11 PM10/28/03
to

Nah, it's not his fault; I couldn't find it either (the above is not what
I recall). I'll try to dig up the reference; it may not have been on
Usenet.

Scott

David Rush

unread,
Oct 28, 2003, 1:53:55 PM10/28/03
to
On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:
>> What is missing in DrScheme?

> What I find more troubling is some of the other Scheme wiz's disdain for
> MzScheme from the aspect of a production quality Scheme. What is it that
> THEY find missing in PLT? Do they know something that we simple Joes do
> not regarding the inner workings of MzScheme?

Well

1) I'm not a Scheme 'wiz' for any value of 'wiz'
2) I like PLT

but I don't use it. And haven't for quite a while (like since early v200).
There are a few reasons for this, some rational and some less so:

1) it's just not fast enough. I do Data Mining and IR applications in
Scheme and I'm starving for CPU cycles, even on my 2Ghz+ machines

2) it was a pain to make fast. The notion of 'standalone executable', while
ostensibly supported involved a complete rebuild of the PLT core

3) I write daemons and command-line programs and don't need GUI bells and
whistles; if I did, PLT would be right up there. Although I'm pretty
excited about SCX/Scsh, and I found programming raw XLIB under Stalin
to have a perverse attraction as well...

4) The unit system was impressive ... and intimidating. And I hated all
the extra punctuation I saw floating around inside of PLT's naming
conventions

5) MrSpidey can't handle big enough programs - and I *really* wish it did.
In fact, if MrSpidey could handle 15KLOC+ programs I would probably
start to make the effort to move back to PLT for pre-production
development. but did I mention that it's not fast enough for my crippled
486/133 at home?

6) the v200 release b0rk3d my PLT code base and the performance wasn't good
enough for me to abandon Gambit & Larceny (which my code also ran on
since I have put a lot of effort into a portable Scheme programming
infrastructure)

7) I'm really attached to Scsh's adaptation of Posix to Scheme. Where PLT
has diverged, I haven't actually found it any better.

8) PLT's library is very big...and very inbred so I can't easily chop off
parts of it to use under other, faster, Scheme implementations. So
programming in PLT becomes a painful exercise in figuring out how to
implement the PLT signatures for my production platforms.

9) PLT is a pain to install. I'm sure that the PLT folks don't think so,
but I haven't been able to get a fully-working install for quite a while
now. It doesn't use configure/make to build and it is very finicky about
file locations. Given that I *usually* need to have a multi-platform
environment I find the lack of flexibility in PLT's installation very
irritating.

10) Very good alternatives to PLT also exist...specifically Gambit (gets
my vote for best all-round), Larceny (if only all the world was SPARC),
Bigloo (great for speed assuming you can live with its limits). And
Stalin which is fast fast fast, but slow slow slow to compile.

Even though I am obsessed with performance, please understand that PLT is
I think the second-fastest interpreter out there (Petite Chez is #1). And
remember that I *do* like many things about PLT, even if it doesn't come
out when I'm whingeing. In fact, I am planning to use PLT to teach my kids
programming.

david rush
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/

David Rush

unread,
Oct 28, 2003, 2:08:48 PM10/28/03
to
On Tue, 28 Oct 2003 11:29:28 +0900, Alex Shinn <fo...@synthcode.com> wrote:
> At Mon, 27 Oct 2003 23:42:24 +0100, felix wrote:
>>
>> What kind of modules? How easy to use should they be? Should they
>> allow interactive use? Man, do you realize how much work has gone into
>> Scheme module systems, yet none really satisfies everybody!
>
> Would it be too much to ask for a standard *syntax* to the module
> system, without specifying the semantics?

I don't think you realize just how outrageous this statement is.

Nevertheless I have been writing a GUMS (Grand Unified Module System) for
several *years* now, based on the theory that all module systems can be
modelled as source-to-source compilers which produce a single source
module.
It works, for certain values of 'work', and if I was an academic I could
probably find the time to finish and polish and add the major missing
module languages to it. If you want to help, please contact me privately
(this is a serious offer). The project is on SourceForge at

http://mangler.sourceforge.net

but be warned, building it is only straightforward for me (even with the
instructions page I imagine), and since I have no users, I tend to get
a bit sloppy about maintaining pieces of it. This has turned out to be a
rather larger project than I thought it would be when I started, if only
because maintaining the library is a necessity I didn't foresee.

David Rush

unread,
Oct 28, 2003, 2:13:59 PM10/28/03
to
On Tue, 28 Oct 2003 06:49:52 GMT, Ray Dillinger <be...@sonic.net> wrote:
> "Bradd W. Szonye" wrote:
> I've been thinking about writing a portable "module mangler."

Ray - I've been working on this for years. That's what S2 is all
about. It does work, but I just don't have the time to keep the
docs (and libs) up to date.

> It would read from disk a bunch of scheme files with some kind
> of standard module syntax, and output a single honkin-large
> scheme file (maybe in a temporary directory) that puts them
> all together with separate namespaces kept separate, and
> strictly-controlled scope for macros, and so on.

That's exactly what I do. I've got the hooks in for alpha-renaming
top-level symbols, but I've never had the need to fully productize
the code. You want to help? I'll happily help you get your first
builds going (bootstrapping the animal is a bit tricky).

> What do people think of the idea?

obviously I think it's brilliant. I just have a day job so my version
seems doomed to live in the twilight of my needs...

http://mangler.sourceforge.net

David Rush

unread,
Oct 28, 2003, 2:26:11 PM10/28/03
to

What I do about that is I plonk the FFI-specific parts of the code
into cond-expand blocks. It seems to work pretty well for me anyway, but
then I'm generally not going much beyond POSIX.
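
A small sketch of that pattern (SRFI 0 cond-expand); the feature
identifiers and the native procedure names below are only illustrative:

    ;; The implementation-specific binding lives in one top-level
    ;; cond-expand; the rest of the library is plain portable Scheme.
    (cond-expand
      (gauche
       (define my-getenv sys-getenv))     ; Gauche's native name
      (chicken
       (define my-getenv getenv))         ; Chicken's native name
      (else
       (define (my-getenv name) #f)))     ; portable fallback

    (define (home-directory)
      (or (my-getenv "HOME") "/"))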

Bradd W. Szonye

unread,
Oct 28, 2003, 3:17:47 PM10/28/03
to
Anton van Straaten <an...@appsolutions.com> wrote:
> Dunno if it's what Scott was thinking of, but here's a post in which Bear
> describes some issues with Unicode & R5RS:
> http://groups.google.com/groups?selm=3D753365.6BE29F0E%40sonic.net
> Some of the earlier and later posts in that thread are also relevant.

That article deals with Unicode support in Scheme code. There's also the
issue of Unicode support for data. The former problem is thornier than
the latter, because supporting Unicode in Scheme code includes all the
problems of Unicode in data *plus* the special considerations necessary
for a case-insensitive programming language.

Bear's overview is good, but he missed an alternative:

Use the Unicode algorithms for case-folding equivalence. When the result
is ambiguous, signal an error. Give the programmer a way to resolve
ambiguities. Example:

A program written in German contains the identifiers "masse" and
"maße." If only one of the two identifiers is in scope, "MASSE"
refers to the one that's in scope. If both are in scope, "MASSE" is
ambiguous.

How does a programmer resolve the ambiguity? The simpler method is to
simply disallow ambiguous uses. The programmer must not use "MASSE" when
both "masse" and "maße" are in scope. A more sophisticated method could
allow a way to specify which identifier "MASSE" is supposed to be
equivalent to.
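
A minimal sketch of that check, assuming a `fold-case' procedure that
implements the Unicode case-folding algorithm (the procedure names here
are hypothetical):

    ;; Resolve a written name against the identifiers in scope by
    ;; case-folded comparison: zero or one match is fine, more than
    ;; one is the ambiguity described above.  `filter' is from SRFI 1.
    (define (resolve-identifier written in-scope)
      (let* ((key     (fold-case written))
             (matches (filter (lambda (id)
                                (string=? key (fold-case id)))
                              in-scope)))
        (cond ((null? matches) #f)                   ; unbound
              ((null? (cdr matches)) (car matches))  ; unique match
              (else (error "ambiguous identifier:" written matches)))))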

Unfortunately, this can violate the principle of least surprise. Suppose
that only "maße" is in scope. The programmer writes

(lambda (MASSE) ... maße ...)

intending to bind MASSE but not maße. Unfortunately, this method shadows
the free variable "maße" because it's "unambiguous." I don't expect that
this would be a common problem, but it would be nasty when it did
happen. And case-folding isn't the only situation where that comes up.
For example, consider the words "resume" and "résumé" under English
collation rules. Depending on context, they may or may not be the same
word. There's a more general problem here: Identifiers that are
ambiguous even without case transformations.

Sometimes, identifiers are ambiguous even when they're spelled
identically. For example, try writing a "resume" (curriculum vitae)
class with "resume" (coroutine yielding) semantics. Oops, there's an
identifier collision! That's a thorny problem all on its own, and
locale-dependent identifiers just make it thornier.

Any identifier clash will tend to violate the principle of least
surprise. Case-folding and other locale-dependent forms of equivalence
just make it more surprising. To the human eye, the addition of accents
sufficiently disambiguate "resume" and "rèsumé," but to a compiler,
they're just as ambiguous as they are without the accents. That mismatch
between what the human sees and what the machine sees is what adds to
the surprise.

I understand why Bear chose the resolution he did -- simply don't permit
any ambiguous characters -- but unfortunately it doesn't address the
underlying problem.

Of course, even if you can deal with that problem, there's still the
problem of combining code from two different languages, with different
concepts of "equivalent symbols"! It's not too surprising that many
languages just punt on this issue and say, "Different spellings mean
different symbols."

Grzegorz Chrupała

unread,
Oct 28, 2003, 4:18:01 PM10/28/03
to
Bradd W. Szonye wrote:
> Use the Unicode algorithms for case-folding equivalence. When the result
> is ambiguous, signal an error. Give the programmer a way to resolve
> ambiguities. Example:
>
> A program written in German contains the identifiers "masse" and
> "maße." If only one of the two identifiers is in scope, "MASSE"
> refers to the one that's in scope. If both are in scope, "MASSE" is
> ambiguous.

Use of Unicode characters in identifier names is largely irrelevant.
Unicode is essential in many applications such as NLP or XML processing,
but it is needed to deal with *data* mainly (characters, strings, symbols),
not identifier names. The potential ambiguity between maße and MASSE as a
variable name is a non-issue. For variable-name case folding, just use
standard Unicode case mapping, where (char-upcase #\ß) is just #\ß and be
done with it.

It is red herrings such as the above that mislead people into thinking that
Unicode support on a basic level is more complicated than it really is.
--
Grzegorz
http://pithekos.net

Bradd W. Szonye

unread,
Oct 28, 2003, 4:52:29 PM10/28/03
to
Grzegorz Chrupa?a <grze...@pithekos.net> wrote:
> Use of Unicode characters in indentifier names is largely irrelevant.

That's why I initially mentioned the difference between Unicode support
for data and Unicode support for program code (e.g., identifiers). The
rest of my article was in response to Bear's earlier discussion of the
latter.

> For variable-name case folding, just use standard Unicode case
> mapping, where (char-upcase #\ß) is just #\ß and be done with it.

Is that actually true? If so, I'd consider that a defect in Unicode,
because the correct spelling of "capital eszett" is "SS." And besides,
case-folding is only part of the problem, because it's only one example
of different but equivalent spellings.

> It is red herrings such as the above that mislead people into thinking
> that Unicode support on a basic level is more complicated than it
> really is.

Unicode support for data is fairly tricky on its own. Many languages
choose not to complicate things by applying the data rules to code. For
example, C++ permits a wide variety of Unicode characters in data and in
code, but it does not attempt locale-dependent equivalence for code --
every different spelling is a different identifier.

However, Schemers like it when the same rules apply to code and data
both. Also, programmers in any case-insensitive language like it when
identifiers "do the right thing" in non-English languages. That's why
any discussion of extended character sets is likely to stray into a
discussion of identifier equivalence.

Grzegorz Chrupała

unread,
Oct 28, 2003, 6:05:58 PM10/28/03
to
Bradd W. Szonye wrote:

> Grzegorz Chrupa?a <grze...@pithekos.net> wrote:
>> For variable-name case folding, just use standard Unicode case
>> mapping, where (char-upcase #\ß) is just #\ß and be done with it.
>
> Is that actually true? If so, I'd consider that a defect in Unicode,
> because the correct spelling of "capital esszed" is "SS." And besides,
> case-folding is only part of the problem, because it's only one example
> of different but equivalent spellings.

The basic non-locale dependent case mapping of ß is ß. There is a
SpecialCasing table which deals with characters such as ß where case
mappings are not simple 1-1 character correspondences.
(http://www.unicode.org/Public/UNIDATA/)

>
> Unicode support for data is fairly tricky on its own. Many languages
> choose not to complicate things by applying the data rules to code. For
> example, C++ permits a wide variety of Unicode characters in data and in
> code, but it does not attempt locale-dependent equivalence for code --
> every different spelling is a different identifier.
>
> However, Schemers like it when the same rules apply to code and data
> both. Also, programmers in any case-insensitive language like it when
> identifiers "do the right thing" in non-English languages. That's why
> any discussion of extended character sets is likely to stray into a
> discussion of identifier equivalence.

"Doing the right thing" in the general case, in a fully locale sensitive way
is indeed complicated, if at all possible. IMO the rules for identifiers as
should be well-defined and simple as well as consitent with treatment of
strings on the basic level, i.e. they should use the general, non-locale
dependent case-mappings.
When dealing with data one could choose to use more refined,
locale-dependent mappings, algorithms etc as needed.

As I see it, it is enough if the core language provides core
Unicode-compatible functionality including the ability to read and write
UTF-8- and UTF-16-encoded text, distinguish characters and bytes, get the length
of a string in characters and bytes, provide standard Unicode
case-mappings, sorting, and Unicode-aware standard character predicates
such as char-whitespace? etc. Anything beyond that can be more or less
easily added in libraries or defined by the user as needed.
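
A tiny sketch of the character/byte distinction, with hypothetical
procedure names (not any particular implementation's API):

    ;; "maße" is 4 characters, but its UTF-8 encoding takes 5 bytes,
    ;; because #\ß encodes as two bytes.
    (string-length "maße")                       ; => 4  (characters)
    (byte-string-length (string->utf-8 "maße"))  ; => 5  (bytes, hypothetical)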

Cheers,
--
Grzegorz
http://pithekos.net

R Racine

unread,
Oct 28, 2003, 6:42:02 PM10/28/03
to

> On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
>> What I find more troubling is some of the other Scheme wiz's disdain
>> for MzScheme from the aspect of a production quality Scheme. What is
>> it that THEY find missing in PLT? Do they know something that we simple
>> Joes do not regarding the inner workings of MzScheme?

[2nd response attempt. 1st Vaporwared, I guess.]

I did not phrase that very well. Sorry.

Here are a couple of ways to get to Sifsad.

Axiom
-----
There is a general consensus amongst the Scheme user/application
developer community on the direction in which the PLT group has expanded Scheme
beyond R5RS with regard to units, modules, general libraries, etc. Across
the board, taking everything into account, we (most, not all, general
Scheme users and app developers) are pretty happy with the PLT philosophy of
Scheme.

However, there is also recognition that PLT fails to deliver the speed
necessary for it to become the mainstream implementation for general
application development.


Strawman Proposal #1
--------------------

The Scheme community joins in to assist PLT with aggressively enhancing
mzc performance to be on par with the bulk of the other Scheme -> C
compilers.

- This assumes that mzc can be significantly improved, as the main
priorities of the PLT group are teaching and language research, resulting
in the current overall efficiency of mzc not being what it could be.
- This is what I was attempting to address above. When I have
previously proposed this plan to "wizzes", those with the technical chops to
work with compiler optimizations, the response can be best categorized as
"can't be done" without much follow-on detail. (beyond continuation
capture efficiency, which is not a killer for something that is targeting
most general application development needs)

Strawman Proposal #2
--------------------

Assuming that the PLT system cannot be improved to the performance levels
of other Scheme -> C systems because the basic architecture of the PLT
system was based on other priorities than speed, the Scheme community
adopts an existing, fundamentally strong, fast Scheme -> C implementation
with the goal of attaining 100% code compliance with PLT (as is
reasonable).

The goal would be to have near identical collections shared between the
two implementations. "Write once, run on both", as it were.


I claim either one of these solutions would be a bit of a godsend to the
majority of bread-and-butter Scheme users who would like to use Scheme for
general application development.

Of course those with "specialized" applications will choose alternate
implementations that emphasize aspects vital to their application.


Ray

Jens Axel Søgaard

unread,
Oct 28, 2003, 6:44:49 PM10/28/03
to
Bruce Stephens wrote:

> Jens Axel Søgaard <use...@jasoegaard.dk> writes:
>>What is missing in DrScheme?

> Bindings to Gtk/GNOME and other random useful libraries?

What are the benefits of Gtk/GNOME over the current portable (Windows,
Unix, Macintosh) GUI already in DrScheme?


Which other useful libraries are you thinking about?

> Speed?

I wanted to hear what mzscheme misses compared to Perl/Python
and as far as I can tell, mzscheme has no problem in the speed
department.

[The existence of the *very* fast Scheme compilers does not imply
that mzscheme is slow]

> Perhaps there are such bindings, and I just don't know where to look
> for them. It's true that speed isn't the main priority for the
> DrScheme family, though, isn't it?

Yes - but that doesn't mean it is slow.

>>In other languages (e.g. Python/Perl) you are pretty much stuck with
>>one implementation.

> But that's OK, because although it is a compromise, it's a reasonable
> one, and because there's only the one, there's an enormous library of
> extensions and code that I can use.

Then find a Scheme that makes the same compromises as in Python/Perl
and use that. Ignore the rest.

> There's lots of scheme code, too,
> but each blob of code that I find will take a few hours of work to
> massage to work with the implementation that I've chosen to use (with
> its particular combination of module system and so on).

My experience is that the authors often are willing to do the porting,
if they are asked.

>>Perhaps a better idea was to begin making an FFI-SRFI?
>
>
> Probably. On the other hand, if it were that easy, someone would
> already have done it.

I didn't say it was easy. Far from it. Lars Hansen has done some legwork,
though.

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Oct 28, 2003, 6:46:11 PM10/28/03
to
Bradd W. Szonye wrote:

> BTW, the development environment was actually a drawback for me -- I'm a
> hardcore vim & Makefiles kinda guy. (In fact, I wrote comprehensive vim
> syntax-highlighting rules for PLT Scheme. I was originally supposed to
> take over maintenance/development from the original author, but I never
> got around to finishing and publishing my rules, because there were some
> performance issues that I never quite worked out.)

?

Why didn't you just ignore DrScheme and use mzscheme?

--
Jens Axel Søgaard


Jens Axel Søgaard

unread,
Oct 28, 2003, 6:55:03 PM10/28/03
to
R Racine wrote:
> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:

>>What is missing in DrScheme?

> Not too much AFAIAC. On a personal level if I list the top 3 things that
> have blown me away in the Scheme impl world:
>
> MIT Scheme: The ground breaking work done here. You see MITScheme code,
> concepts and ideas in many of the current Scheme implementations. It
> is/was the fountainhead.
>
> PLT Scheme: An almost endless stream of what Scheme is capable of.
> Unit/Sigs, Languages , inheritable Structures, Contracts, the Syntax
> concept, opaque types, module system ... You can just randomly click
> about the help system and almost stumble into whole new concepts.
>
> Another example from MzScheme. From Eli's Swindle. I saw that Swindle
> had somehow added support for self evaluating symbols which start with a
> colon. When I installed Swindle, I didn't recall any patching or
> recompiling. So hey, how'd he do that? So I looked.

[Very clever example]

Yes, I also love the very high level of flexibility.
It is perfect for defining new languages without having
to write a compiler from scratch.

> I digress. What is missing in DrScheme? Overall I love it. Mainly a
> Sifsad focus. The system, DrScheme, has a intensional pedalogical focus.
> My concerns, efficient memory usage, optimized VM, speed, debugging are
> not their focus.

I don't agree that debugging is not in focus. Part of a pedagogical
environment is to produce precise error messages for the user.

Specifically DrScheme has

- stack traces
- arrows on top of the source to show calling sequence
- syntax coloring of live code
- a tool for building test suites
- an algebraic stepper (mostly for beginners though)

> The mzc compiler is not on par with some of the other
> Scheme->C systems out there. Is there an inherent architectural tradeoff
> which prevents mzc from approaching Chicken or Bigloo with speed. Don't
> know. If two or three Scheme wizs announced this very night that they
> were going to join the PLT team with a Sifsad prioritized feature list. I
> would do a hand spring and take up organized religion.

If you compare the speed of mzc executables to Perl and Python what are
your conclusions?

--
Jens Axel Søgaard

Jens Axel Søgaard

unread,
Oct 28, 2003, 7:00:15 PM10/28/03
to
David Rush wrote:
> On Tue, 28 Oct 2003 02:05:01 GMT, R Racine <r...@adelphia.net> wrote:
>> On Tue, 28 Oct 2003 01:27:42 +0100, Jens Axel Søgaard wrote:
>>> What is missing in DrScheme?

> but I don't use it. And haven't for quite a while (like since early v200).


> There are a few reasons for this, some rational and some less so:

[Relevant speed reasons snipped - I am interested in the other reasons]

> 5) MrSpidey can't handle big enough programs - and I *really* wish it did.
> In fact, if MrSpidey could handle 15KLOC+ programs I would probably
> start to make the effort to move back to PLT for pre-production
> development. but did I mention that it's not fast enough for my crippled
> 486/133 at home?

I have actually never tried MrSpidey - but you can't seriously
list that as a reason, since the competing languages don't have
similar tools.

> 7) I'm really attached to Scsh's adaptation of Posix to Scheme. Where PLT
> has diverged, I haven't actually found it any better.

POSIX. That would indeed be a good thing to have better support for.

> 8) PLT's library is very big...and very inbred so I can't easily chop off
> parts of it to use under other, faster, Scheme implementations. So
> programming in PLT becomes a painful exercise in figuring out how to
> implement the PLT signatures for my production platforms.

Again. I am narrowmindedly comparing to Perl/Python today, so that
doesn't apply.

> 9) PLT is a pain to install. I'm sure that the PLT folks don't think so,
> but
> but I haven't been able to get a fully-working install for quite a while
> now. It doesn't use configure/make to build and it is very finicky about
> file locations. Given that I *usually* need to have a multi-platform
> environment I find the lack of flexibility in PLT's installation very
> irritating.

Hm. A valid concern.

> Even though I am obsessed with performance, please understand that PLT is
> I think the second-fastest interpreter out there (Petite Chez is #1). And
> remember that I *do* like many things about PLT, even if it doesn't come
> out when I'm whingeing. In fact, I am planning to use PLT to teach my kids
> programming.

How old are they? You could start by showing them the turtles in
DrScheme. That's great fun.

--
Jens Axel Søgaard

Bradd W. Szonye

unread,
Oct 28, 2003, 7:09:31 PM10/28/03
to
> Bradd W. Szonye wrote:
>> BTW, the development environment was actually a drawback for me --
>> I'm a hardcore vim & Makefiles kinda guy.

Jens Axel Søgaard <use...@jasoegaard.dk> wrote:
> Why didn't you just ignore DrScheme and use mzscheme?

That's what I do.

Jens Axel Søgaard

unread,
Oct 28, 2003, 7:15:56 PM10/28/03
to
Bradd W. Szonye wrote:
>>Bradd W. Szonye wrote:

>>>BTW, the development environment was actually a drawback for me --
>>>I'm a hardcore vim & Makefiles kinda guy.

> Jens Axel Søgaard <use...@jasoegaard.dk> wrote:
>>Why didn't you just ignore DrScheme and use mzscheme?

> That's what I do.

So what's the drawback?

--
Jens Axel Søgaard

R Racine

unread,
Oct 28, 2003, 7:56:59 PM10/28/03
to
On Wed, 29 Oct 2003 00:55:03 +0100, Jens Axel Søgaard wrote:

> If you compare the speed of mzc executables to Perl and Python what are
> your conclusions?

Hands down mzc wins. However, I do not consider Python and Perl serious
application development languages. In arena of scripting, small or
one-off applications IMHO mzc/mzscheme is clearly superior. No contest.

But I would like to see mzc/mzscheme move from the champ of the middle
weight division to heavy weight contender. For me this means aggregate
benchmark suite performance on par (lets say less within a factor of 2x)
with SML/NJ, CMUCL, or C++ and most of the other Scheme -> C systems.

Anecdotal story. Recently while the "Coins" discussion was taking place
on c.l.s the author of one of the major Scheme->C systems was on the
#Scheme IRC. I believe both of us were surprised at how competitive mzc was
vs a well respected Scheme -> C system. Mzc didn't win but did well. (I
believe the GMP bingings for large exacts and how cleverly large exacts
are implemented in MzScheme account for its very respectable showing.)

I expect (guessing here) mzc would be less competitive on boyer.scm for
example.

Ray

Matthias Felleisen

unread,
Oct 28, 2003, 8:04:48 PM10/28/03
to
R Racine wrote:

Some large company located near the northwestern corner of the continental US
has sponsored Will Clinger (Larceny) and PLT to create a merger of the two
Scheme systems not unlike a mix of the strawman proposals that you have put up
below.

The specific plan is as follows:
- Will and some others are retargeting Larceny to the intermediate language of
said company's virtual machine. Will has been calling this project Common
Larceny.
- Joe Marshall and some others are porting MrEd to said company's toolbox
The result could be a MrEd that's almost completely in Scheme.
Will is certainly encouraging us to think of Scheme as a systems language.
Eli's arrival has strengthened this goal even more.
- Once we have a joint Scheme, we are hoping to retarget it to other platforms.

How realistic is the plan? Producing Larceny was a two-man effort. It's a fast,
reliable R5RS implementation with a few extra goodies. It is particularly
well-suited for the research ideas that Will wishes to pursue.

PLT Scheme is a many-people, many-years effort. Matthew (mzscheme), Robby
(drscheme), Shriram (zodiac, server, libs), Cormac (mrspidey), Philippe (mrflow
= mrspidey successor), Paul Steckler (myster, sister, mzcom), John (the foot,
and soon a debugger), Paul Graunke (the server, soon to be managed by Greg),
Scott (parser tools), and countless others who are working and/or have worked on
bits and pieces of the tool suite, not to mention their "day jobs". It is an
expensive product.

Merging the two projects is not an easy task. It won't be done quickly. If
people really want a top-notch product, however, it may be the route to go.
If you have time or money to contribute, or you want to volunteer friends, please
do so. The goal is to produce a good platform for the first Schemers and the
rest of the world, too.

-- Matthias

Bradd W. Szonye

unread,
Oct 28, 2003, 8:08:04 PM10/28/03
to
Bradd wrote:
>>>> BTW, the [PLT] development environment was actually a drawback for
>>>> me -- I'm a hardcore vim & Makefiles kinda guy.

Jens Axel Søgaard <use...@jasoegaard.dk> wrote:
>>> Why didn't you just ignore DrScheme and used mzscheme?

>> That's what I do.

> So what's the drawback?

I've gotten the impression that some of the cool debugging and error
reporting features are only available in DrScheme. And in general, I've
gotten the impression that a lot of effort goes into developing the GUI
specifically rather than into improving the suite overall. That's a
bummer for me -- they're creating stuff that I can't make full use of,
because their "showcase" tool is incompatible with my work habits.

It's not a huge drawback, and it's obviously not stopping my from using
PLT, but I would like to see more "hooks" (or documentation on how to
use those tools outside of the GUI).

Bruce Stephens

unread,
Oct 28, 2003, 8:24:49 PM10/28/03
to
"Anton van Straaten" <an...@appsolutions.com> writes:

[...]

> Perhaps you didn't make the proper offerings to the Great God Google...

Quite possibly.

> Dunno if it's what Scott was thinking of, but here's a post in which Bear
> describes some issues with Unicode & R5RS:
> http://groups.google.com/groups?selm=3D753365.6BE29F0E%40sonic.net
> Some of the earlier and later posts in that thread are also relevant.

I'm not sure that's *so* important. That seems to be specifically
about having unicode in identifiers (there are obvious issues about
matching (presumably you'd want to canonicalize), and the notion of
case insensitivity is more complex in the unicode world). My guess is
that mostly people care about unicode in strings, and I/O with files
(or sockets or whatever) which are in particular encodings.

Of course, there's a strong overlap, especially with a lispy
language---a natural way to process XML is presumably to transform to
and from s-expressions, and to manipulate the s-expressions. Perhaps
the right things to worry about really are identifiers?

Eli Barzilay

unread,
Oct 28, 2003, 8:33:53 PM10/28/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> I've gotten the impression that some of the cool debugging and error
> reporting features are only available in DrScheme. And in general,
> I've gotten the impression that a lot of effort goes into developing
> the GUI specifically rather than into improving the suite overall.

Investing lots of effort in the GUI doesn't imply not improving the
"suite overall".


> That's a bummer for me -- they're creating stuff that I can't make
> full use of, because their "showcase" tool is incompatible with my
> work habits.

You know that you could use the GUI just to debug stuff, and when
you're not debugging just pretend it's not there.


> It's not a huge drawback, and it's obviously not stopping my from
> using PLT, but I would like to see more "hooks"

What hooks?


> (or documentation on how to use those tools outside of the GUI).

What tools exactly? Take the arrows that you get in the GUI that show
you bindings or the arrows that show you how you arrived at this
point -- how would you do these things outside a GUI? There's no
documentation on how to use this outside of a GUI simply because such
documentation requires an implementation, and the implementation of
these features without a GUI seems a bit like science fiction.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!

Eli Barzilay

unread,
Oct 28, 2003, 8:59:42 PM10/28/03
to
(very selective replying)

David Rush <ku...@gofree.indigo.ie> writes:

> 2) it was a pain to make fast. The notion of 'standalone
> executable', while ostensibly supported involved a complete
> rebuild of the PLT core

The standard meaning of a `standalone executable' never had anything
to do with a complete rebuild.


> 3) I write daemons and command-line programs and don't need GUI
> bells and whistles; if I did, PLT would be right up

> there. [...]

I've done this for years, and am still doing this as my heaviest
usage. I fail to see how having bells and whistles stands in my way.


> 4) The unit system was impressive ... and intimidating. And I hated
> all the extra punctuation I saw floating around inside of PLT's
> naming conventions

I can definitely tell you, having gone through the nightmare of
porting tiny-clos to guile, then redoing it for mzscheme (v5x to v10x),
that the module system is one of the most amazing things I have ever
worked with. Right now Swindle does all kinds of tricks you could not
even dream of doing while keeping your sanity (and the keyword
stuff is far from the most complex thing, btw) -- yet, it works
perfectly on the command line as well as in DrScheme. Also, even though
Swindle is so drastically hacked, I can still use other Scheme
modules, and other Scheme modules can use Swindle modules -- and there
are no problems at all.

Units are a little harder to get a handle on, but they are not needed
for most straightforward usages. (But given that you like functors,
you would probably want to use them, but you would probably not have
a hard time learning how to use them.)


> 6) the v200 release b0rk3d my PLT code base [...]

When it did that for Swindle, I had a similar reaction. Forcing me
into using modules and other stuff that was incompatible sounded
really bad. I gave it a shot, and the result was so much cleaner to
write, and so much easier to maintain that I actually enjoyed it, and
as a result I could add more stuff which I couldn't before (since the
complexity was close to getting to critical mass).


> 9) PLT is a pain to install. I'm sure that the PLT folks don't think
> so, but but I haven't been able to get a fully-working install
> for quite a while now. It doesn't use configure/make to build

Either you're on a different planet than I am, or you're talking about
something else. It definitely uses configure and make to build.

But even if you don't want to use that, and you're willing to use one
of a few popular platforms, then you can now work with the cvs by a
simple:

curl http://download.plt-scheme.org/scheme/binaries/<some-path> \
| tar xzf -
cd plt
./install -u +z


> and it is very finicky about file locations.

Huh?


> Given that I *usually* need to have a multi-platform environment
> I find the lack of flexibility in PLT's installation very
> irritating.

What lack of flexibility?

MJ Ray

unread,
Oct 28, 2003, 9:00:33 PM10/28/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> wrote:
> Is that actually true? If so, I'd consider that a defect in Unicode,
> because the correct spelling of "capital esszed" is "SS." And besides,

That probably depends on your language, surely? Upcasing the *character*
gives no change. "upcase" means look in the same position in the upper case,
after all.


R Racine

unread,
Oct 28, 2003, 9:41:13 PM10/28/03
to
On Tue, 28 Oct 2003 20:04:48 -0500, Matthias Felleisen wrote:

> Some large company located near the northwestern corner of the
> continental US has sponsored Will Clinger (Larceny) and PLT to create a
> merger of the two Scheme systems not unlike a mix of the strawman
> proposals that you have put up below.

Ohmy!

> The specific plan is as follows:
> - Will and some others are retargeting Larceny to the intermediate
> language of
> said companies virtual machine. Will has been calling this project
> Common Lareceny.
> - Joe Marshall and some others are porting MrEd to said company's
> toolbox
> The result could be a MrEd that's almost completely in Scheme. Will
> is certainly encouraging us to think of Scheme as a systems
> language. Eli's arrival has strengthened this goal even more.
> - Once we have a joint Scheme, we are hoping to retarget it to other
> platforms.
>
> How realistic is the plan? Producing Larceny was a two-man effort. It's
> a fast, reliable R5RS implementation with a few extra goodies. It is
> particularly well-suited for the research ideas that Will wishes to
> pursue.
>
>

Just this week I ran across Larceny (up till then I thought it yet another
fairly decent Scheme->C). I was wrong.

Very impressive accomplishment. I believe in one of my earlier posts I
wrote a line where I was going to post a proposal for Larceny/Twobit as
the core of a new Scheme system. Never got to it (lost my nerve with all the
flak), though I was still trying to "bend" the conversation that way. Little
did I know I was WAY behind the curve. Stole my thunder though :(

For any Schemers who have poked around a number of Schemes (and who hasn't)
and have not yet looked at Larceny / Twobit: you should.

It took about 15 or 20 simple compatibility function definitions and I
was hosting the Twobit compiler emitting petit-larceny Scheme->C code in
MzScheme. It's so pluggable, change a pass5p2 include and it spits out Sparc
assembly. The millicode looked straightforward enough to convince a
duffer like myself into self-delusion. "Fire up ol' nasm and whip out
millicode for i386. No problemo. Just follow the petit C millicode. Got
a Sparc sample to follow as well. Couple weekends..." Like I said,
delusional for a good 3 minutes there. :)

Targeting the compiler is of course the easy part. The darn runtime is
the crux.

Just this very morning I reactivated an account I have on a Sun 15K for
the sole purpose of playing a bit with the Larceny runtime. Small world.


[Group of dedicated people, whom we thank for their efforts, was here.]


> Merging the two projects is not an easy task. It won't be done quickly.
> If people really want a top-notch product, however, it may be the route
> to go. If you have time to contribute or money or you want to volunteer
> friends, please do so. The goal is to produce a good platform for the
> first Schemers and the rest of the world, too.
>
>

The talent of the group is intimidating. One trick is how the inner core
will "open" and run the project so secondary/tertiary players can
effectively contribute without getting in the way. Watson for the Holmes,
Salieri for the Mozart, Barry Bonds batboys.

> -- Matthias


Ray

P.S.
- That sound you heard was me doing a couple of handsprings.
- Was kidding about the organized religion thing.

Pedro Pinto

unread,
Oct 28, 2003, 9:41:18 PM10/28/03
to
Matthias Felleisen wrote:
[...]

> Merging the two projects is not an easy task. It won't be done quickly. If
> people really want a top-notch product, however, it may be the route to go.
> If you have time to contribute or money or you want to volunteer
> friends, please do so. The goal is to produce a good platform for the
> first Schemers and the
> rest of the world, too.

This is very exciting news. Could you detail how one would go about
contributing? Perhaps a project home page exists somewhere? If not, maybe
one should be created (I'd volunteer but I have poor skills in that
area). I have a feeling you could get a lot of help from people who are
currently forced to target said intermediate language through more
primitive means.

-pp

R Racine

unread,
Oct 28, 2003, 9:54:42 PM10/28/03
to
On Tue, 28 Oct 2003 20:04:48 -0500, Matthias Felleisen wrote:


> [stuff]


> Merging the two projects is not an easy task.

> [stuff]
>
> -- Matthias
>

BTW. I don't suppose this new Scheme project has been named yet?
Because, ahem, I think I have a real peachy suggestion or two.

Ray

Alex Shinn

unread,
Oct 28, 2003, 10:27:39 PM10/28/03
to
At Wed, 29 Oct 2003 00:44:49 +0100, Jens Axel Søgaard wrote:
>
> Bruce Stephens wrote:
> > Jens Axel Søgaard <use...@jasoegaard.dk> writes:
> >>What is missing in DrScheme?
>
> > Bindings to Gtk/GNOME and other random useful libraries?
>
> What are the benefits of Gtk/GNOME over the current portable (Windows,
> Unix, Machintosh) GUI already in DrScheme?

Please correct me if I'm wrong on any of these:

1) UTF-8 and localization support
2) OpenGL
3) tables
4) trees
5) misc. compound widgets like dialogs and calendars
6) efficiency (probably minor importance)
7) native look&feel (important for newbies & PHB's)
8) familiarity (many people already know the Gtk API)
9) mindshare (lots of new work is done for Gtk)

Also, Gtk is fairly portable. I have GUI Gauche-gtk apps that run
unmodified on both Linux and Mac. Gtk apparently runs on Windows too
though I would never touch said OS.

--
Alex

Shriram Krishnamurthi

unread,
Oct 28, 2003, 11:47:35 PM10/28/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> I've gotten the impression that some of the cool debugging and error
> reporting features are only available in DrScheme.

That's partly true. On the other hand, it's kinda' hard to draw
arrows on top of a textual interface.

Many of the language extensions, in particular, can be loaded into
MzScheme; those don't need DrScheme. As for tools, some of the data
can be exposed as data structures with a little work.

After all, the tools always begin textually before becoming graphical.
The problem is that, if a tool is eventually going to have a graphical
interface, it's more work to provide both interfaces.

A group of users who really cared could push for that second interface
to be documented -- especially if they did the first round of exposure
and documentation, it'd be a lot less work for the developer to
maintain...

Shriram

Shriram Krishnamurthi

unread,
Oct 28, 2003, 11:53:24 PM10/28/03
to
Alex Shinn <fo...@synthcode.com> writes:

> > What are the benefits of Gtk/GNOME over the current portable (Windows,
> > Unix, Machintosh) GUI already in DrScheme?
>
> Please correct me if I'm wrong on any of these:

Well, the question was which are benefits *over* wxWindows in PLT:

> 1) UTF-8 and localization support

True, but given that DrScheme doesn't quite have this yet...

> 2) OpenGL

Being done independently by Scott Owens (and possibly others).

> 3) tables
> 4) trees

Not sure what these are exactly. Someone with better knowledge of
both toolkits will have to compare.

> 5) misc. compound widgets like dialogs and calendars

Fair enough, but how much use are these to the average developer? And
do they actually work on all three platforms in a consistent way,
interfacing with the native tools (eg, on Windows, will it interface
with my Outlook calendar)?

> 6) efficiency (probably minor importance)

Sure.

> 7) native look&feel (important for newbies & PHB's)

Given that DrScheme looks and quacks like a Windows app on Windows, a
Mac app on the Mac...

> 8) familiarity (many people already know the Gtk API)
> 9) mindshare (lots of new work is done for Gtk)

No dispute there.

Shriram

Shriram Krishnamurthi

unread,
Oct 28, 2003, 11:43:17 PM10/28/03
to
"R Racine" <r...@adelphia.net> writes:

> > Merging the two projects is not an easy task.
>

> BTW. I don't suppose this new Scheme project has been named yet?
> Because, ahem, I think have a real peachy suggestion or two.

Help implement part of it and we'll name it after your pet peach if
you want.

(Btw, there is a real peachy name that Matthias decided to keep under
wraps. Fans of Law and Order can guess it pretty easily.)

Shriram

Alex Shinn

unread,
Oct 29, 2003, 12:29:17 AM10/29/03
to
At 28 Oct 2003 23:53:24 -0500, Shriram Krishnamurthi wrote:
>
> Alex Shinn <fo...@synthcode.com> writes:
>
> > > What are the benefits of Gtk/GNOME over the current portable (Windows,
> > > Unix, Machintosh) GUI already in DrScheme?
> >
> > Please correct me if I'm wrong on any of these:
>
> Well, the question was which are benefits *over* wxWindows in PLT:

Yes, the original question was about wxWindows but I was replying to the
quote above about advantages Gtk has over the DrScheme GUI. Gtk has
existing bindings in at least Bigloo, Gauche and Guile, so it's worth
comparing. I'm not too familiar with the DrScheme widget set so I was
trying to get a feel for whether it could serve as a serious
alternative.

> > 1) UTF-8 and localization support
>
> True, but given that DrScheme doesn't quite have this yet...

That's a showstopper for me. I have a (currently alpha) Gtk mail client
and a Gtk web browser, written in Gauche, that I use for English and
Japanese text, among others.

> > 2) OpenGL
>
> Being done independently by Scott Owens (and possibly others).

Cool!

> > 3) tables
> > 4) trees
>
> Not sure what these are exactly. Someone with better knowledge of
> both toolkits will have to compare.

Tables as in spreadsheet-like interfaces with editable cells. Trees
like a file explorer with a collapsible hierarchy.

> > 7) native look&feel (important for newbies & PHB's)
>
> Given that DrScheme looks and quacks like a Windows app on Windows, a
> Mac app on the Mac...

And, no offense, looks like a poor Tk substitute on Linux. So DrScheme
is biased towards 2 platforms while Gtk is biased towards 1 (well, it
looks identical to the Linux Gtk on OS X, as opposed to the native
Aqua).

--
Alex

Bradd W. Szonye

unread,
Oct 29, 2003, 1:32:51 AM10/29/03
to
Grzegorz Chrupała <grze...@pithekos.net> wrote:
>>> For variable-name case folding, just use standard Unicode case
>>> mapping, where (char-upcase #\ß) is just #\ß and be done with it.

> Bradd W. Szonye wrote:
>> Is that actually true? If so, I'd consider that a defect in Unicode,
>> because the correct spelling of "capital esszed" is "SS." And
>> besides, case-folding is only part of the problem, because it's only
>> one example of different but equivalent spellings.

> The basic non-locale dependent case mapping of ß is ß. There is a
> SpecialCasing table which deals with characters such as ß where case
> mappings are not simple 1-1 character correspondences.

Oh, I see what you're saying now. Ignore collation order in general, and
just use the "non-localized" version of case-insensitivity -- what Unix
geeks would call the "C" locale. That makes some sense. I don't know how
non-English programmers would feel about it (but then again, most of
them are accustomed to programming in ASCII, for better or worse).

> "Doing the right thing" in the general case, in a fully locale
> sensitive way is indeed complicated, if at all possible. IMO the rules
> for identifiers should be well-defined and simple as well as
> consistent with treatment of strings on the basic level, i.e. they
> should use the general, non-locale dependent case-mappings.

Yeah, it's tough. On the one hand, it'd be nice to permit programming in
the local language. On the other hand, that's very hard to do, maybe
impossible, and it causes interoperability problems when you need to use
other people's code (in other languages).

> When dealing with data one could choose to use more refined,
> locale-dependent mappings, algorithms etc as needed.

Definitely.

By the way, I was experimenting with Unicode sources the other day. I
got to wondering how difficult it would be to use a lambda character
instead of the word lambda. There were a few surprises, some pleasant
and some unpleasant.

Bad: It took me a while to configure everything. Luckily, my favorite
monospaced font (Lucida Console) supports the Greek codepage -- it was
one of only three fonts on my system to do so. Unfortunately, it doesn't
include mathematical symbols like for-all and there-exists. I wish we
had better ISO 10646 fonts (and that font vendors did a better job of
advertising which fonts support which character sets). Currently, you
need to be an expert in the field to figure it out, and even then it's
difficult.

Good: MzScheme dealt with my lambda symbol with no tweaking whatsoever
(beyond defining it to mean the same thing as "lambda"). At first, I
thought I might need to use symbol quotes. Then, I realized that UTF-8
encoding makes that unnecessary -- all non-ASCII glyphs have the high bit
set for all bytes, and MzScheme treats all such bytes as identifier
characters. The only drawback is that you don't get case-folding for
free.
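
For the curious, the alias itself is trivial -- something along these
lines (a sketch of the idea, not my exact file; it assumes the source
is saved as UTF-8):

  ;; Define the Greek lambda character as ordinary macro syntax that
  ;; expands into the standard `lambda' form.
  (define-syntax λ
    (syntax-rules ()
      ((_ formals body ...)
       (lambda formals body ...))))

  ;; Usage: the usual squaring example, just with the shorter keyword.
  ((λ (x) (* x x)) 5)   ; => 25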

Bradd W. Szonye

unread,
Oct 29, 2003, 1:38:42 AM10/29/03
to
> David Rush <ku...@gofree.indigo.ie> writes:
>> 9) PLT is a pain to install. I'm sure that the PLT folks don't think
>> so, but I haven't been able to get a fully-working install
>> for quite a while now. It doesn't use configure/make to build

Eli Barzilay <e...@barzilay.org> wrote:
> Either you're on a different planet than I am, or you're talking about

> something else. It definitely uses configure and make to build ....
> What lack of flexibility?

He may have overlooked something, but I have a similar complaint in this
area. PLT Scheme ignores the local conventions for file locations:
programs in pfx/bin, docs in pfx/doc, libraries in pfx/lib, etc. Most
autoconf installers let you set the prefix and then distribute stuff in
the standard locations under it. (They'll even let you set the bin and
lib prefixes independently, for slightly non-standard installs.) With
PLT, I need to set up a bunch of symbolic links to support the standard
paths.

Bradd W. Szonye

unread,
Oct 29, 2003, 1:47:03 AM10/29/03
to
Shriram Krishnamurthi <s...@cs.brown.edu> wrote:
> (Btw, there is a real peachy name that Matthias decided to keep under
> wraps. Fans of Law and Order can guess it pretty easily.)

LtVanBuren? MrMcCoy? CptCragen?

Bradd W. Szonye

unread,
Oct 29, 2003, 1:52:04 AM10/29/03
to
> At 28 Oct 2003 23:53:24 -0500, Shriram Krishnamurthi wrote:
>> Given that DrScheme looks and quacks like a Windows app on Windows, a
>> Mac app on the Mac...

Alex Shinn <fo...@synthcode.com> wrote:
> And, no offense, looks like a poor Tk substitute on Linux.

Heh, yeah. The differences from Gtk are subtle but noticeable. However,
I haven't considered it a deal-breaker, because I'm used to seeing a
combination of Gtk, Qt, Athena, etc. apps on Linux. Personally, I like
Qt's interface the best, but Gtk is close.

There's something subtle missing in the Windows interface too. Mainly
stuff like buttons not being quite where I'd expect them to be, and
"common dialogs" not quite matching the native Windows versions. It's
been a while since I've played with the GUI tools, though, so I may be
misremembering, and it may have improved in 205.

Bradd W. Szonye

unread,
Oct 29, 2003, 1:54:32 AM10/29/03
to
Matthias Felleisen <matt...@ccs.neu.edu> wrote:
> Some large company located near the northwestern corner of the
> continental US has sponsored Will Clinger (Larceny) and PLT to create
> a merger of the two Scheme systems not unlike a mix of the strawman
> proposals that you have put up below ....

> - Once we have a joint Scheme, we are hoping to retarget it to other
> platforms.

Sounds interesting, although I'll be mighty bummed if Linux support is
late and MzScheme support suffers. I originally chose PLT so that I
could develop and plan on Linux, then deploy on Windows XP.

Bradd W. Szonye

unread,
Oct 29, 2003, 1:59:26 AM10/29/03
to
> "Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:
>> I've gotten the impression that some of the cool debugging and error
>> reporting features are only available in DrScheme. And in general,
>> I've gotten the impression that a lot of effort goes into developing
>> the GUI specifically rather than into improving the suite overall.

Eli Barzilay <e...@barzilay.org> wrote:
> Investing lots of efforts on the GUI doesn't imply not improving the
> "suite overall".

Sorry, didn't mean to imply otherwise. Just noting that *some* resources
go to developing DrScheme (a tool I don't use) and not the tools that I
do use.

> You know that you could use the GUI just to debug stuff, and when
> you're not debugging just pretend it's not there.

I could, although I've had great difficulty doing so in practice. I can
never figure out how to load my code and get it running. I just press
buttons and nothing interesting happens. It's definitely not the most
intuitive debugger I've used. I should probably read the manual so that
I know what I'm doing and give it a fair trial.

Anton van Straaten

unread,
Oct 29, 2003, 2:07:39 AM10/29/03
to
Bradd W. Szonye wrote:
> Shriram Krishnamurthi <s...@cs.brown.edu> wrote:
> > (Btw, there is a real peachy name that Matthias decided to keep under
> > wraps. Fans of Law and Order can guess it pretty easily.)
>
> LtVanBuren? MrMcCoy? CptCragen?

I'm betting on spinoff show names: "Special Victims Unit" or perhaps
"Criminal Intent"...

Anton

Eli Barzilay

unread,
Oct 29, 2003, 2:11:04 AM10/29/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> He may have overlooked something, but I have a similar complaint in
> this area. PLT Scheme ignores the local conventions for file
> locations: programs in pfx/bin, docs in pfx/doc, libraries in
> pfx/lib, etc.

But you have to realize that these conventions are local to just one
platform, which means that they're completely meaningless in the plt
tree context. This means that putting files in the right places
should be the job of a platform-specific installer. For the linux
case, there was an rpm for a while, and I hope to get that back in,
but I don't think that it's high priority (BTW, I always liked the
single tree, but I always used it from my home dir).


> Most autoconf installers let you set the prefix and then distribute
> stuff in the standard locations under it. (They'll even let you set
> the bin and lib prefixes independently, for slightly non-standard
> installs.)

I really don't see the point of doing this. The file division you
describe suggests that on a standard Linux distribution you'll be happy
if you get:

1. libraries go in /usr/lib/plt
2. documentation in /usr/share/doc/plt
3. binaries in /usr/bin
4. include files in /usr/include/plt
5. man files in /usr/man/man1

So:

1. There are very few libraries -- one for compiling extensions by
mzc, so it should be in the plt tree. Two others are for embedding
it in a C application, but at that level I don't think having the
libraries in a different place would matter much.

2. The plt documentation is really different than other packages --
stuff that goes in /usr/share/doc is usually readmes etc, and not
things that users should read. So the most I'd put there is the
readme and the notes directory. Other documentation should stay
in the plt tree, where they can be updated automatically, and used
by the web server for queries etc.

3. Most of the binaries are scripts -- and these set the default for
the PLTHOME variable so they know where to find the collections and other
stuff. But is there anything wrong with just using symbolic links in
the bin directory?

4. there are a few include files (which mzc knows where to find) and a
few man files (mostly the same as `mzscheme -h' etc).

So I don't see any reason at all to scatter files all over the place
to just make life harder afterwards when files that are required to
run stuff are not in a place you expect them to be. So I think that
the best approach would be a single plt directory, and putting a few
links to the above stuff, making for easy maintenance of an RPM (I don't
even want to do an SRPM). If you have any reasons for this to not
make sense, or if you have any additional information on politically
correct ways of creating RPMs, mail me directly.

Anton van Straaten

unread,
Oct 29, 2003, 2:16:08 AM10/29/03
to
Bradd W. Szonye wrote:

> Eli Barzilay <e...@barzilay.org> wrote:
> > You know that you could use the GUI just to debug stuff, and when
> > you're not debugging just pretend it's not there.
>
> I could, although I've had great difficulty doing so in practice. I can
> never figure out how to load my code and get it running. I just press
> buttons and nothing interesting happens. It's definitely not the most
> intuitive debugger I've used.

That might be because it isn't actually a debugger... :) Although it does
give good navigable backtraces, and some other nice touches. I use it
exactly as Eli suggests.

Anton

chain...@hotmail.com

unread,
Oct 29, 2003, 2:28:31 AM10/29/03
to
mi...@pitt.edu (Michele Simionato) wrote in message
>
> What I (as an outsider to the community) would appreciate is:
>
> 1. make a stricter R5RS (not very strict, but stricter than now);
>
> 2. make more srfi (much more);
>
> 3. make them available on every implementation.

I agree with you on these points, and you are not the first to propose
them. I read on the Bigloo mailing list that R6RS or something
like this should be a big step towards "making Scheme more user
friendly within different Scheme implementations".

But at the same time I am also convinced that some people make a
mental mistake when approaching Scheme. Maybe I am heading in the
wrong direction, but isn't it true that a lot of Python (etc.) folks
think that if you want to use Scheme you also have to learn /all/ the
different implementations of Scheme?

My advice is not to think that way. Most of the time I use Bigloo,
and I feel like a Scheme programmer; whether I know PLT or Chicken or
Gambit is irrelevant, because I actually use Scheme. Chicken
programmers presumably feel the same way, and so on.

I have never heard of a C++ programmer denying that he programs in an
object-oriented style just because he has never heard of, say,
Smalltalk.

Apropos SRFIs: from what I have seen, Chicken has all the SRFIs, and I
think DrScheme does too. Bigloo has at least some of them natively
integrated, and the newer Bigloos have the option to create SRFI
libraries (look into the SRFI folder of the Bigloo distribution). I am
not aware of the state of all the other Scheme implementations.

Nobody should hesitate to use Scheme for real (non-academic)
projects. I am more and more reaching the conclusion that, for
example, the oft-stressed belief that Common Lisp is the industry
standard and Scheme is the academic standard is nothing more than a
gag. Maybe my programming needs are different, but I cannot see this
industrial strength in Common Lisp.

Fensterbrett

Eli Barzilay

unread,
Oct 29, 2003, 2:39:49 AM10/29/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> I could, although I've had great difficulty doing so in practice. I
> can never figure out how to load my code and get it running.

Well, I'm not a gui expert, but both of these seem obvious enough.


> I just press buttons and nothing interesting happens. It's
> definitely not the most intuitive debugger I've used. I should
> probably read the manual so that I know what I'm doing and give it a
> fair trial.

Yes.

Bradd W. Szonye

unread,
Oct 29, 2003, 2:57:53 AM10/29/03
to
> "Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:
>> He may have overlooked something, but I have a similar complaint in
>> this area. PLT Scheme ignores the local conventions for file
>> locations: programs in pfx/bin, docs in pfx/doc, libraries in
>> pfx/lib, etc.

Eli Barzilay <e...@barzilay.org> wrote:
> But you have to realize that these conventions are local to just one
> platform, which means that they're completely meaningless in the plt
> tree context. This means that putting files in the right places
> should be the job of a platform-specific installer.

That's true for any software package. You don't organize the sources
that way! Some of those directories (like /bin) don't even exist in the
source tree. But the installer should organize the deliverables that
way, and autoconf (configure) makes it easy to get it right. This isn't
a huge problem -- just an annoyance and a minor bit of
"unprofessionalism."

> For the linux case, there was an rpm for a while, and I hope to get

> that back in, but I don't think that it's high priority ....

You don't need RPM to put deliverables in the standard directories. RPM
is just a script to run the actual installer and gather up what it
creates. RPM won't help much unless configure & make install do the
right thing in the first place.

>> Most autoconf installers let you set the prefix and then distribute
>> stuff in the standard locations under it. (They'll even let you set
>> the bin and lib prefixes independently, for slightly non-standard
>> installs.)

> I really don't see the point of doing this. The file division you
> describe suggests that on a standard Linux distribution you'll be happy
> if you get:
>
> 1. libraries go in /usr/lib/plt
> 2. documentation in /usr/share/doc/plt
> 3. binaries in /usr/bin
> 4. include files in /usr/include/plt
> 5. man files in /usr/man/man1

Yes, that's the standard way to do it on a Red Hat Linux system, but not
all installations use the standard locations, so configure provides
hooks to rearrange the tree if you need to.

> So:
>
> 1. There are very few libraries -- one for compiling extensions by
> mzc, so it should be in the plt tree. Two others are for embedding it
> in a C application, but at that level I don't think having the
> libraries in a different place would matter much.

If you put those libraries in the standard locations, it eliminates one
step from the compile & link makefile.

> 2. The plt documentation is really different than other packages --
> stuff that goes in /usr/share/doc is usually readmes etc, and not
> things that users should read.

Not in my experience! Except for manpages, /usr/share/doc is exactly
where you put stuff like PLT's collects/doc directory.

> So the most I'd put there is the readme and the notes directory.
> Other documentation should stay in the plt tree, where they can be
> updated automatically, and used by the web server for queries etc.

Why can't PLT do that if they're in /usr/share/doc/plt? The simple
answer is that it's designed around a "one tree to rule it all"
approach, but that maps poorly to Unix systems.

> 3. Most of the binaries are scripts -- and these set the default for
> the PLTHOME variable so they know where to find the collections and other
> stuff. But is there anything wrong with just using symbolic links in
> the bin directory?

It's one more thing I need to do manually when I install PLT. I can
manually add plt/bin to my path, but either way, it doesn't work out of
the box.

> 4. there are a few include files (which mzc knows where to find) and a
> few man files (mostly the same as `mzscheme -h' etc).

There are only a few include files for *most* libraries. That's not a
good reason to tuck them away in a non-standard directory.

> So I don't see any reason at all to scatter files all over the place
> to just make life harder afterwards when files that are required to
> run stuff are not in a place you expect them to be.

That's the problem: By putting them into a single tree rather than the
standard locations, they *aren't* where Unix developers expect them to
be. We need to manually adjust lots of paths (or create lots of
symlinks) to use this stuff, because it's not where other Unix apps
expect it to be.

> So I think that the best approach would be a single plt directory, and
> putting a few links to the above stuff, making for easy maintenance of an
> RPM (I don't even want to do an SRPM). If you have any reasons for
> this to not make sense, or if you have any additional information on
> politically correct ways of creating RPMs, mail me directly.

RPM has nothing to do with it. RPM only does what make install tells it
to (plus some glue). RPM's scripting language could create those links,
but it's better to do it in make install. And really, it's even better
to put everything where it belongs rather than linking to it. Symlinks
are nice, but there are some gotchas, and in general they aren't as
convenient as installing in "the Unix way" to begin with.

PLT is not alone in this; for example, I think Perl installs to
/opt/perl on HP-UX systems, with the same annoyances. But on Linux
systems, the Perl installer puts everything where other tools expect to
find it, "scattered" across the directories.

It isn't really scattering, though. For most directories, the only
difference between the PLT way and the standard Linux way is whether the
"package" name comes before the "type" name or the other way
around. For example:

PLT way              Linux standard way
/usr/plt/include     /usr/include/plt
/usr/plt/doc         /usr/doc/plt
/usr/PKG/TYPE        /usr/TYPE/PKG

There are exceptions, but that's the general idea. While it may seem a
bit odd to put type before package, Unix systems do it that way because
it works better for search paths. If you use /usr/plt/include,
programmers need to explicitly put the path in their makefiles. If you
use /usr/include/plt, programmers can just write "#include <plt/foo.h>"
and go with it. Since each search path has its own "type," you can just
list all of the type directories and then use PKG/FILE to find what you
want.

Even PLT makes use of this concept, with plt/collects/PKG. Think of how
annoying it would be if a PLT collection installed itself into
plt/PKG/collects instead. You'd need to manually create links or update
the PLTCOLLECTS path, or it wouldn't work right. That's exactly the same
thing that the PLT installer does to other Unix tools.

Bradd W. Szonye

unread,
Oct 29, 2003, 3:04:16 AM10/29/03
to
chain...@hotmail.com <chain...@hotmail.com> wrote:
> Apropos SRFI: What I saw: Chicken has all the SRFI's, I think Dr.
> Scheme too.

PLT has a lot of them, but not nearly all of them. Some SRFIs are very
difficult to implement in PLT. For example, you can *almost* implement
SRFI-34 (exceptions) with PLT, which provides a native RAISE function.
Unfortunately, PLT's RAISE doesn't permit rethrowing inside the handler.
That doesn't make rethrowing impossible, but it *does* make SRFI-34
semantics impossible, because SRFI-34 requires the handler to run in the
same context as RAISE.

You can work around that by hiding the native RAISE and using the
portable SRFI implementation instead, but hiding built-in functions is
tricky in PLT Scheme. You can do it at the top level, but it's usually
better to use modules instead of the top level in PLT, and you can't
shadow identifiers in modules. There's a way around that too, but it's
cumbersome.
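
To make the semantic point concrete, here is roughly the SRFI-34
behavior in question (just a sketch; `recoverable?' and `run-job' are
hypothetical names):

  (with-exception-handler
    (lambda (condition)
      (if (recoverable? condition)   ; hypothetical predicate
          'recovered
          ;; Re-raise: SRFI-34 hands the condition to the next outer
          ;; handler while still in the dynamic context of the original
          ;; RAISE -- the part PLT's native RAISE doesn't give you.
          (raise condition)))
    (lambda ()
      (run-job)))                    ; hypothetical thunk that may raise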

Grzegorz Chrupala

unread,
Oct 29, 2003, 3:12:31 AM10/29/03
to
"Scott G. Miller" <scgm...@freenetproject.org> wrote in message news:<a8OdnQW-9YB...@giganews.com>...
> Grzegorz Chrupala wrote:

> > For me, a major gap is Unicode and multibyte character support. This
> > is by now standard in implementations of most other widely used
> > programming languages but surprisingly few Schemes have it.
>
> There is a reason for that. The R5RS character operators cannot be made
> to work reliably with unicode characters. SISC for example supports
> unicode characters and arbitrary character maps, but makes no effort to
> contort the standard operators to behave properly. There was a usenet
> discussion about this in the past which you could probably find by googling.
>

Frankly, I don't see anything in R5RS that would prevent Unicode
support.
However, if there is indeed some incompatibility, then probably the
following statement from the Scheme FAQ should be updated:

Are there implementations that support unicode?

There is nothing in the Scheme standard that conflicts
with supporting unicode, however such support is not
required. There are some Scheme implementations
that handle unicode characters, but most don't. Also,
SRFI-13 and SRFI-14 propose string and character
processing libraries that are unicode compliant.

--
Grzegorz

Jens Axel Søgaard

unread,
Oct 29, 2003, 4:41:26 AM10/29/03
to
Bradd W. Szonye wrote:

> There's something subtle missing in the Windows interface too. Mainly
> stuff like buttons not being quite where I'd expect them to be, and
> "common dialogs" not quite matching the native Windows versions.

But the developer decides which buttons go where and
which menu items go where?

If you experience non-standard placement in e.g. DrScheme,
then file a bug report.

--
Jens Axel Søgaard


Joan Estes

unread,
Oct 29, 2003, 6:48:37 AM10/29/03
to
Matthias Felleisen <matt...@ccs.neu.edu> wrote in message news:<bnn3im$q7p$1...@camelot.ccs.neu.edu>...

> Some large company located near the northwestern corner of the continental US
> has sponsored Will Clinger (Larceny) and PLT to create a merger of the two

> Scheme systems ...

> If you have time to contribute or money or you want to volunteer friends, please
> do so. The goal is to produce a good platform for the first Schemers and the
> rest of the world, too.

Will it be free once it is done? If not, since this big company sounds
like one of those that are richer than God, how about people forget about the
volunteering and get paid to do it?

R Racine

unread,
Oct 29, 2003, 7:22:43 AM10/29/03
to

>
> Are there implementations that support unicode?
>

Chris Hanson's latest release of MIT Scheme supports UTF-16. To what
extent it is integrated "all the way down", I don't know. But what he
does, he does well and thoroughly. It is another point of reference for
those with an interest in providing Scheme support beyond basic UTF-8.

Ray

Shiro Kawai

unread,
Oct 29, 2003, 7:50:46 AM10/29/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> wrote in message news:<slrnbpunoj.k...@szonye.com>...

> By the way, I was experimenting with Unicode sources the other day. I
> got to wondering how difficult it would be to use a lambda character
> instead of the word lambda. There were a few surprises, some pleasant
> and some unpleasant.

The Japanese character set has long included some mathematical
symbols and the Greek alphabet, so that's the kind of thing
every Japanese Scheme programmer has tried at least once :-)
Gauche ships with a joke script that replaces some Scheme
syntax and procedures with Japanese or mathematical symbols,
including the Greek lambda.

There are indeed some "Japanese" programming languages as well.
Besides the interoperability issue, the reason such languages
are less convenient to use in production than traditional ones
is that the current keyboard UI is pretty much optimized for
ASCII or ISO8859 characters. Typical Japanese input methods
are not very convenient for switching frequently between
Japanese text input and ASCII/symbol input.
For educational purposes (like teaching programming to kids),
I do see a benefit to programming in local languages, though.

Bruce Stephens

unread,
Oct 29, 2003, 8:36:49 AM10/29/03
to
Alex Shinn <fo...@synthcode.com> writes:

[...]

> Gtk has existing bindings in at least Bigloo, Gauche and Guile, so
> is worth comparing.

As far as I can tell, there's no working binding for the current
version of bigloo (the bigloo-lib project
<http://sourceforge.net/projects/bigloo-lib/> shows significant signs
of being dead: last release almost a year ago, 0% activity, an old open
bug reporting it to be unbuildable). Guile seems to have two, both
fairly undocumented; the preferred one is presumably the gobject one,
which looks like a technically nice approach, but seems rather slow at
present.

[...]

Shriram Krishnamurthi

unread,
Oct 29, 2003, 8:41:07 AM10/29/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> Sounds interesting, although I'll be mighty bummed if Linux support is
> late and MzScheme support suffers.

MzScheme support won't suffer at all. Preserving the current
cross-platform nature is a top priority. All Matthias is saying is that
*new goodies* may get unveiled one platform at a time. This is no
different from our present situation, where, for instance, MysterX
(the ActiveX interface) works only on Windows, but its creation was
not to the detriment of the cross-platform effort.

> I originally chose PLT so that I
> could develop and plan on Linux, then deploy on Windows XP.

You should be pleased, then -- you'll spend time making it look fast
on Linux, then it'll run like a bat-out-of-hell on XP! (-: (Or, um,
it'll end up running at the same speed.)

Shriram

Shriram Krishnamurthi

unread,
Oct 29, 2003, 8:43:18 AM10/29/03
to
"Anton van Straaten" <an...@appsolutions.com> writes:

> I'm betting on spinoff show names: "Special Victims Unit" or perhaps

That would be MzScheme's will collector...

> "Criminal Intent"...

...and the work stealer.

LtShriram

Damien R. Sullivan

unread,
Oct 29, 2003, 9:45:34 AM10/29/03
to
=?ISO-8859-1?Q?Jens_Axel_S=F8gaard?= <use...@jasoegaard.dk> wrote:

> > Speed?
>
>I wanted to hear what mzscheme misses compared to Perl/Python
>and as far as I can tell, mzscheme has no problem in the speed
>department.

IME, mzscheme does just fine against Perl/Python. The problem is that Perl
and Python are slow. Sure, they tend to be fast enough for what they're used
for, but their performance ceiling is still much lower than that of a
compiled language.

>[The existence of the *very* fast Scheme compilers does not imply
>that mzscheme is slow]

It does when you're looking for maximal performance out of given hardware.
Anything interpreted is slow relative to most compiled things.

Actually, looking at my inefficient Fibonacci benchmark, mzscheme does *quite*
well against Perl. 10x faster. mzc module --prim does even better, getting
to within 4x of C. (Perl is 100x slower than C.) I haven't gotten Chez
Scheme to go that fast. (It's a lot faster by default though.) CMUCL can get
within 2x of C, and ocaml matches C.

How much Fibonacci generalizes to anything, I don't know; it's mostly testing
recursive calls and numerics.
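
(It's essentially the textbook doubly recursive version -- something
like this, modulo details:)

  ;; Naive doubly recursive Fibonacci: mostly stresses procedure calls
  ;; and small-integer arithmetic.
  (define (fib n)
    (if (< n 2)
        n
        (+ (fib (- n 1)) (fib (- n 2)))))

  ;; MzScheme has a `time' form for rough timing/GC numbers; other
  ;; implementations spell it differently.
  (time (fib 30))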

-xx- Damien X-)

Toni Nikkanen

unread,
Oct 29, 2003, 10:02:56 AM10/29/03
to
Eli Barzilay <e...@barzilay.org> writes:

> should be the job of a platform-specific installer. For the linux
> case, there was an rpm for a while, and I hope to get that back in,
> but I don't think that it's high priority (BTW, I always liked the
> single tree, but I always used it from my home dir).

I always like to install under a single tree: then it's particularly
easy to get rid of it later without having to hunt for bits
and pieces.
By the way, I was surprised at how the whole PLT bunch built and
installed and worked properly out of the box on a development
version of OpenBSD 3.4 in /usr/local/plt. I didn't see any
advertising promising OpenBSD compatibility anywhere, but it
worked just like that.


Eli Barzilay

unread,
Oct 29, 2003, 1:16:43 PM10/29/03
to
"Bradd W. Szonye" <bradd...@szonye.com.invalid> writes:

> [...]

[I'll take this off to email.]

Joe Marshall

unread,
Oct 29, 2003, 2:23:46 PM10/29/03
to
MJ Ray <m...@dsl.pipex.com> writes:

> "Bradd W. Szonye" <bradd...@szonye.com.invalid> wrote:
>> Is that actually true? If so, I'd consider that a defect in Unicode,
>> because the correct spelling of "capital esszed" is "SS." And besides,
>

> That probably depends on your language, surely? The *character* upcased
> is no change. "upcase" means look in the same position in the upper case,
> after all.

Not in Unicode. Upcase a `LATIN SMALL LETTER SHARP S' and you get
*two* `LATIN CAPITAL LETTER S'.
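
In Scheme terms, assuming an implementation whose string operations
follow the full Unicode (SpecialCasing) mappings -- which most Schemes
today don't -- the distinction looks like this:

  ;; Simple, character-level mapping: no change, because there is no
  ;; single-character uppercase sharp s.
  (char-upcase #\ß)          ; => #\ß

  ;; Full mapping: one character becomes two, so it can only be
  ;; expressed at the string level (this string-upcase is assumed to
  ;; do full Unicode casing, not per-character char-upcase).
  (string-upcase "straße")   ; => "STRASSE"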

Bradd W. Szonye

unread,
Oct 29, 2003, 3:10:07 PM10/29/03
to

That depends on the locale. There's a "locale-neutral" implementation
where upcasing esszed doesn't actually change anything.

David Van Horn

unread,
Oct 29, 2003, 4:01:11 PM10/29/03
to
Bradd W. Szonye wrote:
> PLT has a lot of them, but not nearly all of them. Some SRFIs are very
> difficult to implement in PLT. For example, you can *almost* implement
> SRFI-34 (exceptions) with PLT, which provides a native RAISE function.
> Unfortunately, PLT's RAISE doesn't permit rethrowing inside the handler.
> That doesn't make rethrowing impossible, but it *does* make SRFI-34
> semantics impossible, because SRFI-34 requires the handler to run in the
> same context as RAISE.
>
> You can work around that by hiding the native RAISE and using the
> portable SRFI implementation instead, but hiding built-in functions is
> tricky in PLT Scheme.

If you can work around the issue, how is it impossible to implement SRFI 34
semantics?

> You can do it at the top level, but it's usually
> better to use modules instead of the top level in PLT, and you can't
> shadow identifiers in modules. There's a way around that too, but it's
> cumbersome.

I don't get this. The PLT raise is different from SRFI-34 raise. The module
system just allows you to say what you mean and mean what you say, i.e. you don't
confuse the two raises. I don't think it's cumbersome at all.
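
For instance, something along these lines (a sketch only -- it assumes
a portable SRFI-34 implementation sitting in a local file, here called
"my-srfi-34.ss"):

  (module exn-demo mzscheme
    ;; Hypothetical local module holding a portable SRFI-34
    ;; implementation.  The prefix keeps its `raise' visible as
    ;; `s34:raise', while mzscheme's built-in `raise' stays `raise';
    ;; no shadowing is needed.
    (require (prefix s34: "my-srfi-34.ss"))

    (define (safe-div x y)
      (if (zero? y)
          (s34:raise 'division-by-zero)
          (/ x y))))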

-d

Grzegorz Chrupała

unread,
Oct 29, 2003, 4:30:19 PM10/29/03
to
Matthias Felleisen wrote:

> Some large company located near the northwestern corner of the continental
> US

I was just wondering, is there some unwritten rule that forbids mentioning
the word "Microsoft" in usenet posts? I was unable to find anything in the
FAQs. Care to enlighten a usenet newbie?
--
Grzegorz
http://pithekos.net

Matthias Felleisen

unread,
Oct 29, 2003, 5:29:19 PM10/29/03
to
Pedro Pinto wrote:


> This is very exciting news. Could you detail how one would go about
> contributing? Perhaps a project home page exists somewhere? If not maybe
> one should be created (I'd volunteer but I have poor skills in that
> area). I have a feeling you could get a lot of help from people who are
> currently forced to target said intermediate language through more
> primitive means.

I will bring up the idea of outside contributors over the next week
with Will and Joe and the rest of the re-targeting team.

To all others: Microsoft will not retain any rights and the results, if
any, will be distributed just like all other PLT software. Note the caveat.

As Shriram has already pointed out, we see .Net for now as just one more
platform, though it happens to be the platform on which we're trying to
merge. If someone had
raised the funds to merge Larceny and PLT Scheme on some other platform we
would have targeted that instead, too.

Keep in mind that this is a large project with a high potential for failure.

-- Matthias

Shriram Krishnamurthi

unread,
Oct 29, 2003, 5:48:58 PM10/29/03
to
Grzegorz Chrupała <grze...@pithekos.net> writes:

> I was just wondering, is there some unwritten rule that forbids mentioning
> the word "Microsoft" in usenet posts? I was unable to find anything in the
> FAQs. Care to enlighten a usenet newbie?

Given the flamewars that it tends to engender (if you haven't already
figured this part out, you will soon), one tends to step softly around
the mention of The Corporation.

Besides, given that the N*S*A monitors Usenet to look out for
subversive activity, and nothing could be more subversive to the US
than staunching the Freedom to Innovate...

Shriram
