Dissecting the electorate

David Rush

Jul 26, 2007, 3:43:13 PM
Greetings Schemers,

I spent some time today reading my way through the stated concerns on
the R6RS electorate list, and it seemed to me that the electorate
could be reasonably clustered around the following concerns.

1) language size
2) cross-implementation portability
3) library availability (this is almost a corollary of #2)
4) modules
5) inconsistencies in the spec
6) pedagogical use

I am still working my way through the spec, so I'm not quite ready to
make substantive comments w/rt the contents of 5.97+, but I think that
there is an important point to note which also meshes with one of my
long-standing interests in the specification and semantics of Scheme. In
fact, the list above is meant to make one point very clear: modules may
well be the only real issue worth fiddling with in R5RS.

That's a sweeping statement and you would be justified in wondering
how I can say that. But first I have to lay a little foundation.

In the absence of macros, Scheme needs *no* additional modularization
primitives. Simply put, LAMBDA is also the ultimate namespace-
management primitive. I have personally proved this over and over
again while writing quite a lot of software that has been run on
multiple (typically Larceny, Bigloo, Gambit, PLT, and Stalin at the
minimum) Scheme implementations. Mind you, a Scheme which only had
LAMBDA without its syntactic-sugar derivatives (e.g. LET, LETREC)
would be quite a bit more painful to use than the language we
currently have - so it also seems reasonable to me that there might
be some applicable macrology which could smooth the perception of
namespace management in Scheme.
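
To make that concrete, here is the kind of thing I mean - a sketch
only, with made-up names - where LET and LAMBDA hide a "module's"
private state and expose a single binding:

(define stack-module
  (let ((items '()))                    ; private state, invisible outside
    (define (push! x) (set! items (cons x items)))
    (define (pop!)
      (let ((top (car items)))
        (set! items (cdr items))
        top))
    (lambda (msg . args)                ; the single exported binding
      (case msg
        ((push!) (apply push! args))
        ((pop!)  (pop!))
        (else (error "unknown operation" msg)))))) ; ERROR as provided by the implementation

;; (stack-module 'push! 42)
;; (stack-module 'pop!)   => 42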

Which leads us just about immediately into the wonderful world of
macrology. As the PLT people have shown, a rich meta-language for
macro development throws a major twist into the definition of a module
system because modules can now actually perform compile-time
optimizations as well as run-time composition. So, as much as I like
to take issue with people who claim that Scheme has no module system
(because the modularization primitives in Scheme are already
*excellent*), there is a legitimate need for one - especially in
macro-heavy environments.

Now, getting the module system *right* is not necessarily easy,
primarily because the most important issues are related to human-
factors (at least IMNSHO), but a sound and robust module system (e.g.
SML) is a key enabler in reducing the size of the language
specification (point 1 above), development of portable software (point
2 above), and rich library specifications (point 3 above). A proper
module system should also allow parts of the language to migrate away
from the core semantics and into the library - the C stdio libraries
are the poster child for this benefit, but other issues (such as
Unicode and maybe even the numeric tower) could also meaningfully be
treated in the same way.

My great concern in this is that there doesn't appear to be a
consensus on which system has the best modularization algebra - and as
I have been out of touch with most of the standardization effort I am
not really aware of the content of the debate. The second issue then
is deciding what things should be moved out to the library - and
consequently the mechanisms for library standardization.

And at this point I feel that I need to point out a core fallacy with
this process. The R6RS emperor has no clothes - there is no
enforcement mechanism at all. In some respects it is arguably even
worse than the SRFIs, which at least are forward-looking and
acknowledge that these are things people want to see implemented
consistently. In 'RnRS', the second 'R' stands for 'Report', and it
initially reflected the common and agreed practices of the Scheme community -
this is a *retrospective* term and could never reflect the bleeding
edge of language innovation.

And as some have noted, some (but by no means all) of the most obvious
objectors to R6RS are implementors. Many of these are people who have
implemented (IMO) the most innovative systems. Perhaps the most
obvious is the work that Jeff Siskind is doing with automatic
differentiation of numerical code and Stalingrad. The minimalist ethic
is an important enabler of this kind of work: it provides a common set
of semantics against which extensions even deeper than macrology can
be clearly defined.

As many of you may have guessed by now, I am leaning towards the anti-
R6RS camp, but I have not made up my mind yet. I should also point out
that I am *not* an implementor, but I also find that the small
semantic footprint of Scheme is a key enabler for the style of
engineering that I have grown into. My recent forays into the Java and
Ruby communities have only confirmed my opinion in this matter. In
both cases the language 'steering committee' has blurred the
distinction between language and library - resulting in a semantically
tangled system. And I certainly do *not* think we will be well served
by emulating the languages that are currently in vogue. Arguably, the
reason they are becoming more like Scheme is that Scheme has
never sought to have a mainstream approach - with the corollary of
never really seeking to have wide adoption.

And *that* cues a segue into psychological issues: Do we need the
validation of other communities to feel like we are doing good science
and/or engineering? Do we take a sense of identity out of being
different in our practices? I don't think those issues affect the
science of the language spec, but they are relevant to the usability
of the language - which is the whole reason why we *have* languages
more complex than a single-tape Turing machine.

So there are some random thoughts on the state of play today, 26 July
2007.

david rush
--
http://cyber-rush.org/drr -- a very messy web^Wconstruction site

Griff

Aug 1, 2007, 8:44:39 PM
Reading through all of the electorate's stated concerns makes me feel
nostalgic, thinking back to the moment I first fell in love with
computer science. Scheme seems to have that effect on people; love at
first sight. Like any love, though, the thing that folks seem to fear
the most is losing it. Many people are afraid of losing the things
that they love about Scheme. They need not fear, though: none of those
things are going to disappear.

Having read once that 'any truly good language, no matter what its
original goals, will eventually become a general purpose programming
language; that is the nature of a good language', I can't help but
think that Scheme has this nature. Scheme is special. It draws people
in. It is a place to learn, and play.

A lot of folks fear that all of this will be lost in R6RS - that in
'selling out', its beauty will be lost. These fears are largely
unwarranted. The beauty of moving forward with R6RS is that the magic
of Scheme will no longer be confined to the backrooms and basements of
a privileged few; it will have the opportunity to truly shine.

That said, no one is perfect; that is why there will be something
called 'R7RS' ;)

Nils M Holm

Aug 2, 2007, 3:44:45 AM
Griff <gre...@gmail.com> wrote:
> A lot of folks fear that all of this will be lost in R6RS - that in
> 'selling out', its beauty will be lost. These fears are largely
> unwarranted. The beauty of moving forward with R6RS is that the magic
> of Scheme will no longer be confined to the backrooms and basements of
> a privileged few; it will have the opportunity to truly shine.

This dilemma could easily be solved by splitting the standard into
two parts: a description of the (beautiful, minimalistic) core
language and a description of some standard add-on libraries that
adapt the core language to the real world. There is no need to leave
one thing behind to get the other. The core language would still be
the Scheme that people fell in love with, but the libraries would
free the language from the restriction to backrooms and basements.

One could argue that this is what happens at the moment, but, if I
understand things correctly, R6RS Scheme will /have to/ implement a
load of libraries that (IMO) would not belong in the core language.
One of the beauties of Scheme is that many of the proposed extensions
can be implemented in Scheme itself. So only the core language, which
is capable of supporting the proposed extensions, should be mandatory,
and the extensions should be optional. This way everybody would win.

--
Nils M Holm <n m h @ t 3 x . o r g> -- http://t3x.org/nmh/

philip....@gmail.com

Aug 2, 2007, 10:47:22 AM
On 2 Aug, 08:44, Nils M Holm <before-2007-10...@online.de> wrote:

> This dilemma could easily be solved by splitting the standard into
> two parts: a description of the (beautiful, minimalistic) core
> language and a description of some standard add-on libraries that
>

> One could argue that this is what happens at the moment, but, if I
> understand things correctly, R6RS Scheme will /have to/ implement a
> load of libraries that (IMO) would not belong in the core language.

Your idea sounds good to me.

How inter-dependent are the language and library specs? I assume the
ideal is that the library spec depends on the language spec but not
the other way around?

Sorry if I'm oversimplifying.

--
'(phil "http://phil.nullable.eu/")

Nils M Holm

Aug 2, 2007, 4:06:21 PM
philip....@gmail.com wrote:
> Your idea sounds good to me.
>
> How inter-dependent are the language and library specs? I assume the
> ideal is that the library spec depends on the language spec but not
> the other way around?

This would indeed be optimal, and with some support in the language
part (which I will call the "core language") I think it could be
done. Here is a summary of my thoughts so far:

---

I know, it is a bit late to come up with this, but since there still
is a chance that the new standard will not be ratified, I will just
post this idea.

In order to keep the core language small, the report should be split
into two separate documents: one describing the core language and one
describing a set of add-on libraries that are implemented in the core
language.

This approach would not only preserve the elegance of the original
language, but also allow more libraries to be added at a later time
without changing the core language again. Currently the
language is coupled too tightly to its libraries. By defining the
libraries in terms of the core language and existing libraries,
this dependency would become weaker.

Of course, the core language would have to provide some additional
features in order to support the libraries described in the library
part of R5.97RS. Some of the libraries might even have to move to
the core, maybe in the form of a "core library". Still, this approach
would be in keeping with the spirit of Scheme, because a set of general
features would be used to create more specific libraries.

Here are some of my half-baked ideas regarding the separation of
a core Scheme language and its libraries. The goal of the whole
idea is to make the libraries implementable in Scheme, so that
they can be truly optional. The libraries would depend on core
Scheme, but not vice versa.

The following R5.97RS libraries, except for those marked with a star (*),
should be optional libraries. Where necessary, the core language should
provide features to facilitate the efficient implementation of those
libraries.

Some of the libraries introduce new notations, such as #vu8, #', #`,
and #,@. To allow such extensions, some cooperation of the reader
would be required.

1) Unicode

Unicode support would certainly require some additions to the core
language, but maybe something as simple as extending the domains
of CHAR->INTEGER and INTEGER->CHAR would be sufficient. The rest
could be implemented in Scheme.

By making Unicode support a library, the character set would be
less rigidly coupled to the language, and make the core language
implementable on systems without Unicode support.
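
Just to illustrate the kind of thing I have in mind (a sketch only):
once CHAR->INTEGER and INTEGER->CHAR are defined over the full code
space, even simple case mappings can live in library code:

(define (latin1-downcase c)
  (let ((n (char->integer c)))
    (if (or (and (>= n #x41) (<= n #x5A))     ; A-Z
            (and (>= n #xC0) (<= n #xDE)      ; A-grave .. THORN
                 (not (= n #xD7))))           ; but not the multiplication sign
        (integer->char (+ n 32))
        c)))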

2) Bytevectors

Could be implemented in Scheme, but maybe not efficiently.

3) List Utilities

Could easily be implemented in Scheme, although some of these
procedures should definitely be part of the core language.

4) Sorting

Could easily be implemented in Scheme.
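
For instance, a portable merge sort needs nothing beyond the list
primitives already in the core (sketch):

(define (merge-sort lst less?)
  (define (merge a b)
    (cond ((null? a) b)
          ((null? b) a)
          ((less? (car b) (car a)) (cons (car b) (merge a (cdr b))))
          (else (cons (car a) (merge (cdr a) b)))))
  (define (split lst)                  ; returns the two halves as multiple values
    (if (or (null? lst) (null? (cdr lst)))
        (values lst '())
        (call-with-values
          (lambda () (split (cddr lst)))
          (lambda (a b)
            (values (cons (car lst) a) (cons (cadr lst) b))))))
  (if (or (null? lst) (null? (cdr lst)))
      lst
      (call-with-values
        (lambda () (split lst))
        (lambda (a b)
          (merge (merge-sort a less?) (merge-sort b less?))))))

;; (merge-sort '(3 1 2) <)   => (1 2 3)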

5) Control Structures

R5.97RS even includes implementations of these.

6) Records

Would require a mechanism to introduce new types. Could otherwise
be implemented in Scheme.
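
A sketch of what I mean (the only piece that really needs core
support is making the type tag impossible to forge):

(define (make-record-type name)
  (let ((tag (list name)))             ; unique under EQ?, but not unforgeable
    (list
     (lambda args                      ; constructor
       (apply vector tag args))
     (lambda (x)                       ; predicate
       (and (vector? x)
            (> (vector-length x) 0)
            (eq? (vector-ref x 0) tag)))
     (lambda (x i)                     ; field accessor
       (vector-ref x (+ i 1))))))

;; (define point (make-record-type 'point))
;; (define make-point (car point))
;; (define point?    (cadr point))
;; ((caddr point) (make-point 3 4) 0)   => 3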

7) Exceptions and Conditions

Would definitely require some support in the core language.
Have not thought about it yet. (I do not like the idea anyway.)

8) I/O

All but Simple I/O should be in a library. Simple I/O should be
extended to perform efficient I/O on raw octet streams, the rest
could be done in Scheme.

*9) File system procedures

Would have to go to the core language.

*10) Command line access and exit values

Would have to go to the core language.

11) Arithmetic

Doing this efficiently would definitely need some support in the core
language, and since efficiency probably is the whole point of this
library, this is a dilemma. I have no idea how to make this an add-on,
but I would like to see it as one.

12) syntax-case

Would need some support in the core language.

13) Hashtables

Could be done in Scheme.
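
For example, a minimal EQUAL?-based table over a vector of association
lists (sketch only; no resizing, and the hashing is crude):

(define (make-table size) (make-vector size '()))

(define (table-hash key size)
  (modulo (cond ((and (integer? key) (exact? key)) key)
                ((string? key)
                 (let loop ((i 0) (h 0))
                   (if (= i (string-length key))
                       h
                       (loop (+ i 1)
                             (+ (* h 31) (char->integer (string-ref key i)))))))
                ((symbol? key) (table-hash (symbol->string key) size))
                (else 0))              ; everything else lands in one bucket
          size))

(define (table-set! tbl key val)
  (let* ((i (table-hash key (vector-length tbl)))
         (entry (assoc key (vector-ref tbl i))))
    (if entry
        (set-cdr! entry val)
        (vector-set! tbl i (cons (cons key val) (vector-ref tbl i))))))

(define (table-ref tbl key default)
  (let ((entry (assoc key (vector-ref tbl (table-hash key (vector-length tbl))))))
    (if entry (cdr entry) default)))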

14) Enumerations

Could be done in Scheme.

15) Composite library

Could be done in Scheme.

*16) eval

Would have to go to the core language.

*17) Mutable Pairs

Definitely would need some support in the core language,
but I think this should not even be optional.

*18) Mutable Strings

Definitely would need some support in the core language,
but I think this should not even be optional.

19) R5RS compatibility

Most of this can probably be done in Scheme. Except for
LOAD, and I definitely want LOAD in the core language,
because it is invaluable in interactive environments.

Aaron Hsu

Aug 2, 2007, 6:36:02 PM
On 2007-08-02 15:06:21 -0500, Nils M Holm <before-2...@online.de> said:

> Of course, the core language would have to provide some additional
> features in order to support the libraries described in the library
> part of R5.97RS. Some of the libraries might even have to move to
> the core, maybe in the form of a "core library". Still, this approach
> would be in keeping with the spirit of Scheme, because a set of general
> features would be used to create more specific libraries.

The way I see it, the easiest way to achieve the beauty of the language
without the problems of a massively ugly spec is to continually reduce
the core language down to only the most universally necessary
procedures. Then, provide one additional mandatory extension, which is
a library spec. Not a large library spec or anything, but some way to
detail what features are needed to run a specific scheme file. This
extension should not get in the way of more sophisticated module
systems. Thus, you can have any extension you need, you can guarantee
easy portability, and life would be good. :-)
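
Purely hypothetically - this syntax is made up, it is not in any
draft - I am picturing something as small as a declaration at the top
of a file:

(requires (srfi 1)                ; list utilities
          (feature unicode)       ; full character set
          (feature exact-rationals))

;; An implementation that cannot satisfy the declaration rejects the
;; file up front instead of failing halfway through running it.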

However, as it stands now, the standard demands a huge set of extra
libraries that are definitely not universally applicable.
--
Aaron Hsu <aaro...@sacrificumdeo.net>

"No one could make a greater mistake than he who did nothing because he
could do only a little." - Edmund Burke

Kumar

Aug 2, 2007, 11:39:38 PM

*Anything* left in a specification as optional tends to end up varying
widely among implementations and causing a lot of portability headaches
that prevent both academic as well as industrial usage.

IMO, a standard will be *really* useful if it specifies that a
particular library (say for file or network I/O) must have exactly a
given set of functions with specified semantics - mandatory ... no
exceptions ... period. That way, we'll see more code that relies on
the reliable standard getting written.

Aaron Hsu

Aug 3, 2007, 12:36:03 AM
On 2007-08-02 22:39:38 -0500, Kumar <sriku...@gmail.com> said:

> *Anything* left in a specification as optional tends to end up varying
> widely among implementations and causing a lot of portability headaches
> that prevent both academic as well as industrial usage.
>
> IMO, a standard will be *really* useful if it specifies that a
> particular library (say for file or network I/O) must have exactly a
> given set of functions with specified semantics - mandatory ... no
> exceptions ... period. That way, we'll see more code that relies on
> the reliable standard getting written.

I agree with you here. I think it would be good to guarantee
interfaces, but I still think the existence of those interfaces should
be optional. That is, the optional libraries that the standard defines
should state that should a scheme implementation provide extension X,
then it must have precisely this interface; however, if the
implementation does not wish to implement extension X, then they can
leave it out.

Nils M Holm

Aug 3, 2007, 1:57:54 AM
Kumar <sriku...@gmail.com> wrote:
> *Anything* left in a specification as optional tends to end up varying
> widely among implementations and causing a lot of portability headaches
> that prevent both academic as well as industrial usage.

Not if a set of libraries exists. When libraries exist, there is no
need to re-invent the wheel. And it has been done before. ML has a
core language and its libraries, Forth has a core language plus a
large set of optional libraries, and I see no portability issues in
either of these languages.

Andrew Reilly

Aug 3, 2007, 4:17:29 AM
On Thu, 02 Aug 2007 23:36:03 -0500, Aaron Hsu wrote:

> On 2007-08-02 22:39:38 -0500, Kumar <sriku...@gmail.com> said:
>
>> *Anything* left in a specification as optional tends to end up varying
>> widely among implementations and causing a lot of portability headaches
>> that prevent both academic as well as industrial usage.
>

> I agree with you here. I think it would be good to guarantee
> interfaces, but I still think the existence of those interfaces should
> be optional. That is, the optional libraries that the standard defines
> should state that should a scheme implementation provide extension X,
> then it must have precisely this interface; however, if the
> implementation does not wish to implement extension X, then they can
> leave it out.

I'm a fan of small and perfect, and indeed, it was the seeming
tractability of the R5RS spec that got me started on scheme instead of
Common Lisp a few months ago, when I was looking for "a lisp compiler".
However, just to play devil's advocate: it's not as though a compiled
executable needs to include unused library functionality, so how is anyone
practically inconvenienced by having a language standard that specifies
a standard library with a reasonable swath of functionality (which is what
my brief perusal of the R6RS prototype looks like)?

A stab at an answer to my own question: I suppose that interpreters, as
opposed to compiled executables, *do* have to include all of the standard
base library, and if said interpreter is intended to target a small
embedded environment, then that's potentially a bunch of extra perhaps
unnecessary stuff to carry around, that might not physically fit into
available memory. Is that the problematic situation?

Cheers,

--
Andrew

Kumar

Aug 3, 2007, 4:54:10 AM
On Aug 3, 1:57 pm, Nils M Holm <before-2007-10...@online.de> wrote:

There's nothing wrong with making a library entirely optional. (I
suppose I should've chosen more careful wording.) There is nothing
that will prevent an implementation from leaving out a library
entirely anyway.

My point was more about smaller scale optional stuff. For example,
leaving out a file-I/O library in its entirety should be ok - just
check that the library exists when your app loads. But if the library
exists, it should support whatever it is intended to support fully.
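
Something along the lines of SRFI 0's COND-EXPAND would do; the
FILE-IO feature name here is made up, just to illustrate the idea:

(cond-expand
  (file-io                               ; hypothetical feature name
    (define (save-report data)
      (call-with-output-file "report.txt"
        (lambda (port) (write data port)))))
  (else
    (define (save-report data)
      (display "no file I/O available; writing to current output")
      (newline)
      (write data))))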

This requirement keeps libraries to a minimum and lets developers
easily check for a library and create a handful of special
code paths. If you find a library is present and still have to create
myriad code paths across platforms within the scope of the library,
we're back to square zero.

A good model would be OpenGL + extensions. It is a fairly well managed
base set + fairly well specified extensions. The specifications
clearly state dependencies between extensions as well. In spite of
such an effort, cards and drivers vary like crazy. There's a lot of
money at stake in the games business, so developers do make elaborate
code branches for all sorts of special behaviour in special systems.
In the absence of such an interest, a language spec can degrade into
grossly unportable code.

The short description of a specification is - One and only one way to
do a particular thing. There is incredible freedom in deciding what
that "particular thing" should be. The place where this is most
successful is when a library does something substantial enough that
developers can't be bothered to reinvent it, even if they might like
to do so ... and if they do, they should be prepared to live with the
debugger for a while before they get anything useful done.

philip....@gmail.com

Aug 3, 2007, 5:09:07 AM
On 3 Aug, 09:17, Andrew Reilly <andrew-newsp...@areilly.bpc-users.org>
wrote:

> so how is anyone
> practically inconvenienced by having a language standard that specifies
> a standard library with a reasonable swath of functionality (which is what
> my brief perusal of the R6RS prototype looks like)?

If it's contained within a library, they're probably not.

I don't know how you solve such a problem but one option is surely to
change the core language as little as possible to support the new
libraries and then to have a free (in all senses) reference
implementation of the optional standard library that will work on all
conforming implementations. But I don't know how feasible this is.
Nils has suggested that something like this could be technically
achievable although things like a module system and exceptions must
require some deep language hooks or at least changes to whatever the
implementation already provides.

The bigger worry for me is the split amongst implementations. Most
Schemes are free software often written by an individual or a small
group of people. Their software is probably their pride and joy. It
must be difficult having worked so hard to build something for so long
to realise that you now have a lot more work to do, some of which you
may even fundamentally disagree with.

Nils M Holm

Aug 3, 2007, 9:34:17 AM
Andrew Reilly <andrew-...@areilly.bpc-users.org> wrote:
> I'm a fan of small and perfect, and indeed, it was the seeming
> tractability of the R5RS spec that got me started on scheme instead of
> Common Lisp a few months ago, when I was looking for "a lisp compiler".
> However, just to play devil's advocate: it's not as though a compiled
> executable needs to include unused library functionality, so how is anyone
> practically inconvenienced by having a language standard that specifies
> a standard library with a reasonable swath of functionality (which is what
> my brief perusal of the R6RS prototype looks like)?

I think this is not only a technical, but also a philosophical question.
R5RS Scheme is an optimal vehicle for research and education, not least
because it /is/ small and elegant. Its orthogonal design invites people
to experiment with the language. Due to its small size you have to master
only a small set of fundamental principles in order to understand it.
Tying the libraries too rigidly to the core language diminishes these
advantages, and Scheme becomes just another dialect of LISP.

OK, I am dramatizing this point a bit, but Scheme certainly attracts
a specific type of programmer, and if I had stumbled across
R5.97RS instead of R4RS some time ago, I am not sure if the language
would have impressed me much.

> A stab at an answer to my own question: I suppose that interpreters, as
> opposed to compiled executables, *do* have to include all of the standard
> base library, and if said interpreter is intended to target a small
> embedded environment, then that's potentially a bunch of extra perhaps
> unnecessary stuff to carry around, that might not physically fit into
> available memory. Is that the problematic situation?

This is, of course, another interesting point.

Nils M Holm

Aug 3, 2007, 9:37:21 AM
Kumar <sriku...@gmail.com> wrote:
> My point was more about smaller scale optional stuff. For example,
> leaving out a file-I/O library in its entirety should be ok - just
> check that the library exists when your app loads. But if the library
> exists, it should support whatever it is intended to support fully.

I agree.

Aaron Hsu

Aug 3, 2007, 7:44:31 PM
On 2007-08-03 04:09:07 -0500, philip....@gmail.com said:

> The bigger worry for me is the split amongst implementations. Most
> Schemes are free software often written by an individual or a small
> group of people. Their software is probably their pride and joy. It
> must be difficult having worked so hard to build something for so long
> to realise that you now have a lot more work to do, some of which you
> may even fundamentally disagree with.

I agree. Part of Scheme's beauty is in its differences. Each
implementation does things a bit differently, and they do so for
various reasons. Rather than having one monolithic beast trying to
solve everyone's problems, the field of implementations embodies
Scheme's general philosophy: solve problems with specialized interfaces
and small, domain-oriented pseudo-languages built from generic
features, rather than with large, inflexible sets of features.

Griff

Aug 5, 2007, 11:57:23 AM
These are all good points.

One of the reasons why R6RS needs to go through is to force folks to
recognize that there *is a desire* for more than the language core.

Perhaps R6RS was the wrong way to address that wish; and that is why
there will be an R7RS.

Brian Harvey

Aug 5, 2007, 2:20:46 PM
Griff <gre...@gmail.com> writes:
>Perhaps R6RS was the wrong way to address that wish; and that is why
>we have got R7RS.

That's an okay argument if you think R6RS is mostly okay but with a few
bugs. It's not a good argument if you think R6RS is moving in a fundamentally
wrong direction. I don't think there's much chance that R6 passes and then
R7 undoes it all.

(But then, I thought R5 was moving in a wrong direction, so what do I know?)

Aaron Hsu

Aug 5, 2007, 4:23:52 PM
On 2007-08-05 13:20:46 -0500, b...@cs.berkeley.edu (Brian Harvey) said:

> I don't think there's much chance that R6 passes and then
> R7 undoes it all.

Agreed, it is much harder to move from R5.97RS to something good, than
to move from R5RS to that same goal.

Griff

Aug 6, 2007, 9:00:35 AM
On Aug 5, 3:23 pm, Aaron Hsu <aaron....@sacrificumdeo.net> wrote:
> On 2007-08-05 13:20:46 -0500, b...@cs.berkeley.edu (Brian Harvey) said:
>
> > I don't think there's much chance that R6 passes and then
> > R7 undoes it all.
>
> Agreed, it is much harder to move from R5.97RS to something good, than
> to move from R5RS to that same goal.

Guys come on it is not as though Schemers are a bunch of stodgy old
uptight conservative worry warts who aren't willing to experiment, is
it?

Scheme was an experiment, and the results were pretty good.

How many people get things perfect on the first try? :)

Jeffrey Mark Siskind

Aug 6, 2007, 10:50:26 AM
> Guys come on it is not as though Schemers are a bunch of stodgy old
> uptight conservative worry warts who aren't willing to experiment, is
> it?
>
> Scheme was an experiment, and the results were pretty good.
>
> How many people get things perfect on the first try? :)

It is precisely for this reason that some of us argue against
standardization efforts such as R5.97RS and R6RS. We want to
experiment and innovate. And the purpose of standardization is the
exact opposite: to prevent experimentation and innovation. The
objective of standardization *is* to get it right on the first try, to
allow long term stability and interoperability, when the benefits of
such outweigh the benefits of innovation.
It is my opinion that the Scheme community will benefit far more from
experimentation and innovation by rejecting R5.97RS/R6RS than it would
from any potential for interoperability that it would afford. We are
not an electric power utility. Nor are we C.

I think most of the proponents of R5.97RS/R6RS confuse the desire for
features in the implementation that they use with desire for
standardization. R5.97RS should not be ratified as R6RS. That would be
disastrous for the community. But that should not prevent anybody
from experimenting with any or all of the features from R5.97RS in
their favorite implementation.

Steve Schafer

Aug 6, 2007, 11:59:07 AM
On Mon, 06 Aug 2007 07:50:26 -0700, Jeffrey Mark Siskind
<qo...@purdue.edu> wrote:

>We are not an electric power utility. Nor are we C.

So you're saying, then, that Scheme is _not_ appropriate for electric
power utilities or as a replacement for C, right?

Experimentation and innovation are very good things, obviously. In fact,
people who wish to do so are able to experiment and innovate even in an
environment "polluted" with standards: They simply ignore those
standards, to the extent required to carry out their experiments and
innovations.

It doesn't matter whether the field of endeavor is programming languages
or residential building construction; if you're building something that
is for other people to use and has some level of reasonable expectation
as to its performance, reliability, interoperability, etc., then you
follow a language standard/building code.

But if you're experimenting and innovating, then you do whatever. How
does the existence of the language standard/building code impede you?

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Griff

Aug 6, 2007, 3:02:36 PM
On Aug 6, 9:50 am, Jeffrey Mark Siskind <q...@purdue.edu> wrote:
> It is precisely for this reason that some of us argue against
> standardization efforts such as R5.97RS and R6RS. We want to
> experiment and innovate.

How does R6RS prevent this?

> And the purpose of standardization is the
> exact opposite: to prevent experimentation and innovation.

Standardization is (can be?) for folks that want to use the language
for "real world" work and want to rely on different distributions. For
a lot of people, this is the big driver for R6RS.

Brian Harvey

Aug 6, 2007, 10:45:02 PM

Imho, it's silly to have meta-arguments about standardization as an ideal.
If R6 fails, there'll still be a standard: R5. Hence, the argument should
be about whether R6 specifically is a good thing or a bad thing. For
example, whether case-sensitive identifiers are merely disgusting, or are
downright disastrous. (As may be obvious, I take the latter view.)

PJW

Aug 7, 2007, 4:47:07 AM
> For
> example, whether case-sensitive identifiers are merely disgusting, or are
> downright disastrous. (As may be obvious, I take the latter view.)
Case-sensitive identifiers - I cannot say I like them.

> But if you're experimenting and innovating, then you do whatever. How
> does the existence of the language standard/building code impede you?

It would not directly impede, but it would move the community/culture
of Scheme towards standards and at least less innovation in those
areas. The change in culture would then indirectly impede those who
wish Scheme to be mainly for innovation/experimentation. An individual
moves slower if he moves against or tangential to his peer group.

Ideally there would be two ratification processes: one for the core
of Scheme and another for standard libraries. However, it seems likely
that pushes for optimized libraries would call for tweaks to the core,
and any tweak to the core could wreak havoc on library implementations.
One outcome of these forces would be two related languages: one focused
on the minimalistic core, encouraging experimentation/innovation, and
another, practical implementation with extensive libraries allowing for
more heavy-duty tasks. Or perhaps, as occasionally happens when opposing
forces meet, a diamond of a compromise is formed, producing something
much more valuable than either force could give birth to on its own.

I remain hopeful for the latter scenario, given enough time.

Ray Dillinger

Aug 7, 2007, 1:51:59 PM
Steve Schafer wrote:
> On Mon, 06 Aug 2007 07:50:26 -0700, Jeffrey Mark Siskind
> <qo...@purdue.edu> wrote:
>
>
>>We are not an electric power utility. Nor are we C.
>
>
> So you're saying, then, that Scheme is _not_ appropriate for electric
> power utilities or as a replacement for C, right?

Sigh. If you wanna be a replacement for C, you've got to specify
grotty details about how to do the sort of things you need to do
to write device drivers and interfaces between other languages;
Issuing interrupts, handling bare pointers, specifying binary
layout of data, how to treat things as "volatile", etc. C as a
regular language doesn't even go that far, but it has standard
machine-code escapes that allow that sort of thing to be done.

I don't see anybody going there with any version of scheme (unless
you have a spec in your pocket that allows you to write interfaces
between, say, a FORTH system and a SmallTalk system, in scheme,
that doesn't care what scheme implementation it runs on....)

Most of these things actually don't belong in a HLL. The fact that
HLL's sometimes have to worry about grotty details like that is
mostly a symptom that our Operating Systems and interfaces are not
yet sufficiently advanced. Most of these things should be needed
only in an "unhosted" implementation, where you need them for
implementing an OS and device drivers.

So don't build arguments about how Scheme should be an adequate
replacement for C, unless you're willing to go all the way and
issue, then abide by, a standard the size (and shape) of C99.

I have always seen Scheme as having a different mission. That
mission is to be a language that specifies operations on
*INFORMATION* rather than *BITS*. In other words, a character
should have been a character and nothing else. You should use
it for character operations and really shouldn't have to know,
or care, about whether it is in some particular encoding system
or how many units of that encoding system might be needed to
express it. It would be good for sorting and indexing for there
to be a total ordering of characters, but if you care *which*
total ordering that is, or how many bits are in it, etc, or
what numeric ranges it covers, then that is all additional
information regarding some properties that have nothing to do
with its identity as a character, and I don't see it as belonging
in the core language.


Similarly, numbers should just be their values, and possibly a
calculated margin of error. Worrying about how many bytes a
number is coded in, or what specific binary representation it
has, should not be your problem if what you really wanted was
to work with *numbers*. Likewise, a question about whether
something is or is not an integer, for example, is a question
about its value - not about that value's representation. If
you mean to ask how many bytes it is or what its binary representation
is, you are asking about something that has nothing to do with its
value, and I don't see that sort of information as belonging in
the core language either. The core language should be concerned
with the *value* of the number, and allowing people to specify
operations in terms of expected and required degree of precision
of values - not in terms of expected and required binary format.
Once you start needing to read or write, say, IEEE 754 floating
point formats, etc, that's outside the core language - you're
doing binary I/O and somebody ought to provide library bindings
that give people a way to read numbers in that format. You
shouldn't have to know, or care, how the program represents
them once they've been read - you should care about the precision
of your numeric operations and the final result instead.
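
R5RS already captures part of this distinction, for what it's worth:

(integer? 3.0)    ; => #t   -- a question about the value
(exact? 3.0)      ; => #f   -- a question about the representation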

And strings should just be strings. Encoding, endianness,
code unit counts, etc, have nothing to do with their identity as
strings and shouldn't be in the core language. If you have a
string, you shouldn't have to know, or care, whether it was
originally read in Unicode or Ascii or an ISO national character
set or EBCDIC. You should be able to order and compare them
in the core language, but that's just for people who don't care
about exactly *which* collation or ordering the implementation
uses by default. Any operation whose result depends on what
particular encoding you're using, requires some information
that has nothing to do with the entity's identity as a string,
and shouldn't be in the core language.

And so on....

Get it? I want scheme to be a language which allows me to express
eternal truths about information, not tied to or restricted by the
particular forms information may be expressed in on a particular
system or even at a particular epoch of history. I want the bare,
raw, language that implements "THE RIGHT THING" which is still "THE
RIGHT THING" even in an alternate universe that uses massively-parallel
ternary-logic machines with 27-trit words, has completely different
standards for every last scrap of representation, and uses a set of
Operating Systems and network infrastructure that are founded on
utterly different concepts than our own.

Anything which depends on these transitory issues, IMO, belongs
in a library - not the core language. And I think each and
every library should support its *own* standardization process.
If you want a uniform Unicode Library for scheme, then by george
you ought to try to develop a standard for a scheme unicode
library (with nothing else on the table) and get buy-in from the
people who care.

And, predictably, I'm viewing the R6 candidate as a definitive
step in the WRONG direction. The idea that there should be
a standard way to package modules was good. I'm just not caring
for the rest of it.

Bear

Aaron Hsu

Aug 7, 2007, 2:59:19 PM
On 2007-08-07 03:47:07 -0500, PJW <Patrick.Jo...@gmail.com> said:

> One for the core of scheme and another for standard libraries. However
> it seems likely that pushes for optimized libraries would call for
> tweaks to the core and any tweak to the core could wreak havoc on
> library implementations.

I think that if the libraries are sufficiently high-level and general,
and the core language sufficiently minimal, then the issues of
optimization can and should be left to the implementations. The point
of portability is not to insist that every implementation run the same
code as quickly on the same hardware, just that the code run properly
and as expected. This would give the implementations the opportunity to
work together while still giving them the flexibility needed to
implement said portability in a manner appropriate to their own tastes.

Aaron Hsu

Aug 7, 2007, 3:01:24 PM
On 2007-08-06 14:02:36 -0500, Griff <gre...@gmail.com> said:

> Standardization is (can be?) for folks that want to use the language
> for "real world" work and want to rely on different distributions. For
> a lot of people, this is the big driver for R6RS.

There is a lot of real-world work that would be hindered by the
existence of this kind of standard. Real world work can rarely be
quantified into a precise set of required libraries that will be
beneficial to everyone in their "real" world. Rather, the best way to
make sure Scheme is available and useable to all real world
applications (which includes pedagogical, embedded, and business
applications) is to maintain a minimal core specification with optional
standard libraries that, if implemented, must have the given interfaces
or interfaces compatible with it. Otherwise, you are limiting yourself
to an arguably small vision of what constitutes real world applications.

Brian Harvey

Aug 7, 2007, 3:40:39 PM
Ray Dillinger <be...@sonic.net> writes:
>I have always seen Scheme as having a different mission. That
>mission is to be a language that specifies operations on
>*INFORMATION* rather than *BITS*. [...]

> Likewise, a question about whether
>something is or is not an integer, for example, is a question
>about its value - not about that value's representation.

Thank you for reminding us all about the eternal truths. Many years ago,
when I first learned Scheme and learned that 3.0 is an integer, it really
was like being smacked in the head by a Zen master. Duh! Of course 3.0
is an integer! I was a (budding high school student) mathematician before
I was a computer programmer, and yet somehow I'd forgotten that.

>Once you start needing to read or write, say, IEEE 754 floating
>point formats, etc, that's outside the core language - you're
>doing binary I/O and somebody ought to provide library bindings
>that give people a way to read numbers in that format.

Of course this does mean that the core language has to give the library-writer
the necessary tools to do bit-twiddling. To that extent, the core language
designers have to think about those concerns.

Brian Harvey

Aug 7, 2007, 3:44:18 PM
PJW <Patrick.Jo...@gmail.com> writes:
> One
>outcome of these forces would be two related languages: one focused on
>the minimalistic core encouraging experimentation/innovation and
>another practical implementation with extensive libraries allowing for
>more heavy duty tasks.

It was my impression that we already have that -- the bifurcation you talk
of has already happened. The former language is called Scheme, and the
latter one is called Common Lisp.

That's why I don't understand the push for Scheme to solve all problems.
Maybe instead we should all lobby the Common Lisp people to move to a
single namespace. :-)

Ray Dillinger

Aug 7, 2007, 4:29:35 PM
Brian Harvey wrote:

> Of course this does mean that the core language has to give the library-writer
> the necessary tools to do bit-twiddling. To that extent, the core language
> designers have to think about those concerns.

Machine-language escapes, hardware port operations, and interrupts.
That's enough to build anything that can be built. Like C runtimes,
it won't make libraries that actually need such facilities portable
between implementations or Operating systems. But that's okay.

The rest the core language spec should leave as an exercise.


Bear

Nils M Holm

Aug 8, 2007, 2:34:03 AM

Implementing those low-level constructs in the core language would not
be a good idea IMO, because they are not abstract and hurt portability.
I would rather have a simple and flexible FFI in the core so that LL
stuff can be implemented in a language that is better suited for that
task.

Ray Dillinger

Aug 8, 2007, 3:58:59 AM
Nils M Holm wrote:
> Ray Dillinger <be...@sonic.net> wrote:

>>Machine-language escapes, hardware port operations, and interrupts.
>>That's enough to build anything that can be built. Like C runtimes,
>>it won't make libraries that actually need such facilities portable
>>between implementations or Operating systems. But that's okay.
>>
>>The rest the core language spec should leave as an exercise.
>
>
> Implementing those low-level constructs in the core language would not
> be a good idea IMO, because they are not abstract and hurt portability.
> I would rather have a simple and flexible FFI in the core so that LL
> stuff can be implemented in a language that is better suited for that
> task.

I don't think one "simple and flexible" FFI that doesn't unduly
limit implementation strategies exists. An attempt to make one
standard is inevitably an attempt to standardize internal
representations.

And, really, given the low-level constructs anybody can construct
any FFI they want on any given implementation.

Bear


Nils M Holm

Aug 8, 2007, 8:59:01 AM
Ray Dillinger <be...@sonic.net> wrote:
> I don't think one "simple and flexible" FFI that doesn't unduly
> limit implementation strategies exists. An attempt to make one
> standard, is inevitably an attempt to standardize internal
> representations.
>
> And, really, given the low-level constructs anybody can construct
> any FFI they want on any given implementation.

True. I guess the idea of low-level constructs in Scheme just makes
me a bit uncomfortable.

Griff

Aug 8, 2007, 9:09:42 AM
On Aug 7, 2:44 pm, b...@cs.berkeley.edu (Brian Harvey) wrote:

> PJW <Patrick.John.Whee...@gmail.com> writes:
> > One
> >outcome of these forces would be two related languages: one focused on
> >the minimalistic core encouraging experimentation/innovation and
> >another practical implementation with extensive libraries allowing for
> >more heavy duty tasks.
>
> It was my impression that we already have that -- the bifurcation you talk
> of has already happened. The former language is called Scheme, and the
> latter one is called Common Lisp.

While Common Lisp has much larger libraries, I had read that Common
Lisp is a solution to the problem of essentially merging various
incompatible Lisp implementations into a single standard (bringing
with it all of the problems that come with such a merge).

Pascal Costanza

Aug 8, 2007, 9:14:37 AM

Don't read, try. ;)


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Aaron Hsu

Aug 8, 2007, 10:42:25 PM
On 2007-08-08 07:59:01 -0500, Nils M Holm <before-2...@online.de> said:

> I guess the idea of low-level constructs in Scheme just makes
> me a bit uncomfortable.

I don't see why these could not be relegated to an optional library.

Nils M Holm

Aug 9, 2007, 3:23:34 AM
Aaron Hsu <aaro...@sacrificumdeo.net> wrote:
> On 2007-08-08 07:59:01 -0500, Nils M Holm <before-2...@online.de> said:
>
> > I guess the idea of low-level constructs in Scheme just makes
> > me a bit uncomfortable.
>
> I don't see why these could not be relegated to an optional library.

Because that library could not be implemented in Scheme. So you
have to have either an FFI or the LL constructs themselves in
the core.

Andrew Reilly

Aug 9, 2007, 3:48:54 AM
On Thu, 09 Aug 2007 07:23:34 +0000, Nils M Holm wrote:

> Aaron Hsu <aaro...@sacrificumdeo.net> wrote:
>> On 2007-08-08 07:59:01 -0500, Nils M Holm <before-2...@online.de> said:
>>
>> > I guess the idea of low-level constructs in Scheme just makes
>> > me a bit uncomfortable.
>>
>> I don't see why these could not be relegated to an optional library.
>
> Because that library could not be implemented in Scheme. So you
> have to have either an FFI or the LL constructs themselves in
> the core.

That doesn't follow, does it? Libraries don't necessarily have to be
implemented in the language in which they're used.

Certainly a standard FFI would make the issue moot, but a library spec
allows the low-level functions to have a non-foreign implementation:
implemented in the compiler itself, and amenable to all of its
optimization operations. The standard means that code that uses those
will run the same on platforms that support the library. Not including it
in the base makes implementing them at all optional.

Just ferinstance, take the C string library. It's a library. It's
optional. When it's present, it works as described by the standard, but
in any modern compiler worth its stripes, they're actually implemented by
the compiler itself, which is free to choose different implementation
strategies depending on what it knows about the arguments and the target.

Now, I happen to think that the standard C string library is a terrible
design, and the only pieces of it that I ever use are mem{set,cpy,move},
so in a sense it's a cautionary tale, as well as an existence proof.

Cheers,

--
Andrew

Ray Dillinger

Aug 9, 2007, 4:06:25 AM
Nils M Holm wrote:
> Aaron Hsu <aaro...@sacrificumdeo.net> wrote:
>
>>On 2007-08-08 07:59:01 -0500, Nils M Holm <before-2...@online.de> said:

>>>I guess the idea of low-level constructs in Scheme just makes
>>>me a bit uncomfortable.

>>I don't see why these could not be relegated to an optional library.

> Because that library could not be implemented in Scheme. So you
> have to have either an FFI or the LL constructs themselves in
> the core.

A scheme library doesn't have to be implemented in scheme. A
library spec is a set of standard names bound to values,
functions and macros that provide specific functionality. The
implementor can fulfill it with C code, machine code, or
whatever, as long as those things become available when the
library is loaded. The system libraries in C absolutely rely
on this fact; most of them can't be implemented in portable
C, either.

A library that's implemented in portable scheme is nice, because
that can be used across many implementations. But I don't expect
any particular implementation of machine-code inlining to support
more than one implementation anyway, because macros for packing
and unpacking variables, call frames, etc, are going to be
different in different scheme implementations anyway, not to
mention machine-language instructions across different processors.

Bear

Nils M Holm

Aug 9, 2007, 4:37:45 AM
Ray Dillinger <be...@sonic.net> wrote:

> Nils M Holm wrote:
> > Because that library could not be implemented in Scheme. So you
> > have to have either an FFI or the LL constructs themselves in
> > the core.
>
> A scheme library doesn't have to be implemented in scheme. A
> library spec is a set of standard names bound to values,
> functions and macros that provide specific functionality. The
> implementor can fulfill it with C code, machine code, or
> whatever, as long as those things become available when the
> library is loaded.

Yes, but that would be a core language feature in disguise, wouldn't
it? It would not really be a library, but something that happens to
be activated using library syntax.

> The system libraries in C absolutely rely
> on this fact; most of them can't be implemented in portable
> C, either.

I do not think that Scheme should strive to emulate C.

Pseudo libraries are plain ugly. I have spent a lot of time
eliminating them from a compiler that I maintained some
time ago.

Nils M Holm

Aug 9, 2007, 4:44:17 AM
Andrew Reilly <andrew-...@areilly.bpc-users.org> wrote:
> On Thu, 09 Aug 2007 07:23:34 +0000, Nils M Holm wrote:
> > Because that library could not be implemented in Scheme. So you
> > have to have either an FFI or the LL constructs themselves in
> > the core.
>
> That doesn't follow, does it? Libraries don't necessarily have to be
> implemented in the language in which they're used.

Sure, they do not have to be, but if they are, it makes the design
more orthogonal.

> Now, I happen to think that the standard C string library is a terrible
> design, and the only pieces of it that I ever use are mem{set,cpy,move},
> so in a sense it's a cautionary tale, as well as an existence proof.

Agreed.

I did not mean to say that the LL part of Scheme /cannot/ be implemented
as a pseudo library; I meant to say that it would probably be a mess.

Andrew Reilly

Aug 9, 2007, 7:59:34 AM
On Thu, 09 Aug 2007 08:37:45 +0000, Nils M Holm wrote:
> Yes, but that would be a core language feature in disguise, wouldn't
> it? It would not really be a library, but something that happens to
> be activated using library syntax.

The important distinction is that it becomes optional. Any given
implementation can (for whatever reason) omit that particular low-level
library and still claim to be an implementation of the standard.

So it would probably be a core language feature of most of the
implementations, a dynamically loaded or FFI accessed feature in some
other, smaller implementations, and ignored completely in, say,
application scripting interpreters. It's not as though *most* programs
need low-level bit-bashing or unsafe access, but in those that do, it's
usually handy for those features to be absolutely as efficient as they can
be. IMO, YMMV, etc. (and I should probably add that most of my
programming career has involved embedded systems and assembly language of
some sort...)

Modula-3's "unsafe" module classification might be a model to consider,
but that doesn't really feel in keeping with the current proposals or the
"feel" of scheme.

--
Andrew

Brian Harvey

Aug 9, 2007, 10:29:35 AM
Andrew Reilly <andrew-...@areilly.bpc-users.org> writes:
>The important distinction is that it becomes optional. Any given
>implementation can (for whatever reason) omit that particular low-level
>library and still claim to be an implementation of the standard.

I find it unnecessarily confusing to commingle the formerly clear and
distinct meanings of the words "library" and "optional."

The => notation in COND is an example of an optional feature (I think that's
still true in R5) that's unlikely to be implemented in a library -- I know it
could be done by replacing the built-in COND with a macro, but it's more
likely that an implementor would build it into the built-in COND.
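
Just to show what I mean by the macro route, here is essentially the
derived-expression definition R5RS itself gives, trimmed down (and
renamed MY-COND to avoid clobbering the built-in):

(define-syntax my-cond
  (syntax-rules (else =>)
    ((_ (else e ...)) (begin e ...))
    ((_ (test => proc) clause ...)
     (let ((t test)) (if t (proc t) (my-cond clause ...))))
    ((_ (test) clause ...) (or test (my-cond clause ...)))
    ((_ (test e ...) clause ...)
     (if test (begin e ...) (my-cond clause ...)))
    ((_) (if #f #f))))

;; (my-cond ((assv 2 '((1 a) (2 b))) => cadr) (else 'none))   => b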

The LIST? predicate is something I'd want to consider required in a Scheme
implementation, but is likely to be part of a library, i.e., written in
Scheme, especially if a compiler is included, so it can run fast.
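
For instance, the usual cycle-safe definition is a few lines of
portable Scheme:

(define (list? x)
  (let loop ((fast x) (slow x))
    (cond ((null? fast) #t)
          ((not (pair? fast)) #f)
          ((null? (cdr fast)) #t)
          ((not (pair? (cdr fast))) #f)
          ((eq? (cdr fast) slow) #f)     ; hare caught the tortoise: a cycle
          (else (loop (cddr fast) (cdr slow))))))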

The above is not meant as taking a position on how bit-twiddling should or
should not be provided in Scheme, but rather as a meta-comment on the
vocabulary used in the discussion.

I take the point about the C string library. And I know that many current
C compilers don't actually have a separate "preprocessor" even though people
still call it that. So terminology does get blurred over time. Still, if
what you really mean is "optional," what's gained by taking over a different
word for it?

Ray Dillinger

Aug 9, 2007, 1:47:58 PM
Nils M Holm wrote:

>>The system libraries in C absolutely rely
>>on this fact; most of them can't be implemented in portable
>>C, either.

> I do not think that Scheme should strive to emulate C.

Then how do you propose to do system programming and, eg,
dealing with devices that have memory mapped I/O and hardware
ports and etc?

Or are those things just too -- erm, real-world -- for scheme
to handle?

Bear

Nils M Holm

Aug 9, 2007, 2:02:48 PM
Ray Dillinger <be...@sonic.net> wrote:
> Nils M Holm wrote:
> >>The system libraries in C absolutely rely
> >>on this fact; most of them can't be implemented in portable
> >>C, either.
>
> > I do not think that Scheme should strive to emulate C.
>
> Then how do you propose to do system programming and, eg,
> dealing with devices that have memory mapped I/O and hardware
> ports and etc?

By delegating them to C via a FFI. Oops, deja vu.

OK, let me try to make myself clear. I think there are two
obvious ways to add low-level features to Scheme: 1) an FFI
and 2) built-in low-level procedures.

What I am opposed to is implementing the LL procedures as a
"pseudo library" (a set of features that is "activated" by
using library syntax). This is what I mean by "emulating C".

IMO, the LL procedures should either be imported using an FFI,
or they should be optional procedures, but not an optional
(pseudo) library.

> Or are those things just too -- erm, real-world -- for scheme
> to handle?

Of course they are, but I see the demand. ;-)

David Rush

Aug 9, 2007, 7:23:24 PM
On Aug 5, 7:20 pm, b...@cs.berkeley.edu (Brian Harvey) wrote:
> (But then, I thought R5 was moving in a wrong direction, so what do I know?)

Must. Not...

It was.

david rush
--
...apalled at another usenet-induced personal moral failure

David Rush

Aug 9, 2007, 7:34:42 PM
On Aug 6, 4:59 pm, Steve Schafer <st...@fenestra.com> wrote:
> But if you're experimenting and innovating, then you do whatever. How
> does the existence of the language standard/building code impede you?

I don't know if you recall the amount of grief given to certain high-
performance Scheme implementations over their 'non-conformance'
issues. Some of those idiosyncrasies were a clear outgrowth of
particular optimization experiments. And some of those experiments
were based on very simple ideas and some on very complex ones - but
the implementors were continually lambasted for not conforming to
R5RS. And they even had the clearest documentation of their non-
conformance!

I *used* these systems heavily because I was doing some very heavy
lifting with data mining jobs. I don't recall any stupider debate
(except maybe #f/'()/nil :) than the ones about conformance issues -
but people seem to make a big deal about it anyway. The goal in R6RS
should be to make it easier to achieve conformance *and* easier to
experiment without getting the pedants' backs up. A good standards
document helps the documentation effort and frankly, R6RS achieves
none of the goals mentioned above.

There. I said it.

david rush
--
...knowing I will regret this in the morning :)

Steve Schafer

unread,
Aug 9, 2007, 10:33:19 PM8/9/07
to
On Thu, 09 Aug 2007 23:34:42 -0000, David Rush <kumo...@gmail.com>
wrote:

>I don't know if you recall the amount of grief given to certain high-
>performance Scheme implementations over their 'non-conformance'
>issues. Some of those idiosyncrasies were a clear outgrowth of
>particular optimization experiments. And some of those experiments
>were based on very simple ideas and some on very complex ones - but
>the implementors were continually lambasted for not conforming to
>R5RS. And they even had the clearest documentation of their non-
>conformance!

Innovators have to have thick skins. If you plan to do anything that is
out of the mainstream (like, say, using Scheme...), be prepared to be
criticized for it.

>The goal in R6RS should be to make it easier to achieve conformance
>*and* easier to experiment without getting the pedants' backs up.

People who want standards because they want to do Real Work just want
things to work, period. They don't want to deal with levels of
conformance, optional features, etc. Their requirements are incompatible
with a standard that embraces a spirit of innovation and
experimentation.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Steve Schafer

unread,
Aug 9, 2007, 10:33:18 PM8/9/07
to
On Tue, 07 Aug 2007 10:51:59 -0700, Ray Dillinger <be...@sonic.net>
wrote:

>I want scheme to be a language which allows me to express eternal
>truths about information, not tied to or restricted by the particular
>forms information may be expressed in on a particular system or even at
>a particular epoch of history.

There is only one eternal truth:

1. There are no eternal truths.

Idealism is a Good Thing. But so is awareness of the distinction between
idealism and reality.

Ray Dillinger

unread,
Aug 9, 2007, 11:35:39 PM8/9/07
to
Nils M Holm wrote:

> Ray Dillinger <be...@sonic.net> wrote:
>>Then how do you propose to do system programming and, eg,
>>dealing with devices that have memory mapped I/O and hardware
>>ports and etc?

> By delegating them to C via a FFI. Oops, deja vu.

Except C can't handle them either. Once you get down to
hardcore bitbashing, you have to break to assembly code,
even from C. When what you want is memory mapped I/O,
interrupt handling, and hardware-port twiddling, why
should you escape from scheme to C and then from C to
assembly? Why not just escape to assembly in the first
place?

The problem with any unnecessary use of a C-to-scheme
FFI is that C has a runtime model appropriate to a statically
typed, non-garbage-collected, tightly stack-disciplined
language. As such, it's highly representation-dependent,
limited in what values it can express by normal means, and
requires a lot of hoop-jumping to work correctly with
the garbage collector and continuations. The semantic
mismatch imposes a severe and unnecessary translation burden,
unless a scheme implementation is written from the ground
up with the goal of supporting a C FFI.

The semantic mismatch to assembly is just as severe, but
that burden is, at least, necessary to the jobs we're trying
to do and I think that justifies it somewhat. And there are
a lot of uncertainties in the C-to-assembly interface such as
data layout, etc, that you just don't have to deal with if
you do it in one step.


> OK, let me try to make myself clear. I think there are two
> obvious ways to add low-level features to Scheme: 1) an FFI
> and 2) built-in low-level procedures.

> What I am opposed to is implementing the LL procedures as a
> "pseudo library" (a set of features that is "activated" by
> using library syntax). This is what I mean by "emulating C".


I don't think there's anything "pseudo" about such a library,
and I'm kind of annoyed at you for using the term to belittle
the opposing view without making any actual points about it
or comparisons. If you're doing something that can be implemented
in scheme, that's the way to go for the library because then it's
portable between scheme implementations. I've already said so.

But if you're doing something that can't, that doesn't mean
that a library is fake or can't be separate from the core
implementation or anything else you've been implying; it just
means that it has to be expressed in a language that can
handle it, by means appropriate to the environment you're
working with. If the implementation provides a machine-code
hook, then anybody can write such a library and provide it,
without even the implementation's main implementor having to
know about it.

Bear

Nils M Holm

unread,
Aug 10, 2007, 2:27:51 AM8/10/07
to
Ray, I guess that, except for the Machine Escape <--> FFI
controversy, we are talking about the same thing. And I
certainly did not intend to belittle your views!

Ray Dillinger <be...@sonic.net> wrote:
> Nils M Holm wrote:
> > Ray Dillinger <be...@sonic.net> wrote:
> >>Then how do you propose to do system programming and, eg,
> >>dealing with devices that have memory mapped I/O and hardware
> >>ports and etc?
>
> > By delegating them to C via a FFI. Oops, deja vu.
>
> Except C can't handle them either. Once you get down to
> hardcore bitbashing, you have to break to assembly code,
> even from C. When what you want is memory mapped I/O,
> interrupt handling, and hardware-port twiddling, why
> should you escape from scheme to C and then from C to
> assembly? Why not just escape to assembly in the first
> place?

Because C already is good at abstracting "bit bashing"
operations and in Scheme you would have to reinvent the
wheel. Plus, the FFI would grant you access to loads of
C libraries, which I think is even more important today
than actually shuffling bits.

> The problem with any unnecessary use of a C-to-scheme
> FFI is that C has a runtime model appropriate to a statically
> typed, non-garbage-collected, tightly stack-disciplined
> language. As such, it's highly representation-dependent,
> limited in what values it can express by normal means, and
> requires a lot of hoop-jumping to work correctly with
> the garbage collector and continuations. The semantic
> mismatch imposes a severe and unnecessary translation burden,
> unless a scheme implementation is written, from the ground
> up with the goal of supporting a C FFI.

You can always try to convert data at run time. The GC
problem, of course, is not trivial. But would machine
language escapes not pose a similar problem?

> The semantic mismatch to assembly is just as severe, but
> that burden is, at least, necessary to the jobs we're trying
> to do and I think that justifies it somewhat. And there are
> a lot of uncertainties in the C-to-assembly interface such as
> data layout, etc, that you just don't have to deal with if
> you do it in one step.

Yes, this is an advantage. On the other hand, your model would
insert an additional layer when dealing with C libraries, which
are ubiquitous today:

FFI:              Scheme --> FFI --> Library
                  Scheme --> FFI --> C --> Machine code

Assembly Escapes: Scheme --> Assembly --> FFI --> Library
                  Scheme --> Assembly --> Machine code

Your model is simpler for accessing bits, mine is simpler for
accessing libraries.

> > OK, let me try to make myself clear. I think there are two
> > obvious ways to add low-level features to Scheme: 1) an FFI
> > and 2) built-in low-level procedures.
>
> > What I am opposed to is implementing the LL procedures as a
> > "pseudo library" (a set of features that is "activated" by
> > using library syntax). This is what I mean by "emulating C".
>
>
> I don't think there's anything "pseudo" about such a library,
> and I'm kind of annoyed at you for using the term to belittle
> the opposing view without making any actual points about it

> or comparisons. [...]

My intention is *not* to belittle anybody's view. Something that
appears to be a library but is in fact a part of the core language
is in my view not a library. Hence I invented a term that I thought
described the thing neutrally. If that term offends you, I am sorry.
Feel free to invent a better one.

> But if you're doing something that can't, that doesn't mean
> that a library is fake or can't be separate from the core
> implementation or anything else you've been implying; it just
> means that it has to be expressed in a language that can
> handle it, by means appropriate to the environment you're
> working with. If the implementation provides a machine-code
> hook, then anybody can write such a library and provide it,
> without even the implementation's main implementor having to
> know about it.

But this is exactly what I tried to say all the time: Provide the
necessary hooks (optionally) and then implement bit manipulation
as a *real* library. OR implement an FFI. But do NOT implement it
as a core feature that *pretends* to be a Scheme library.

This is merely a minor esthetic point and not a technical issue,
and I think that if we disagree so strongly about it, there must
be some underlying misunderstanding.

Aaron Hsu

unread,
Aug 10, 2007, 10:05:33 AM8/10/07
to
On 2007-08-09 21:33:19 -0500, Steve Schafer <st...@fenestra.com> said:

> People who want standards because they want to do Real Work just want
> things to work, period. They don't want to deal with levels of
> conformance, optional features, etc. Their requirements are incompatible
> with a standard that embraces a spirit of innovation and
> experimentation.

This is not inherently true. How do you define real work? Real work
occurs all the time in many different domains. Scheme (and by this
term, I mean the language represented in the Standard) should make it
possible to do real work in any problem domain of general programming,
including research and innovation. When it comes to normal business
oriented applications or what most people would call "real work," then
the standard should only be a base. It should make it as painless as
possible to integrate other work from other systems into a "real"
project. However, when it comes right down to it, the Implementation is
the real backbone of "real" work. The standard should exist to make it
easier to work between implementations, but it should not force every
implementation to be every other implementation.

Aaron Hsu

unread,
Aug 10, 2007, 10:11:36 AM8/10/07
to
On 2007-08-09 22:35:39 -0500, Ray Dillinger <be...@sonic.net> said:

> But if you're doing something that can't, that doesn't mean
> that a library is fake or can't be separate from the core
> implementation or anything else you've been implying; it just
> means that it has to be expressed in a language that can
> handle it, by means appropriate to the environment you're
> working with. If the implementation provides a machine-code
> hook, then anybody can write such a library and provide it,
> without even the implementation's main implementor having to
> know about it.

I think there is something to be said though for restricting the
libraries to only code that can be developed with the core language. I
think something like these System-Level procedures could make more
sense in the language spec if they were defined in the core, and then
merely given the optional tag. However, I think that we should still be
able, via the same extension mechanism (module system) used by the
Libraries, to specify that our code relies on and requires this, or
any other, optional Scheme feature.
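
Something along these lines, say -- where only the LIBRARY/EXPORT/IMPORT
skeleton comes from the draft, and the (scheme system) library name and
PEEK-IO-PORT are made up purely for the sake of the example:

    (library (example devices)
      (export read-status-register)
      (import (rnrs base (6))
              (scheme system))          ; hypothetical optional feature
      (define (read-status-register port)
        (peek-io-port port)))           ; hypothetical low-level procedure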

I guess the lines between optional core language feature, library, and
module system then become a little more fuzzy.

Steve Schafer

unread,
Aug 10, 2007, 11:03:16 AM8/10/07
to
On Fri, 10 Aug 2007 09:05:33 -0500, Aaron Hsu
<aaro...@sacrificumdeo.net> wrote:

>When it comes to normal business oriented applications or what most
>people would call "real work," then the standard should only be a base.
>It should make it as painless as possible to integrate other work from
>other systems into a "real" project. However, when it comes right down
>to it, the Implementation is the real backbone of "real" work. The
>standard should exist to make it easier to work between
>implementations, but it should not force every implementation to be
>every other implementation.

That's just it; people who do Real Work _do_ want implementations to be
transparently interchangeable. In order to understand this mindset, you
have to think in bottom-line capitalistic terms: You (as someone doing
Real Work) want vendors to produce new implementations that run faster,
or make the development process simpler, or any of a number of other
things that improve your bottom line, all without you having to lift a
finger (i.e., without you having to do anything that costs money).

You want vendors to compete against each other, but only to the extent
that it doesn't make you have to work any harder. You especially do
_not_ care if a vendor implements a nifty new feature, until the other
vendors also implement that feature, because you don't want to be locked
into that one vendor.

I'm not saying that this attitude makes any sense (and I don't share it
myself), only that it's The Way It Is in the Real World.

A good example of the problems of quasi-standardization is SQL. In many
areas, SQL underspecifies how an RDBMS should behave, leading to a
proliferation of incompatible implementations (trying to write a
non-trivial query that will run on more than one platform is
infuriating, to say the least). In others, the SQL standard is overly
ambitious (and sometimes heavy-handed), making implementation of the
full standard impractical.

Ray Dillinger

unread,
Aug 10, 2007, 1:45:27 PM8/10/07
to
Nils M Holm wrote:

>
> But this is exactly what I tried to say all the time: Provide the
> necessary hooks (optionally) and then implement bit manipulation
> as a *real* library. OR implement an FFI. But do NOT implement it
> as a core feature that *pretends* to be a Scheme library.

I think I'm missing something here. If I use (say) FooScheme's
machine-language escape to implement a library which allows
FooScheme to (among other things) use something like

(intr num)

to issue a hardware interrupt, what is "pseudo" or "pretend"
about that? What features would distinguish it from a "real"
library?? I think this is like calling buildings made of
stone real and buildings made from concrete fake; there's some
distinction evidently very important to you that I just don't
see, or whose significance I just don't grasp.

It would not be a core feature as I understand it; that is,
it would not be provided by the implementors and inseparable
from the core of FooScheme. It would be a library that is
not loaded by default, just like all other libraries, and gets
loaded like any other library when the command to load it is
executed by a program. It might even be distributed completely
separately from FooScheme, for people who've gotten FooScheme
earlier, or from elsewhere. There is no "core language feature
pretending to be a library" about it.

Bear

Nils M Holm

unread,
Aug 10, 2007, 2:17:38 PM8/10/07
to
Ray Dillinger <be...@sonic.net> wrote:
> Nils M Holm wrote:
> > But this is exactly what I tried to say all the time: Provide the
> > necessary hooks (optionally) and then implement bit manipulation
> > as a *real* library. OR implement an FFI. But do NOT implement it
> > as a core feature that *pretends* to be a Scheme library.
>
> I think I'm missing something here.

No, not really. At some point in the discussion I got the idea that you
meant something completely different*. As I understand it now, you mean
a library that is written in some other language and can be imported by
Scheme. OK, that is what I would call a "real library". Maybe the idea
is so obvious that I simply could not see the wood for the trees.

There is absolutely nothing "pseudo" about that approach.

* I remember some compilers (not Scheme) that intercept special names
in library import statements in order to activate specific language
features. I thought you wanted something like that. I am glad you
did not.

Aaron Hsu

unread,
Aug 10, 2007, 2:43:45 PM8/10/07
to
On 2007-08-10 10:03:16 -0500, Steve Schafer <st...@fenestra.com> said:

> On Fri, 10 Aug 2007 09:05:33 -0500, Aaron Hsu
> <aaro...@sacrificumdeo.net> wrote:
>
>> When it comes to normal business oriented applications or what most
>> people would call "real work," then the standard should only be a base.
>> It should make it as painless as possible to integrate other work from
>> other systems into a "real" project. However, when it comes right down
>> to it, the Implementation is the real backbone of "real" work. The
>> standard should exist to make it easier to work between
>> implementations, but it should not force every implementation to be
>> every other implementation.
>
> That's just it; people who do Real Work _do_ want implementations to be
> transparently interchangeable. In order to understand this mindset, you
> have to think in bottom-line capitalistic terms: You (as someone doing
> Real Work) want vendors to produce new implementations that run faster,
> or make the development process simpler, or any of a number of other
> things that improve your bottom line, all without you having to lift a
> finger (i.e., without you having to do anything that costs money).
>
> You want vendors to compete against each other, but only to the extent
> that it doesn't make you have to work any harder. You especially do
> _not_ care if a vendor implements a nifty new feature, until the other
> vendors also implement that feature, because you don't want to be locked
> into that one vendor.

Being locked into one vendor is a serious concern, but I find that many
companies are not so worried in practice about being locked into a
single vendor. Take, for example, the near ubiquity of Microsoft Windows
as the desktop platform of choice, or of MS Word files as the de facto
standard for word-processing documents. The MS Word format is
implemented by many programs in various forms, and each supports a
different feature set. Yet if a company wants feature X, which word
processors A, B, and C lack but D provides, it goes with D.

There are tons of examples of companies locking themselves into
vendor-specific platforms, but it usually is not pleasant. But take, for
example, a very real part of the business world (and I am limiting the
discussion of 'real' programs to the commercial software development
world, since this is the sense in which you seem to be using it, though
I consider R&D and other pursuits as validly 'real' as commercial
development), embedded devices. Say we develop a special scheme
implementation based on the R5.97RS standard, for our embedded device,
such as a PDA or some such, but we can't afford the overhead of some of
the features, so we remove those. For example, let's say we remove the
Unicode support and use a specific, non-standard character encoding,
which meets our specific needs. However, soon enough, we want to start
deploying software to support this on the Desktop. Our current Scheme
implementation was designed for a specific and unique chipset, which is
not i386, and we need to develop something nice and pretty on the
desktop. Naturally, we decide it is cheaper to use existing
implementations rather than port our custom version.

Now here comes the dilemma. Because of the overly excessive
restrictions of R5.97RS, we'll find that our usage of a non-standard
character encoding fundamentally affects our ability to be portable
with existing, compliant implementations. This isn't cool, and would
cost us a lot in terms of developer resources.

It would be better, then, to have extensions to various Schemes that
allow us to choose the encoding that we use. Then, our portable code
could specify a certain encoding as required, and the scheme
implementations that implement it could then be used easily enough.

The whole point is that portability comes not from everyone being
forced to do the exact same thing, but from everyone communicating in
the exact same way. What I mean is that each implementation should
share a common method of describing what features it does or does not
have, so that portable code will run on all implementations supporting
the necessary features. This is the only real way to support the
largely varying "real world." It permits the necessary flexibility to
meet the needs of specific groups, while also guaranteeing portability
between individual implementations. Most libraries could be implemented
in the core language, so if a feature is not provided in one scheme, it
could be provided by a third-party library that does the same thing and
has the same interface.
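
SRFI 0's COND-EXPAND is roughly the kind of shared vocabulary I have in
mind. A sketch (the feature names and LOW-THREE-BITS are purely
illustrative):

    (cond-expand
      ((or srfi-60 srfi-33)
       ;; the implementation advertises a bitwise library: use it
       (define (low-three-bits n) (bitwise-and n 7)))
      (else
       ;; otherwise fall back on plain R5RS arithmetic
       (define (low-three-bits n) (modulo n 8))))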

Griff

unread,
Aug 10, 2007, 2:55:05 PM8/10/07
to
On Aug 10, 1:43 pm, Aaron Hsu <aaron....@sacrificumdeo.net> wrote:

> Being locked into one vendor is a serious concern, but I find that many
> companies are not so worried in practice about being locked into a
> single vendor. Take for example, the near ubiquity of Microsoft as the
> desktop platform of choice for Windows computers.

"Real world" is about the investment in a system (in Scheme) not being
lost when the implementation for which it has been developed
disappears. This is the "win the lottery" scenario; what happens when
the Chez/Guile/Scheme48 implementers "win the lottery"? Hopefully the
standard means that your investment won't be lost.

Or am I dreaming? I'm not afraid to dream!

David Rush

unread,
Aug 10, 2007, 5:22:45 PM8/10/07
to
On Aug 10, 4:03 pm, Steve Schafer <st...@fenestra.com> wrote:
> On Fri, 10 Aug 2007 09:05:33 -0500, Aaron Hsu
> <aaron....@sacrificumdeo.net> wrote:
> >When it comes to normal business oriented applications or what most
> >people would call "real work," then the standard should only be a base.
>
> That's just it; people who do Real Work _do_ want implementations to be
> transparently interchangeable.

Nonsense. If that's true, then I suppose that I haven't been doing any
real work for the last seven years. I think that what you really mean
is that lazy programmers don't want to have to think. Which situation
I am highly in favor of as it guarantees me employment far into the
future fixing all the unscalable crap that comes from configuration-
bunnies building glueware on top of glueware.

> In order to understand this mindset, you
> have to think in bottom-line capitalistic terms: You (as someone doing
> Real Work) want vendors to produce new implementations that run faster,
> or make the development process simpler, or any of a number of other
> things that improve your bottom line, all without you having to lift a
> finger (i.e., without you having to do anything that costs money).

Wrong again. I as a capitalist entrepreneur who has worked at a huge
software house actually want a system on which I can readily hire
people to do the work that I have neither the time, inclination, or
personal desire to do. Making things go faster (especially when you're
dealing in orders of magnitude) is something that I expect to cost
real money. Improving development processes also involves spending
real money.

From *that* point of view, the best documentation wins. Which means
(pretty much) PLT. Except of course that you *can't* hire Scheme
programmers very easily - there just aren't that many around and/or
they're getting paid to waste their brain cells in other ways. Which
makes this whole line of argument moot.

> You want vendors to compete against each other, but only to the extent
> that it doesn't make you have to work any harder. You especially do
> _not_ care if a vendor implements a nifty new feature, until the other
> vendors also implement that feature, because you don't want to be locked
> into that one vendor.

piffle. How many C compiler vendors truly *compete* against each
other? In practice people either use the most recent (or ancient)
compiler that builds their system without bugs. Ditto Perl, Java,
Ruby, Python, ad nauseam. Scheme and CL are close to the *only*
languages where there really is anything even remotely resembling
competition. The whole vendor lock-in dragon is really just a straw-
man.

> I'm not saying that this attitude makes any sense (and I don't share it
> myself), only that it's The Way It Is in the Real World.

After 20+ years I can say That Has Not Been My Experience.

> A good example of the problems of quasi-standardization is SQL.

The situation w/rt SQL and RDBMSes is not analogous. As far as I
understand it (and I am enjoying this knowledge *ever* so much) SQL
is pretty broken even as a relational algebra. That's a pretty far cry
from Scheme, which is a fairly successful attempt at reifying the
untyped lambda-calculus.

david rush

Aaron Hsu

unread,
Aug 11, 2007, 2:14:02 PM8/11/07
to
On 2007-08-10 13:55:05 -0500, Griff <gre...@gmail.com> said:

> "Real world" is about the investment in a system (in Scheme) not being
> lost when the implementation for which it has been developed
> disappears. This is the "win the lottery" scenario; what happens when
> the Chez/Guile/Scheme48 implementers "win the lottery"? Hopefully the
> standard means that your investment won't be lost.

It is possible to develop significant, portable software using the R5RS
standard and implementation extensions in such a way that, should such a
situation as you suggest occur, you will experience minimal pain (if
any) in the move to a new distribution.

David Rush

unread,
Aug 11, 2007, 3:15:22 PM8/11/07
to
On Aug 11, 7:14 pm, Aaron Hsu <aaron....@sacrificumdeo.net> wrote:

> On 2007-08-10 13:55:05 -0500, Griff <gret...@gmail.com> said:
>
> > "Real world" is about the investment in a system (in Scheme) not being
> > lost when the implementation for which it has been developed
> > disappears. This is the "win the lottery" scenario; what happens when
> > the Chez/Guile/Scheme48 implementers "win the lottery"? Hopefully the
> > standard means that your investment won't be lost.
>
> It is possible to develop significant, portable software using the R5RS
> standard and implementation extensions in such a way that, should such a
> situation as you suggest occur, you will experience minimal pain (if
> any) in the move to a new distribution.

I have done it on a medium scale (tens of thousands of LOC) running on
all of what I consider the major platforms: Larceny, Gambit, Bigloo,
Stalin, PLT, Scheme48, Chicken, Chez (petite). But then again, I
implemented a pre-processor which, among other things, implemented
SRFI-0, so maybe that's cheating :)

SLIB might well be a better poster child for portability, and, as it
is all open source, you can see how it has been done.


Griff

unread,
Aug 11, 2007, 4:14:34 PM8/11/07
to
On Aug 11, 2:15 pm, David Rush <kumoy...@gmail.com> wrote:
> SLIB might well be a better poster child for portability, and, as it
> is all open source, you can see how it has been done.

I appreciate the hard work that everyone puts into writing portable
code. This reminds me, though, of an Eiffel library called Gobo.

It is maintained through the herculean effort of one man who coded
around all of the different variants of Eiffel available. Eventually
the variants diverged so much that (in my take on it) it simply wasn't
worth the effort anymore for him to maintain it, so he didn't.

That is what standards aim to do, make things easier.

Aaron Hsu

unread,
Aug 11, 2007, 6:10:09 PM8/11/07
to
On 2007-08-11 15:14:34 -0500, Griff <gre...@gmail.com> said:

> That is what standards aim to do, make things easier.

In this particular case, though, it will likely create a schism between
the various scheme implementations, because it ignores the desires of
parts of the community, in order to cater to some others. Scheme can't
operate under the assumptions of one-size fits all, because the Scheme
community is naturally already heavily invested in areas where
divergence (to some degree) is encouraged. To attempt to change this
outlook, by forcing everyone into one single paradigm, will mean that
some Schemes are going to go for it, and others won't, so you'll end up
with wider divergence than before, and portable code writing will not
improve; the contrast will be greater.

George Neuner

unread,
Aug 11, 2007, 10:10:58 PM8/11/07
to

This is why "real world" developers archive the binaries for their
entire tool chain and library set, source for open source components
and sometimes the host and target OSes (and service packs) as well.
That ensures that their own software can be maintained if the tools
change in incompatible ways or the vendors go out of business.

If you require ongoing support for proprietary sole-source tools (like
compiler bug fixes), it sometimes pays to negotiate a source code
escrow arrangement with the vendor. Provided you have the expertise
to maintain the tool (or can pay someone to do it), it protects you if
the vendor goes belly up or abandons the tool.

George
--
for email reply remove "/" from address

Jeffrey Mark Siskind

unread,
Aug 12, 2007, 11:17:49 AM8/12/07
to
> That's just it; people who do Real Work _do_ want implementations to be
> transparently interchangeable. In order to understand this mindset, you
> have to think in bottom-line capitalistic terms: You (as someone doing
> Real Work) want vendors to produce new implementations that run faster,
> or make the development process simpler, or any of a number of other
> things that improve your bottom line, all without you having to lift a
> finger (i.e., without you having to do anything that costs money).
>
> You want vendors to compete against each other, but only to the extent
> that it doesn't make you have to work any harder. You especially do
> _not_ care if a vendor implements a nifty new feature, until the other
> vendors also implement that feature, because you don't want to be locked
> into that one vendor.

> People who want standards because they want to do Real Work just want
> things to work, period. They don't want to deal with levels of
> conformance, optional features, etc. Their requirements are incompatible
> with a standard that embraces a spirit of innovation and
> experimentation.

There are fundamental flaws in your argument. I suspect that these
flaws are what motivate many of those who advocate adoption of R6RS.

1. To the best of my knowledge there is only one vendor of Scheme
   implementations: Chez. What I mean by vendor is a person or
   organization that is paid by customers to maintain and develop the
   core of the implementation.
2. To the best of my knowledge, there is only one community-supported
   Scheme implementation: Guile. What I mean by community supported is
   that there is a sustainable community distinct from the original
   implementor that actively maintains and develops the core of the
   implementation.

The vast majority of Scheme implementations are part time labors of love
of a single person or a small group that includes current and former
graduate students of a single person. While most such implementations
are open source, none has attracted an exogenous developer community.
Many/most of such efforts are motivated by the fact that such
implementations are part of the implementor's research program. For
them, innovation is the driving force. Satisfying a customer base is
not.

Almost all of us in the Scheme community would love for Scheme to catch
on in the mainstream. We would love for there to be many profitable
vendors selling high-quality Scheme implementations. We would love for
there to be a vibrant open-source community that actively maintains and
develops many high-quality Scheme implementations. And we would love
for there to be a large customer base of real-world Scheme users that
would create a demand for the above. Some people believe that the only
factor that prevents this from happening is standardization (of a big
language). Thus they advocate adoption of R6RS.

I think that there is a huge body of compelling historical evidence that
this belief is false. There are at least three prior efforts to
standardize Lisp: Common Lisp, IS Lisp, and EuLisp. All adopted
standards that were essentially supersets of the functionality of R6RS.
At one time, there were numerous profitable commercial vendors of Common
Lisp implementations: Symbolics, LMI, TI, Gold Hill, Chestnut Hill,
Lucid, Harlequin, Franz, Xerox, MCL, ExperCommon Lisp, Star Sapphire,
IBCL, ... just to name a few. (There were others that I can't remember
at the moment.) And there were many noncommercial implementations
actively maintained and developed either by individuals or by well
funded research efforts: KCL, AKCL, GCL, CMULISP, CLISP, ... just to
name a few. (Again, there were many others that I can't remember at the
moment.) And there were many large mainstream companies that developed
products that relied on these implementations. That marketplace demand
fueled the supply.

But that marketplace has essentially disappeared. Today there is only
one commercial vendor: Franz. And only one active community-supported
development effort: CMULISP.

I find it untenable to believe that standardization of R6RS with all of
its "real world" features will catalyze growth of a Scheme marketplace.
What does R6RS provide that the real-world needs that is not provided by
Common Lisp? (No, as much as I like call/cc and a single namespace, they
do not fit the bill.)

Scheme occupies a unique niche. A research niche and an educational
niche. It is not a language. Not R6RS, not R5RS, not R4RS. It is an
idea. Or a collection of ideas. It is a framework. It is a way of
thinking. It is a mindset. All of this is embodied in an ever growing
family of languages or dialects, not a single language. It is a virus.
It is the ultimate programming-language virus. The cat is already out of
the bag and there is no way to get it back in. Once someone gets the
mindset, they can implement their own implementation, which is often a
slightly different dialect. This has happened hundreds if not thousands
of times over. (Probably hundreds of thousands or more if one counts
all of the people doing homework for Scheme courses.) This happens for
Scheme in a way that it doesn't for any other language. Scheme also has
served as a testbed for innovative language ideas more than any other
language, either by fueling such innovation or by adopting such
innovation. I'm talking about the most major innovations of all of
computer science. Things like: scoping, nondeterminism, parallelism,
lazy evaluation, unification, constraint processing, stochastic
computation, quantum computation, automatic differentiation, genetic
programming, types, automated reasoning, ... just to name a few. Not
crap like libraries, unicode, FFIs, assembler escapes, bit twiddling,
module systems, case sensitivity, ...

Before entertaining the standardization of a feature, I would need to
see:

1. Evidence that a sizable community of people need that feature and
   agree on the standardized spec. I won't be convinced by mere claims
   that a feature is needed because many people confuse desire with
   need. I need to see compelling evidence of actual need.
2. Evidence that there is a need for many/most/all implementations to
   support that feature in the same way. Again, I won't be convinced by
   mere claims of such need. I need to see compelling evidence of actual
   need to run code unmodified in many/most/all implementations. And it
   must be the case that addressing all such standardization issues
   successfully allows such code that needs to be run in many/most/all
   implementations to actually do so. I.e., that there are no other
   limiting factors that prevent such and require code modification.

I have written Scheme code many hours a day essentially every day for
the past 15 years. (And I did this in other dialects of Lisp for 11
years prior to that. And I did this in other languages for 8 years prior
to that.) Hundreds of thousands of lines. Much of it runs on at least
two Scheme implementations: Scheme->C and Stalin. Almost all of it
requires capabilities that are not standardized in R4RS, R5RS, or R6RS.
Things like operating system access, file system access, network access,
window-system access, camera access, microphone access, speaker access,
robot arm access, and access to millions of lines of code written by
many other people in many other languages. This is as real-world as it
gets. R6RS does nothing to help that. My extensive experience is that I
personally need nothing that R6RS adds over R4RS and many things that it
doesn't add.

Brian Harvey

unread,
Aug 12, 2007, 12:51:07 PM8/12/07
to
Jeffrey Mark Siskind <qo...@purdue.edu> writes:
>Almost all of us in the Scheme community would love for Scheme to
>catch on in the mainstream.

I think you are (perhaps deliberately, for rhetorical purposes) conceding
too much here. At least one of us, and I suspect many of us, couldn't care
less what the mainstream does. For /our/ purposes, there are even some
advantages to not being in the mainstream.

Suppose it happened. Because of R6RS, or for other reasons, suddenly
everyone is programming in Scheme. The next thing that will happen is that
Microsoft will introduce something they call Scheme that's just enough
different from the standard so that anyone who uses it is stuck with Microsoft
forever. After that, someone will announce that they can compile Scheme
programs that run 20% faster than everyone else if they leave out call/cc.
Worst of all, the group voting on R7RS will consist mostly of industry
tycoons, swamping out the votes of Scheme's academic core fanatics.

No, I'm satisfied to watch the mainstream adopt Scheme's /ideas/, even as
they remain terrified of parentheses. Things like Guido van Rossum being
dragged kicking and screaming into lambda and lexical scope by his users.
Meanwhile we have Scheme as the beacon, as the language (almost, sigh)
uncorrupted by practicalities, God's programming language, as even people
who've never touched it themselves sometimes say.

Pascal Costanza

unread,
Aug 12, 2007, 5:28:39 PM8/12/07
to
Jeffrey Mark Siskind wrote:

> At one time, there were numerous profitable commercial vendors of
> Common Lisp implementations: Symbolics, LMI, TI, Gold Hill, Chestnut
> Hill, Lucid, Harlequin, Franz, Xerox, MCL, ExperCommon Lisp, Star
> Sapphire, IBCL, ... just to name a few. (There were others that I
> can't remember at the moment.) And there were many noncommercial
> implementations actively maintained and developed either by
> individuals or by well funded research efforts: KCL, AKCL, GCL,
> CMULISP, CLISP, ... just to name a few. (Again, there were many
> others that I can't remember at the moment.) And there were many
> large mainstream companies that developed products that relied on
> these implementations. That marketplace demand fueled the supply.
>
> But that marketplace has essentially disappeared. Today there is only
> one commercial vendor: Franz. And only one active
> community-supported development effort: CMULISP.

This is incorrect.

There are three well-supported commercial Common Lisp implementations:
Allegro Common Lisp, LispWorks and Corman Lisp. There is also Macintosh
Common Lisp and Scieneer Common Lisp, but their state is not clear, at
least not to me.

There are quite a few well-supported open-source Common Lisp
implementations: SBCL, CMUCL, CLISP, OpenMCL, ECL, Armed Bear Common
Lisp, GCL, and few smaller other implementations. SBCL seems to be the
most popular one these days, but at least CMUCL, CLISP and OpenMCL have
a considerable mind share as well. (I don't have a complete overview,
though.)

There are a couple of websites providing services to Common Lisp users
which are run by non-commercial entities: http://www.cliki.net/ ,
http://common-lisp.net/ and http://www.cl-user.net/

There is of course room for improvement, as always, but I would still
claim that Common Lisp is back on track since a couple of years, and
there is a promising amount of progress.

What role the ANSI Common Lisp specification plays here is not entirely
clear. Every now and then there are discussions on comp.lang.lisp
whether there should be a new attempt to change or add to the ANSI
Common Lisp standard. Some people believe that this would be beneficial,
but others disagree, based on various arguments. Since recently, we are
trying to boot up a process for Common Lisp which is similar to SRFI -
see http://cdr.eurolisp.org/ for more details.

The goals of R6RS are not really clear to me. My impression is that it
specifies too much for research purposes, but not enough for industrial
contexts. There have also been attempts in the past to standardize Lisp
dialects which were actually very close to Scheme - most notably Dylan
and EuLisp. I wonder why elements of these efforts were seemingly not
used as a basis for R6RS.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Rainer Joswig

unread,
Aug 13, 2007, 6:29:43 AM8/13/07
to
In article <5i9cc8F...@mid.individual.net>,
Pascal Costanza <p...@p-cos.net> wrote:

> Jeffrey Mark Siskind wrote:
>
> > At one time, there were numerous profitable commercial vendors of
> > Common Lisp implementations: Symbolics, LMI, TI, Gold Hill, Chestnut
> > Hill, Lucid, Harlequin, Franz, Xerox, MCL, ExperCommon Lisp, Star
> > Sapphire, IBCL, ... just to name a few. (There were others that I
> > can't remember at the moment.) And there were many noncommercial
> > implementations actively maintained and developed either by
> > individuals or by well funded research efforts: KCL, AKCL, GCL,
> > CMULISP, CLISP, ... just to name a few. (Again, there were many
> > others that I can't remember at the moment.) And there were many
> > large mainstream companies that developed products that relied on
> > these implementations. That marketplace demand fueled the supply.
> >
> > But that marketplace has essentially disappeared. Today there is only
> > one commercial vendor: Franz. And only one active
> > community-supported development effort: CMULISP.
>
> This is incorrect.
>
> There are three well-supported commercial Common Lisp implementations:
> Allegro Common Lisp, LispWorks and Corman Lisp. There is also Macintosh
> Common Lisp and Scieneer Common Lisp, but their state is not clear, at
> least not to me.

You can buy both.

MCL will get a new release supporting Unicode. Also a port
to Intel Macs is in the works, but that can take some time.
Especially since a new release of Mac OS X is coming
(with lots of internal changes) that also supports
the usual libraries on 64bit machines (64bit mode
is more friendly for some Lisps, because it has more registers).


...

> dialects which were actually very close to Scheme - most notably Dylan
> and EuLisp. I wonder why elements of these efforts were seemingly not
> used as a basis for R6RS.

Yeah, both (early Dylan and EuLisp) were good attempts. Not so
sure about R6RS. I have the feeling that R6RS moves in the wrong direction.

>
>
> Pascal

dr.tf...@googlemail.com

unread,
Aug 13, 2007, 7:43:45 AM8/13/07
to
> Many/most of such efforts are motivated by the fact that such
> implementations are part of the implementor's research program. For
> them, innovation is the driving force. Satisfying a customer base is
> not.

Why do such experimental language implementations require any standard
whatsoever? I understand you as saying that R5RS suffices. If this
is your view, R5RS can continue to satisfy such developers, whether or
not R6RS is ratified. Why should programming language researchers
care about standards? Why should programming language researchers
*oppose* ratification of R6RS, when it does not affect their interests?

>
> Almost all of us in the Scheme community would love for Scheme to catch
> on in the mainstream. We would love for there to be many profitable
> vendors selling high-quality Scheme implementations. We would love for
> there to be a vibrant open-source community that actively maintains and
> develops many high-quality Scheme implementations. And we would love
> for there to be a large customer base of real-world Scheme users that
> would create a demand for the above. Some people believe that the only
> factor that prevents this from happening is standardization (of a big
> language). Thus they advocate adoption of R6RS.
>
> I think that there is a huge body of compelling historical evidence that
> this belief is false. There are at least three prior efforts to
> standardize Lisp: Common Lisp, IS Lisp, and EuLisp. All adopted
> standards that were essentially supersets of the functionality of
> R6RS. ...

I agree that a standard like R6RS is neither necessary nor sufficient
for a programming language to succeed in the marketplace. But I
wouldn't go so far as to suggest that such standards are
counterproductive or unhelpful for the success of a language. The
failure of Common Lisp and the other Lisp standards you mention does
give us reason to be skeptical about the potential of Scheme, or any
Lisp dialect, to become popular. But there is evidence of a renewed
interest in languages like Scheme, since very popular languages
(Python, Perl, Java) are becoming more and more like Scheme as time
progresses and new languages with Scheme-like features are attracting
significant attention (e.g. Scala, Groovy). So, maybe, just maybe,
there is a new window of opportunity, if the Scheme community can get
its act together.

> I find it untenable to believe that standardization of R6RS with all of
> its "real world" features will catalyze growth of a Scheme marketplace.
> What does R6RS provide that the real-world needs that is not provided by
> Common Lisp? (No, as much as I like call/cc and a single namespace, they
> do not fit the bill.)

Unicode and a Java/Python-like module system (see the recent post
comparing PLT's module system, which is similar to R6RS's, with Common
Lisp packages), for two. More importantly, the Scheme community seems
to be more active than the Common Lisp community in maintaining the
language, as evidenced by the SRFI process and R6RS initiative.
Common Lisp appears to have stagnated.

>
> Scheme occupies a unique niche. A research niche and an educational
> niche. It is not a language. Not R6RS, not R5RS, not R4RS. It is an
> idea.

Ideas don't need standards, and I don't see how R6RS could detract from
the idea of Scheme, or any other idea.
Is it important to you that the name "Scheme" be reserved for naming
the idea? I suppose one could come up with a new name for the idea,
or the language, but would this really benefit anyone in the Scheme
community?


> Before entertaining the standardization of a feature, I would need to
> see:
>
> 1. Evidence that a sizable community of people need that feature and
>    agree on the standardized spec. ...
> 2. Evidence that there is a need for many/most/all implementations to
>    support that feature in the same way.

I find the rather democratic procedure for R6RS a very interesting
experiment. But if R6RS fails, I sincerely hope that the implementors
of some the leading Scheme implementations (e.g. Chez, Larceny and PLT
Scheme) will reconsider this process and perhaps decide to agree upon
a specification for a "Scheme" language, presumably similar to R6RS,
among themselves. If single persons and organizations are free to
experiment with the Scheme idea, then why not arbitrary coalitions of
such individuals and groups? Even if this would not eliminate the
incompatibilities among Scheme implementations as much as a standard
might, it would at least help to reduce these incompatibilities ...
and may lead to a de facto standard if it is good enough to gain
momentum among users.

>
> I have written Scheme code many hours a day essentially every day for
> the past 15 years.

You seem very content, without R6RS, with the implementations you've been
using. In what way does R6RS threaten your work? Can't you continue
to work as you have in the past, with the same tools as in the past?

Tom Gordon
Fraunhofer FOKUS, Berlin

Andrew Reilly

unread,
Aug 13, 2007, 8:33:31 AM8/13/07
to
On Mon, 13 Aug 2007 04:43:45 -0700, dr.tfgordon wrote:

> Why should programming language researchers
> care about standards? Why should programming language researchers
> *oppose* ratification of R6RS, when it does not affect their interests?

The main argument that I can see is that it somehow raises the bar for
"necessary infrastructure" before you get to the point where you can
compile or use a large existing base of code. Being able to try ideas out
against large working code bases is important, and I would expect that
when R6RS is ratified, most of the new code finding its way into
publicly available repositories will be R6RS-dependent, and that which is
already there, and maintained, will migrate. So the amount of "relevant"
R5RS code readily available might diminish, in time.

Not being a scheme implementor, I'm just guessing, and can't really
estimate how much real "infrastructure" would be necessary to be able to
run, say, the existing R5.9xRS reference implementation, over and above
existing requirements.

--
Andrew

Brian Harvey

unread,
Aug 13, 2007, 12:08:47 PM8/13/07
to
dr.tf...@googlemail.com writes:
> Why should programming language researchers
>*oppose* ratification of R6RS, when it does not affect their interests?
[...]

>Common Lisp appears to have stagnated.

Perhaps it's /because/ of its elephantine standard that CL has stagnated?
The bigger the standard, the harder it is to experiment without violating
it. So perhaps the people who want to innovate are drawn to Scheme at least
in part because of its thin standard, with lots of wiggle room?

Pascal Costanza

unread,
Aug 13, 2007, 1:25:43 PM8/13/07
to
Brian Harvey wrote:
> dr.tf...@googlemail.com writes:
>> Why should programming language researchers
>> *oppose* ratification of R6RS, when it does not affect their interests?
> [...]
>> Common Lisp appears to have stagnated.
>
> Perhaps it's /because/ of its elephantine standard that CL has stagnated?

Common Lisp may have been dormant, but it certainly hasn't stagnated.

> The bigger the standard, the harder it is to experiment without violating it.

I don't think so - you can always ignore what you don't like when
experimenting.

> So perhaps the people who want to innovate are drawn to Scheme at least
> in part because of its thin standard, with lots of wiggle room?

For certain kinds of experimentation, a minimal language is certainly
useful, but for others, it's not.

It's a pity that over the years, Common Lisp has continuously been
portrayed in such a bad light by a certain group of people. Instead of
moving fundamental research from Scheme to Common Lisp, you now have to
target languages like Java in order to do "industrially relevant"
research. Was that really worth it?

Steve Schafer

unread,
Aug 13, 2007, 1:40:29 PM8/13/07
to
On Fri, 10 Aug 2007 21:22:45 -0000, David Rush <kumo...@gmail.com>
wrote:

>Nonsense. If that's true, then I suppose that I haven't been doing any
>real work for the last seven years. I think that what you really mean
>is that lazy programmers don't want to have to think.

Well, it's frequently not the programmers--lazy or not--who get to make
the relevant decisions. Management tends to fear change in the tool set
far more than programmers do. Whether that's laziness or conservatism is
difficult to judge in general.

>How many C compiler vendors truly *compete* against each other? In
>practice people either use the most recent (or ancient) compiler that
>builds their system without bugs.

That is in fact exactly my point. They do so because they have to, not
because they want to. They can't afford the cost (monetary or risk) of
moving to a new development environment, precisely because of the lack
of interoperability. (By the way, C compiler vendors used to compete
quite vigorously; Microsoft vs. Borland, for example.)

>The situation w/rt SQL and RDBMSes is not analogous. As far as I
>understand it (and I am enjoying this knowledge *ever* so much) SQL
>is pretty broken even as a relational algebra. That's a pretty far cry
>from Scheme, which is a fairly successful attempt at reifying the
>untyped lambda-calculus.

SQL may indeed be broken, but that's not enough to explain why no two
RDBMS vendors use the same convention for dealing with dates and
times....

Steve Schafer

unread,
Aug 13, 2007, 1:49:12 PM8/13/07
to
On Fri, 10 Aug 2007 13:43:45 -0500, Aaron Hsu
<aaro...@sacrificumdeo.net> wrote:

>Being locked into one vendor is a serious concern, but I find that many
>companies are not so worried in practice about being locked into a
>single vendor.

Whether or not they're worried about it is one issue (the majority of
computer users are generally oblivious to the problem; most of the
awareness that does exist is concentrated in the software development
community). But I'm talking about what people want (that is, the people
who _do_ understand the problem), not what they're forced to live with.

>The whole point is that portability comes not from everyone being
>forced to do the exact same thing, but from everyone communicating in
>the exact same way. What I mean is that each implementation should
>share a common method of describing what features it does or does not
>have, so that portable code will run on all implementations supporting
>the necessary features. This is the only real way to support the
>largely varying "real world." It permits the necessary flexibility to
>meet the needs of specific groups, while also guaranteeing portability
>between individual implementations.

This is, in principle, a Good Idea. But it doesn't really solve the
problem. It forces a burden onto the consumer that they really shouldn't
have to face. This is really just an analog of the "Rule of Least
Surprise" (aka "Principle of Least Astonishment") in user interface
design. In the absence of truly compelling arguments to the contrary,
stuff should just work.

Steve Schafer

unread,
Aug 13, 2007, 2:30:09 PM8/13/07
to
On Sun, 12 Aug 2007 08:17:49 -0700, Jeffrey Mark Siskind
<qo...@purdue.edu> wrote:

>There are fundamental flaws in your argument. I suspect that these
>flaws are what motivate many of those who advocate adoption of R6RS.

First of all, let me make it clear that I am neither for nor against
R6RS per se. I _do_ disagree with the notion that "heavy duty"
standardization hampers those who wish to experiment and innovate.

>I find it untenable to believe that standardization of R6RS with all of
>its "real world" features will catalyze growth of a Scheme marketplace.

But can the Scheme marketplace grow _without_ something like R6RS?

>I have written Scheme code many hours a day essentially every day for
>the past 15 years. (And I did this in other dialects of Lisp for 11
>years prior to that. And I did this in other languages for 8 years
>prior to that.) Hundreds of thousands of lines. Much of it runs on at
>least two Scheme implementations:
>Scheme->C and Stalin. Almost all of it requires capabilities that are
>not standardized in R4RS, R5RS, or R6RS. Things like operating system
>access, file system access, network access, window-system access,
>camera access, microphone access, speaker access, robot arm access, and
>access to millions of lines of code written by many other people in
>many other languages. This is as real-world as it gets.

Well, not necessarily. Interfacing is not what "Real World" means. In
the Real World, dollars are what counts. In the Real World, you build
something because someone is going to pay you for it.

Which is not to denigrate basic research in any way. (I used to be an
academic, and my wife still is.) It's just not the Real World.

Steve Schafer
Fenestra Technologies Corp.

http://www.fenestra.com/

Aaron Hsu

unread,
Aug 14, 2007, 12:38:56 AM8/14/07
to
On 2007-08-13 13:30:09 -0500, Steve Schafer <st...@fenestra.com> said:

> But can the Scheme marketplace grow _without_ something like R6RS?

Scheme needs something that has similar goals to R6RS, but it will
*break*, or worse, if put in the position of having to cater to something
like R6RS.

Griff

unread,
Aug 14, 2007, 4:46:52 PM8/14/07
to
On Aug 13, 11:38 pm, Aaron Hsu <aaron....@sacrificumdeo.net> wrote:

> Scheme needs something that has similar goals to R6RS, but it will
> *break*, or worse, if put in the position of having to cater to something
> like R6RS.

I hate to ask dumb questions, but here goes...

If R6RS is a total disaster, how long would it take to throw it out
and fix it in R7RS?

Aaron Hsu

unread,
Aug 14, 2007, 7:32:18 PM8/14/07
to
On 2007-08-14 15:46:52 -0500, Griff <gre...@gmail.com> said:

> If R6RS is a total disaster, how long would it take to throw it out
> and fix it in R7RS?

It would take until R8RS, at least. :-) One can't move in the direction
of disaster and then reverse it very easily. Objects in motion tend to
stay in motion, &c. I think it's much easier to throw it out now and
start over than throw it out later and start over.

David Rush

unread,
Aug 14, 2007, 8:34:04 PM8/14/07
to
On Aug 14, 9:46 pm, Griff <gret...@gmail.com> wrote:
> If R6RS is a total disaster, how long would it take to throw it out
> and fix it in R7RS?

Well, it seems to me that many of the 'no' votes I am reading are more
along the lines of 'not quite' than 'total disaster'. I do have to
admit that the diversity of reasons for 'not quite' might lead one to
reasonably conclude that 'total disaster' is implicit, but I don't see
it. R^(5.97)RS was better than I expected, but the most essential
issues are very hard: type definition and meta-program composition.
Both, I think, share the same difficulty: they reach beyond the
traditional boundaries of a 'language' definition. My main
problem with the R^(5.97)RS solutions is that they have not found an
elegant reduction of cases, but instead proliferate special cases and
patched behaviors.

Without a doubt, R^(5.97)RS would let people move on with practical
engineering - which has always been the realm of gluing together large
compromises with many small ones - but I don't feel that it advances
the state of the science of computing at all. But perhaps those
aspirations have moved on to the statically typed FP languages where
those issues of modularity have a very direct algebraic foundation.
It's just too bad that they're such a pain to use :)

david rush
--
You may have been trolled ;)

Tom Lord

unread,
Aug 15, 2007, 3:40:29 AM8/15/07
to
On Aug 14, 1:46 pm, Griff <gret...@gmail.com> wrote:

> If R6RS is a total disaster, how long would it
> take to throw it out and fix it in R7RS?

A couple of weeks.

-t

Griff

unread,
Aug 16, 2007, 2:08:14 PM8/16/07
to
I read a blurb about a research operating system today, and it had a
nice tagline. It made me think about the two seemingly incompatible
wants of most of the electorate.

Here is what I would call the optimistic tagline for the future of
Scheme.

"The Scheme programming language is an experiment in developing a
complete and usable modern programming language that offers room for
experimentation and research."

Aaron Hsu

unread,
Aug 16, 2007, 5:14:25 PM8/16/07
to
On 2007-08-16 13:08:14 -0500, Griff <gre...@gmail.com> said:

> "The Scheme programming language is an experiment in developing a

> complete and usable MODERN programming language that offers room for
> experimentation and research."

While I like this definition, I think it emphasises and limits Scheme's
usefulness. First of all, as an experiment itself, it reduces the
stability of the language as an idea, and encourages broad, sweeping
alterations to Scheme's core philosophy. The usage of complete and
usable implies too much in the way of features and not enough in the
way of application. I emphasized the word Modern in the above because I
feel it is the most perilous term in the definition. By using "Modern"
we are immediately condemning ourselves to both an attempt at current
fame and possibly future obscurity if our "experiment" cannot change to
keep up with such an ever moving target. In other words, the definition
of modern restricts the scope of the language to a VERY limited range
of motion. Additionally, experimentation and research in the above
texts are "tacked" on as a final note, receiving what I would call a
secondary status to the primary goals of usability, modernity, and
completeness. To me, this creates a language other than the inspiring
ideal of what I imagine Scheme could become.

Instead of modern and usable, I would use the term General Purpose,
meaning that it is capable of successfully and adequately accomplishing
any goal in the arena of computer languages, provided that the proper
hardware and tools are available with which to do it. That is, there
should be no intrinsic limitation of the language
making it unsuitable for any programming task. I would leave out the
ideas of experimentation and research entirely, and let all arenas of
computer science rest firmly within the primary scope of the language's
capacity.

The term Modern, however, I wish to address more singularly. :-) As far
as languages go, Mathematics is a pretty long-standing one. It's
universal, and though syntax is a little different here and there, it
has a long history of facilitating clear communication to many people
of differing regions of the world. The principles of mathematics are
timeless, and I propose that Scheme should be similarly timeless.
Scheme should not try to bind itself to specific regions or methods of
implementing the ideas behind Computer Science, but it should represent
a means of communicating any concept of computer science, past, present
or future (conceivably). [The current proposed standard's requirement
of Unicode is an example of a failure in this approach.]

In other words, I don't want the Scheme standard to be a product of the
times, but rather, a product of the best ideas in the highest level
concepts of computer science. To this end, I think that dubbing it as a
"High Level" language would be better than using the term Modern.

Here's an alternative definition:

"The Scheme programming language is a general-purpose, high-level
programming language that emphasizes the elegant, simple, and minimal
expression of all forms, idioms, and techniques of computer science by
providing a means of such expression in a formally defined language."

Then, I recommend that another entity, the Scheme Library, be
introduced as: "The Scheme Library is a set of programs, written in
the Scheme language, designed to implement various desirable features
for practical computing that are widely useful across the domain of
programming."

Just my thoughts.
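
For concreteness, an entry in such a Scheme Library could be as small
as a single portable procedure written in the language itself. A hedged
sketch, with an invented procedure as the example:

    ;; Sketch of a possible Scheme Library entry: a widely useful
    ;; facility written in plain, portable Scheme rather than added to
    ;; the core language.  The procedure and its name are invented here.
    (define (string-join strs sep)
      ;; Concatenate the strings in STRS, inserting SEP between elements.
      (if (null? strs)
          ""
          (let loop ((acc (car strs)) (rest (cdr strs)))
            (if (null? rest)
                acc
                (loop (string-append acc sep (car rest)) (cdr rest))))))

For example, (string-join '("a" "b" "c") ", ") would evaluate to
"a, b, c" on any R5RS implementation, with no core-language support
beyond what is already there.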

Griff

unread,
Aug 17, 2007, 3:35:09 PM8/17/07
to
On Aug 16, 4:14 pm, Aaron Hsu <aaron....@sacrificumdeo.net> wrote:

> Instead of modern and usable, I would use the term General Purpose,
> meaning that it is capable of successfully and adequately accomplishing
> any goal desired to be accomplished in the arena of computer languages
> provided that the proper hardware and tools are provided in which do
> it.

That was my original thought:

http://www.r6rs.org/ratification/preliminary-results.html#X22


Brian Harvey

unread,
Aug 20, 2007, 2:29:22 AM8/20/07
to
Aaron Hsu <aaro...@sacrificumdeo.net> writes:
> By using "Modern"
>we are immediately condemning ourselves to both an attempt at current
>fame and possibly future obscurity if our "experiment" cannot change to
>keep up with such an ever moving target.

Don't worry, "modern" is just a euphemism for "lexically scoped." :-)

But anyway, my preferred tag line is

Languages should be designed, not by piling feature on top of
feature, but by removing the restrictions that make additional
features appear necessary.
