
Vote on R6RS, if you have the time to write a 150-word essay


pbfr2nw...@temporaryinbox.com

May 18, 2007, 7:34:46 AM
The good folks of the R6RS committee will give the community a last
chance to approve or reject the final R6RS draft. Not the whole
community, of course, but only those members who have enough time on
their hands to write a 150-word essay. Not only that, but if you dare
vote "nay", you'd better have an explanation. Additionally, even if
you go through these hoops, the editors reserve their right to keep
you out if they disagree with your opinions:

http://www.r6rs.org/ratification/

These requirements will ensure that most people gainfully employed or
running their own business will be kept out of the loop. Which is,
quite possibly, what was intended: by controlling the mix of people
who vote on the spec, the editors are controlling the outcome and
hoping to force an unrealistic standard down everyone's throats.

This thread is not about arguing for or against R6RS. The battle lines
are drawn. If you do not know enough about R6RS, just read the latest
draft from r6rs.org.

If you love Scheme, and think that R6RS will bring ruin to the
language, stand up against the R6RS mafia! Our first goal is to fix
the ratification rules. Do not let apathy get the best of you, or you
will have cause to regret it a few short years down the road.

sunnan...@gmail.com

May 18, 2007, 8:32:40 AM
On May 18, 1:34 pm, pbfr2nwkj13g...@temporaryinbox.com wrote:
> The good folks of the R6RS committee will give the community a last
> chance to approve or reject the final R6RS draft. Not the whole
> community, of course, but only those members who have enough time on
> their hands to write a 150-word essay.

Your post had 220 words.

> If you love Scheme, and think that R6RS will bring ruin to the
> language, stand up against the R6RS mafia!

I love Scheme, so I'm interested in hearing what you and others think
is wrong with the R6RS. I don't mind what I've seen of it so far.

Jens Axel Søgaard

May 18, 2007, 8:48:31 AM
pbfr2nw...@temporaryinbox.com wrote:

> This thread is not about arguing for or against R6RS.

You must be new to Usenet.

--
Jens Axel Søgaard

Abdulaziz Ghuloum

May 18, 2007, 9:07:13 AM
On May 18, 7:34 am, pbfr2nwkj13g...@temporaryinbox.com wrote:
> ...

> This thread is not about arguing for or against R6RS.

Correct. You, pbfr2nw...@temporaryinbox.com, demonstrate your
objective stance very eloquently in the next paragraph:

> If you love Scheme, and think that R6RS will bring ruin to the
> language, stand up against the R6RS mafia! Our first goal is to fix
> the ratification rules. Do not let apathy get the best of you, or you
> will have cause to regret it a few short years down the road.

Aziz,,,

sunnan...@gmail.com

May 18, 2007, 9:20:38 AM
On May 18, 2:48 pm, Jens Axel Søgaard <use...@soegaard.net> wrote:

> pbfr2nwkj13g...@temporaryinbox.com wrote:
> > This thread is not about arguing for or against R6RS.
>
> You must be new to usenet.

Oh, I missed that line. Guess it's time to turn off the computer and
go home for the day...

Arun....@gmail.com

May 18, 2007, 10:52:44 AM
Um, guys, whatever your preferences on this standard, don't you think
the 150-word statement is an excessive requirement? I wasn't aware of
the voting rules, but now that I've read them I do think this is a
problem.

Steve Schafer

May 18, 2007, 11:03:20 AM

I think it's a perfectly reasonable way to distinguish those who are
serious about contributing to the process from those who are just
ranting and have no substantive suggestions to offer. 150 words really
isn't a big deal -- this two-sentence paragraph alone contains
seventy-eight words (according to Microsoft Word's statistics -- your
mileage may vary), so two such paragraphs (which took all of five
minutes to write) would be more than enough to fulfill the word-count
requirement.
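(Side note for anyone drafting a statement: the 150-word minimum is easy to check against a draft with a quick script. This is a rough sketch using a whitespace split, which is only an approximation of Microsoft Word's counting rules, so exact totals may differ by a few words.)

```python
def word_count(text: str) -> int:
    # Approximate word count: split on runs of whitespace.
    # Word processors treat some punctuation and hyphenation
    # differently, so treat this as an estimate.
    return len(text.split())

draft = "I teach courses in computer science at a small college."
print(word_count(draft))  # prints 10
print(word_count(draft) >= 150)  # prints False: too short to submit
```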

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Arun....@gmail.com

May 18, 2007, 11:15:38 AM
> I think it's a perfectly reasonable way to distinguish those who are
> serious about contributing to the process from those who are just
> ranting and have no substantive suggestions to offer. 150 words really
> isn't a big deal -- this two-sentence paragraph alone contains
> seventy-eight words (according to Microsoft Word's statistics -- your
> mileage may vary), so two such paragraphs (which took all of five
> minutes to write) would be more than enough to fulfill the word-count
> requirement.

You need to consider context. It's easy to write 500 words about
nothing. Look at your paragraph: you pad it with MS Word remarks and
clichés like YMMV...

It's not that easy to write 150 **focused** words on a narrow topic
like what you think of Scheme, especially not if you're busy.

Maybe 70-100 words would make better sense?

Excuse me, my English is not that good, but I hope you understand
what I said.

Arun K.

Pascal Costanza

May 18, 2007, 11:29:45 AM
Arun....@gmail.com wrote:
>> I think it's a perfectly reasonable way to distinguish those who are
>> serious about contributing to the process from those who are just
>> ranting and have no substantive suggestions to offer. 150 words really
>> isn't a big deal -- this two-sentence paragraph alone contains
>> seventy-eight words (according to Microsoft Word's statistics -- your
>> mileage may vary), so two such paragraphs (which took all of five
>> minutes to write) would be more than enough to fulfill the word-count
>> requirement.
>
> You need to consider context. It's easy to write 500 words about
> nothing. Look at your paragraph, you pad it with MS Word remarks and
> cliche like YMMV...
>
> It's not that easy to write 150 **focused** words on narrow topic like
> what you think of Scheme: especially not if you're busy.

If you're too busy to have the time to write such an abstract, you're
probably also too busy to have the time to read the R6RS draft.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Arun....@gmail.com

May 18, 2007, 12:20:08 PM
> > It's not that easy to write 150 **focused** words on narrow topic like
> > what you think of Scheme: especially not if you're busy.
>
> If you're too busy to have the time to write such an abstract, you're
> probably also too busy to have the time to read the R6RS draft.

Reading is easier than writing. And I don't think you need to read
all the small print to get a good idea of a standard's quality (did
you ever read an ITU-T, or older CCITT, standard?)

Besides, there's a natural limit to how much you can write on a
narrow topic. I think in this case the limit is about a paragraph.
Beyond that limit, the mental effort and time required grow
exponentially.

Isn't all this obvious?

Arun K.

Joe Marshall

May 18, 2007, 1:34:59 PM
On May 18, 8:15 am, Arun.Kh...@gmail.com wrote:

> It's not that easy to write 150 **focused** words on narrow topic like
> what you think of Scheme: especially not if you're busy.

It isn't an essay on what you think of Scheme, it is a ``Statement of
Interest'' declaring what [your] stake is in the outcome of the
process.

> Excuse my english is not that good, but I hope you understand what I
> said.

I'm trying to phrase this in plain English and avoid using uncommon
words. Excuse me if I sound rude here, I'm trying to be direct.

The Statement of Interest would describe why someone should take your
opinion into account. If someone wrote ``I lead a research group that
uses Scheme and publishes over 50 papers a year and am currently
redesigning the curriculum for the North Elbonian computer lab, and a
ratified standard is crucial to making Scheme the official Elbonian
computer language,'' then this person's vote would be considered.

If someone wrote ``I 3l33t, r6rs sucks, r5rs rocks, perl rulz!!1''
then they don't seem to have much of a stake in the language.

Someone with a stake in the language would likely be able to come up
with 150 words describing what that stake is. If you cannot, then
either submit a smaller essay, or seek some help expanding it. 150
words doesn't seem like an awful lot to me, but then I'm a native
English speaker.

Anton van Straaten

May 18, 2007, 2:24:59 PM
Joe Marshall wrote:
> Someone with a stake in the language would likely be able to come up
> with 150 words describing what that stake is. If you cannot, then
> either submit a smaller essay, or seek some help expanding it.

Since the commercial market for Scheme programmers is quite small,
perhaps it could be augmented by a market for Statement-of-Interest
writers. I'm thinking $5/word sounds fair. Paypal accepted.

Seriously, thanks to Joe for a good explanation of the ratification
process rationale.

Anton

Abdulaziz Ghuloum

May 18, 2007, 2:51:16 PM

The voting rules were published months ago. A mailing list was set up
to discuss the ratification process, and its archives are
available[1]. Anybody who had a solid disagreement with the process
and cared enough should have given some input on that mailing list.

Aziz,,,

[1] http://lists.r6rs.org/pipermail/ratification-discuss/

Arun....@gmail.com

May 18, 2007, 3:10:54 PM
> opinion into account. If someone wrote ``I lead a research group
> that uses Scheme and publishes over 50 papers a year and am
> currently redesigning the curriculum for the North Elbonian computer
> lab, and a ratified standard is crucial to making Scheme the
> official Elbonian computer language.'' Then this person's vote
> would be considered.

Thanks for the Elbonia reference. Nice. I didn't expect to find such
allusions in this day and age.

Your example statement is not 150 words. And it's not realistic.
Someone from industry will simply say, for example: we like Lisp
because of the power of macros, we chose Scheme over other Lisps
because of call/cc, and we use it to script Java programs. Or to
provide extensibility for a CAD application.

I don't think the boss will really let them go into the little
details about exactly what they do. Not everybody lives in a
university, you know.

Leave canned examples aside. Why doesn't someone provide right here
his own 150-word statement, to prove that it makes sense? Or maybe
nobody actually wrote one yet?

Arun K.

Jeffrey Mark Siskind

May 18, 2007, 3:47:51 PM
> Leave canned examples aside. Why doesn't someone provide right here
> his own 150-word statement, to prove that it makes sense??

((email-address "qo...@purdue.edu")
 (full-name "Jeffrey Mark Siskind")
 (geographic-location "West Lafayette, IN, USA")
 (affiliation "School of ECE, Purdue University")
 (public-email-address "qo...@purdue.edu")
 (web-page-url "http://www.ece.purdue.edu/~qobi")
 (statement-of-interest
  "I am the sole author of Stalin (a high-performance implementation
   of R4RS) and Stalingrad (a high-performance implementation of a
   functional, numeric subset of R4RS that adds first-class automatic
   differentiation). I conduct research in both programming languages
   & compilers and AI (including computer vision, computational
   linguistics, robotics, machine learning, and cognitive science).
   All of my publications since 1981 are supported by substantive
   implementations, all of which are in Lisp or Scheme. All of this
   code has been publicly released and some of it is widely used.
   Every course that I have ever taught (including AI, computer
   vision, computational linguistics, and programming languages since
   1993) has been based solely on Lisp or Scheme. I currently teach
   AI every semester in a course whose material I have solely
   prepared and which is based solely on Scheme. I have shared the
   material that I have prepared for the courses I teach with other
   teachers, including the course software that I have prepared, all
   of which is written in Lisp or Scheme. Overall, several hundred
   students have taken my courses."))

For the record, I intend to vote to reject the current draft of R6RS.
I have stated publicly in the past that I dislike R5RS and have given
my reasons why. I have also stated publicly in the past that I oppose
all standardization efforts for Scheme and have given my reasons why.
I will state them again as part of the justification of my vote on
the R6RS ratification process.

st...@cs.grinnell.edu

May 18, 2007, 3:49:08 PM
Arun....@gmail.com writes:

> Leave canned examples aside. Why doesn't someone provide right here
> his own 150-word statement, to prove that it makes sense?? Or maybe
> nobody actually wrote one yet?

Here's mine. It took me about ten minutes to write, although I admit
that I did all the thinking that went into it some time ago.

========

I teach courses in computer science at a small liberal-arts
college. We use Scheme in our introductory course because its
simplicity and expressiveness enable our students to understand some
of the key ideas of computer science -- notably recursion, procedural
abstraction, and data abstraction -- quickly and accurately, without
the encumbrances of fragile syntax and quirky semantics. We also use
Scheme in more advanced courses (such as our courses on programming
languages, algorithms, automata theory, and computational
linguistics), particularly for small, specialized examples and
demonstrations. My interest in the R6RS standard is to keep the core
language small, simple, expressive, and accessible to novice
programmers, while accommodating, through the development of
libraries, the vast increase in the effective range and usability of
Scheme during the fifteen years since the publication of R5RS.

I am also working on a small textbook on algorithms in which
the code examples are presented in a purely functional subset of
Scheme. This work gives me an additional interest in R6RS: I'd like
to be able to use standard library mechanisms to control the subset
explicitly.

========

It's finals week here, which is a very busy time, so I was doing one
or two other things at the same time. The total elapsed time between
my reading Alan Bawden's announcement and my submitting the finished
statement may have been more like forty-five minutes.

I hope this helps.

Joe Marshall

May 18, 2007, 3:53:03 PM
On May 18, 12:10 pm, Arun.Kh...@gmail.com wrote:

>
> Leave canned examples aside. Why doesn't someone provide right here
> his own 150-word statement, to prove that it makes sense?? Or maybe
> nobody actually wrote one yet?

Off the top of my head:
--------

Dear Editors,

My stake in Scheme is currently one of `active community
participant'. I have been an active Scheme user for more than twenty
years. I have been employed professionally by two different Scheme
implementors: MIT Scheme and PLT Scheme. In addition I have worked
on several other Scheme implementations including one I wrote in Z80
assembly code. I most recently ported a good chunk of MIT Scheme to
the C# platform.

I have attended several conferences associated with the Scheme
language and have published a paper on a technique to implement
first-class continuations on machines that restrict stack inspection.
I am an active participant in several on-line computer language
discussion groups. I have written an online tutorial for
syntax-rules macros.

I have a significant amount of experience in Scheme, its
implementation, and its use. I have followed the r6rs process from
the beginning.

Based on this experience, I believe that I qualify for a vote on
ratification.

Thank you for your consideration.
--------

Arun....@gmail.com

May 18, 2007, 4:27:12 PM
Guys, thanks for the statements, and congratulations on your
achievements. I guess with such resumes, it's easy to write 150 words.

I think many people from industry will not have publications and
conferences, or their own Scheme implementation, yet they may still be
Scheme programmers and have an interest in what happens to this
language.

This "statement" now looks more like an application to be permitted to
vote. I think such a thing may be too elitist. I think that normal
people who simply use Scheme to achieve some task should be allowed to
vote. 70-100 words is more than enough to keep out the "perl rulz"
script kiddies Joe was talking about.

Pascal Costanza

May 18, 2007, 4:34:16 PM
Arun....@gmail.com wrote:
> Guys, thanks for the statements, and congratulations for your
> achievements. I guess with such resumes, it's easy to write 150 words.
>
> I think many people from industry will not have publications and
> conferences, or their own Scheme implementation, yet they may still be
> Scheme programmers, and have an interest in what happens to this
> language.

They can summarize what kinds of applications they write, how many
programmers are employed to write them, what kinds of customers they
have, etc. pp.

Emilio Lopes

May 18, 2007, 4:42:06 PM
Arun Khopa writes:

> I think many people from industry will not have publications and
> conferences, or their own Scheme implementation, yet they may still be
> Scheme programmers, and have an interest in what happens to this
> language.

I could imagine something along these lines for someone from the
"industry":

----------------------------------------------------------------------
I work for a large company which uses Scheme for some of its
internal applications. Some of these are web based, others are
console driven and yet others are servers accessed by clients
written in other languages, mostly Perl and Java. I estimate a
total of 50 users.

Because of the different character of these applications and due
to the fact that we aim to support different architectures, we
deploy these applications using at least three different Scheme
implementations. We were already able to release one of our
libraries as Free Software. We also sent small patches for one of
these implementations.

So it's important to us to be able to write portable Scheme code.
Also issues like Unicode are important when inter-operating with
other languages. Some of the decisions regarding R6RS will
certainly influence the choice of programming language used for
future projects in our company. We are even considering using
Scheme for (parts of) a commercial product.
----------------------------------------------------------------------

I wrote this quickly. As you probably see, English is not my mother
tongue either but I hope that the text above makes sense (it's mostly
fiction, BTW).

> This "statement" now looks more like an application to be permitted to
> vote. I think such a thing may be too elitist. I think that normal
> people who simply use Scheme to achieve some task should be allowed to
> vote.

"Normal people" using Scheme should rely on "their" implementors to
defend their interests.

--
Emílio C. Lopes
Munich, Germany

"Ich leb und weiß nit wie lang, ich stirb und weiß nit wann, ich fahr
und weiß nit wohin, mich wundert, dass ich fröhlich bin!"
(Martinus von Biberach)
[I live and know not how long; I die and know not when; I travel and
know not where; I marvel that I am cheerful.]

Ben Goetter

May 18, 2007, 5:28:58 PM
Tossed on the newsgroup only as proof of concept.

"Your Statement of Interest [...] must actually address the question of
what your interest is in the Scheme standard. [...] It is not our intent
to run an essay competition here"

WHAT I DID ON MY SUMMER VACATION
BY BEN GOETTER
AGE NINE (AND A HALF)
CRESTLINE ELEMENTARY SCHOOL

I have been writing Scheme programs since 1985, back in the day when
LETREC was called LABELS; twenty-two years later, Scheme remains my
preferred medium for expressing an algorithm. Today I am
intermittently active in the community as the author of Pocket Scheme,
a Scheme programming environment hosted on handheld mobile devices
running Microsoft operating systems.

My primary interest in the proposed Scheme standard is that it
preserve the historic Scheme aesthetic of symmetry, regularity, and
transparency that make the language pleasant to use, and that its
implementation remain practical on contemporary and near-future
portable computing platforms. These platforms are known for severe
resource constraints (immediate memory, persistent storage,
computation cycles, electrical power capacity, and network bandwidth),
and input and output models other than the 7-bit ASCII TTY of the
traditional REPL.

By my measure, then, the three most important developments in the new
standard are its namespace management facilities (under which I
subsume the core-library split), the standardization of Unicode
character and string data, and the formal differentiation between
character-oriented and binary-oriented I/O.

Arun....@gmail.com

May 18, 2007, 5:31:32 PM
> "Normal people" using Scheme should rely on "their" implementors to
> defend their interests.

There are maybe 20-30 implementors of popular implementations, but
thousands of users doing practical things with each implementation.
Each implementor carries one (1) vote, just like any guy from some
university who only cares about teaching his students algorithms
(and/or writing papers). See the problem? If implementors' votes were
weighted by the size of their communities (which is of course
impossible), or if only implementors voted, then I would agree with
you.

Arun K.

Ray Blaak

May 18, 2007, 5:52:02 PM
Arun....@gmail.com writes:
> Guys, thanks for the statements, and congratulations for your
> achievements. I guess with such resumes, it's easy to write 150 words.

It is easy with and without such resumes. The examples show just how short 150
words actually are.

Give a blurb on why Scheme is important to you, give a blurb on your
problem/support rationale, and you are done.

> This "statement" now looks more like an application to be permitted to
> vote. I think such a thing may be too elitist.

You are making too much of a problem out of this. All the 150-word
requirement does is require you to put some modest thought into a
reasoned explanation of your position.

It's really just a practical quality filter.

--
Cheers,
Ray Blaak
rAYb...@STRIPCAPStelus.net

The Rhythm is around me,
The Rhythm has control.
The Rhythm is inside me,
The Rhythm has my soul.

Arun....@gmail.com

May 18, 2007, 7:06:10 PM
> WHAT I DID ON MY SUMMER VACATION
> BY BEN GOETTER
> AGE NINE (AND A HALF)
> CRESTLINE ELEMENTARY SCHOOL

Wow. You REALLY have too much time on your hands...

Tom Lord

May 18, 2007, 10:43:14 PM
(
;; The email address supplied here will be used for all future
;; correspondence with you, but it will not be published:
(email-address "lo...@emf.net")

;; Your full name:
(full-name "Thomas Lord")

;; Your country, region, city, etc.:
(geographic-location "Berkeley, CA, USA")

;; The next three entries are optional. You may comment out (or
;; delete) any of them that you do not wish to supply. The
;; public-email-address will be published, so we won't be able to
;; stop spammers from harvesting it.
;; (affiliation "The Knights Who Say \"Ni!\"")
(public-email-address "lo...@emf.net")
;; (web-page-url "http://www.example.org/~fred/")

;; Please supply a statement declaring what your stake is in the
;; outcome of the Scheme standardization process. Your statement
;; must be original, IT MUST BE AT LEAST 150 WORDS LONG, and it must
;; actually address the question of what your interest is in the
;; Scheme standard. Be aware that we will read your statement, and
;; if we think you have seriously missed the mark, we will ask you to
;; submit another one. It is not our intent to run an essay
;; competition here, we are just looking for evidence that you're
;; taking this seriously. (On the other hand, what you write here
;; will become part of the permanent record of the Scheme language,
;; so this really would be an excellent place to pull out your best
;; argument for why Scheme is important!) Note that the example text
;; below is both unoriginal, and too short.
(statement-of-interest
"I am interested in Scheme in general, R6RS in particular,
because I am sometimes a Scheme implementor and sometimes a
Scheme user. I helped to create GNU Guile, writing most of the
code that first turned SCM into Guile. I had one false start
on a new implementation with the Pika Scheme project. I am
currently making a second attempt at a from-scratch implementation.
I use Scheme when I can but am frequently frustrated by the
functionality of the available implementations and libraries.
I believe that R6RS can be an important milestone for the
Scheme community but I also believe that the current draft is
simply wrong-headed in a number of ways.

I have a very simple-minded but, I believe, useful view of
what Scheme is and what The Report should say about it:

Scheme is a design pattern for programming languages. The
pattern applies in a context where we just assume, a priori,
that we have a lisp-style run time system (a tracing GC, cons
pairs, symbols, well thought-out numeric types, etc.). The
design problem is to implement, on that run-time system,
with as little effort as possible, as powerful a lisp as we
can -- and -- the same language we implement should also
be amenable to static analysis in support of highly optimizing
compilation.

You can describe the Scheme-pattern's nature in lots of different
ways:
it's a three-register machine (code, environment, dynamic context);
it's reducible to a minimalist applicative-order-evaluation lambda
calculus; it's a more or less direct translation of an idealized
fixpoint semantics for an algol-class language; it's, amazingly
enough, more or less the exact same EVAL code as an Actor language
that was
arrived at via a completely different thought process; .... It is
simultaneously, and *simply*, all of these things at once.

And that's why Scheme is important in history. In isolation,
it might have just been a way to simplify the code in a compiler
(the rabbit thesis) or a hack for making tiny-yet-powerful
interpreters (e.g., SIOD). But it was quickly recognized that
the same insights, the same code, reflected some of the (still)
best thinking in programming language semantics, and was being
independently invented for very different reasons (Actors).
Not only was the design problem solution effective: it seemed
(and arguably seems) to be some a priori aspect of Nature.
The Scheme solution exists independently of its multiple re-inventors
who each arrived at the same thing but from such different
directions.

Closures as the be-all-end-all of data structures; continuations,
lambda, and application as the be-all-end-all of control; and
a unified approach to syntactic abstraction: Scheme is not
a programming language but a foundation on which programming languages
may be built. It is a reframing of the programming language
problem that says brand doesn't matter: the liberty of continuations,
lambda and apply, and syntactic abstraction are all that matter.
Given those freedoms, anyone may have and use whatever language
they like (cf. Lewis Carroll's Humpty Dumpty).

R6RS, in my view, has a simple job. It needs to relax constraints
on syntax and extend the reach of syntactic abstraction so that
implementors may compete and discuss various ways to permit
Unicode text or homogenous array constants in source. It needs
to adopt a tighter, more aesthetic, more liberal definition of
some basic data types (characters first and foremost) so that
implementors may compete and discuss various ways to map Unicode
into implementations, etc. It needs to introduce *just enough* new
mechanism to support separately compilable yet interpretively
useful modules. It needs to give a mechanism for adding new
disjoint types. And then it needs to stop. Just there. That's
*all* it needs to do.

R6RS does *not* need to be split into two parts, vastly extended
in length, and to try to usurp all future research and competition
to create a standard library. Scheme's role is market maker,
not king."))


MJ Ray

May 19, 2007, 5:12:10 AM
Abdulaziz Ghuloum <aghu...@gmail.com> writes:
> The voting rules were published months ago.
> [...] Anybody who had a solid disagreement with the process
> and cared enough should have given some input on that mailing list.

(load "locked filing cabinet in basement behind beware of the leopard")

Now, apart from time travel, is there anything users can do to try to
correct this brain fart?

Regards,
--
MJ Ray - see/vidu http://mjr.towers.org.uk/email.html
Experienced webmaster-developers for hire http://www.ttllp.co.uk/
Also: statistician, sysadmin, online shop builder, workers co-op.
Writing on koha, debian, sat TV, Kewstoke http://mjr.towers.org.uk/

Emilio Lopes

May 19, 2007, 5:36:23 AM
Arun Khopa writes:

>> "Normal people" using Scheme should rely on "their" implementors to
>> defend their interests.

> There's maybe 20-30 implementors of popular implementations, but
> thousands of users doing practical things with each implementation.
> Each implementor carries one (1) vote, just like any guy from some
> university who only cares about teaching his students algorithms (and/
> or writing papers). See some problem?

No, sorry. It seems to me that the people who have a problem with the
ratification process are mostly the ones who have little to do with
Scheme.

Abdulaziz Ghuloum

May 19, 2007, 9:42:26 AM
On May 19, 5:12 am, MJ Ray <m...@phonecoop.coop> wrote:

> Abdulaziz Ghuloum <aghul...@gmail.com> write:
>
> > The voting rules were published months ago.
> > [...] Anybody who had a solid disagreement with the process
> > and cared enough should have given some input on that mailing list.
>
> (load "locked filing cabinet in basement behind beware of the leopard")
>
> Now, apart from time travel, is there anything users can do to try to
> correct this brain fart?

I don't know (I'm not Bawden/Steele/Wand), but you can try. You see,
if one is not involved in the r6rs process (whether it's the r6rs
formal comments, the ratification discussion, or the actual voting),
then one is out of the loop. The different committees *are* asking
the community to get involved by making every process public and
transparent and by posting announcements of upcoming events and
deadlines.

Within a few weeks, the registration period will be over and then a
voting period will begin. I bet you that some will come, when the
voting starts, and complain about why they cannot vote and how they
didn't know that they had to register.

Aziz,,,

bunn...@gmail.com

May 21, 2007, 1:56:54 AM

((email-address "fe...@call-with-current-continuation.org")
(full-name "Felix Winkelmann")
(geographic-location "Göttingen, Germany")
(statement-of-interest
"I'm the implementor and lead-maintainer of CHICKEN, a popular
Scheme implementation. I do not support R6RS for the following
reasons: it introduces arbitrary and redundant language
constructs into a language that has been designed from the start
to be minimalistic. The minimalism in Scheme was and still is the
base which distinguished it from all other languages and makes it
an excellent experimentation platform and language design tool.
Minimalism is the very essence of Scheme: to express as much as
possible with as little as possible. Designing such a thing is
naturally extremely hard, which led the original authors of the
Scheme report (quite wisely) to decide on only making changes
with unanimous consent. R6RS does away with that process and
tries to remodel Scheme as some sort of mainstreamish language
and so completely removes itself from the roots that make Scheme
what it is. R6RS will improve cross-implementation portability,
not by deliberate language design, but by artificially
reducing the number of conforming implementations. R6RS has not
adopted common and widely used SRFIs, yet includes less used or
even obscure fringe SRFIs (mostly by one author who, incidentally,
is a member of the R6RS committee). In my opinion, R6RS shows
disturbing signs of dilettantism in the selection of language
features added to the report: arbitrariness, unnecessary syntax
constructs where procedures would do, and completely redundant
language features.

It sucks, folks - face it.

DEAR SCHEME USERS: DO NOT ADOPT R6RS! SEVERAL MAJOR SCHEME
IMPLEMENTATIONS HAVE DECLARED NOT TO SUPPORT IT, WHILE NEARLY
ALL SCHEME IMPLEMENTATIONS WILL STAY R5RS COMPATIBLE OR PROVIDE
R5RS COMPATIBLE MODES IN THE FUTURE.

DEAR SCHEME IMPLEMENTORS: DO NOT ADOPT R6RS! DON'T WASTE YOUR
TIME RUNNING AFTER A BROKEN STANDARD WHILE YOU COULD HELP YOUR
USERS MORE BY ADDRESSING REAL PROBLEMS.
"))

wayo.c...@gmail.com

May 21, 2007, 2:24:55 AM
On May 21, 12:56 am, "bunny...@yoho-gmail.com" <bunny...@gmail.com>
wrote:

> R6RS has not
> adapted common and widely used SRFIs, yet includes less used or
> even obscure fringe SRFIs (mostly by one author, who incidentally,
> is member of the R6RS committee).

Michael Sperber?

> SEVERAL MAJOR SCHEME IMPLEMENTATIONS HAVE DECLARED NOT TO SUPPORT IT,

I've seen the implementors of these declare that they're against R6RS:

Chicken
SCM
Stalin

I haven't seen explicit rejection by the authors of these, but I'm
guessing they may not follow:

Gambit-C
Bigloo

I mentioned Gambit because Feeley resigned from the project.

Anyone know of any other dissenters?

I guess the "Scheme Unterground" proved to be Bolsheviks and the real
resistance is coming from the Anarchists. :-)

We'll see these go R6RS:

MzScheme
Chez Scheme
Larceny
Scheme48

I see R6RS as basically a payload to get the PLT research in module
systems into a standard.

Ed

Ray Dillinger

May 21, 2007, 3:29:27 AM
wayo.c...@gmail.com wrote:

> I see R6RS as basically a payload to get the PLT research in module
> systems into a standard.

I wanted to like R6RS. I really did. But I think I have to agree
that it's walking away from the "minimalist" stance that has up
to now defined scheme, and, it's not doing it in the best way.

PLT module systems are probably the best module systems we can
possibly get widespread agreement on - although I still don't
like them because they cannot support fully separate compilation -
you're still required to read other files (the ones containing
macro definitions, and the ones containing procedures that
macroexpanders call) in order to compile any file containing a
macro usage.

The character set extension to unicode is borked too. Suffice to
say, the standard tries to standardize far too much. The necessary
and appropriate thing to do would have been to drop requirements
that made it impossible to conform to Unicode and do it well, add
binary I/O to allow people to do binary work even if their character
codes aren't the length and values they assumed their characters
would be, and then shut up. R5.93RS proposes to drop case-insensitive
identifiers and to introduce real binary I/O so people don't have to
"fake" it with code that won't work if characters are a different
width, encoding, or endianness than expected, which is good. But it
then goes on to say entirely too much about scheme the language
supporting a particular character standard in a particular way,
instead of leaving implementors free to make design decisions
appropriate to whatever environments their systems are needed for.

There are many embedded systems where Unicode never reached,
including some with nonstandard character sets all their own.
There are many regions in the east and middle east that are
better served by other character standards given that unicode
is widely seen as having borked support for their characters.
There are many entities in extant scheme's character sets such
as MIT/GNU scheme that properly describe keystrokes rather than
characters (Alt-F1, control-Home, etc). The vanishment of these
non-unicode entities, if implemented, will be a flag day for
quite a bit of existing code (including, I believe, the Edwin
editor). Now anyone working with these environments will not
be allowed to make a scheme that's compatible with them.


For these reasons and others - as it stands, I'd vote against it.

Bear

Matthias Blume

May 21, 2007, 10:16:31 AM
Ray Dillinger <be...@sonic.net> writes:

> PLT module systems are probably the best module systems we can
> possibly get widespread agreement on - although I still don't
> like them because they cannot support fully separate compilation -
> you're still required to read other files (the ones containing
> macro definitions, and the ones containing procedures that
> macroexpanders call) in order to compile any file containing a
> macro usage.

So, do you have any suggestion as to how a better module system would
be able to lift those restrictions? I don't think that this is even
theoretically possible unless you redefine "compilation" in such a way
that all its usual meaning is lost.

Besides, why do you want "full separate compilation" in the first
place?

> There are many entities in extant scheme's character sets such
> as MIT/GNU scheme that properly describe keystrokes rather than
> characters (Alt-F1, control-Home, etc). The vanishment of these
> non-unicode entities, if implemented, will be a flag day for
> quite a bit of existing code (including, I believe, the Edwin
> editor). Now anyone working with these environments will not
> be allowed to make a scheme that's compatible with them.

Of course they are allowed to! They just have to fix the
implementation of these environments. What you describe in the
paragraph above seems to be a bunch of hacks that deserve to be fixed
anyway.

> For these reasons and others - as it stands, I'd vote against it.

I also don't like R6RS, but for other reasons. I have stated these
many times in this forum. Primary examples are the continued practice
of not properly specifying the order of evaluation or the continued
inclusion of ugly and at the same time redundant features such as
call-with-values, or the fact that the definitions of EQV? and LAMBDA
(together) break value-substitutability, effectively making LAMBDA
into an operation that has a side effect. (For the record, I do
realize that there are plenty of people out there for whom reversing
these decisions would constitute reason to reject R6RS. I simply
disagree with them on these matters.)

If I were to vote (I won't), I'd vote against it, too.

Regards,
Matthias

Ray Dillinger

May 21, 2007, 12:49:07 PM
Matthias Blume wrote:
> Ray Dillinger <be...@sonic.net> writes:
>
>
>>PLT module systems are probably the best module systems we can
>>possibly get widespread agreement on - although I still don't
>>like them because they cannot support fully separate compilation -

> So, do you have any suggestion as to how a better module system would
> be able to lift those restrictions? I don't think that this is even
> theoretically possible unless you redefine "compilation" in such a way
> that all its usual meaning is lost.

There's always the way Stalin did it; macros are file-local.
If I want repeatable and testable, non-environment-dependent
results under R6RS, that's the way I'll continue to do it.

I also favor another approach, pointed out a month or so
ago on this forum: there's an implementation (Chez?) that
compiles the macroexpanders themselves; they run (once)
the first time the macro is called from each call site,
install a code vector to substitute for the macro call
at the call site, and then "jump" back to it. That also
gives repeatable, testable, non-implementation-dependent
results. But yes, that's mixing compilation with execution.

> Besides, why do you want "full separate compilation" in the first
> place?

Mostly for reasons involving debugging and testing (Like,
compiling the same file should produce the same code no matter
which machine does it). But also for scalability, enabling
projects with many *millions* of lines of code without making
bottlenecks during (distributed) compilation.

>>There are many entities in extant scheme's character sets such
>>as MIT/GNU scheme that properly describe keystrokes rather than
>>characters (Alt-F1, control-Home, etc). The vanishment of these
>>non-unicode entities, if implemented, will be a flag day for
>>quite a bit of existing code (including, I believe, the Edwin
>>editor). Now anyone working with these environments will not
>>be allowed to make a scheme that's compatible with them.

> Of course they are allowed to! They just have to fix the
> implementation of these environments.

Well, thank you for volunteering to fix Edwin (and,
implicitly, a lot of other code as well) when its
character and keystroke binding structures explode under
R6RS. The community will really appreciate it.

But for the other systems, I do not think "compatible"
means what you think it means. If I'm writing for a
7-bit ASCII embedded system, or an IBM "dinosaur iron"
EBCDIC system, or a set-top box that uses the Z80 or
other microcontroller-defined character set, or
interfacing with someone's antique Baudot 5-bit shift-coded
teletype terminal, then "compatible" does not mean altering
entities I have NO CONTROL OVER to use Unicode.

If you're seriously going, "no characters should ever be
transmitted, written or read anywhere except in this single
encoding," that's a religious issue and does not belong in
a programming language spec. Pursue your jihad elsewhere,
please.

> I also don't like R6RS, but for other reasons. I have stated these
> many times in this forum. Primary examples are the continued practice
> of not properly specifying the order of evaluation

Oh, right. Unspecified order of evaluation is more syntactic
salt to punish anyone who uses a disapproved programming style
and force them to jump through hoops. I agree with you here;
specified-order should be the default because it works for any
programming style. Unspecified-order should be supported with
alternate structures, the way let* and friends now support
specified order, specifically so FP systems can use them as
a speed tweak if the optimization levels are set high enough
and/or the compiler writer bothered to implement analysis that
will actually speed up execution by twiddling eval orders.

> or the continued
> inclusion of ugly and at the same time redundant features such as
> call-with-values

I wouldn't be so quick to throw this one out, as it completes a
symmetry between arguments and return values that has always been
implicit in a multi-argument lambda calculus. But formal lambda
calculus has always been a single argument and single return
value. According to that model, we would have to regard our
multiple arguments as syntactic sugar for passing a list as
an argument, and therefore multiple returns (as well as zero or
one return) as syntactic sugar for getting back a list as a
return value. So, if we take lambda calculus as our model,
then clearly call-with-values is incorrect because the return
isn't a list structure.
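The argument/return symmetry Ray describes is easiest to see in a small example (plain Scheme; the procedure names are made up for illustration):

```scheme
;; call-with-values feeds the multiple return values of its first
;; argument (a thunk) to its second argument as ordinary arguments.
(define (div-and-mod a b)
  (values (quotient a b) (remainder a b)))

(call-with-values
  (lambda () (div-and-mod 17 5))
  (lambda (q r) (list q r)))              ; => (3 2)

;; Under the "multiple returns are sugar for a list" reading of the
;; lambda calculus, the same symmetry falls out of apply on a list:
(define (div-and-mod* a b)
  (list (quotient a b) (remainder a b)))

(apply (lambda (q r) (list q r))
       (div-and-mod* 17 5))               ; => (3 2)
```

The second form is exactly the list-sugar interpretation; call-with-values avoids allocating the intermediate list, which is one argument for keeping it.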

Bear

Pascal Costanza

May 21, 2007, 2:41:56 PM
Ray Dillinger wrote:
> Matthias Blume wrote:
>> Ray Dillinger <be...@sonic.net> writes:
>>
>>
>>> PLT module systems are probably the best module systems we can
>>> possibly get widespread agreement on - although I still don't
>>> like them because they cannot support fully separate compilation -
>
>> So, do you have any suggestion as to how a better module system would
>> be able to lift those restrictions? I don't think that this is even
>> theoretically possible unless you redefine "compilation" in such a way
>> that all its usual meaning is lost.
>
> There's always the way Stalin did it; macros are file-local.
> If I want repeatable and testable, non-environment-dependent
> results under R6RS, that's the way I'll continue to do it.
>
> I also favor another approach, pointed out a month or so
> ago on this forum: there's an implementation (Chez?) that
> compiles the macroexpanders themselves; they run (once)
> the first time the macro is called from each call site,
> install a code vector to substitute for the macro call
> at the call site, and then "jump" back to it. That also
> gives repeatable, testable, non-implementation-dependent
> results. But yes, that's mixing compilation with execution.

It's a pity that the Lisp/Scheme world hasn't discovered just-in-time
compilation or even dynamic compilation yet.

According to http://www.ics.uci.edu/~franz/Site/pubs-pdf/C05.pdf the
best distribution format for code, such that just-in-time compilers work
well, is in the form of abstract syntax trees. So Lisp/Scheme source
code would actually already be better than any bytecode/machine-code format.

Matthias Blume

May 21, 2007, 2:53:43 PM
Ray Dillinger <be...@sonic.net> writes:

> There's always the way Stalin did it; macros are file-local.

Of course, you can do that with R6RS. Just don't export macros from
your modules.

> If I want repeatable and testable, non-environment-dependent
> results under R6RS, that's the way I'll continue to do it.

So, what's the issue then? If you want fully separate compilation,
you can have it. If you want to rely on features that don't work with
fully separate compilation, then you can still do it (but you have to
give up on the latter). That's the right way to set things up,
IMNSHO.
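For concreteness, here is a minimal sketch of the arrangement Matthias describes; the library name and bindings are invented for illustration, and only a procedure crosses the module boundary:

```scheme
;; Sketch of an R6RS library whose macro stays file-local.
(library (example counter)
  (export make-counter)              ; no macros exported
  (import (rnrs))

  ;; Used only inside this library; a client can be compiled
  ;; without ever seeing (or expanding) it.
  (define-syntax incr!
    (syntax-rules ()
      ((_ x) (set! x (+ x 1)))))

  (define (make-counter)
    (let ((n 0))
      (lambda () (incr! n) n))))
```

A client importing (example counter) depends only on make-counter's run-time behavior, which is what leaves fully separate compilation of the client possible.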

> I also favor another approach, pointed out a month or so
> ago on this forum: there's an implementation (Chez?) that
> compiles the macroexpanders themselves; they run (once)
> the first time the macro is called from each call site,
> install a code vector to substitute for the macro call
> at the call site, and then "jump" back to it. That also
> gives repeatable, testable, non-implementation-dependent
> results. But yes, that's mixing compilation with execution.

Indeed, this just defers compilation until runtime -- in which case
one cannot possibly speak of separate /compilation/, at least not the
way I think of it.

>> Besides, why do you want "full separate compilation" in the first
>> place?
>
> Mostly for reasons involving debugging and testing (Like,
> compiling the same file should produce the same code no matter
> which machine does it).

Since you can get this effect if you don't take advantage of those
features that inhibit fully separate compilation, I don't think there
is any issue.

> But also for scalability, enabling
> projects with many *millions* of lines of code without making
> bottlenecks during (distributed) compilation.

SML works the same way, and being the author of SML/NJ's compilation
manager -- which precisely deals with these sorts of issues -- I can
attest to this being pretty much a non-issue in practice.

>>>There are many entities in extant scheme's character sets such
>>>as MIT/GNU scheme that properly describe keystrokes rather than
>>>characters (Alt-F1, control-Home, etc). The vanishment of these
>>>non-unicode entities, if implemented, will be a flag day for
>>>quite a bit of existing code (including, I believe, the Edwin
>>>editor). Now anyone working with these environments will not
>>>be allowed to make a scheme that's compatible with them.
>
>> Of course they are allowed to! They just have to fix the
>> implementation of these environments.
>
> Well, thank you for volunteering to fix Edwin (and,
> implicitly, a lot of other code as well) when its
> character and keystroke binding structures explode under
> R6RS. The community will really appreciate it.

Of course, I won't do any such thing. But how hard can it be to
create a backward-compatibility layer upon which Edwin and friends can
sit?

> But for the other systems, I do not think "compatible"
> means what you think it means. If I'm writing for a
> 7-bit ASCII embedded system, or an IBM "dinosaur iron"
> EBCDIC system, or a set-top box that uses the Z80 or
> other microcontroller-defined character set, or
> interfacing with someone's antique Baudot 5-bit shift-coded
> teletype terminal, then "compatible" does not mean altering
> entities I have NO CONTROL OVER to use Unicode.

In such situations you can always use special-purpose I/O libraries
for legacy support.

> If you're seriously going, "no characters should ever be
> transmitted, written or read anywhere except in this single
> encoding," that's a religious issue and does not belong in
> a programming language spec.

I wasn't "going" that way.

> Pursue your jihad elsewhere, please.

Goodwin's law (21st century version).

William D Clinger

May 21, 2007, 4:02:15 PM
Pascal Costanza wrote:
> It's a pity that the Lisp/Scheme world hasn't discovered just-in-time
> compilation or even dynamic compilation yet.

MacScheme, Macintosh Common Lisp, Chez Scheme, Larceny,
and (on x86 architectures) MzScheme have all used dynamic or
just-in-time compilation. I think Kawa and Bigloo rely on a JVM
JIT when running on a JVM, and there are probably many others
I don't know about.

Will

Pascal Costanza

May 21, 2007, 5:21:35 PM

OK, I stand corrected.


Thanks,

jos...@corporate-world.lisp.de

May 21, 2007, 5:36:18 PM
On May 21, 10:02 pm, William D Clinger <cesur...@yahoo.com> wrote:
> Pascal Costanza wrote:
> > It's a pity that the Lisp/Scheme world hasn't discovered just-in-time

> > compilation or even dynamic compilation yet.
>
> MacScheme, Macintosh Common Lisp, Chez Scheme, Larceny,

> and (on x86 architectures) MzScheme have all used dynamic or
> just-in-time compilation. I think Kawa and Bigloo rely on a JVM
> JIT when running on a JVM, and there are probably many others
> I don't know about.
>
> Will

I think JIT is something else.

Macintosh Common Lisp just compiled everything. But that's
not JIT. It used 'incremental compilation'.
Incremental compilers are preferred in the Lisp world
mostly. Incremental compilers usually compile directly
to some machine or byte code. But there is no further
(on demand) compilation of this code then.

JIT compilation would happen, for example, if the compiled code
were shipped in a machine-independent byte-code format and
then compiled piece by piece into machine code on some
machines. I'd guess this has been used in some
Lisp, but the more widely used ones I know don't do this.
Will, does any Lisp to your knowledge do this?

There are some uses of 'dynamic compilation' in Lisp, though.

Some Lisp systems use a compiler at runtime to speed up CLOS.
Lisp Machines had similar things for Flavors. One could
prefill caches and then save images for example.
MCL also did prefilling of caches. Instead of compiling
an application and saving the image, one would run
the app to fill the code caches and then save the image.
Lisp Machines did also all kinds of fancy things when
saving an image, like 'sorting' the heap by type to
improve locality.

I would guess that people have done profiling of code at
runtime and feeding the input back into the compiler in Lisp before.
There is also some idea of switching data representation
at runtime based on usage ('tables' on the Lisp Machine).
Then there is a vast amount of research done on
genetic algorithms.

Ray Dillinger

May 21, 2007, 8:49:24 PM
Matthias Blume wrote:
> Ray Dillinger <be...@sonic.net> writes:


>>I also favor another approach .... mixing compilation with execution.

> Indeed, this just defers compilation until runtime -- in which case
> one cannot possibly speak of separate /compilation/, at least not the
> way I think of it.

Pfft. This is part of the Lisp family. It's SUPPOSED to be
a dynamic language. Why cripple the semantics to force an
artificial separation of phases? Separate compilation *can*
mean compiling what you can see and leaving the rest for
linktime or runtime. But it should not mean "you can't
do that," whatever "that" happens to be.

>>... I do not think "compatible"
>>means what you think it means.... "compatible" does not
>>mean altering entities I have NO CONTROL OVER to use
>>Unicode.

> In such situations you can always use a special-purpose I/O libraries
> for legacy support.

<descent into sarcasm begins here - you may stop reading
if you like.>

Oh, right, and have some alternate data structure for holding
anything that's not blessed and approved by the Unicode
consortium because it's not really a "character," then?
And why is the simple, easy way of just using a scheme
implementation that's compatible with the rest of
the system and has the same character repertoire as the
rest of the system forbidden? Seriously, why should it
be?

> Goodwin's law (21st century version).

Hello! His name was Godwin, not Goodwin! Cripes, you're
never going to have any chance to discredit anybody if
you don't at least get the NAME right! And purely aside
from that, would "crusade" be any better?

Bear

William D Clinger

May 21, 2007, 9:35:37 PM
jos...@corporate-world.lisp.de wrote:
> I think JIT is something else.
>
> Macintosh Common Lisp just compiled everything. But that's
> not JIT. It used 'incremental compilation'.

By your definition, which requires compilation "on
demand", Microsoft's Common Language Runtime does
not use JIT either. A Google search on "Common
Language Runtime" and JIT will show that tens of
thousands of people disagree with your definition.

Pascal Costanza's point, as I understood it, was that


> According to http://www.ics.uci.edu/~franz/Site/pubs-pdf/C05.pdf the
> best distribution format for code, such that just-in-time compilers work
> well, is in the form of abstract syntax trees. So Lisp/Scheme source
> code would actually already be better than any bytecode/machine-code format.

In replying to Pascal, I accepted his assertion that
abstract syntax trees are a good distribution format
for code to be JITted, mainly because it's true.

jos...@corporate-world.lisp.de continued:


> Will, does any Lisp to your knowledge does this?

Yes. Both Larceny and Common Larceny can do that,
for example, but we generally find source code to
be more convenient, confirming Pascal's point.

> I would guess that people have done profiling of code at
> runtime and feeding the input back into the compiler in Lisp before.

That too has been done in at least one of the systems
I mentioned.

Will

Rainer Joswig

May 21, 2007, 11:52:56 PM
In article <1179797737....@n59g2000hsh.googlegroups.com>,

William D Clinger <cesu...@yahoo.com> wrote:

> jos...@corporate-world.lisp.de wrote:
> > I think JIT is something else.
> >
> > Macintosh Common Lisp just compiled everything. But that's
> > not JIT. It used 'incremental compilation'.
>
> By your definition, which requires compilation "on
> demand",

JIT means compilation on demand at runtime. Macintosh Common
Lisp does not do compilation on demand at runtime.
It simply compiles everything either batch or incremental.
If you type in code it gets immediately compiled when
you hand it over to MCL. There is no intermediate representation
in form of byte-codes or similar formats like Lisp expressions.
The compiler is one-step from Lisp source to machine code.

> Microsoft's Common Language Runtime does
> not use JIT either.

Are you sure? I thought it does.

> A Google search on "Common
> Language Runtime", JIT will show that tens of
> thousands of people disagree with your definition.

"Google search"???? You are kidding?

Abdulaziz Ghuloum

May 22, 2007, 1:01:20 AM
On May 21, 11:52 pm, Rainer Joswig <jos...@lispmachine.de> wrote:

> JIT means compilation on demand at runtime. Macintosh Common
> Lisp does not do compilation on demand at runtime.
> It simply compiles everything either batch or incremental.
> If you type in code it gets immediately compiled when
> you hand it over to MCL. There is no intermediate representation
> in form of byte-codes or similar formats like Lisp expressions.
> The compiler is one-step from Lisp source to machine code.

In my compiler, loading a library (R6RS) involves expanding it to an
intermediate core form and delaying the generation of machine code
until the library is later invoked (e.g. one of the bindings it
exports is needed). This would qualify as JIT by your definition, no?
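The load-now/compile-later arrangement Aziz describes can be caricatured with delay and force; this is a toy sketch, not his actual implementation (`expand-to-core` is a stand-in for the real expander, and the eval environment is implementation-dependent):

```scheme
;; Toy sketch: expand a library body at load time, but defer
;; evaluation/code generation until the library is first invoked.
(define (expand-to-core source)
  source)                                  ; stand-in for the expander

(define (load-library source)
  (let ((core (expand-to-core source)))    ; expansion happens now
    (delay (eval core (interaction-environment))))) ; codegen deferred

(define lib (load-library '(+ 1 2)))
;; No code for the body has been generated yet; the first use of an
;; exported binding would force it:
(force lib)                                ; => 3
```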

Aziz,,,

Rainer Joswig

May 22, 2007, 1:13:53 AM
In article <1179810080....@o5g2000hsb.googlegroups.com>,
Abdulaziz Ghuloum <aghu...@gmail.com> wrote:

Yes, the way you describe it, it sounds like it is doing some
form of JIT compilation.

Abdulaziz Ghuloum

May 22, 2007, 1:57:01 AM
On May 22, 1:13 am, Rainer Joswig <jos...@lispmachine.de> wrote:

> Yes, the way you describe it, it sounds like it is doing some
> form of JIT compilation.

But this is an artificial distinction. For example, changing one line
of code from
(lambda () (eval-core-code expanded-code))
to
(eval-core-code `(lambda () ,expanded-code))
would change my compiler from a JIT compiler to an incremental
compiler. This might be great for marketing, but technically, (eval
`(lambda () ,E)) is the same as (lambda () (eval E)).
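The two phrasings differ only in *when* eval runs; a sketch of both, assuming an implementation-dependent interaction-environment:

```scheme
(define E '(+ 1 2))

;; "JIT" phrasing: eval runs each time the thunk is called,
;; i.e. compilation happens on demand at run time.
(define jit-style
  (lambda () (eval E (interaction-environment))))

;; "Incremental" phrasing: eval runs once, up front, yielding
;; an already-compiled thunk.
(define incremental-style
  (eval `(lambda () ,E) (interaction-environment)))

(jit-style)           ; => 3
(incremental-style)   ; => 3
```

Both return the same value; only the moment of compilation moves, which is Aziz's point about the distinction being largely artificial.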

Aziz,,,

Rainer Joswig

May 22, 2007, 2:23:19 AM
In article <1179813420.9...@w5g2000hsg.googlegroups.com>,
Abdulaziz Ghuloum <aghu...@gmail.com> wrote:

'Just in time' just means that a resource is produced and provided
when it is needed. Not before. Resources are not stored before
they are needed. 'Just in time' is there to prevent creation
of stores of precomputed resources. Resources could be
never needed or could be needed much later in a process.

If any software system precompiles code before it is
demanded by a running program, this obviously can't be
'just in time'.

In a JIT scheme (not the programming language ;-) meant here),
a program starts to run before all its pieces are
available in executable format. The demand for executable code
fragments (whole functions, parts of functions, ...)
by the 'processor' then will trigger the compilation of those.

Compare this with a manufacturing process, where the
parts are delivered when they are needed in the process.
If the manufacturing process would only start when
all parts are already available, this would not be
'just in time'.

Pascal Costanza

May 22, 2007, 3:44:58 AM
Rainer Joswig wrote:
> In article <1179797737....@n59g2000hsh.googlegroups.com>,
> William D Clinger <cesu...@yahoo.com> wrote:
>
>> jos...@corporate-world.lisp.de wrote:
>>> I think JIT is something else.
>>>
>>> Macintosh Common Lisp just compiled everything. But that's
>>> not JIT. It used 'incremental compilation'.
>> By your definition, which requires compilation "on
>> demand",
>
> JIT means compilation on demand at runtime. Macintosh Common
> Lisp does not do compilation on demand at runtime.
> It simply compiles everything either batch or incremental.
> If you type in code it gets immediately compiled when
> you hand it over to MCL. There is no intermediate representation
> in form of byte-codes or similar formats like Lisp expressions.
> The compiler is one-step from Lisp source to machine code.
>
>> Microsoft's Common Language Runtime does
>> not use JIT either.
>
> Are you sure? I thought it does.

According to http://www.cs.ucsb.edu/~urs/oocsb/papers/HotChips.pdf
(described from a Java perspective):

"Just-In-Time Compilers
- translate portable bytecodes to machine code
- happens at runtime (on the fly)
- standard JITs: compile on method-by-method basis when method is first
invoked"

"HotSpot Compilation
- lazy compilation: only compile/optimize the parts that matter
- combine compiler with interpreter
- seamless transition between interpreted and compiled code as necessary"

[My understanding is that the latter is what is typically referred to as
dynamic compilation, but maybe there is a better term.]

The "method-by-method basis" is probably just one of many possible
strategies. I think it can be argued that MCL, for example, employs
just-in-time compilation, because as soon as a piece of Lisp source code
is loaded, it is compiled. Obviously, this holds for other Lisp/Scheme
implementations as well.

To a certain extent, CLOS employs dynamic compilation: As soon as new
methods are added to a generic function at runtime, this will cause
existing effective methods of that generic function to be discarded and
recompiled on demand. Theoretically, it is possible that effective
methods are only compiled as soon as it is known that they are hot
spots, but as far as I know, no implementation does that. The generic
function invocation protocol of the CLOS MOP is also too simplistic to
enable more interesting dynamic optimizations. (The CLOS derivatives for
Scheme don't fix this, as far as I can tell. Jonathan Bachrach has done
some work in that direction for Dylan, though.)

So to make my original statement more precise, I wonder whether existing
Lisp/Scheme systems employ dynamic compilation (according to the above
characterization) in a systematic way.

Rainer Joswig

May 22, 2007, 4:58:23 AM
In article <5bflbsF...@mid.individual.net>,
Pascal Costanza <p...@p-cos.net> wrote:

No, I'd say it is just incremental compilation.

What may be JIT compilation though is auto-loading Lisp source code
in combination with an incremental compiler. The executed
code has stubs that auto-load the definitions and lets them
compile then.

> To a certain extent, CLOS employs dynamic compilation: As soon as new
> methods are added to a generic function at runtime, this will cause
> existing effective methods of that generic function to be discarded and
> recompiled on demand. Theoretically, it is possible that effective
> methods are only compiled as soon as it is known that they are hot
> spots, but as far as I know, no implementation does that.

I think effective methods don't need a compiler if you
add/delete methods. You need
to invoke the compiler if you change the method combination.

Early PCL (Portable Common Loops) needed the compiler more often, IIRC.
One saw the compiler coming in at runtime whenever an effective
method was invoked the first time. CMUCL used
to do that for a long time, since it used PCL.

Flavors did that, too. There was a call to precompile things.

> The generic
> function invocation protocol of the CLOS MOP is also too simplistic to
> enable more interesting dynamic optimizations. (The CLOS derivatives for
> Scheme don't fix this, as far as I can tell. Jonathan Bachrach has done
> some work in that direction for Dylan, though.)
>
> So to make my original statement more precise, I wonder whether existing
> Lisp/Scheme systems employ dynamic compilation (according to the above
> characterization) in a systematic way.

Hmm.

>
> Pascal

--
http://lispm.dyndns.org

Matthias Blume

May 22, 2007, 8:52:02 AM
Ray Dillinger <be...@sonic.net> writes:

> Matthias Blume wrote:
>> Ray Dillinger <be...@sonic.net> writes:
>
>
>>>I also favor another approach .... mixing compilation with execution.
>
>> Indeed, this just defers compilation until runtime -- in which case
>> one cannot possibly speak of separate /compilation/, at least not the
>> way I think of it.
>
> Pfft. This is part of the Lisp family. It's SUPPOSED to be
> a dynamic language.

"Supposed to"? According to whom?

> Why cripple the semantics to force an
> artificial separation of phases?

Why cripple the semantics by not letting me do things that /naturally/
require a separation of phases?

> Separate compilation *can* mean compiling what you can see and
> leaving the rest for linktime or runtime.

But then that's not separate compilation. It is separate "let's wait
and see what we can compile now, leaving the possibility that we don't
compile anything until later when the whole program is available, and
nothing at all is separate".

> But it should not mean "you can't do that," whatever "that" happens to be.

But you are precisely saying that you can't have a Flatt-style module
system which lets you export macros from one module to another and get
a sensible notion of separate compilation. The "fully separate" part
is a red herring in my experience.

> Oh, right, and have some alternate data structure for holding
> anything that's not blessed and approved by the Unicode
> consortium because it's not really a "character," then?

No, you don't.

> And why is the simple, easy way of just using a scheme
> implementation that's compatible with the rest of
> the system and has the same character repertoire as the
> rest of the system forbidden? Seriously, why should it
> be?

It is not. It won't be "R6RS compliant", but in the kind of extreme
situation that you describe, why would you care?

>> Goodwin's law (21st century version).
>
> Hello! His name was Godwin, not Goodwin! Cripes, you're
> never going to have any chance to discredit anybody if
> you don't at least get the NAME right!

Well, you got the idea regardless of the spelling. I wasn't trying to
"discredit" you, I was merely complaining about your discussion style.
Even if you don't agree with what I am saying, you can at least remain
civil.

> And purely aside from that, would "crusade" be any better?

Of course not.

William D Clinger

unread,
May 22, 2007, 8:51:44 AM5/22/07
to
Rainer Joswig wrote:
> JIT means compilation on demand at runtime. [...]

He also quoted part of a sentence I wrote:

> > Microsoft's Common Language Runtime does
> > not use JIT either.

That was a misleading quotation because he
omitted the qualifying clause of the sentence:
"By your definition,".

> Are you sure? I thought it does.

By most people's definition, the CLR uses JIT
compilation. By Rainer's definition, it does
not.

> If any software system precompiles code before it is
> demanded by a running program, this obviously can't be
> 'just in time'.

The CLR compiles on load. It also requires
the preliminary step of compiling to CIL, the
Common Intermediate Language. This makes the
JIT^H^H^H load-time compiler language-neutral.

Will

Rainer Joswig

unread,
May 22, 2007, 8:57:00 AM5/22/07
to
In article <1179838303....@b40g2000prd.googlegroups.com>,

William D Clinger <cesu...@yahoo.com> wrote:

> Rainer Joswig wrote:
> > JIT means compilation on demand at runtime. [...]
>
> He also quoted part of a sentence I wrote:
>
> > > Microsoft's Common Language Runtime does
> > > not use JIT either.
>
> That was a misleading quotation because he
> omitted the qualifying clause of the sentence:
> "By your definition,".
>
> > Are you sure? I thought it does.
>
> By most people's definition, the CLR uses JIT
> compilation. By Rainer's definition, it does
> not.

That's what I was asking. But you don't give a reference.

Here is what I find via Google Search ;-) :

http://www.c-sharpcorner.com/UploadFile/nrsurapaneni/ILtheLangOfCLRbyNRS12222005040956AM/ILtheLangOfCLRbyNRS.aspx

"Just in Time Compiler converts the IL code back to a
platform/device specific code. In .NET you have three types of
JIT compilers.

Pre-JIT (Compiles entire code into native code at one stretch)
Ecno-JIT (Compiles code part by part freeing when required)
Normal JIT (Compiles only that part of code when called and places in cache)"

So at least the last would be a JIT by 'my' definition. The second one
probably. The first one not.


>
> > If any software system precompiles code before it is
> > demanded by a running program, this obviously can't be
> > 'just in time'.
>
> The CLR compiles on load. It also requires
> the preliminary step of compiling to CIL, the
> Common Intermediate Language. This makes the
> JIT^H^H^H load-time compiler language-neutral.
>
> Will

--
http://lispm.dyndns.org

Rainer Joswig

unread,
May 22, 2007, 9:23:16 AM5/22/07
to
In article <1179838303....@b40g2000prd.googlegroups.com>,

William D Clinger <cesu...@yahoo.com> wrote:

> Rainer Joswig wrote:
> > JIT means compilation on demand at runtime. [...]
>
> He also quoted part of a sentence I wrote:
>
> > > Microsoft's Common Language Runtime does
> > > not use JIT either.
>
> That was a misleading quotation because he
> omitted the qualifying clause of the sentence:
> "By your definition,".
>
> > Are you sure? I thought it does.
>
> By most people's definition, the CLR uses JIT
> compilation. By Rainer's definition, it does
> not.


Here are 'most people's' definitions of 'Just in Time compilation':

Wikipedia:

http://en.wikipedia.org/wiki/Just-in-time_compilation

"just-in-time compilation (JIT), also known as dynamic translation,
is a technique for improving the runtime performance of a computer
program. It converts, at runtime, code from one format into another,
for example bytecode into native machine code. The performance
improvement originates from caching the results of translating
blocks of code, and not simply evaluating each line or operand
separately (see Interpreted language), or compiling the code at
development time. JIT builds upon two earlier ideas in run-time
environments: bytecode compilation and dynamic compilation."

Note the words 'at runtime'.


SUN Java, Hotspot:
http://java.sun.com/products/hotspot/whitepaper.html

"Most attempts to accelerate Java programming language
performance have focused on applying compilation techniques
developed for traditional languages. Just-in-time (JIT) compilers
are essentially fast traditional compilers that translate the Java
technology bytecodes into native machine code on the fly. A JIT
running on the end user's machine actually executes the bytecodes
and compiles each method the first time it is executed.

However, there are several issues with JIT compilation.
First, because the compiler runs on the execution machine
in user time, it is severely constrained in terms of compile speed:
if it is not very fast, then the user will perceive a significant
delay in the startup of a program or part of a program. This
entails a trade-off that makes it far more difficult to perform
advanced optimizations, which usually slow down compilation
performance significantly."

Note 'on the fly' and 'in user time'.


Maybe Microsoft's marke^h^h^h^h^h technical documentation
has a better definition?


>
> > If any software system precompiles code before it is
> > demanded by a running program, this obviously can't be
> > 'just in time'.
>
> The CLR compiles on load. It also requires
> the preliminary step of compiling to CIL, the
> Common Intermediate Language. This makes the
> JIT^H^H^H load-time compiler language-neutral.
>
> Will

--
http://lispm.dyndns.org

Benedikt Rosenau

unread,
May 22, 2007, 9:26:19 AM5/22/07
to
bunn...@yoho-gmail.com <bunn...@gmail.com> wrote:

> It sucks, folks - face it.

You forgot to mention: R6RS is turning Scheme into a SML clone -
which is neither.

That is one of the worst ideas I have seen after Wirth came up
with Pascal.
Benedikt

William D Clinger

unread,
May 22, 2007, 10:57:45 AM5/22/07
to
Rainer Joswig wrote:
> That's what I was asking. But you don't give a reference.

I might be wrong about the current CLR. Several
of the CLR designers and implementors told me
that the JIT compiler of the original version
compiles on load; the loading was on demand,
though. Compiling on load made the startup
times slow for many applications (Common Larceny
among them), so they released NGEN (which is the
pre-JIT of your reference).

Just loading the pre-compiled code for a large
program was still slow, so they may well have
changed the JIT to compile on call instead of
compiling on load. It's been a couple of years
since I looked at this.

> Pre-JIT (Compiles entire code into native code at one stretch)
> Ecno-JIT (Compiles code part by part freeing when required)
> Normal JIT (Compiles only that part of code when called and places in cache)

The EconoJIT compiler is even newer than NGEN.
In the original version of the CLR, compiled
code was never garbage collected.

Rainer also wrote:
> What may be JIT compilation though is auto-loading Lisp source code
> in combination with an incremental compiler. The executed
> code has stubs that auto-load the definitions and lets them
> compile then.

Several of the Lisp/Scheme systems that I
mentioned do that, including some versions of
Larceny.

We could also use dynamic recompilation, but
that would have been dicey in Common Larceny
before EconoJIT.

Larceny uses on-the-fly code generation in some
of its FFIs, including the implementation of
JavaDot notation in Common Larceny. I don't
care whether you want to call that JIT or
incremental compilation.

> Note the words 'at runtime'.

In the systems we are talking about, much of
the load time occurs at run time.

In his most recent message, Pascal explains
that he takes "dynamic compilation" to mean
a combination of three things, two of which
involve an interpreter:

> - combine compiler with interpreter
> - seamless transition between interpreted and compiled code as necessary"

By that definition, dynamic compilation is not
possible in systems (like the CLR) that lack an
interpreter. It would of course be possible in
Common Larceny and other systems that are built
on top of the CLR and add an interpreter for some
source language.

Will

Bruce Lewis

unread,
May 22, 2007, 11:22:31 AM5/22/07
to

Kawa compiles to bytecodes before the JVM comes into play, so I'm pretty
sure it's the already-macro-expanded code that gets JIT-compiled. Do
the non-Java implementations do JIT macro expansion?

Abdulaziz Ghuloum

unread,
May 22, 2007, 11:27:46 AM5/22/07
to
On May 22, 9:26 am, Benedikt Rosenau <rose...@crow.addict.de> wrote:

> bunny...@yoho-gmail.com <bunny...@gmail.com> wrote:
> > It sucks, folks - face it.
>
> You forgot to mention: R6RS is turning Scheme into a SML clone -
> which is neither.

Huh?

> That is one of the worst ideas I have seen after Wirth came up
> with Pascal.

And you have come up with what again?

Aziz,,,

Abdulaziz Ghuloum

unread,
May 22, 2007, 11:30:51 AM5/22/07
to
On May 22, 2:23 am, Rainer Joswig <jos...@lispmachine.de> wrote:

> 'Just in time' just means that a resource is produced and provided
> when it is needed. Not before. Resources are not stored before
> they are needed. 'Just in time' is there to prevent creation
> of stores of precomputed resources. Resources could be
> never needed or could be needed much later in a process.

What's a "resource"? Isn't any intermediate representation (including
a string or an input port) considered a resource that has to be
allocated before it's needed? It seems just arbitrary that machine
code is a resource and an intermediate IL isn't.

Aziz,,,

Rainer Joswig

unread,
May 22, 2007, 11:49:20 AM5/22/07
to
In article <1179845864.9...@y2g2000prf.googlegroups.com>,

William D Clinger <cesu...@yahoo.com> wrote:

> Rainer Joswig wrote:
> > That's what I was asking. But you don't give a reference.
>
> I might be wrong about the current CLR. Several
> of the CLR designers and implementors told me
> that the JIT compiler of the original version
> compiles on load; the loading was on demand,
> though. Compiling on load made the startup
> times slow for many applications (Common Larceny
> among them), so they released NGEN (which is the
> pre-JIT of your reference).
>
> Just loading the pre-compiled code for a large
> program was still slow, so they may well have
> changed the JIT to compile on call instead of
> compiling on load. It's been a couple of years
> since I looked at this.

I guess that Scheme/Lisp don't get much benefit from
JIT compilers, since incremental native code compilers
deliver better performance. The advantages of
an architecture like Microsoft's CLR are interoperability
and portability. Does that pay back? Would
users give up better performance for 'better'
integration?


>
> > Pre-JIT (Compiles entire code into native code at one stretch)
> > Ecno-JIT (Compiles code part by part freeing when required)
> > Normal JIT (Compiles only that part of code when called and places in cache)
>
> The EconoJIT compiler is even newer than NGEN.
> In the original version of the CLR, compiled
> code was never garbage collected.
>
> Rainer also wrote:
> > What may be JIT compilation though is auto-loading Lisp source code
> > in combination with an incremental compiler. The executed
> > code has stubs that auto-load the definitions and lets them
> > compile then.
>
> Several of the Lisp/Scheme systems that I
> mentioned do that, including some versions of
> Larceny.

What was the reason for it? Was it to save space? Was
it to add some kind of library mechanism? I remember
that MacScheme had to run in very little memory
at that time.

>
> We could also use dynamic recompilation, but
> that would have been dicey in Common Larceny
> before EconoJIT.
>
> Larceny uses on-the-fly code generation in some
> of its FFIs, including the implementation of
> JavaDot notation in Common Larceny. I don't
> care whether you want to call that JIT or
> incremental compilation.
>
> > Note the words 'at runtime'.
>
> In the systems we are talking about, much of
> the load time occurs at run time.
>
> In his most recent message, Pascal explains
> that he takes "dynamic compilation" to mean
> a combination of three things, two of which
> involve an interpreter:
>
> > - combine compiler with interpreter
> > - seamless transition between interpreted and compiled code as necessary"
>
> By that definition, dynamic compilation is not
> possible in systems (like the CLR) that lack an
> interpreter. It would of course be possible in
> Common Larceny and other systems that are built
> on top of the CLR and add an interpreter for some
> source language.
>
> Will

--
http://lispm.dyndns.org

Rainer Joswig

unread,
May 22, 2007, 11:57:44 AM5/22/07
to
In article <1179847851.6...@a26g2000pre.googlegroups.com>,
Abdulaziz Ghuloum <aghu...@gmail.com> wrote:

It has to be allocated/created before it's needed, right. The question
is when you do it: Before the program runs or at runtime
piece by piece on demand.

--
http://lispm.dyndns.org

Abdulaziz Ghuloum

unread,
May 22, 2007, 12:32:33 PM5/22/07
to
On May 22, 2:23 am, Rainer Joswig <jos...@lispmachine.de> wrote:

> 'Just in time' just means that a resource is produced and provided
> when it is needed. Not before. Resources are not stored before
> they are needed. 'Just in time' is there to prevent creation
> of stores of precomputed resources. Resources could be
> never needed or could be needed much later in a process.

What's a "resource"? Isn't any intermediate representation (including
a string or an input port) considered a resource that has to be
allocated before it's needed?

Aziz,,,

Pascal Costanza

unread,
May 22, 2007, 12:39:40 PM5/22/07
to
William D Clinger wrote:

> I don't care whether you want to call that JIT or incremental
> compilation.

I agree in the sense that the terminology is not the important part, but
what actually happens behind the scenes. (Sorry that I have started the
confusion with my vague statements in the beginning.)

Until now, I have understood dynamic compilation to name a technique
that allows a runtime environment to compile source code or an
intermediate representation to machine code on demand, and potentially
undo the compilation and compile again. The inclusion of an interpreter
is useful because that allows a runtime system to profile execution at
runtime and only compile those parts that actually turn out to be
executed the most. (The advantage is that the compiler can spend more
time on optimizing the hot spots whereas a system without an interpreter
has to compile everything, which means that on average it should take
less time to compile.)

Undoing compilation is an interesting part in these techniques because
this allows a compiler to perform certain "optimistic" optimization and
retract them as soon as the underlying assumptions don't hold anymore.
It's that part that I think hasn't been tackled by Lisp/Scheme systems yet.

Here are some of the opportunities that I see:

+ In Scheme, variables that refer to function bindings can be assigned
to. This means that such functions cannot be inlined / open-coded in the
general case because after reassignment, the inlined version would not
be the correct version of the function anymore.

For example:

(define foo (lambda (x) (+ x x)))

(define bar (lambda (...) ... (foo ...) ...))

Such invocations of foo cannot be inlined because somewhere else in the
code, the following assignment may take place: (set! foo (lambda (x) (*
x x))) [or some such].

A dynamic compiler could perform the inlining and arrange to be
triggered if such an assignment takes place such that the inlining can
be undone accordingly.
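
As a minimal, purely illustrative sketch (plain R5RS; all the names
here are made up), the trigger can be modelled with an explicit
validity flag standing in for the machine-code patching that a real
dynamic compiler would perform:

```scheme
;; Sketch only: a real system would patch compiled code in place;
;; here a flag models the "inlined version still valid?" guard.
(define foo (lambda (x) (+ x x)))
(define foo-inline-valid? #t)        ; guard installed by the compiler

(define (redefine-foo! f)            ; all reassignments go through here
  (set! foo f)
  (set! foo-inline-valid? #f))       ; trigger: undo the inlining

(define (bar x)
  (if foo-inline-valid?
      (+ x x)                        ; "inlined" body of foo
      (foo x)))                      ; deoptimized: generic call
```

With these definitions, (bar 3) evaluates to 6; after
(redefine-foo! (lambda (x) (* x x))), (bar 3) evaluates to 9.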


+ Macro expansion is typically performed at compile time (or as an early
pass in program execution). This means that when a macro gets redefined,
the affected invocations will not be expanded again. A dynamic compiler
could arrange to be triggered if such a macro gets redefined such that
the macro expansion can be performed again with the new definition.


+ Likewise (as an advanced exercise ;), functions that don't evaluate
their arguments (nlambda, fexpr, etc.) may become compilable. That's a
wild guess and most certainly requires more investigation, but could be
very interesting because it could potentially allow dropping macros
without a significant loss of performance.

George Neuner

unread,
May 22, 2007, 12:55:16 PM5/22/07
to
On Tue, 22 May 2007 05:52:56 +0200, Rainer Joswig
<jos...@lispmachine.de> wrote:

>In article <1179797737....@n59g2000hsh.googlegroups.com>,
> William D Clinger <cesu...@yahoo.com> wrote:
>>
>> Microsoft's Common Language Runtime does
>> not use JIT either.
>
>Are you sure? I thought it does.

Depends on your definition of JIT 8-)

Normally the CLR compiles/links each method individually just prior to
its first execution. However, there is an optional runtime mode
called "install-time" in which the entire assembly is batch
compiled/linked immediately upon loading. A lot of people run their
app servers in this mode and design their programs to preload critical
assemblies. This makes program execution generally faster at the
expense of startup time.

And, of course, there is NGen which compiles assemblies off line and
caches the native code for later execution.

In any case, the CLR checks whether a method has been native compiled
before executing it. The CLR never executes MSIL bytecode.

George
--
for email reply remove "/" from address

dr.tf...@googlemail.com

unread,
May 22, 2007, 1:04:53 PM5/22/07
to
It's been rather disheartening to see how divided the Scheme community
is over R6RS, in particular among implementors. From my viewpoint as
a user, R6RS provides good solutions to the main problems which have
prevented Scheme from becoming a more "mainstream" language for
applications. I'm really looking forward to using it and wish it much
success. Functional programming is making a comeback and Scheme has a
window of opportunity, but only if it provides such basics as modules,
records, Unicode and exception handling. I would like to congratulate
the editors on the quality of their work, and for being willing to
persevere despite the often heated debate.

I hope that the leading implementors will be able to come together
behind a viable Scheme standard, and bury their differences for the
good of the community as a whole. In the end, however, Scheme's
success does not depend on there being many implementations, just a
few good ones. As a user, I would be more than content with MzScheme,
Larceny, Scheme48 and Chez Scheme making commitments to implement
R6RS. As for the others: they can go their own way. Users will vote
with their feet.

Abdulaziz Ghuloum

unread,
May 22, 2007, 1:38:22 PM5/22/07
to
On May 22, 2:23 am, Rainer Joswig <jos...@lispmachine.de> wrote:

> 'Just in time' just means that a resource is produced and provided
> when it is needed. Not before. Resources are not stored before
> they are needed. 'Just in time' is there to prevent creation
> of stores of precomputed resources. Resources could be
> never needed or could be needed much later in a process.

Sorry. What "resource" are you talking about? Any intermediate
representation (even a string containing the file name or contents)
*is* a resource that has to be allocated and stored before it is
needed. 'Just in time' is meaningless as you describe it. I can have
my parser be a little more expensive and my jit compiler really fast
or make the parser a little cheaper and the jit more expensive. I can
vary the time/space allocated for the parser or the jit at will.
Should I call my compiler 35%nonjit+65%jit compiler?

Aziz,,,

William D Clinger

unread,
May 22, 2007, 2:15:44 PM5/22/07
to
Bruce Lewis wrote:
> Kawa compiles to bytecodes before the JVM comes into play, so I'm pretty
> sure it's the already-macro-expanded code that gets JIT-compiled. Do
> the non-Java implementations do JIT macro expansion?

SCM does. Most implementations of R5RS Scheme perform
macro expansion as a separate pass prior to compilation
proper, although the granularity (compilation unit) varies.

Macintosh Common Lisp, MacScheme, Larceny, and Chez
Scheme normally go from un-macro-expanded source code
all the way to machine code. Larceny can also generate an
S-expression equivalent of byte code, which could be used
as a distribution format, which might satisfy one of Rainer's
definitions of JIT compilation, but we don't do that except
when bootstrapping or linking together a heap image, and
I doubt whether many of Larceny's users do that either.
(They could, but it isn't documented very well; it exists
mainly to support cross-compilation during bootstrapping.)

Common Larceny normally goes from un-macro-expanded
source to IL, and lets Microsoft's JIT compiler do the rest.
If Rainer considers Microsoft's JIT compiler to be a true
JIT, then Common Larceny is truly JIT-compiled.

Will

Ray Dillinger

unread,
May 22, 2007, 2:27:51 PM5/22/07
to
Matthias Blume wrote:
> Ray Dillinger <be...@sonic.net> writes:

>>Why cripple the semantics to force an
>>artificial separation of phases?

> Why cripple the semantics by not letting me do things that /naturally/
> require a separation of phases?

>>Separate compilation *can* mean compiling what you can see and
>>leaving the rest for linktime or runtime.

> But then that's not separate compilation. It is separate "let's wait
> and see what we can compile now, leaving the possibility that we don't
> compile anything until later when the whole program is available, and
> nothing at all is separate".

Perhaps we should stop using the word "compiler" then, if its
meaning is so restricted. What I want is a program that does
as much preparation of the program to run well, as it can do
given the resources it has available.

If I jump through a (little) hoop to keep all macro definitions
on the local machine and avoid the use of 'eval' etc, I expect
"as much preparation as it can do" to mean writing machine code
in a linkable form - that would be your vision of "compilation."

If I fail to jump through that hoop, I still expect the system
to do as much preparation of the program to run as it can. But
in this case "as much preparation as it can do" will mean some
vectors of machine code mixed with some parse trees and stubs
and callbacks and a compiler-style symbol table and analysis
matrices, etc, for the runtime system to work with and JIT
things that weren't resolvable at preparation time.

The job of the program is the same, as I see it; to do as much
preparation as it can do before runtime. But if the programmer
or sysadmin doesn't make it possible to do everything before
runtime (for example if the programmer uses 'eval' or if
preparation is done on a machine that doesn't have macro
definitions available, or if your user permissions don't
allow your account to read the files the macro definitions
are in) then the preparation of at least some machine code
or bytecode vectors, and subsequent jumping to them, is a
necessary part of the program's runtime semantics.

The "right answer" as far as I'm concerned is that the program
I call a compiler ought to be able to use whatever it's given
to do as much as it can do. But, we shouldn't be *required*
to give it more than the bare minimum which is one file of
code, with or without macro definitions available. If you
want it to halt with an error on the absence of macro definitions,
or more generally on anything that's not statically resolvable,
then invoke the system "-Wall -Werror" or equivalent, or make a
flag named something like "-StaticCode" to mean reject any
program unless it can prove that no machine code will have
to be generated at runtime (and the corresponding "-StaticType"
to mean reject any code unless you can completely prove its
type correctness).

Solving these things before runtime should be a lot like
providing type declarations; it should always be optional and
their presence or absence should never, *ever* change the
semantics of a correct program, but a good system should
give us the choice of whether or not to treat its absence
as an error. If it's there, then systems that really care
about implementation quality should definitely use it.

Abdulaziz Ghuloum

unread,
May 22, 2007, 2:41:15 PM5/22/07
to
Ray Dillinger wrote:

> The "right answer" as far as I'm concerned is that the program
> I call a compiler ought to be able to use whatever it's given
> to do as much as it can do.

Does an example of such compiler exist?

> But, we shouldn't be *required*
> to give it more than the bare minimum which is one file of
> code, with or without macro definitions available.

Without definitions, how can a compiler know whether (f) is a macro call
or a procedure call? It can't. Without the compile time environment,
your program is just a meaningless s-expression that the compiler can do
nothing with (other than keep it as an s-expression) since every single
identifier in the program may be a variable or a macro.
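
For instance (hypothetical definitions), the same call (f) is resolved
in two completely different ways depending on what the compile-time
environment contains:

```scheme
;; If the compiler sees this definition, (f) is a macro use and is
;; rewritten away before any code is generated:
(define-syntax f
  (syntax-rules ()
    ((f) 'expanded-at-compile-time)))

;; If it instead sees this one, (f) is an ordinary procedure call,
;; compiled to a jump performed at run time:
;; (define (f) 'called-at-run-time)
```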

Ray, it's been a long time since you've been talking about this system
where the compiler is smart enough to delay/recompute/recompile parts of
the system based on discoveries that it makes during runtime (such as
rebinding macros, closing ports, set!ing eval, etc.). Any progress on
an actual implementation (or a prototype) of such a system?

Aziz,,,

William D Clinger

unread,
May 22, 2007, 3:12:07 PM5/22/07
to
Rainer Joswig wrote:
> [...] The advantages of

> an architecture like Microsoft's CLR are interoperability
> and portability. Does that pay back? Would
> users give up better performance for 'better'
> integration?

Yes. Many would and do. Common Larceny
wouldn't make sense otherwise; it's much slower
than Larceny or Petit Larceny.

Concerning auto-loading:

> What was the reason for it? Was it to save space? Was
> it to add some kind of library mechanism? I remember
> that MacScheme had to run in very little memory
> at that time.

Memory would have been the main motivation in
MacScheme, but in Larceny one of the motivations
has been that many SRFIs have to redefine some
of Larceny's standard procedures, and some of the
SRFIs are inconsistent with other SRFIs. You can't
pre-load all of them without screwing up your top level.

Getting back on topic, that kind of problem is one of
the motivations for R6RS libraries.

Will

Matthew Swank

unread,
May 22, 2007, 3:26:13 PM5/22/07
to
On Tue, 22 May 2007 10:04:53 -0700, dr.tfgordon wrote:

> Its been rather disheartening to see how divided the Scheme community
> is over R6RS, in particular among implementors.

Was R5RS this divisive?

Matt

--
"You do not really understand something unless you
can explain it to your grandmother." - Albert Einstein.

Matthias Blume

unread,
May 22, 2007, 4:32:49 PM5/22/07
to
Ray Dillinger <be...@sonic.net> writes:

> The "right answer" as far as I'm concerned is that the program
> I call a compiler ought to be able to use whatever it's given
> to do as much as it can do. But, we shouldn't be *required*
> to give it more than the bare minimum which is one file of
> code, with or without macro definitions available.

As far as I am concerned, this is precisely the WRONG answer. In
those days when I was still working on my own Scheme implementation
and on my own design of a module system (which, by no coincidence,
shared precisely those properties that you complain about), I found my
productivity in writing correct Scheme code JUMP up by a noticable
margin after I turned the "always complain about undefined global
variables" feature on. Back then this came as a bit of a revelation,
and the experience has steadily led me away from everything-is-dynamic
language designs ever since.

> Solving these things before runtime should be a lot like
> providing type declarations; it should always be optional and
> their presence or absence should never, *ever* change the
> semantics of a correct program,

Why not? I don't see the rationale for these requirements.

> If it's there, then systems that really care
> about implementation quality should definitely use it.

Let me rephrase this for you:

"You don't like the proposed module system because it interferes
with the habits of programmers that do not really care about
implementation quality."

Why didn't you say so right away?

Matthias

Anton van Straaten

unread,
May 22, 2007, 8:58:51 PM5/22/07
to
Ray Dillinger wrote:
> Pfft. This is part of the Lisp family. It's SUPPOSED to be
> a dynamic language.

Just because a child is born into a family of alcoholics, doesn't mean
that the child will become an alcoholic.

In its title, the first Scheme report described Scheme as "An
Interpreter for Extended Lambda Calculus". That report showed that it
was unnecessary to rely on dynamic semantics in order to implement a
*correct* version of Lisp's then-buggy "lambda".

Extrapolating from that one data point, one might hypothesize that
Scheme is a language which prefers static approaches over dynamic
approaches, all else being equal. Examination of prominent modern
Scheme implementations largely confirms this hypothesis, particularly
for module systems and macro systems.

> Why cripple the semantics to force an
> artificial separation of phases?

The separation of phases that I'm familiar with in typical Scheme
implementations is not artificial. It's based on clear semantic
principles which apply even in an implementation which ignores them and
conflates the phases.

Crossing phase boundaries in the wrong direction tends to have a cost,
often in the ability to reason about a program. As a base position to
build up from, it makes a lot of sense to disallow such phase mixing,
and provide the desired functionality in a more controlled way. For
example, modern Schemes provide a parameter feature (as in SRFI 39) to
provide the dynamic scope that Lisp's buggy dynamic lambda used to provide.
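
A SRFI 39 parameter makes the dynamic extent explicit, e.g.:

```scheme
(define indent (make-parameter 0))   ; SRFI 39 parameter object

(indent)                             ; => 0
(parameterize ((indent 4))           ; rebinding is visible only
  (indent))                          ; within the dynamic extent => 4
(indent)                             ; => 0 again once the extent exits
```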

> Separate compilation *can* mean compiling what you can see and
> leaving the rest for linktime or runtime. But it should not mean
> "you can't do that," whatever "that" happens to be.

What if "that" means "compromising the integrity of the semantics of the
program"?

One can, of course, argue that languages should just let programmers aim
and shoot anywhere, whether it's at their actual problem or at their own
feet or hearts. That's something that dynamic languages are good at,
and helps to explain their popularity. But as the lambda example and
subsequent examples like macro systems have shown, Scheme has
traditionally pursued a more controlled approach.

If you examine the intent behind this, it's not to take power away from
programmers, but to provide the necessary power in a controlled and
relatively safe fashion, that doesn't undermine the ability to reason
about programs.

Unfortunately, it's harder to pursue such a disciplined approach than it
is to just hand over the keys to the armory. That doesn't mean it's not
worth doing.

> Hello! His name was Godwin, not Goodwin! Cripes, you're never
> going to have any chance to discredit anybody if you don't at
> least get the NAME right! And purely aside from that, would
> "crusade" be any better?

And Scheme is the Israel of Lisp's Middle East?

Anton

Ray Dillinger

May 22, 2007, 9:31:47 PM
Abdulaziz Ghuloum wrote:

> Without definitions, how can a compiler know whether (f) is a macro call
> or a procedure call? It can't.

Good point. And where the difference is crucial to semantics,
I think that has to be regarded as a problem.
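A minimal illustration of the ambiguity (my example, not from the thread): the same call site means different things depending on which kind of `f` is in scope, so the compiler cannot even expand the call without seeing the definition.

```scheme
;; Alternative A: f is a procedure; its argument is evaluated once.
(define (f x) (+ x x))

;; Alternative B: f is a macro; the argument *expression* is copied
;; into the expansion, so it is evaluated twice.
;; (define-syntax f
;;   (syntax-rules ()
;;     ((_ x) (+ x x))))

;; Under A this prints "!" once; under B it would print "!" twice.
(f (begin (display "!") 1))
```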

> Ray, it's been a long time since you've been talking about this system
> where the compiler is smart enough to delay/recompute/recompile parts of
> the system based on discoveries that it makes during runtime (such as
> rebinding macros, closing ports, set!ing eval, etc.). Any progress on
> an actual implementation (or a prototype) of such system?

I built a prototype (bytecode machine plus incremental compiler).
Its execution speed for statically determinable code was, well,
that of a bytecode machine - as fast as Java 1.0 or thereabouts,
nowhere near as fast as machine code. Its compilation and
analysis, however, were so excessively slow as to be not really
usable, especially as input program size increased. I read many
books on compilers trying to find ways to make it better, but
wound up only shifting some of the unreasonableness from its
speed to its memory requirements, leaving both unreasonable.

Then I became frustrated with it and put the project on hold
waiting for some more real inspiration to make it better. The
discussions on this newsgroup (and C.L.L) sometimes provoked
the flashes of insight or inspirations that drove the project
this far; I keep reading in hopes that someone smarter than
me will eventually say something that provokes in me a flash
of insight or inspiration that drives it further.

What I learned, the fruit of a lot of work and a few profound
understandings and inspirations along the way, but which
doesn't seem like it can be done in a reasonable length of
time or memory space, is that it's possible to provide a
'lambda' form that unifies the semantics of Lisp macros and
Lisp functions, let the programmer use or abuse it in whatever
ways he wants, and then figure out which functions can be
statically reduced to machine code and which ones have
"macro-like" semantics.

I defined "defmacro" (or a function that mimics its semantics)
in terms of the extended "lambda" form as an experiment - the
idea behind the experiment is that the functions produced by
this form are *Always* guaranteed to be macro-like, so any
failure to detect and reduce them before runtime means there's
an unhandled case. But even so, I can't seem to find a rapid,
reasonable, and reliable way to detect them.

Limited success, qualified failure, ongoing learning experience,
or independent research... hard to say. Still waiting for more
inspiration or ambition. Perhaps I should count it as a
humbling experience and shut up about advocating its peculiar
worldview of Lisp.

Bear


Matthew Swank

May 22, 2007, 9:26:31 PM
On Wed, 23 May 2007 00:58:51 +0000, Anton van Straaten wrote:

> Scheme is a language which prefers static approaches over dynamic
> approaches, all else being equal.

Which begs the question: why draw the line at (static) types?

Matthew Swank

May 22, 2007, 9:37:26 PM
On Tue, 22 May 2007 18:31:47 -0700, Ray Dillinger wrote:

> What I learned, the fruit of a lot of work and a few profound
> understandings and inspirations along the way, but which
> doesn't seem like it can be done in a reasonable length of
> time or memory space, is that it's possible to provide a
> 'lambda' form that unifies the semantics of Lisp macros and
> Lisp functions, let the programmer use or abuse it in whatever
> ways he wants, and then figure out which functions can be
> statically reduced to machine code and and which ones have
> "macro-like" semantics.

Have you looked at metaocaml: http://www.metaocaml.org/? Your mention of a
'unified' lambda made me think of staged compilation and partial
evaluation.

Anton van Straaten

May 22, 2007, 10:45:24 PM
Matthew Swank wrote:
> On Wed, 23 May 2007 00:58:51 +0000, Anton van Straaten wrote:
>
>> Scheme is a language which prefers static approaches over dynamic
>> approaches, all else being equal.
>
> Which begs the question: why draw the line at (static) types?

Because all else isn't sufficiently equal in that case. :)

A deeper answer would need to explore the historical context of early
Scheme, as well as the many experiments with static typing in Scheme,
none of which have enjoyed wide use... yet[*].

Anton

[*] Research continues. I think approaches like Typed Scheme are
promising: http://www.ccs.neu.edu/home/samth/typed-scheme.html

Kjetil S. Matheussen

May 22, 2007, 11:10:59 PM

I'm (mostly) a user too, and I like R6RS for the same reasons. However,
I'm very disappointed about the lack of define-macro and (most
importantly) that the order of arguments is still not defined. A third
reason I don't like R6RS is the adoption of syntax-case. I need to read
up on it, but I have used it a little bit, and it seems like a horrible
hack to me. The fact that SISC sometimes takes an unreasonably long time
(many minutes on short programs) to expand expressions because it uses a
syntax-case-able macroexpander is also a little bit worrying (I know
that there are better expanders than the one SISC uses, but still).

So, I don't know how I'm going to vote. As it stands, R6RS is better
than R5RS, but since there has been no change to either define-macro or
the order of arguments, voting yes might signal that I think that's
okay, which I don't.

Abdulaziz Ghuloum

May 23, 2007, 12:16:23 AM
Kjetil S. Matheussen wrote:

> So, I don't know what I'm going to vote. As it stands, R6RS is better
> than R5RS, but since there have been no change in either define-macro or
> the order of arguments, voting yes might give a sign that I think thats
> okey, which I don't.

I will vote no because r6rs still distinguishes #f from the
empty list. Death to r6rs! Long live r3rs!

Aziz,,,

Kjetil S. Matheussen

May 23, 2007, 12:39:25 AM

Now you are being childish. I have perfectly good rational reasons for
what I said, and it has got nothing to do with being protective or
conservative. I'm a professional programmer using Scheme as my main
language, and I know perfectly well which features make my work easier
or harder. Lack of define-macro is not a problem because most
implementations provide it anyway (and as a last resort I could use
syntax-case, or implement my own low-level macro system, which is very
easy even with r5rs), but the unspecified order of arguments makes code
uglier (adding let*'s and such) and is also sometimes the cause of bugs.
A fourth problem with r6rs which I forgot is that it still distinguishes
between top-level define and local define. I don't see any rational
reason why the programmer should be disallowed from putting defines
anywhere other than right after a lambda. In my opinion, these are
artificial restrictions put in because people like you think you know
better than others which programming style is best. Perhaps you do
(although I doubt it), but it's not very diplomatic, and it's very
provoking and very unlispish, by which I mean that you don't give the
programmer the freedom to define his own language.

Abdulaziz Ghuloum

May 23, 2007, 1:23:17 AM
Kjetil S. Matheussen wrote:

> I'm (mostly) a user too, and I like R6RS for the same reasons. However,
> I'm very dissapointed about the lack of define-macro

Fresh from the mill is your own R6RS library that gives you all
the power of define-macro.


(library (LISP-1.5)
  (export define-macro)
  (import (r6rs) (r6rs syntax-case)) ;;; add phases if you want
  (define-syntax define-macro
    (lambda (x)
      (syntax-case x ()
        [(_ (name fml* ...) b b* ...)
         #'(define-macro name (lambda (fml* ...) b b* ...))]
        [(_ name proc)
         #'(define-syntax name
             (let ([p proc])
               (lambda (stx)
                 (syntax-case stx ()
                   [(id . rest) (identifier? #'id)
                    (datum->syntax #'id
                      (apply p (syntax->datum #'rest)))]))))]))))


And here is a simple script to go with it (tested under one
r6rs implementation):

#!r6rs
(import (r6rs) (LISP-1.5))
(define-macro (foo x) `(+ ,x y))
(display
  (let ([y 16] [+ -])
    (foo 5))) ;=> -11
(newline)

Now you can take it, and either post a formal comment to the
r6rs-discuss list to include it, write a SRFI so that all
implementations would include it by default, post it to your
website, or just include it with your programs/libraries.

Now honestly, R6RS gives you the power to define such a library
and use it under any conforming implementation. Sweet and
simple. If you don't want it, and feel happy with the way
r5rs does things, that is fine too. You had just better have a
good reason for denying R6RS to the rest of the community
that so desperately wants it.

Now this is not directed at you personally, and sorry if it
sounds like it. It's just that I'm frustrated by the state
of affairs. I mean, say I disagree with a few things in
R6RS, which I do. Should I just go ahead and say no, let
all the effort that went into it go to waste, and deny the
people who want it a chance to use it? Why would I do that?
If I'm happy with r4rs, I will use an r4rs implementation
(there are a few). If I'm happy with r5rs, I will use an
r5rs implementation. If I'm not happy with r6rs and refuse
to get involved in the r6rs process, I will simply ignore it
and let those who want it suffer.

Why should I care to vote on a standard that I don't care
about?

Aziz,,,

Abdulaziz Ghuloum

May 23, 2007, 2:08:51 AM
Kjetil S. Matheussen wrote:

> What I lack is
> an explanation of why the argument order is not specified. I don't see any
> rational reason why it shouldn't be. If I get an explanation which makes
> sense, the chance of voting yes to R6RS significantly increases.

I think the problem here is that people who want a specified order
of evaluation cannot agree among themselves which order of
evaluation is best. There are those who think arguments should be
evaluated left-to-right while others want it right-to-left. There
are those who want the operator to be evaluated first and those who
want it to be evaluated last. If everybody agreed on an evaluation
order, I think it would have been in the standard a long time ago.
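For concreteness (my example, not from the thread): under R5RS/R6RS this program is free to print either "ab" or "ba" before the 3, depending on the implementation's chosen order.

```scheme
;; Each argument announces itself when evaluated; the standard does
;; not say which announcement comes first.
(define (tag c v)
  (display c)
  v)

(display (+ (tag "a" 1) (tag "b" 2)))  ; prints "ab3" or "ba3"
(newline)
```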

Aziz,,,

Anton van Straaten

May 23, 2007, 2:16:43 AM
Kjetil S. Matheussen wrote:
> If I get an explanation which makes
> sense, the chance of voting yes to R6RS significantly increases.
...
> Well, I care. But most likely I'm not going to vote. :-)

There's a moral in here somewhere...

Kjetil S. Matheussen

May 23, 2007, 2:29:50 AM

On Wed, 23 May 2007, Anton van Straaten wrote:

> Kjetil S. Matheussen wrote:
>> Now you are being childish. I have perfectly good rational reasons for
>> what I said, and it has got nothing to do with being protective or
>> conservative. I'm a professional programmer using Scheme as my main
>> language, and I know perfectly well which features make my work easier or
>> harder.
>

> It's unrealistic to look to R6RS to define the language which a professional
> programmer will use as their main language. You presumably don't use pure
> R5RS, with no implementation-specific or other extensions such as SRFIs, as
> your main language. If R6RS were to add everything a professional programmer
> needs, the objections we've been hearing so far would seem weakly-voiced by
> comparison.


>
>> Lack of define-macro is not a problem because most implementations
>> provide it anyway (and as a last resort I could use syntax-case, or
>> implement my own low-level macro system, which is very easy even with
>> r5rs), but the unspecified order of arguments makes code uglier (adding
>> let*'s and such) and is also sometimes the cause of bugs.
>

> Expanding on my point above, you of course always have the option of using an
> implementation which guarantees an order of evaluation, as many of them do.
>

It depends. Currently I'm forced to work with a Lisp built on Java. After
evaluating many alternatives, I ended up with SISC. SISC does not specify
the order of arguments, but it had other features which made me choose it.
My favourite Scheme is Guile. I like it because it offers great
metaprogramming capabilities and is superb for debugging. Guile does not
specify the order either, although doing so would only demand a few lines
of change in the code. The developers of Guile don't do it; I don't know
why.


> If your objection has to do with wanting to write code that's portable
> between Scheme implementations, then once again a realistic perspective is
> that there are still many other obstacles to writing portable code,
> particular for the kind of code a typical professional programmer wants to
> write. Given that, does focusing on evaluation order as a show-stopping
> problem really make sense?
>

No, I don't care that much about portability.

>> I don't see any rational reason why the programmer should be disallowed
>> from putting defines anywhere other than right after a lambda. In my
>> opinion, these are artificial restrictions put in because people like you
>> think you know better than others which programming style is best. Perhaps
>> you do (although I doubt it), but it's not very diplomatic, and it's very
>> provoking and very unlispish, by which I mean that you don't give the
>> programmer the freedom to define his own language.
>

> But you do have the freedom to define your own language: you can use macros
> to make definitions work the way you want them to.
>
> From that perspective, having a core form like "lambda" implement a
> relatively minimal semantics makes sense: it's more suitable as a building
> block for defining your own language.
>
> Try this as an exercise: implement a 'my-lambda' form which allows
> interleaved definitions and expressions, in terms of R5RS or R6RS. Now
> imagine that R6RS supported interleaved definitions and expressions in
> lambdas, and try implementing a 'not-my-lambda' form which disallows that
> interleaving. Which core approach is most flexible?
>

This is what I do. I have lambdas that do the debugging for me, that
expand macros, implement low-level macros (in a debuggable way),
specify evaluation order, and let me put defines wherever I want, plus
some other things. Now, when I do all this with SISC (the Scheme
implementation that I for various reasons have to work with right now),
debugging stops working. Any kind of advanced macro (or something, I
don't know exactly what) makes SISC stop giving any information about
where something went wrong in case of an error. Therefore I implemented
my own debugging system. This worked great, until my program started to
grow. Then it turned out that SISC can spend minutes compiling
relatively small source files because of the expanded code. Now, all
this is of course because of limitations in SISC, but SISC is most
likely the best alternative. Before I started, I evaluated bigloo, kawa,
jscheme, abcl and sixx, and they all had their limitations, so I ended
up with SISC.

My point is that SISC would probably work just fine for me if I hadn't
started to mess around with a bunch of macros to force it to let me
program the way I wanted to. Then it started to crawl. If, however, SISC
had followed a standard where these few small things had been specified,
I wouldn't have needed to run my own special tricks and macros to make
the implementation do what I wanted, and the chance of hitting all the
problems I have would have been lower.
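(As an aside, the 'my-lambda' exercise Anton posed earlier in this thread admits a compact, if limited, answer. This sketch is illustrative only -- `my-lambda` and `interleaved-body` are invented names, and definitions separated by intervening expressions lose mutual recursion in this version.)

```scheme
;; Turn each internal define into a letrec wrapped around the rest of
;; the body, so defines and expressions may be interleaved.
(define-syntax interleaved-body
  (syntax-rules (define)
    ((_ e) e)                              ; final form: the result
    ((_ (define (name . args) b ...) rest ...)
     (letrec ((name (lambda args b ...)))
       (interleaved-body rest ...)))
    ((_ (define name val) rest ...)
     (letrec ((name val))
       (interleaved-body rest ...)))
    ((_ e rest ...)                        ; plain expression: run it
     (begin e (interleaved-body rest ...)))))

(define-syntax my-lambda
  (syntax-rules ()
    ((_ args body0 body ...)
     (lambda args (interleaved-body body0 body ...)))))

;; A define is allowed *after* an expression:
((my-lambda (x)
   (display "computing...")
   (define y (* x x))
   (+ y 1))
 5)                                        ; => 26
```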

> This brings us back to the point that R6RS is not the entire definition of a
> language for mainstream programmers. It would be more appropriate to think
> of it as the definition of a language which Scheme implementors can build on,
> while enjoying a greater degree of compatibility with other R6RS
> implementations. By the same token, it will help the authors of Scheme
> libraries achieve portability more easily, for non-trivial libraries.
>
> The way to meet mainstream and other programming needs is not to throw out
> Scheme's core features and replace them with something more full-featured,
> but rather to ensure that the core language continues to
> support extension in different directions.
>

This is all well and good. But the problem is that scheme is also being
used for real work.

Pascal Costanza

May 23, 2007, 2:49:41 AM
Anton van Straaten wrote:
> Ray Dillinger wrote:
>> Pfft. This is part of the Lisp family. It's SUPPOSED to be
>> a dynamic language.
>
> Just because a child is born into a family of alcoholics, doesn't mean
> that the child will become an alcoholic.
>
> In its title, the first Scheme report described Scheme as "An
> Interpreter for Extended Lambda Calculus". That report showed that it
> was unnecessary to rely on dynamic semantics in order to implement a
> *correct* version of Lisp's then-buggy "lambda".

From "The Art of the Interpreter":

"We saw in Part One that an interactive top-level loop necessarily
violates referential transparency. We wish to deal with the computer as
an entity with state, which changes over time by interacting with a
user. In particular, we want the computer to change over time by
accumulating procedure definitions."

More specifically:

"If we stubbornly insist on maintaining absolute referential
transparency in our language, we are forced to eliminate the incremental
top level loop. A program must be constructed monolithically. We must
read in all our procedure definitions at once, close them all together,
and then take one or more shots at running them. (This is the way many
Algol implementations work; development of large systems can be very
difficult if parts cannot be separately constructed and compiled.) We
are forced to give up interactive debugging, because we cannot redefine
erroneous procedures easily. We are forced to give up incremental
compilation of separate modules."

Also see the note on "Debugging" on page 27.

So I agree with Ray that Scheme was at least originally envisaged as a
dynamic language.

> Extrapolating from that one data point, one might hypothesize that
> Scheme is a language which prefers static approaches over dynamic
> approaches, all else being equal. Examination of prominent modern
> Scheme implementations largely confirms this hypothesis, particularly
> for module systems and macro systems.

This is not a feature of modern Schemes, but was actually part of the
first two reports on Scheme - they both had fluid variables.

Again, from "The Art of the Interpreter":

"Dynamic scoping provides an important abstraction for dealing with side
effects in a controlled way. A low-level procedure may have state
variables which are not of interest to intermediate routines, but which
must be controlled at a high level. Dynamic scoping allows any procedure
to get access to parts of the state when necessary, but permits most
procedures to ignore the existence of the state variables. The existence
of many dynamic variables permits the decomposition of the state in such
a way that only the part of interest need be dealt with."

See fifth entry at http://library.readscheme.org/page1.html

Pascal Costanza

May 23, 2007, 2:57:16 AM
Abdulaziz Ghuloum wrote:
> Kjetil S. Matheussen wrote:
>
>> I'm (mostly) a user too, and I like R6RS for the same reasons.
>> However, I'm very dissapointed about the lack of define-macro
>
> Fresh from the mill is your own R6RS library that gives you all
> the power of define-macro.
>
>
> (library (LISP-1.5)
> (export define-macro)
> (import (r6rs) (r6rs syntax-case)) ;;; add phases if you want
> (define-syntax define-macro
> (lambda (x)
> (syntax-case x ()
> [(_ (name fml* ...) b b* ...)
> #'(define-macro name (lambda (fml* ...) b b* ...))]
> [(_ name proc)
> #'(define-syntax name
> (let ([p proc])
> (lambda (stx)
> (syntax-case stx ()
> [(id . rest) (identifier? #'id)
> (datum->syntax #'id
> (apply p (syntax->datum #'rest)))]))))]))))

Lisp 1.5 didn't have macros. That came later - see
http://www.dreamsongs.com/Files/Hopl2.pdf

Also, the definition above may give you define-macro, but certainly not
"all the power of define-macro." Over the decades, a number of features
and fixes have been added to such macro facilities in Lisp to make them
more reliable.
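One classic reliability problem alluded to here is hygiene: a define-macro style `swap!` can capture a caller's variable, where the `syntax-rules` version renames its temporary. (This is the standard textbook example, not code from the thread.)

```scheme
;; Unhygienic version (define-macro style) -- broken when the caller
;; already uses the name `tmp`:
;;
;;   (define-macro (swap! a b)
;;     `(let ((tmp ,a)) (set! ,a ,b) (set! ,b tmp)))
;;
;;   (let ((tmp 1) (x 2)) (swap! tmp x))   ; fails to swap

;; Hygienic version: the introduced `tmp` is automatically renamed,
;; so the same call works.
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a)) (set! a b) (set! b tmp)))))

(let ((tmp 1) (x 2))
  (swap! tmp x)
  (list tmp x))   ; => (2 1)
```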

Anton van Straaten

May 23, 2007, 3:46:38 AM
Pascal Costanza wrote:
> So I agree with Ray that Scheme was at least originally envisaged as a
> dynamic language.

It almost sounds as though your definition of "dynamic language" is "a
language which allows violation of referential transparency", since
that's the core issue mentioned in the quotes you gave.

In that case, SML/NJ and OCaml are dynamic languages. Confirming this
by comparison to the quotes, both support incremental top-level loops
and accumulating function definitions. Neither require programs to be
constructed entirely monolithically.

But if those two are dynamic languages, then even Schemes with
statically-oriented module systems and strong macro phase separation can
be considered dynamic languages -- in which case perhaps I needn't have
responded to the point about Scheme being obligated to be a dynamic
language by virtue of its family ties.

Of course, if those languages don't count as dynamic for some reason,
then we need a better specification of the specific goal(s), so that we
can examine what's preventing their achievement in the presence of e.g.
compiler phase separation.

>> Extrapolating from that one data point, one might hypothesize that
>> Scheme is a language which prefers static approaches over dynamic
>> approaches, all else being equal. Examination of prominent modern
>> Scheme implementations largely confirms this hypothesis, particularly
>> for module systems and macro systems.
>
>
> This is not a feature of modern Schemes, but was actually part of the
> first two reports on Scheme - they both had fluid variables.

Thanks for pointing that out. I think my subconscious knew that. ;)

Anton

Anton van Straaten

May 23, 2007, 4:15:03 AM
Kjetil S. Matheussen wrote:
> It depends. Currently I'm forced to work with a lisp built on java.
> After evaluating many alternatives, I ended up with SISC. SISC does not
> specify the order of arguments, but it had other features which made me
> choose it.

Last time I looked, SISC specified a right-to-left order of evaluation.
My understanding is that this choice provided better performance on
the JVM, given SISC's particular compilation & evaluation model.
Flexibility like this is part of what has allowed Scheme to thrive on so
many different platforms, from PDAs to supercomputers, and from bare
metal to the JVM and CLR.

> My favourite scheme is Guile. I like it because it offers great meta
> programming capabilities and is superb for debugging. Guile does not
> specifiy order either, although it would only demand a few lines of
> change in the code. The developers of guile doesn't do it, I don't know
> why.

One way in which Scheme users can improve their situation is by talking
to the implementors of the implementations they use. The Guile
developers presumably have reasons for their choice. R5RS is not the
reason, because R5RS allows implementations to choose a fixed order of
evaluation (as does R6RS).

> Now, all
> this is of course because of limitations with SISC, but SISC is most
> likely the best alternative. Before I started, I evaluated bigloo, kawa,
> jscheme, abcl and sixx, and they all had their limitations, and I ended
> up with SISC.

I'm curious about whether you've brought these problems to the attention
of the SISC authors, who in my experience are very responsive.

In any case, standardization by itself is not going to give more
resources to the developers of Scheme implementations so that they can
address problems like these. So you're suggesting that if R6RS were
simply to specify exactly the features you need for the work you're
interested in doing, it would solve your problem because you wouldn't
need to extend the language. This solution doesn't really scale,
though, because everyone's needs are different. For example, many
people wouldn't dream of using a Scheme implemented on top of Java,
whereas others find it essential.

>> The way to meet mainstream and other programming needs is not to throw
>> out Scheme's core features and replace them with something more
>> full-featured, but rather to ensure that the core language continues to
>> support extension in different directions.
>>
>
> This is all well and good. But the problem is that scheme is also being
> used for real work.

Considering that both positions are valid, it's a challenging problem,
isn't it?

Anton

Kjetil S. Matheussen

May 23, 2007, 5:04:45 AM

On Wed, 23 May 2007, Anton van Straaten wrote:

> Kjetil S. Matheussen wrote:
>> It depends. Currently I'm forced to work with a lisp built on java. After
>> evaluating many alternatives, I ended up with SISC. SISC does not specify
>> the order of arguments, but it had other features which made me choose it.
>
> Last time I looked, SISC specified a right-to-left order of evaluation.

Okay. I think right-to-left is a bit weird though, especially when the
function call, let block, or quasiquote spans more than one line. I've
never heard of a programming language where execution happens from the
bottom to the top, but it could perhaps make sense, especially if the
rest of Scheme were evaluated from bottom to top as well. :-)

> My
> understanding is that this choice provided better performance on the JVM,
> given SISC's particular compilation & evaluation model. Flexibility like this
> is part of what has allowed Scheme to thrive on so many different platforms,
> from PDAs to supercomputers, and from bare metal to the JVM and CLR.
>

I find this a little bit hard to believe. Since SISC code runs
relatively slowly anyway, it's hard to believe that this would make a
significant difference...

>> My favourite scheme is Guile. I like it because it offers great meta
>> programming capabilities and is superb for debugging. Guile does not
>> specifiy order either, although it would only demand a few lines of change
>> in the code. The developers of guile doesn't do it, I don't know why.
>
> One way in which Scheme users can improve their situation is by talking to
> the implementors of the implementations they use. The Guile developers
> presumably have reasons for their choice. R5RS is not the reason, because
> R5RS allows implementations to choose a fixed order of evaluation (as does
> R6RS).
>

I think I even posted a patch for quasiquote order, which I think is the
most important thing to specify. Here is one of the responses:


"depending on the order of evaluation of function arguments, and, please,
since you read c.l.scheme: there's not to be argued about here :-/ So
guile changed the order of evaluation for function arguments - that's not
good or bad. There is no reason why the implementors shouldn't do this,
after all, that's the whole point of leaving evaluation order unspecified
(so implementors can safely do this)."


And another one:

"I think it is wise not to change this. If GUILE starts specifying a
certain execution order, people will rely on this order, which will
guarantee that their code will break on other Schemes."

>> Now, all this is of course because of limitations with SISC, but SISC is
>> most likely the best alternative. Before I started, I evaluated bigloo,
>> kawa, jscheme, abcl and sixx, and they all had their limitations, and I
>> ended up with SISC.
>
> I'm curious about whether you've brought these problems to the attention of
> the SISC authors, who in my experience are very responsive.
>

Yeah. But both problems are already known problems, and the development
doesn't seem to be very active. Otherwise, SISC is great though, very
reliable.

> In any case, standardization by itself is not going to give more resources to
> the developers of Scheme implementations so that they can address problems
> like these. So you're suggesting that if R6RS were simply to specify exactly
> the features you need for the work you're interested in doing, it would solve
> your problem because you wouldn't need to extend the language. This solution
> doesn't really scale, though, because everyone's needs are different. For
> example, many people wouldn't dream of using a Scheme implemented on top of
> Java, whereas others find it essential.
>

I'm only suggesting standardizing a couple of things which won't break
existing code, limit any coder's freedom (quite the opposite), or put a
significant burden on the implementors, and which, lastly, IMHO would be
a beautiful and natural part of the language. :-)

>> > The way to meet mainstream and other programming needs is not to throw
>> > out Scheme's core features and replace them with something more
>> > full-featured, but rather to ensure that the core language continues to
>> > support extension in different directions.
>> >
>>
>> This is all well and good. But the problem is that scheme is also being
>> used for real work.
>
> Considering that both positions are valid, it's a challenging problem, isn't
> it?
>

Well...

Benedikt Rosenau

May 23, 2007, 7:54:20 AM
Abdulaziz Ghuloum <aghu...@gmail.com> wrote:

[...]

>> That is one of the worst ideas I have seen after Wirth came up
>> with Pascal.

> And you have come up with what again?

Your comment reminds me of certain political parties that claim
they can only be criticized from the inside, after joining. Your
objection is formal at best and without relation to content. Yet,
participating in the process would just have lent credibility to
the farce.
Benedikt

bunn...@gmail.com

May 23, 2007, 8:11:50 AM
On May 19, 3:42 pm, Abdulaziz Ghuloum <aghul...@gmail.com> wrote:

> Within a few weeks, the registration period will be over and then a
> voting period will begin. I bet you that some will come, when the
> voting starts, and complain about why they cannot vote and how they
> didn't know that they had to register.

Then why not extend the registration period?


cheers,
felix

Matthias Blume

May 23, 2007, 8:45:34 AM
Benedikt Rosenau <ros...@crow.addict.de> writes:

> Abdulaziz Ghuloum <aghu...@gmail.com> wrote:
>
> [...]
>
>>> That is one of the worst ideas I have seen after Wirth came up
>>> with Pascal.
>
>> And you have come up with what again?
>
> Your comment reminds me of certain political parties that claim
> they can only be criticized from the inside, after joining.

No. I think nobody would hold it against you if you actually
substantiated your verbal assault on Pascal. Abdulaziz' comment
should be read as: "Who are you? Why should we believe that your
opinion has any value?"

> Your objection is formal at best and without relation to content.

There was "content"? You seem to equate Pascal with "bad idea", which
I find more than just puzzling. Pascal was certainly not a bad idea
at all. It certainly isn't an ideal language, and it is not my
favorite language either. But Wirth deserves all the credit he got
for it.

> Yet, participating in the process would just have lent credibility
> to the farce.

The kind of drive-by criticism that you display certainly does not
lend credibility to you.

Abdulaziz Ghuloum

unread,
May 23, 2007, 11:43:21 AM5/23/07
to

Maybe I'm misunderstanding you, but wouldn't this just delay the
problem?

Aziz,,,

Benedikt Rosenau

unread,
May 23, 2007, 12:19:59 PM5/23/07
to
Matthias Blume <fi...@my.address.elsewhere> wrote:

> No. I think nobody would hold it against you if you would actually
> substantiated your verbal assault on Pascal.

Pascal was mentioned in a passing reference, because this group is
not about Pascal, as you may have noticed. However, the content of
the reference should have been clear to almost anyone, and your
reading of Abdulaziz' comment is highly idiosyncratic.


> There was "content"? You seem to equate Pascal with "bad idea", which
> I find more than just puzzling. Pascal was certainly not a bad idea
> at all. It certainly isn't an ideal language, and it is not my
> favorite language either. But Wirth deserves all the credit he got
> for it.

Pascal was a slight variation on Algol60 designed for a course on
building compilers. Quite a lot of its flaws can be traced to its
being limited to a one-pass compiler. It added little to existing
Algol60 dialects, ignored much of the progress achieved in ten years,
and dropped some valuable, one might even say defining, features of
Algol.

I'll let a man of your austere reputation figure out what is meant.
You do know something about the Algol family, don't you? If not,
feel free to ask me. As a hint, do you think there is a reason
why the Scheme standard mentions Algol, and not Pascal?

Later, Wirth had the gall to imply that Pascal came as a remedy to
the troubles of the Algol revision process and hyped it as the better
language. There are quite a lot of people who say that Wirth single-
handedly threw back the development of programming languages by a
decade, and I agree.
Benedikt

Pascal Costanza

unread,
May 23, 2007, 1:04:57 PM5/23/07
to
Anton van Straaten wrote:
> Pascal Costanza wrote:
>> So I agree with Ray that Scheme was at least originally envisaged as a
>> dynamic language.
>
> It almost sounds as though your definition of "dynamic language" is "a
> language which allows violation of referential transparency", since
> that's the core issue mentioned in the quotes you gave.

No, of course that's not what I mean. I find Guy Steele's definition of
"dynamic language" very useful. He suggested at one of the "Dynamic
Languages Wizards" panels that a dynamic language is a language in which
as many decisions as possible are deferred to runtime - see
http://www.ai.mit.edu/projects/dynlangs/wizards-panels.html

The fact that in a Scheme top level, I can refer to a function that
isn't defined yet means that it can only be resolved later. The fact
that Scheme originally had fluid (dynamically scoped) variables
alongside lexical ones, and provided good arguments for their inclusion,
shows that Scheme originally included both static and dynamic concepts.
(It even specified how fluid variables interact with call/cc - however,
I don't know whether these specifications were sound.)
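(To make the forward-reference point concrete, here is a minimal
sketch of the kind of top-level interaction I mean - assuming an
ordinary R5RS-style interactive Scheme; the names my-even?/my-odd?
are made up for the example:)

```scheme
;; my-even? refers to my-odd? before my-odd? exists; the reference
;; is only resolved when my-even? is actually called.
(define (my-even? n)
  (if (= n 0) #t (my-odd? (- n 1))))   ; my-odd? not yet defined here

(define (my-odd? n)
  (if (= n 0) #f (my-even? (- n 1))))

(my-even? 10)   ; => #t - the lookup was deferred to run time
```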

> In that case, SML/NJ and OCaml are dynamic languages. Confirming this
> by comparison to the quotes, both support incremental top-level loops
> and accumulating function definitions. Neither require programs to be
> constructed entirely monolithically.

Not quite: If I understand correctly, a new function definition in the
top-level loops of these languages can only refer to already existing
functions. Furthermore, if you redefine a function in such a top-level
loop, already existing invocations of that function will not be
affected. This ensures that a function definition can be completely
processed once it is entered into a running system. (At least that's
what an ML programmer once told me - please correct me if I am wrong.)
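(A minimal sketch of the contrast, from the Scheme side - assuming a
typical R5RS-style REPL rather than an ML top level; in Scheme, a
top-level redefinition is visible to previously defined callers:)

```scheme
(define (greet) "hello")
(define (call-greet) (greet))   ; refers to greet through the top level
(call-greet)                    ; => "hello"
(define (greet) "hi")           ; redefine greet
(call-greet)                    ; => "hi" - the old caller sees the
                                ;    new binding
```

In the ML top levels described above, by contrast, call-greet would
keep invoking the greet it was originally compiled against.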

Also note that the definition of dynamic language above is a relative
one - it says "as many decisions as possible", not "all decisions." So
there is clearly a spectrum here, and a language is either closer to one
side or closer to the other side of the spectrum (and sometimes this is
even different in the same language for different aspects of that
language). I have a hard time imagining a useful language that is either
completely dynamic or completely static. (I doubt that the terms
"completely dynamic" or "completely static" are even meaningful.)

One point, however, in which R6RS really goes overboard is Section 5.7.
"Syntax violations." (R5.93RS): "If a top-level or library form is not
syntactically correct, then the execution of that top-level program or
library must not be allowed to begin.", and in the preceding paragraph,
the definition of "syntactically incorrect" includes write accesses to
immutable variables.
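(For concreteness, a sketch of the kind of top-level program that
paragraph rules out - assuming the R6RS program syntax; car comes in
via the (rnrs) import and is therefore immutable, so the assignment
is a syntax violation and, per 5.7, the whole program must be
rejected before execution begins:)

```scheme
#!r6rs
(import (rnrs))
(set! car 'oops)            ; write access to an immutable binding
(display "never reached")   ; 5.7 forbids even starting execution
```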

What did the designers of R6RS smoke here? If you interpret that section
strictly, this means that an implementor of R6RS Scheme _must not_
implement a debugger that allows you to execute a partially
syntactically incorrect program. (If the implementor does this, it's not
Scheme anymore, because the implementation would violate the R6RS
specification.) That's worse than the current state of the art in any
decent static language.

If I had a stake in the outcome of the R6RS process, that would be my
top-most reason to vote against R6RS. (But I don't have such a stake, so
I am not going to vote.)

So to summarize: Scheme was originally conceived as a dynamic language
(which I think is strongly backed by the original lambda papers), but at
least that section in R5.93RS pushes it in the complete opposite direction.

Matthias Blume

unread,
May 23, 2007, 1:12:46 PM5/23/07
to
Benedikt Rosenau <ros...@crow.addict.de> writes:

> As a hint, do you think there is a reason why the Scheme standard
> mentions Algol, and not Pascal?

Yes, there is a reason. But that does not mean that Pascal is a bad
idea. The Scheme report does not mention a lot of things, many of
which are good ideas. And even if Pascal and Algol were at odds, they
still could each be good ideas.

> There are quite of lot of people who say that Wirth single-
> handedly threw back the development of programming languages by a
> decade, and I agree.

I don't. I do agree that PL was thrown back more than a decade around
that time, but that wasn't Wirth's fault.

Abdulaziz Ghuloum

unread,
May 23, 2007, 1:20:41 PM5/23/07
to
Pascal Costanza wrote:

> What did the designers of R6RS smoke here? If you interpret that section
> strictly, this means that an implementor of R6RS Scheme _must not_
> implement a debugger that allows you to execute a partially
> syntactically incorrect program. (If the implementor does this, it's not
> Scheme anymore because the implementations would violate the R6RS
> specification.) That's worse than the current state of the art in any
> decent static language.

I think this is what everybody misunderstands about the role of
a language standard (such as R6RS). The standard does NOT prohibit
an implementation from adding non-standard features. All
implementations of R6RS will have such features and more, including
an R5RS-like interactive repl, executing partially-defined programs,
inspecting and mutating library bindings at runtime, and so on.

The standard talks about standard features; things that you can rely
on when you use a standard-conforming implementation in a standard-
conforming mode. So, for example, I can invoke some foo Scheme as:

foo --annoy-user-greatly --strict-r6rs-conformance --complain=all \
--r6rs-program=test.r6rs-scheme

and if these flags make foo behave like the language in R6RS, then
foo is standard-conforming. This does not mean that foo Scheme
cannot implement a debugger, or execute partially syntactically
incorrect programs, or any of that.

There are so many misconceptions about R6RS it's not even funny.

Aziz,,,

Abdulaziz Ghuloum

unread,
May 23, 2007, 2:05:45 PM5/23/07
to
Pascal Costanza wrote:

> Lisp 1.5 didn't have macros. That came later - see
> http://www.dreamsongs.com/Files/Hopl2.pdf

I'll check it out.

> Also, the definition above may give you define-macro, but certainly not
> "all the power of define-macro." Over the decades, a number of features
> and fixes had been added to such macro facilities in Lisp to make them
> more reliable.

Do you have a reference to where define-macro would be defined?


Aziz,,,

Pascal Costanza

unread,
May 23, 2007, 2:35:42 PM5/23/07
to
Abdulaziz Ghuloum wrote:
> Pascal Costanza wrote:
>
>> What did the designers of R6RS smoke here? If you interpret that
>> section strictly, this means that an implementor of R6RS Scheme _must
>> not_ implement a debugger that allows you to execute a partially
>> syntactically incorrect program. (If the implementor does this, it's
>> not Scheme anymore because the implementations would violate the R6RS
>> specification.) That's worse than the current state of the art in any
>> decent static language.
>
> I think this is what everybody misunderstands about the role of
> a language standard (such as R6RS). The standard does NOT prohibit
> an implementation from adding non-standard features.

Sure, but if the standard says "must not do A", but an implementation
decides that it does A anyway, that's a violation of the standard, not
an addition to it, no?

> There are so many misconceptions about R6RS it's not even funny.

If that's indeed a misconception on my side, I'd be very happy.

Pascal Costanza

unread,
May 23, 2007, 2:46:46 PM5/23/07
to

A copy of the Lisp 1.5 Programmer's Manual (and a lot of other
interesting historical documents) can be found at
http://community.computerhistory.org/scc/projects/LISP/

The article that described the introduction of macros is available at
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-057.pdf (also
linked from that page). The copy is hard to read, but the text is also
reproduced in its entirety in Steele's and Gabriel's HOPL paper.

William D Clinger

unread,
May 23, 2007, 3:48:40 PM5/23/07
to
Abdulaziz Ghuloum wrote:
> I think this is what everybody is misunderstanding of the role of
> a language standard (such as R6RS). The standard does NOT prohibit
> an implementation from adding non-standard features.

Yet it is written as though it does, and I think the
reason it was written that way is that (1) some of the
editors really did want to prevent implementations
from adding certain kinds of non-standard features,
and (2) they mistakenly thought this could be accomplished
by legalistic phrasing in the standard.

That was a bad move politically, as it has created
needless animosity toward the R6RS.

Like you, I understand that implementations of the
R6RS are likely to default to a more useful mode
that ignores some of the R6RS-mandated craziness.
The Larceny developers' wiki outlines one possible
approach [1].

Will

[1] http://larceny.ccs.neu.edu/larceny-trac/wiki/ConformanceModes

David Rush

unread,
May 23, 2007, 4:00:20 PM5/23/07
to
On May 21, 3:16 pm, Matthias Blume <f...@my.address.elsewhere> wrote:
> If I were to vote (I won't), I'd vote against it, too.

Folks, I am late to the game. I love Scheme and do *all* my private
hacking in it, and have done for years. I *will* be sending in my 150
words later on. I have only skimmed the spec so far, so I will not
comment further until I have given it deeper thought.

But I will say this: Matthias is (IMNSHO), as a keen member of that
statically-typed Scheme variant's community, possibly our best and
most faithful 'enemy' - his objection speaks very loudly.

Besides, he has the Holy name 'Matthias' :)

david
--
Once and future Schemer
