R6RS query

Peter Keller

Dec 14, 2009, 11:23:02 AM
Hello,

I have an odd question...

Since some ambiguities have arisen in the language spec for R6RS,
why weren't these discovered through the formal semantics of the
language? Could some kind of an automated tool have discovered them? I
don't have any specifics in mind, it is more of a general question.

Thank you.

-pete

Aaron W. Hsu

Dec 14, 2009, 10:39:45 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Since some ambiguities have arisen in the language spec for R6RS,
>why weren't these discovered through the formal semantics of the
>language? Could some kind of an automated tool have discovered them? I
>don't have any specifics in mind, it is more of a general question.

Some people have played with doing this, and it did reveal some things,
but even the formal semantics can only go so far, because they don't
cover the whole language.

Aaron W. Hsu
--
A professor is one who talks in someone else's sleep.

Peter Keller

Dec 14, 2009, 10:47:40 PM
Aaron W. Hsu <arc...@sacrideo.us> wrote:
> Peter Keller <psi...@merlin.cs.wisc.edu> writes:
>
>>Since some ambiguities have arisen in the language spec for R6RS,
>>why weren't these discovered through the formal semantics of the
>>language? Could some kind of an automated tool have discovered them? I
>>don't have any specifics in mind, it is more of a general question.
>
> Some people have played with doing this, and it did reveal some things,
> but even the formal semantics can only go so far, because they don't
> cover the whole language.

I'm very new to the study of formal semantics, but what kinds of things
does R6RS do for which no formal semantics can be written? And, could those
things have been encapsulated in a formal semantics, or are they categorically
out of scope for formal semantics?

Thank you.

-pete

William D Clinger

Dec 15, 2009, 12:26:58 PM
Peter Keller wrote:
> I'm very new to the study of formal semantics, but what kinds of things
> does R6RS do for which no formal semantics can be written? And, could those
> things have been encapsulated in a formal semantics, or are they categorically
> out of scope for formal semantics?

So far as I know, the omitted features could have been covered
by a formal semantics but were not covered by the particular
formal semantics given in non-binding Appendix A of the R6RS.

That appendix begins with a paragraph that says:

It does not cover the entire language. The notable missing
features are the macro system, I/O, and the numerical tower.
The precise list of features included is given in section A.2.

The library system is also missing, but the person(s) who wrote
that paragraph may have regarded the library system as part of
the macro system.

Will

Peter Keller

Dec 16, 2009, 11:47:18 AM

Given the ambiguity found in R6RS, should R7RS have a full formal
semantics for every feature of both the small and large languages? Would
this work lead to a more consistent language where people don't get
surprised by things?

Thank you.

-pete

William D Clinger

Dec 16, 2009, 3:16:41 PM
Peter Keller wrote:
> Given the ambiguity found in R6RS, should R7RS have a full semantics for
> it for every feature for both the small and large languages? Would this
> work lead to a more consistent language where people don't get surprised by
> things?

On several issues involving R6RS macros and libraries,
the most well-informed experts are themselves bitterly
divided over what the semantics should be, or what the
R6RS allows. Reaching agreement on informal semantics
is prerequisite to formalization; otherwise the formal
semantics would represent nothing more than its authors'
opinion---if that.

Will

Peter Keller

Dec 16, 2009, 3:42:54 PM
William D Clinger <cesu...@yahoo.com> wrote:
> On several issues involving R6RS macros and libraries,
> the most well-informed experts are themselves bitterly
> divided over what the semantics should be, or what the
> R6RS allows. Reaching agreement on informal semantics
> is prerequisite to formalization; otherwise the formal
> semantics would represent nothing more than its authors'
> opinion---if that.

If portions of the language are left unspecified due to conflict, then
implementors will inevitably end up choosing different interpretations
anyway, leading to non-portable code.

In light of this, should *a* semantics simply be chosen and formalized in
the name of portability and progress?

Thank you.

-pete

Aaron W. Hsu

Dec 16, 2009, 7:49:36 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>In light of this, should *a* semantics simply be chosen and formalized in
>the name of portability and progress?

Sometimes this is okay to do, but for certain core things in the
language, forcing choices that have no clear benefit doesn't appeal to
me, or to some others.

The standards document isn't really the place to make a new language,
IMO. It's a good place to document standard behaviors that have been
seeing real use. Where implementations really do differ, I don't know
that it's a good idea to force them all to conform to one system. We
could do that, but that's pretty much like choosing a reference Scheme
implementation and saying, "That's our standard."

Aaron W. Hsu

Dec 16, 2009, 7:52:20 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Given the ambiguity found in R6RS, should R7RS have a full semantics for
>it for every feature for both the small and large languages? Would this
>work lead to a more consistent language where people don't get surprised by
>things?

In addition to the other response I see to this, I'd like to add that
even if everyone agreed on what these semantics should be, it would be
very hard to write them, though technically not impossible. In that
case, you're talking about a lot of effort whose result would very
likely be buggy and wrong unless most of the available time were spent
on it. In light of the timetable set forth by the Steering Committee, I
am not sure that
this is what the Working Groups should spend their time on initially,
especially given the divergence of opinions in so many areas.

Also, just as a comment, we should remember that Scheme's diversity is
not just a weakness, but also serves as a real, important strength. We
have to approach portability and standardization in a way that enables
Scheme's freedom of expression "culture."

Benjamin L. Russell

Dec 17, 2009, 4:34:23 AM
On Wed, 16 Dec 2009 18:52:20 -0600, Aaron W. Hsu <arc...@sacrideo.us>
wrote:

>Also, just as a comment, we should remember that Scheme's diversity is
>not just a weakness, but also serves as a real, important strength. We
>have to approach portability and standardization in a way that enables
>Scheme's freedom of expression "culture."

Actually, this point has been raised before with respect to a
different language: Haskell. In "A History of Haskell: Being Lazy
with Class" [1], the authors state as follows (in the last paragraph
of section "3.4 Haskell has no formal semantics," in the second
column on page 9):

>Nevertheless, we always found it a little hard to admit that a language
>as principled as Haskell aspires to be has no formal definition.
>But that is the fact of the matter, and it is not without its advantages.
>In particular, the absence of a formal language definition
>does allow the language to evolve more easily, because the costs of
>producing fully formal specifications of any proposed change are
>heavy, and by themselves discourage changes.

We now see history about to repeat itself.

-- Benjamin L. Russell

[1] Hudak, Paul, Hughes, John, Peyton Jones, Simon, and Wadler,
Philip. "A History of Haskell: Being Lazy With Class." San Diego,
California: _The Third ACM SIGPLAN History of Programming Languages
Conference (HOPL-III)_ (2007): 12-1 - 12-55.
<http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/history.pdf>
--
Benjamin L. Russell / DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile: +011 81 80-3603-6725
"Furuike ya, kawazu tobikomu mizu no oto."
-- Matsuo Basho^

Peter Keller

Dec 17, 2009, 12:58:12 PM
Benjamin L. Russell <DekuDe...@yahoo.com> wrote:
> On Wed, 16 Dec 2009 18:52:20 -0600, Aaron W. Hsu <arc...@sacrideo.us>
> wrote:
>
>>Also, just as a comment, we should remember that Scheme's diversity is
>>not just a weakness, but also serves as a real, important strength. We
>>have to approach portability and standardization in a way that enables
>>Scheme's freedom of expression "culture."
>
> Actually, this point has been raised before with respect to a
> different language: Haskell. In "A History of Haskell: Being Lazy
> with Class" [1], the authors state as follows (in the last paragraph
> of section "3.4 Haskell has no formal semantics," in the second
> column on page 9):
>
>>Nevertheless, we always found it a little hard to admit that a language
>>as principled as Haskell aspires to be has no formal definition.
>>But that is the fact of the matter, and it is not without its advantages.
>>In particular, the absence of a formal language definition
>>does allow the language to evolve more easily, because the costs of
>>producing fully formal specifications of any proposed change are
>>heavy, and by themselves discourage changes.
>
> We now see history about to repeat itself.

My take on it is that I don't see different popular implementations
actually evolving in the manner people think Scheme evolves. Or, the ones
that do actually evolve (non-popular ones) are used by (in my perception)
a tiny fraction of the community (like dozens or fewer of individual
people) and don't follow the standard much anyway, by definition.

I sometimes wonder whether people see different implementations
implementing the same idea in different ways (say, structures, or hash
tables, or multiple different APIs to the same underlying library, like
OpenGL or POSIX) and mistake that for evolution. I believe the majority
of the variety that people speak of between implementations is actually
ephemeral.

The only *real* differences between implementations of a given standard
are optimization passes, internal representations, and how they organize
their libraries (think Chicken's eggs, and so on).

Real evolution--like Multilisp, Clojure, Bigloo's/Common Lisp's type
systems, or Arc--which actually affects the meaning or expression of
the language, goes ignored.

Thank you.

-pete


Benjamin L. Russell

Dec 17, 2009, 2:59:16 PM
On 17 Dec 2009 17:58:12 GMT, Peter Keller <psi...@merlin.cs.wisc.edu>
wrote:

>My take on it is that I don't see different popular implementations
>actually evolving in the manner people think Scheme evolves. Or, the ones
>that do actually evolve (non-popular ones) are used by (in my perception)
>a tiny fraction of the community (like dozens or less of individual
>people) and don't follow the standard much anyway, by definition.
>
>I sometimes wonder if people see different implementations implementing
>the same idea in different ways, suppose structures, or hash tables,
>or multiple different APIs to the same semantic library, like opengl
>or posix, etc mistake it for evolution. I believe the the majority of
>the variety that people speak of between implementations is actually
>ephemeral.
>
>The only *real* differences between implementations of a given standard
>are optimization passes, internal representations, and how they organize
>their libraries (think Chickens's eggs unlimited, etc, etc, etc).
>
>Real evolution, like multilisp, clojure, bigloo's/common lisp's type
>system, or arc--which actually affect the meaning or expression of
>the language, goes ignored.

The Clojure mailing list actually gets much more traffic than
comp.lang.scheme, and Clojure has received a lot of press lately.

What makes you think that the "real evolution" of Clojure is being
"ignored?"

-- Benjamin L. Russell

Peter Keller

Dec 17, 2009, 3:28:36 PM
Benjamin L. Russell <DekuDe...@yahoo.com> wrote:
> The Clojure mailing list actually gets much more traffic than
> comp.lang.scheme, and Clojure has received a lot of press lately.
>
> What makes you think that the "real evolution" of Clojure is being
> "ignored?"

When I say ignored, I specifically mean with respect to any RNRS
document, because it appears to me that since it isn't labeled "Scheme,"
the ideas embodied in it won't make it back into any kind of RNRS
standard: they are too far away from traditional Scheme constructs.

It seems to me that Clojure is sort of a marriage between Common Lisp
and Scheme. In my opinion, that's a good thing. If Clojure has one or
two very popular killer apps, RNRS will become irrelevant as Clojure
becomes a mainstream language much larger than any one community.

Isn't this what Schemers wanted all along, some subtle dialect of Scheme
to hit it big? Why not help it along?

-pete

Aaron W. Hsu

Dec 17, 2009, 8:18:09 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>When I say ignored, I specifically mean with respect to any RNRS document.
>Because it appears to me that since it isn't labeled "Scheme" the ideas
>embodied in it won't make it back to any kind of RNRS standard since
>they are too far away from traditional Scheme constructs.

The standards documents in Schemeland document the current common
practices among Scheme implementations in a straightforward way, or, at
least, they were supposed to.

>It seems to me that Clojure is sort of a marriage between Common Lisp
>and Scheme. In my opinion, that's a good thing. If Clojure has one or
>two very popular killer apps, RNRS will become irrelevant as Clojure
>becomes a mainstream language much larger than any one community.

I'd disagree with your assertion that Clojure is some kind of marriage
of Scheme and Common Lisp.

>Isn't this what Schemers wanted all along, some sutble dialect of Scheme
>to hit it big? Why not help it along?

What exactly do you want a Standards document to do?

Pascal Costanza

Dec 18, 2009, 4:51:35 AM
On 17/12/2009 21:28, Peter Keller wrote:
> Benjamin L. Russell<DekuDe...@yahoo.com> wrote:
>> The Clojure mailing list actually gets much more traffic than
>> comp.lang.scheme, and Clojure has received a lot of press lately.
>>
>> What makes you think that the "real evolution" of Clojure is being
>> "ignored?"
>
> When I say ignored, I specifically mean with respect to any RNRS document.
> Because it appears to me that since it isn't labeled "Scheme" the ideas
> embodied in it won't make it back to any kind of RNRS standard since
> they are too far away from traditional Scheme constructs.
>
> It seems to me that Clojure is sort of a marriage between Common Lisp
> and Scheme. In my opinion, that's a good thing. If Clojure has one or
> two very popular killer apps, RNRS will become irrelevant as Clojure
> becomes a mainstream language much larger than any one community.

Common Lisp and Scheme have been declared dead already numerous times
before. They will survive another dozen or so such declarations of
death.

> Isn't this what Schemers wanted all along, some sutble dialect of Scheme
> to hit it big? Why not help it along?

Clojure makes too many compromises to adapt itself to the Java
infrastructure. Which may or may not be a good idea, depending on where
you come from or what your goals are.

Common Lisp and Scheme have different goals, though. They are more
quests for doing the 'right thing' in terms of programming language
design. Adapting a language to the Java infrastructure is certainly not
'the right thing'; Java and its VM are obviously just too silly for that.


Pascal

--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Grant Rettke

Dec 18, 2009, 11:46:38 AM
On Dec 17, 7:18 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> What exactly do you want a Standards document to do?

A "lot" of folks wanted R6RS to be the driver for a "real world" (for
lack of a better word) Scheme.

Benjamin L. Russell

Dec 18, 2009, 8:27:14 PM
On Fri, 18 Dec 2009 10:51:35 +0100, Pascal Costanza <p...@p-cos.net>
wrote:

>Clojure makes too many compromises to adapt itself to the Java
>infrastructure.

Agreed. In particular, Java doesn't support first-class
continuations, and therefore, neither does Clojure. Some research has
been conducted, albeit unsuccessfully, in trying to combine
Scheme-style continuations with Java-style exceptions [1], but
according to the abstract, in the attempted system, the following
problem occurs:

>[W]hen a first-class continuation is captured, a continuation of Java
>must be also saved to heap. However, Java does not support this
>facility. In our proposal, when a continuation is captured, only a
>continuation of Scheme part is saved to heap and the continuation
>of Java part is left on the control stack of Java VM. When the
>continuation is called, whether the continuation of Java part is left on
>the stack of Java VM or not is checked, and if left, this call works as a
>traditional continuation call. If not, this works as a partial continuation
>call which has only the Scheme part.

For those whose main interest in Scheme is in first-class
continuations, lack of support for such can be a deciding factor in
not choosing Clojure.
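
For readers who have not used the feature, a minimal illustration of
what "first-class" buys here (standard Scheme, typed at a REPL):

    (define saved-k #f)

    (define (gen)
      (+ 1 (call-with-current-continuation
             (lambda (k)
               (set! saved-k k)   ; capture "the rest of the computation"
               0))))

    (gen)         ; => 1
    (saved-k 41)  ; later: re-enters the suspended (+ 1 _) and yields 42

Because saved-k has unlimited extent, it can be invoked long after gen
has returned, and as many times as you like; that is exactly what the
JVM's stack discipline makes awkward, and why the work cited above has
to fall back to partial continuations for the Java side.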

-- Benjamin L. Russell

[1] Ugawa, Tomoharu, Taiichi Yuasa, Tsuneyasu Komiya, and Masahiro
Yasugi. "Implementation of Continuations and Exceptions for a Scheme
System with Java Interface [in Japanese]." Tokyo, Japan: Information
Processing Society of Japan (IPSJ) 42(SIG_11(PRO_12)) (2001): 25-36.
<http://ci.nii.ac.jp/naid/110002726064/en>.

Benjamin L. Russell

Dec 18, 2009, 8:42:42 PM
On 17 Dec 2009 20:28:36 GMT, Peter Keller <psi...@merlin.cs.wisc.edu>
wrote:

>Isn't this what Schemers wanted all along, some subtle dialect of Scheme
>to hit it big? Why not help it along?

Clojure is not "some subtle dialect of Scheme." Clojure lacks support
for some crucial constructs in Scheme, such as first-class
continuations and tail recursion. No programming language can claim
to be a proper dialect of Scheme without such constructs. In general,
no programming language can claim to be a proper dialect of Scheme
without at least substantial compliance with some RnRS Scheme
specification (although not necessarily the latest one).

Clojure can claim to be a "Scheme-like language," but that is a far
cry from "some subtle dialect of Scheme." In order to qualify as
"some subtle dialect of Scheme," Clojure first needs to qualify as a
"dialect of Scheme" (which it is not), and not just as a "Scheme-like
language."

-- Benjamin L. Russell

Peter Keller

Dec 19, 2009, 12:47:28 AM
Benjamin L. Russell <DekuDe...@yahoo.com> wrote:

From the point of view of the type of people who frequent this group, you
are correct. From the point of view of people who know abstractly about
CL and Scheme, Clojure would be a dialect of Scheme simply due to it
being Lisp-1. Your average programmer *doesn't even understand* call/cc
or how to use it, and so while call/cc with unlimited extent is a defining
feature of Scheme, it isn't one for people just wandering by--because not
many other languages they would happen to know even support the concept!

And when I said Clojure is a marriage of Scheme and CL, I still think
it is. Clojure sort of picked and chose between different features of
both languages (while mixing in Java) and took what it wanted.

You can easily see the lineage from Clojure's own page on the differences
between it and CL and Scheme:

http://clojure.org/lisps

So, is Clojure an evolutionary language with Scheme as a parent, or is it not?

The answer to your question, which is most probably a fast and visceral
"the hell it is", is why I say whatever good things came out of Clojure
probably won't make it back into an RNRS document.

-pete

Peter Keller

Dec 19, 2009, 1:09:11 AM
Aaron W. Hsu <arc...@sacrideo.us> wrote:
> What exactly do you want a Standards document to do?

I want a standards document which makes it plain that the common things
that everyone wants to do (and does!) are in fact codified and standard. I
don't want freedom of choice in the banal. I want boring, well-trod,
and obvious behavior for obviously common actions.

I want a standards document to give me a reasonable belief that if I write
code conforming to it, in 20 years of a project under heavy development
by lots of people, it'll still be working with modern implementations
instead of having to run in a 15 year old dead implementation installed
on a virtual machine.

I want a standard FFI to talk to C and C++ with Java for extra credit.
If the modus operandi for "building vast libraries" in various
implementations is to immediately turn around and write wrappers to
C libraries, then MAKE IT STANDARD AND EASY TO DO!

-pete

Aaron W. Hsu

Dec 19, 2009, 3:37:49 AM
Grant Rettke <gre...@gmail.com> writes:

I readily admit that I still don't get what is so hard about writing
real world Scheme programs, or what was ever so hard about it, above and
beyond the difficulty of writing real world programs in general.

Aaron W. Hsu

Dec 19, 2009, 3:43:28 AM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>So, is Clojure and evolutionary language with Scheme as a parent, or is it not?

What does it take to be evolutionary?

In my mind, this means that it presents something new, useful, and above
and beyond the current status quo of the existing parent. In my limited
experience with Clojure, I don't see this. Not only do I lose many
features that are important to me (some of these are important only to
experienced Schemers, and some are important to the beginners, whether
they know it or not, but I submit that both are important
considerations, since no one should remain a beginner throughout their
experience with a language), but the supposed gains are fairly limited.
I gain a set of data structures that are already available to me in
Scheme, and a set of paradigms that I already use in Scheme. There are
some interesting things (transaction stuff, for example), but they're
not particularly compelling, since Clojure is still tied to too many
things that don't suit me, such as the JVM.

Let's not forget hygienic-by-default procedural macros; hygienic macros
are a truly great evolution that Scheme made, IMO, and represent real
innovation in the language. There are other such examples, but I don't
know if Clojure counts. Maybe in many more years, after it has caught up
with Scheme in other respects, it will.
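
To make "hygienic by default" concrete, here is the classic capture test
with a small syntax-rules macro (nothing implementation-specific):

    (define-syntax swap!
      (syntax-rules ()
        ((_ a b)
         (let ((tmp a))
           (set! a b)
           (set! b tmp)))))

    (let ((tmp 1) (x 2))
      (swap! tmp x)    ; the macro's internal tmp is renamed, so no capture
      (list tmp x))    ; => (2 1)

An unhygienic expansion would have confused the user's tmp with the
macro's and left the values unchanged, returning (1 2).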

Aaron W. Hsu

Dec 19, 2009, 3:49:32 AM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Aaron W. Hsu <arc...@sacrideo.us> wrote:
>> What exactly do you want a Standards document to do?

>I want a standards document which makes it plain that the common things
>that everyone wants to do (and does!) are in fact codified and standard. I
>don't want freedom of choice in the banal. I want boring, well-trod,
>and obvious behavior for obviously common actions.

Mundane and rudimentary interfaces and behaviors that everyone agrees on
should indeed be standardized; I think we agree here.

>I want a standards document to give me a reasonable belief that if I write
>code conforming to it, in 20 years of a project under heavy development
>by lots of people, it'll still be working with modern implementations
>instead of having to run in a 15 year old dead implementation installed
>on a virtual machine.

Have you ever tried to do this with any other system? Trying to run 15
year old C++ or C code is a nightmare. Many other languages haven't even
been around that long. And yet, I can run code that was written quite a
long time ago from some random Scheme implementation on my Scheme
implementation of choice with minimal adjustments. Scheme is remarkably
resistant to bit rot if you ask me, and it didn't need some
all-encompassing standard to do that.

>I want a standard FFI to talk to C and C++ with Java for extra credit.
>If the modus operandi for "building vast libraries" in various
>implementation is to immediately turn around and write wrappers to
>C libraries, then MAKE IT STANDARD AND EASY TO DO!

Some people seem to think that this is the way to go. I would disagree
with those people, but I agree that it would be nice to have an FFI.
Unfortunately, the state of the art FFI systems of Scheme
Implementations today simply don't identify enough consensus to make
this anywhere near an easy task. See, you expect a C and C++ interface,
but others expect a Java interface, and others want an interface that
doesn't deal with C, C++, or Java at all! In fact, why should I force
myself to use a C FFI? What if I have a custom library written in
Assembly? What then? This is a perfectly valid use case for a FFI
binding, and something that people do need to do. Whether you consider
it a more legitimate use of an FFI than wrappers around C code is up for
wild debate, but I don't see FFI as being an area ripe for
standardization at the moment. Maybe a small subset, yes, but that small
subset is so small that I wonder if it is worth it even then. Better to
just write properly abstracted code so that it's easy to plug in
different FFIs.

Pascal Costanza

Dec 19, 2009, 5:13:46 AM

Why does it matter what average programmers do and don't understand?
Should a language be worse just because its users are worse? That
doesn't seem to make a lot of sense...

> The answer to your question, which is most probably a fast and visceral
> "the hell it is", is why I say whatever good things came out of Clojure
> probably won't make it back into a RNRS document.

This is an abstract discussion. A more concrete discussion would be
this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"

So, what XYZs do you have in mind?

Peter Keller

Dec 19, 2009, 11:39:16 AM
Pascal Costanza <p...@p-cos.net> wrote:
> On 19/12/2009 06:47, Peter Keller wrote:
> Why does it matter what average programmers do and don't understand?
> Should a language be worse just because its users are worse? That
> doesn't seem to make a lot of sense...

I'm not saying we should get rid of call/cc. I'm saying it wouldn't be a
point of comparison to people idly looking at Scheme-like languages.

Also, I think there is a common viewpoint, which I personally don't
adhere to, that because things like (when ...) and whatnot _can be
written_ in Scheme, Scheme _shouldn't provide them_ in a standard, out of
"elegance". In reality, people simply want those definitions; they don't
want to have to write them, and they want them to work.
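
(For reference, the definition in question is only a few lines of
standard syntax-rules, roughly:

    (define-syntax when
      (syntax-rules ()
        ((_ test expr1 expr2 ...)
         (if test
             (begin expr1 expr2 ...)))))

which is precisely why people expect the standard to just hand it to
them rather than make every program re-derive it.)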

If this means that things like that end up in a standard library, fine
by me. In fact, this is what happened with R6RS, and I thought it was
a great idea. I genuinely hope the standard library section of R7RS is
preserved and expanded.

> This is an abstract discussion. A more concrete discussion would be
> this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"
>
> So, what XYZs do you have in mind?

Multimethods and metadata (on any object, instead of just the few kinds
of object that Clojure supports). Those are the two that I've actually wished for
in Scheme when writing it.

Once you have multimethods, you can probably get the sequence idea from
Common Lisp and associated functions, like (remove ...) that works on
vectors, strings, lists, hashes: (remove 'key hash) depending on the
arguments.
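
A rough sketch of the shape I mean, in R6RS-ish Scheme; the name remove*
and the hard-coded cond are only illustrative, since real multimethods
would dispatch extensibly rather than through a closed cond:

    (import (rnrs))

    (define (remove* x seq)
      (cond ((list? seq)   (filter (lambda (y) (not (equal? x y))) seq))
            ((vector? seq) (list->vector (remove* x (vector->list seq))))
            ((string? seq) (list->string (remove* x (string->list seq))))
            ((hashtable? seq)
             ;; removes the key x; mutates, unlike the cases above
             (hashtable-delete! seq x)
             seq)
            (else (assertion-violation 'remove* "unsupported sequence" seq))))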

-pete

Peter Keller

Dec 19, 2009, 12:17:31 PM
Aaron W. Hsu <arc...@sacrideo.us> wrote:
> Peter Keller <psi...@merlin.cs.wisc.edu> writes:
>
>>Aaron W. Hsu <arc...@sacrideo.us> wrote:
> Have you ever tried to do this with any other system? Trying to run 15
> year old C++ or C code is a nightmare. Many other lanugages haven't even
> been around that long. And yet, I can run code that was written quite a
> long time ago from some random Scheme implementation on my Scheme
> implementation of choice with minimal adjustments. Scheme is remarkably
> resistant to bit rot if you ask me, and it didn't need some all
> encompassing standard to do that.

Yes. For the last 10 years of my professional life, I have worked on a
project which is 25 years old in C and 15 years old in C++. The truth of
the matter is that the C code barely required any maintenance due to C
language changes (I think the only ones we worried about were the change
from K&R argument declarations to ANSI prototypes, and the 32 -> 64 bit
transition), while the C++ code required much more. It turns out the
reason the C++ code required more maintenance due to language issues is
that no one could write a conforming compiler in a reasonable amount of
time. Because of this, certain kinds of compiler-specific C++ got
written everywhere, and it took a while to excise it once compilers
became more correct. Before you jump on this, saying you don't want a
vast Scheme reference because the same problem will happen, I'll state
that there is probably one or two orders of magnitude difference between
the complexity of the C++ spec and the Scheme spec, and there always
will be!

My tongue-in-cheek response to your bitrot comment is: "Yeah, as long
as your Scheme code didn't try to communicate with the environment around it."

>>I want a standard FFI to talk to C and C++ with Java for extra credit.
>>If the modus operandi for "building vast libraries" in various
>>implementation is to immediately turn around and write wrappers to
>>C libraries, then MAKE IT STANDARD AND EASY TO DO!
>
> Some people seem to think that this is the way to go. I would disagree
> with those people, but I agree that it would be nice to have an FFI.
> Unfortunately, the state of the art FFI systems of Scheme
> Implementations today simply don't identify enough consensus to make
> this anywhere near an easy task.

The consensus is that a C FFI exists in almost all Scheme implementations.
I believe it is erroneous, in this context, to take consensus to mean
that the actual FFI API must be identical between implementations. The
functionality is common. I'd hazard a guess that any one implementation's
C FFI could be put into another implementation and it would work the same.

Well, at least for R6RS this could be true, since records are no longer
different between implementations and might now be used to represent
C structures.

> See, You expect a C and C++ interface,
> but others expect a Java interface, and others want an interface that
> doesn't deal with C, C++, or Java at all! In fact, why should I force
> myself to use a C FFI? What if I have a custom library written in
> Assembly? What then?

You are presenting a false choice of an FFI for everything, or none at all.

The fact is, there are thousands of hugely popular and well
used C libraries out there--many of which are wrapped into scheme
implementations. I'd be perfectly happy not providing an FFI to assembly
because you can write a piece of C code to talk to the assembly and
then write the FFI to that. And while people do have such libraries,
I'd boldly claim they are statistically insignificant.

C++ is harder since it requires a CLOS like thing in order to get the
"feel" of the API translation correct. I'd give that one up too, just
to have a common CFFI.

If the implementors agreed on a C FFI which cost each of them a little
bit of work but could be implemented by all, I predict a huge influx of
wrapped libraries, because people would know they can move their OpenGL
library from implementation A to implementation B without trouble. It
would also allow centralized repositories for such wrappers to arise,
and that's a good thing.

With the few R6RS implementations out right now, there is still a chance it
could be done!
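
To be concrete about the shape I'm imagining, something as small as the
following would be enough; every name below is invented for illustration
and corresponds to no existing SRFI or implementation API:

    ;; Purely hypothetical portable C FFI layer -- illustrative names only.
    (define-foreign-library libm
      (unix    "libm.so.6")
      (windows "msvcrt.dll"))

    (define-foreign-procedure c-cos
      (library libm)
      (c-name  "cos")
      (args    (double))
      (returns double))

    (c-cos 0.0)   ; => 1.0

Each implementation would lower these declarations onto its native FFI;
the wrapper author would never have to care which one.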

> Better to just write properly abstracted code so that it's easy to plug in
> different FFIs.

This may shock you, but I disagree on this point. This, in practice, means
extra code which has to be maintained/debugged and can go wrong. I've
had to deal with code like this, and in my opinion it was complicated
and it sucked. And, someone else could just write a different abstraction
over another library, and that would be complicated and suck as well--but
more importantly, it would be *different*, which means harder to maintain
or debug.

IMHO, the scheme community needs to stop applying "variety is strength"
to every aspect of Scheme's problem space. It just doesn't make sense in
some areas and leads to unnecessary functionality duplication.

Thank you.

-pete

arc

Dec 19, 2009, 10:00:24 PM
Peter,

This is sounding like a classic case of schemer malaise to me.
'Scheme is/was great in many respects, but is too balakanized | hard
to write (real-world | portable) code for | backwards | concerned with
(elegance | theoretical niceities) | not keeping up with cool features
Y of language(s) X'. That's a feeling a lot of people have, anyone
who's followed discussions in any Scheme forum for any length of time
has seen it N times. And it is very much a feeling, not in itself an
argument. Often its emotive basis is fairly clear from the way it's
expressed: the exact problems identified vary from time to time (for
example, you've varied from wanting radical new features that change
the meaning of the language to wanting code that you can write and
still run in 15 years, which are totally not the same thing and I
would think are quite opposed to one another), and proposed solutions
are often a bit overly optimistic, when they aren't totally
unrealistic. I'm not wanting to invalidate your feeling; I often feel
the same way myself, in fact. But it's important to realise that
while of course it's based on certain facts about Scheme and the
Scheme community, it's not *determined* by those facts. Other, more
optimistic reactions, equally fact-based, are also possible: witness
Aaron Hsu's reaction:

On Dec 19, 9:37 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> Grant Rettke <gret...@gmail.com> writes:


> >On Dec 17, 7:18 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> >> What exactly do you want a Standards document to do?

> >A "lot" of folks wanted R6RS to be the driver for a "real world" (for
> >lack of a better word) Scheme.
>
> I readily admit that I still don't get what is so hard about writing
> real world Scheme programs, or what was ever so hard about it, above and
> beyond the difficulty of writing real world programs in general.
>

> Aaron W. Hsu
> --
> A professor is one who talks in someone else's sleep.

Once in the grip of schemer's malaise, of course one wants to propose
solutions (if one isn't given over to despair entirely), and the most
common solution proposed is a new standard, that will pull Scheme out
of the swamps, fix all the problems, and (re?)instate Scheme to its
rightful place as Prince amongst Languages, or at least make the
situation a lot less depressing.

However, standards documents are limited in the extent to which they
can do anything like this, especially given the Scheme community as it
is.

There's two attitudes one can have towards language standards. One is
the 'radical' attitude, according to which standards can and should be
bold in adopting new technologies and new ways of doing things, and
moreover do things like tow the language community (kicking and
schreaming, if necessary) into the new millenium. The other is the
'conservative' attitude, where the role of a standard is, well,
conservative: they should specify common ways of doing things which
have already been tested and are already being done, just in non-
standard ways. SRFI-9 'Defining Record Types' is a good example of a
conservative specification: records or structures were already present
in many Schemes, and SRFI-9 itself is based on Scheme48's system.
It's also, I believe, one of the most successful SRFIs. I think
you'll find that radical SRFIs are typically not very successful at
all.
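
(For reference, the whole of an SRFI-9 record definition in use looks
like this:

    (define-record-type point
      (make-point x y)           ; constructor
      point?                     ; predicate
      (x point-x)                ; accessor
      (y point-y set-point-y!))  ; accessor and modifier

    (define p (make-point 1 2))
    (point-x p)                  ; => 1
    (set-point-y! p 5)

Nothing here was invented by the SRFI; it simply gave a portable
spelling to what implementations already did.)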

The radical view is tempting, especially if one is looking for
remedies, but it's very questionable as to how realistic it is. I
think R6RS has shown something of the limits of radical proposals.
There wasn't that much that "actually affect[ed] the meaning or
expression of the language" but what it did have in this regard
contributed (along with other things, most of which also are in tune
with the radical view of standards) to a massive division in the
Scheme community.

Radical standards are, I believe, simply never going to work.
Especially not with this community, but I suspect they're risky
always. The appropriate route from feature X of language Y to a
Scheme standard is for it to be implemented in a Scheme implementation
or two first, and only then standardized. If no-one's been interested
enough to implement it in their implementation, then there's every
chance that it's not of sufficient interest to the Scheme community to
be bothered with, and it's certainly premature for standardization.

This may make it look as though standards suffer from the 'not
invented here' syndrome, as they're never standardizing anything that
isn't in some Scheme already, but standards are not where you should
be doing your innovating.

Conservative standards therefore are in my view the only appropriate
standards, and are in any case the only standards that are going to
work very well. And they're not going to (directly, at least) assuage
the envy of feature X of language Y.


But fortunately they can help with your other worries. Of the concrete
things you've said you want (multimethods, FFI, and metadata) two of
them are excellent candidates for standardization: only metadata (as
far as I know) has not been already implemented by a Scheme. MIT
Scheme and Scheme48 (at least) both have multimethods (they call them
'generic procedures') and generic procedures are also a common feature
in Scheme OO systems like TinyClos, so really they're quite widespread
already. Standardizing generic procedures would therefore not so much
be noticing that Clojure has them and getting them into an RnRS, but
rather noticing that Schemes are already doing them except a bit
differently, and putting a standard way of doing them into an RnRS (or
an SRFI). FFIs are similarly widespread.

And I think conservative standards would be enough to address your
general worry about being able to write code that can still be run in
15 years time.

As far as metadata goes, assuming no-one's implemented it in a Scheme
already, what you ought to do is lobby your favourite scheme
implementors to implement it. Or do it yourself. Note that if no-one
wants to implement it, it remains a feature on your wish-list but not
on anyone's to-do list, and obviously it's simply a fantasy to suppose
that standards are going to specify your personal wish-list items.

On Dec 20, 5:39 am, Peter Keller <psil...@merlin.cs.wisc.edu> wrote:
> Pascal Costanza <p...@p-cos.net> wrote:


> > This is an abstract discussion. A more concrete discussion would be
> > this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"
>
> > So, what XYZs do you have in mind?
>
> Multimethods and metadata (on any object, instead of just the couple
> that Clojure performs). Those are the two that I've actually wished for
> in Scheme when writing it.
>
> Once you have multimethods, you can probably get the sequence idea from
> Common Lisp and associated functions, like (remove ...) that works on
> vectors, strings, lists, hashes: (remove 'key hash) depending on the
> arguments.
>
> -pete


On the other hand, it's also worth cultivating a more positive
attitude to what can already be done in Scheme. To this end, it's
worth remembering that several notable schemers have stated that they
maintain thousands of lines of portable 'real-world' Scheme code, and
they don't feel a huge need for anything over and above R5RS, or
frequently even R4RS. I believe Aubrey Jaffer, Jeffrey Mark Siskind,
and Gerald Sussman have all made claims of this sort. On the other
hand, several implementations also have really quite rich libraries.
The problems that lead to schemer's malaise are not going to go away
in a hurry - even to the extent they can be addressed by a standard,
that process will take years. But this needn't affect your life so
very negatively: you can write useful code today nevertheless.

So stop worrying, and learn to love the bomb :]


-A.

Peter Keller

Dec 20, 2009, 1:00:24 AM
arc <a...@stuff.gen.nz> wrote:
> So stop worrying, and learn to love the bomb :]

Maybe you're right.

When I really consider what it is that is upsetting me, it really comes
down to massive functionality duplication between different schemes
simply because some people wanted to do the same thing a different way
under the guise of different is always better. In my opinion, it is all
those valuable man years wasted that causes my malaise--I wonder what could
have been....

Deep down, I'm trying to deny that this isn't going to change any time
soon.

I should just try and accept it.

Later,
-pete

Pascal Costanza

Dec 20, 2009, 6:44:26 AM
On 19/12/2009 17:39, Peter Keller wrote:
> Pascal Costanza<p...@p-cos.net> wrote:
>> This is an abstract discussion. A more concrete discussion would be
>> this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"
>>
>> So, what XYZs do you have in mind?
>
> Multimethods and metadata (on any object, instead of just the couple
> that Clojure performs). Those are the two that I've actually wished for
> in Scheme when writing it.

That's much more concrete. Interestingly enough, both could be realized
by adding weak pointers and hash tables (although for efficiency,
generic functions / multimethods would probably need more).

What is "metadata on any object" used for?

Aaron W. Hsu

Dec 20, 2009, 9:18:53 AM
Pascal Costanza <p...@p-cos.net> writes:

>On 19/12/2009 17:39, Peter Keller wrote:
>> Pascal Costanza<p...@p-cos.net> wrote:
>>> This is an abstract discussion. A more concrete discussion would be
>>> this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"
>>>
>>> So, what XYZs do you have in mind?
>>
>> Multimethods and metadata (on any object, instead of just the couple
>> that Clojure performs). Those are the two that I've actually wished for
>> in Scheme when writing it.

>That's much more concrete. Interestingly enough, both could be realized
>by adding weak pointers and hash tables (although for efficiency,
>generic functions / multimethods would probably need more).

Fortunately, weak pointers and hash tables are very common features. :-)
I would be surprised to find a general purpose Scheme implementation
that did not already have them.

>What is "metadata on any object" used for?

I imagine something like symbol property lists for any object.
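
A minimal sketch in plain R6RS (ideally the table would hold its keys
weakly, so that carrying metadata doesn't keep objects alive; R6RS
itself only gives you the non-weak make-eq-hashtable):

    (import (rnrs))

    ;; One global eq?-keyed side table from object to an alist of
    ;; properties. Being eq?-based, it is only reliable for heap-allocated
    ;; objects (pairs, vectors, strings, records), not numbers or chars.
    (define *metadata* (make-eq-hashtable))

    (define (object-property-set! obj key val)
      (hashtable-update! *metadata* obj
        (lambda (alist) (cons (cons key val) alist))
        '()))

    (define (object-property obj key)
      (cond ((assq key (hashtable-ref *metadata* obj '())) => cdr)
            (else #f)))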

Grant Rettke

Dec 20, 2009, 11:19:39 AM
On Dec 19, 2:37 am, Aaron W. Hsu <arcf...@sacrideo.us> wrote:

> Grant Rettke <gret...@gmail.com> writes:
> >On Dec 17, 7:18 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> >> What exactly do you want a Standards document to do?
> >A "lot" of folks wanted R6RS to be the driver for a "real world" (for
> >lack of a better word) Scheme.
>
> I readily admit that I still don't get what is so hard about writing
> real world Scheme programs, or what was ever so hard about it, above and
> beyond the difficulty of writing real world programs in general.

When I mow my parents' acre of land I like to use a lawn-mower rather
than a nail-clipper, but that is just my personal preference ;).

"Real world" just means "batteries included". People want all of the
good stuff built-in. You are making a good case to just pick one
distribution and stick with it.

Grant Rettke

Dec 20, 2009, 11:20:46 AM
On Dec 19, 2:43 am, Aaron W. Hsu <arcf...@sacrideo.us> wrote:

> Peter Keller <psil...@merlin.cs.wisc.edu> writes:
> >So, is Clojure and evolutionary language with Scheme as a parent, or is it not?
>
> What does it take to be evolutionary?
>
> In my mind, this means that it presents something new, useful, and above
> and beyond the current status quo of the existing parent.

Your description treads the line between evolutionary and
revolutionary. No fair.

Grant Rettke

Dec 20, 2009, 11:22:08 AM
On Dec 19, 2:49 am, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> Peter Keller <psil...@merlin.cs.wisc.edu> writes:
> >I want a standard FFI to talk to C and C++ with Java for extra credit.
> >If the modus operandi for "building vast libraries" in various
> >implementation is to immediately turn around and write wrappers to
> >C libraries, then MAKE IT STANDARD AND EASY TO DO!
>
> Some people seem to think that this is the way to go. I would disagree
> with those people, but I agree that it would be nice to have an FFI.
> Unfortunately, the state of the art FFI systems of Scheme
> Implementations today simply don't identify enough consensus to make
> this anywhere near an easy task.

What? How many implementations need to implement a C FFI before it is
commonplace? But you said consensus. That is revealing.

Standards documents are tailored to implementers, not users.

Benjamin L. Russell

Dec 20, 2009, 1:31:16 PM
On Sat, 19 Dec 2009 19:00:24 -0800 (PST), arc <a...@stuff.gen.nz>
wrote:

>Peter,
>
>This is sounding like a classic case of schemer malaise to me.

It is interesting that you should use the term "schemer malaise." I
think that you are actually describing a specific instance of a more
general malaise, which should be more properly referred to as "bipolar
lisp-dialect programmer malaise" (my term, based on the term "bipolar
lisp programmer" by Mark Tarver [1]). In particular, your description
reminds me of the one in the article "The Bipolar Lisp Programmer" [1]
(see http://www.lambdassociates.org/blog/bipolar.htm?repost) (cached
at
http://74.125.153.132/search?q=cache:NcGBpP8WxmwJ:www.lambdassociates.org/blog/bipolar.htm%3Frepost+Qi+%2BLIsp+%2Bbipolar&cd=4&hl=en&ct=clnk&gl=jp&client=firefox-a).

Although the article was originally written to describe the "Lisp
character" (Tarver's term), the points raised seem common in at least
some ways to the "scheme malaise" as well. This point seems
especially true regarding the aspect of depression that often comes
with using a language which is in many ways at the pinnacle of
elegance, while continually carrying a dilemma between elegance and
usefulness in standardization.

There is probably such a thing as a "Scheme character," which is
almost identical to the following "Lisp character" described by
Tarver:

>Generally what we're talking about here is a student of outstanding brilliance. Someone who is used to acing most of his assignments; of doing things at the last minute but still doing pretty well at them. At some level he doesn't take the whole shebang all that seriously; because, when you get down to it, a lot of the rules at school are pretty damned stupid. In fact a lot of the things in our world don't make a lot of sense, if you really look at them with a fresh mind.
>
>So we have two aspects to this guy; intellectual acuteness and not taking things seriously. The not taking things seriously goes with finding it all pretty easy and a bit dull. But also it goes with realising that a lot of human activity is really pretty pointless, and when you realise that and internalise it then you become cynical and also a bit sad - because you yourself are caught up in this machine and you have to play along if you want to get on. Teenagers are really good at spotting this kind of phony nonsense. Its also the seed of an illness; a melancholia that can deepen in later life into full blown depression.
>
>Another feature about this guy is his low threshold of boredom. He'll pick up on a task and work frantically at it, accomplishing wonders in a short time and then get bored and drop it before its properly finished. He'll do nothing but strum his guitar and lie around in bed for several days after. That's also part of the pattern too; periods of frenetic activity followed by periods of melancholia, withdrawal and inactivity. This is a bipolar personality.

When I read his description, I felt that I was reading my own
description in many ways.

Since the above site is temporarily offline because of Internet
connection-related difficulties pending Mark Tarver's return to
England from India next week [2], please refer to the above-mentioned
cached version of the page meanwhile.

-- Benjamin L. Russell

[1] Tarver, Mark. "The Bipolar Lisp Programmer." Online posting. 2007.
20 Dec. 2009.
<http://www.lambdassociates.org/blog/bipolar.htm?repost>.

[2] Tarver, Mark. "resolving support and internet connection." Online
posting. <news:gmane.lisp.qi>. 18 Nov. 2009. 20 Dec. 2009. Also
available at
<http://groups.google.co.uk/group/qilang/browse_thread/thread/457ba940cf90edcc#>.

Peter Keller

Dec 20, 2009, 1:53:11 PM
Aaron W. Hsu <arc...@sacrideo.us> wrote:
> Pascal Costanza <p...@p-cos.net> writes:
>>What is "metadata on any object" used for?
>
> I imagine something like symbol property lists for any object.

The specific cases I found are these:

1. Implementation of a scheme compiler front end:

There are two ways to do this: the easy Scheme way, as in SICP and Lisp
in Small Pieces, where you just walk the evaluable form you get from
(read); and the hard way, where one writes a full lexer and parser using
tools like lex and yacc. I call the first method "implicit" and the
second method "explicit". The difference between implicit and explicit
implementations is eye-popping in complexity and code size. I've written
very far into an explicit front end of a Scheme compiler, and that means
the entire compiler is also explicit. I just don't have an easy
representation for anything, because I have to carry along all the vast
annotation data I want.

Implicit compilers are easy to bootstrap, but god forbid you have a
syntax error or want to provide an error message about a particular
line in a file somewhere. (read) throws away a lot of raw information
about what it found where. That's annoying. If objects could somehow have
their lexical analysis data associated with them, in addition to their
parsing data (so expression parenthetical boundaries can be determined
out of the stream that produced it by inspecting a token inside of the
expression), then maybe compiler front ends could stay implicit and easy
to write. It would also be useful for things like sending forms across
sockets to another Scheme process to (read).

2. Using scheme as a config language. (Kind of a specialization of 1)

This is one of the "bread and butter" things about scheme, it is easily
embeddable. However, a few thousand line scheme config file you expect
to (read) and muck about in leaves me stuck with how do I tell the user
they made a semantic error on line X or they made a syntax error on line
Y? After I (read) it, that information is long gone. My only recourse is
to explicitly parse the file, and that's dumb when something as powerful
as (read) exists.

I guess it comes down to this: Scheme is structurally reflective only,
not also orthographically reflective.
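
What I keep wishing for is something like the sketch below. Here
port-line and port-column are assumptions, standing in for whatever
position tracking an implementation's reader exposes; the hashtable part
is plain R6RS:

    (import (rnrs))

    (define source-locations (make-eq-hashtable))

    (define (read-annotated port)
      ;; port-line / port-column are NOT standard procedures; they are
      ;; placeholders for implementation-specific position accessors.
      (let ((line (port-line port))
            (col  (port-column port)))
        (let ((datum (read port)))
          ;; eq?-keyed, so only freshly allocated compound data (pairs,
          ;; vectors, strings) can carry a location reliably.
          (when (or (pair? datum) (vector? datum) (string? datum))
            (hashtable-set! source-locations datum (cons line col)))
          datum)))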

Thank you.

-pete

Benjamin L. Russell

Dec 20, 2009, 1:55:46 PM
On Sun, 20 Dec 2009 08:19:39 -0800 (PST), Grant Rettke
<gre...@gmail.com> wrote:

Have you been studying Haskell [1]?

Perhaps what is needed is "Scheme: Batteries Included."

-- Benjamin L. Russell

[1] "Haskell: Batteries Included." 2009. Haskell.org. 20 Dec. 2009.
<http://hackage.haskell.org/platform/contents.html>.

Peter Keller

Dec 20, 2009, 2:05:47 PM
Benjamin L. Russell <DekuDe...@yahoo.com> wrote:
> On Sat, 19 Dec 2009 19:00:24 -0800 (PST), arc <a...@stuff.gen.nz>
> wrote:
>
>>This is sounding like a classic case of schemer malaise to me.
>
> http://74.125.153.132/search?q=cache:NcGBpP8WxmwJ:www.lambdassociates.org/blog/bipolar.htm%3Frepost+Qi+%2BLIsp+%2Bbipolar&cd=4&hl=en&ct=clnk&gl=jp&client=firefox-a).

> When I read his description, I felt that I was reading my own
> description in many ways.

A very interesting article. In some ways damning, because I really do
many of those things, in other ways, uplifting, because I know it'll
pass eventually.

Later,
-pete

Aaron W. Hsu

Dec 20, 2009, 3:52:16 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Also, I think there is a common viewpoint, which I personally don't adhere to,
>that because things like (when ...) and whatnot _can be written_ in Scheme,
>Scheme _shouldn't provide it_ in a standard out of "elegance". In reality,
>people simply want those definitions, they don't want to have to write them,
>and they want them to be working.

I don't really think this is accurate, because you can see many of these
definitions in the R5RS standard, and even more in the R6RS standard.

>If this means that things like that end up in a standard library, fine
>by me. In fact, this is what happened with R6RS, and I thought it was
>a great idea. I genuinely hope the standard library section of R7RS is
>preserved and expanded.

The R6RS did have a great goal of trying to take libraries that should
be portable and making them so, but it didn't quite do so because it
broke de facto standards that were already in place outside of the
standard, and introduced new things that weren't standard at all. Not
all of R6RS is bad though, and in fact, it's a quite usable standard. We
can learn from this and improve it, which is what I hope happens in the
next standard.

>> This is an abstract discussion. A more concrete discussion would be
>> this: "Clojure does XYZ, and it's goood. How do we get XYZ into an RNRS?"
>>
>> So, what XYZs do you have in mind?

>Multimethods and metadata (on any object, instead of just the couple
>that Clojure performs). Those are the two that I've actually wished for
>in Scheme when writing it.

Scheme has always had property lists on symbols, but I do believe these
features are available in many Schemes, and there might even be a
portable interface for this stuff.

>Once you have multimethods, you can probably get the sequence idea from
>Common Lisp and associated functions, like (remove ...) that works on
>vectors, strings, lists, hashes: (remove 'key hash) depending on the
>arguments.

Just out of curiosity, do you have a specific use case where this is a
common solution and the right solution to a given task? In other words,
where do we see this sort of solution in active use in Scheme code that
is in the wild? These are obviously possible and have been implemented
in some sense in Schemes, so the question is how common are they and are
the use cases general enough to warrant standardizing them?

Aaron W. Hsu

Dec 20, 2009, 4:31:20 PM
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Before you jump on this stating you
>don't want a vast scheme reference cause the same problem will happen,
>I'll state there is probably one or two orders of magnitude difference
>between the complexity of the C++ spec and the Scheme spec and there
>always will be!

I am not actually opposed to a vast Scheme reference that is properly
done and that doesn't require everyone to provide the same features, but
I also believe in a highly conservative approach. I believe you will
discover that there is no real bound on the complexity of the Scheme
specification other than the bound enforced by the community and editors, and
thus, if we ignore this issue thinking that Scheme will never be like
that, we may one day find that it *is* complex to a fault because it was
never a priority to avoid this. In other words, regardless of whether we
feel secure in a Scheme standard's direction, if that direction is
important, it should be explicitly considered and followed, lest it
cease to be the direction of importance.

>My tounge in cheek response to your bitrot comment is: "Yeah, as long
>as your scheme code didn't try to communicate to the environment around it."

I can tell you that I've used non-trivial Scheme code that interacts
with its environment, and C++/C code from that same era. Properly
written C code and Scheme code are about equal in how easy it is to make
them run on today's Scheme implementations, with Scheme having a slight
advantage for me because the fixes are simpler for me. C++ code from
that timeframe I have still not been able to make work, because it was
too much code to change.

>The consensus is that a CFFI exists in almost all scheme implementations. I
>believe it to be erroneous, in this context, that consensus means the
>actual ffi api is identical between implementations. The functionality is
>common. I'd hazard a guess that any one implementation's CFFI could be
>put into another implementation and it would work the same.

Unfortunately, no. Let's limit our programming questions to native code
compilers, and for the sake of interest, let's limit this to Larceny,
PLT Scheme, and Chez Scheme. Just in these three, we see three very
different FFIs. Can they all create interfaces to whatever C library you
want? Yes. Can they all create interfaces to any dynamic shared object
that is compiled? Yes. Can you make trivial transformations of one FFI
form to the other? Not really.

All of them more or less make it possible for you to get access to a
foreign procedure, but some offer more than this, and others combine
features together. One big difference is the way they handle foreign
types. PLT Scheme's FFI (there are four different FFIs in PLT from what
I can see) focuses heavily on types, where almost everything is an
object referenced and "cast" into some type. There are all sorts of
string types, and enumerations, and the like. Larceny and Chez share a
more syntactically common interface, but even there they have different
types and different interfaces.

Yes, PLT Scheme technically doesn't actually implement anything special
in its interface that can't be duplicated in the others, but the way to
duplicate the functionality is not obvious. For example, when I wrote my
foreign type library to help me do some of the things that PLT provides
in its FFI, everyone with whom I spoke was very surprised by the library,
since usually they would just write a C stub. Larceny's manual actually
recommends writing C stubs in some cases, as opposed to trying to do it
all in Scheme.

Put another way, you aren't going to find any syntax-rules macro that
will let you translate a generic FFI interface into the PLT Scheme,
Larceny, and Chez FFIs. It's much more work than that.
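To make the difference concrete, here is roughly what binding one C
function (getpid) looks like in two of these systems. Treat the library
name and module path as assumptions, since they vary by platform and
version, and Chez's foreign-procedure needs the compiler, so this sketch
won't run in Petite:

    ;; Chez Scheme (sketch)
    (load-shared-object "libc.so.6")   ;; platform-specific library name
    (define getpid (foreign-procedure "getpid" () int))
    (getpid)

    ;; PLT Scheme / Racket (sketch): the FFI lives in a module
    ;; (scheme/foreign in that era, ffi/unsafe today) and works with
    ;; first-class C type descriptors:
    ;; (require ffi/unsafe)
    ;; (define getpid (get-ffi-obj "getpid" (ffi-lib #f) (_fun -> _int)))
    ;; (getpid)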

Then of course you have the C interface that is exposed to C code that
wants to interface with Scheme. If you eliminate this, then you also
make it more difficult for old code to be migrated. If you allow it, you
are entering terribly implementation specific ground that is likely to
cause lots of infighting if anyone even cares enough to try to make it
work. Moreover, if you decide not to standardize this, then you still have
to have implementation-specific stub files unless you make a very
complex Scheme FFI.

This doesn't even take into consideration the Java implementations.

I would like to see a standard FFI, but only after a SRFI and a common
use pattern have been well established. That's the first step. No one
seems to have stepped up to try to provide a portable FFI library, and
if they haven't done even this, why should we even consider it for
standardization? Let's put something out in the wild and let it be used
first. If it is taken up, then good!

Actually, I have done something like this, or started to do so at least.
I have a set of basic Foreign Type abstractions that are easily ported
to other Scheme implementations. It has just been released though, so
the community naturally hasn't had any time to really play with it. I
don't expect many will.

>Well, at least for R6RS this could be true, since records aren't different
>between different implementations and might now be used to represent
>C structures.

Um, no. You can't have records representing C structures because they
are quite different.

>The fact is, there are thousands of hugely popular and well
>used C libraries out there--many of which are wrapped into scheme
>implementations. I'd be perfectly happy not providing an FFI to assembly
>because you can write a piece of C code to talk to the assembly and
>then write the FFI to that. And while people do have such libraries,
>I'd boldly claim they are statistically insignificant.

You'd be perfectly happy with this, but the Scheme standards are
designed to present general, well designed, and standard features.
Unless the FFI is standardized in such a way that it is easy to extend
it, the only option is to later add a bunch of different FFIs for every
language. This isn't a good standardization practice.

The misconception here, I think, is that everyone needs a standard
before we can have a portable FFI. That's not true. If we want a
standard FFI, we should start by writing a portable FFI wrapper that
provides a common interface among all the Scheme implementations. Let
people use that, and they don't *need* a standards document, because
there is a de facto standard library already out there. I want to see
that first.

>With the few R6RS implementations out right now, there is still a chance it
>could be done!

Then write the portable R6RS library that interfaces with all the
systems, and after that we can talk standardization.

>> Better to just write properly abstracted code so that it's easy to plug in
>> different FFIs.

>This may shock you, but I disagree on this point. This, in practice, means
>extra code which has to be maintained/debugged and can go wrong. I've
>had to deal with code like this, and in my opinion it was complicated
>and it sucked. And, someone else could just write a different abstraction
>over another library, and that would be complicated and suck as well--but
>more importantly, it would be *different*, which means harder to maintain
>or debug.

>IMHO, the scheme community needs to stop applying "variety is strength"
>to every aspect of Scheme's problem space. It just doesn't make sense in
>some areas and leads to unnecessary functionality duplication.

What I am suggesting is that people need to stop looking at a standards
document as the only saving grace and feature set in a language. Other
languages don't do this when they want to be pragmatic, and Schemers
certainly should be above this. We don't need a standard to get our work
done. We need code. We need people to hack. Someone needs to write a
portable FFI framework (I think someone might have done this already,
but I haven't seen it). If this is around, then we can see if people use
it.

Aaron W. Hsu

unread,
Dec 20, 2009, 4:33:12 PM12/20/09
to
Grant Rettke <gre...@gmail.com> writes:

>"Real world" just means "batteries included". People want all of the
>good stuff built-in. You are making a good case to just pick one
>distribution and stick with it.

In the real world, when you want to get real work done on a real
project, that's what you do. Naturally, you write so that moving to
another Scheme implementation if necessary won't be a complete pain, and
most of your code will port easily enough, but you still use a single
implementation. Show me anyone who doesn't do this in the real world. If
there are any at all, they are a very rare breed.

Aaron W. Hsu

unread,
Dec 20, 2009, 4:33:54 PM12/20/09
to
Grant Rettke <gre...@gmail.com> writes:

>On Dec 19, 2:43 am, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
>> Peter Keller <psil...@merlin.cs.wisc.edu> writes:

>> >So, is Clojure an evolutionary language with Scheme as a parent, or is it not?
>>
>> What does it take to be evolutionary?
>>
>> In my mind, this means that it presents something new, useful, and above
>> and beyond the current status quo of the existing parent.

>Your description treads the line between evolutionary and
>revolutionary. No fair.

I vote incremental improvement over radical change.

Aaron W. Hsu

unread,
Dec 20, 2009, 4:36:38 PM12/20/09
to
Grant Rettke <gre...@gmail.com> writes:

>Wat. How many implementations need to implement a C FFI before it is
>commonplace? But you said consensus. That is revealing.

The question is how many implementations and how much code needs to
exist that has a common set of functionality provided by a mostly common
interface for it to be worth standardizing.

>Standards documents are tailored to implementers; not users.

I don't see any surprise here. Users shouldn't be restricted to just
the standard libraries. They use other libraries, other code, and they
should certainly be using any features they feel like that are
non-standard, if it makes sense and is done right.

Standards help provide a common base, but it's just a base, however big.

Aaron W. Hsu

unread,
Dec 20, 2009, 4:51:05 PM12/20/09
to
Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>Implicit compilers are easy to bootstrap, but god forbid you have a
>syntax error or want to provide an error message about something about
>a line in a file somewhere. (read) throws away a lot of raw information
>about what it found where. That's annoying. If objects could somehow have
>their lexical analysis data associated with them, in addition to their
>parsing data (so expression parenthetical boundaries can be determined
>out of the stream that produced it by inspecting a token inside of the
>expression), then maybe compiler front ends could stay implicit and easy
>to write. It would also be useful for things like sending forms across
>sockets to other scheme process to (read).

This is easily achieved with something like 'read-token', records, and
reader support for records. See Chez Scheme's read-token [1] for an
example, and chances are your Scheme implementation already
has an implementation of serializable records. That's the normal way to
do things, and it gets you what you need.
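As a rough sketch of what I mean, using Chez Scheme's read-token (which,
per its documentation, returns four values per token: a type, a value,
and the token's starting and ending positions in the port); other Schemes
would need their own lexer:

    (define (tokens-with-positions port)
      (let loop ((acc '()))
        (let-values (((type value start end) (read-token port)))
          (if (eq? type 'eof)
              (reverse acc)
              (loop (cons (list type value start end) acc))))))

    ;; (tokens-with-positions (open-input-string "(define x 42)"))
    ;; Each element carries enough positional information to report
    ;; "syntax error near character N" style messages.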

>2. Using scheme as a config language. (Kind of a specialization of 1)

>This is one of the "bread and butter" things about scheme, it is easily
>embeddable. However, a few thousand line scheme config file you expect
>to (read) and muck about in leaves me stuck with how do I tell the user
>they made a semantic error on line X or they made a syntax error on line
>Y? After I (read) it, that information is long gone. My only recourse is
>to explicitly parse the file, and that's dumb when something as powerful
>as (read) exists.

Either you are loading Scheme code, or you are parsing your own
language that is sort of like Scheme. If you want to do the parsing
yourself, then see above. However, it's much easier to create a sandbox
library in which to evaluate the incoming code; you then get the full
error handling and reporting of your Scheme system with very little pain.
Why parse when you can have the system do all that for you?
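A minimal sketch of that approach with R6RS eval and guard; the choice of
(rnrs base) as the environment granted to the configuration code is an
arbitrary assumption here:

    (import (rnrs) (rnrs eval))

    (define config-environment (environment '(rnrs base)))

    ;; Evaluate one configuration form, reporting any condition it raises.
    (define (eval-config-form form)
      (guard (e ((message-condition? e)
                 (display "error in configuration: ")
                 (display (condition-message e))
                 (newline)
                 #f)
                (else (display "error in configuration") (newline) #f))
        (eval form config-environment)))

    ;; (eval-config-form '(+ 1 2))    => 3
    ;; (eval-config-form '(car '()))  prints a message and returns #f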

Andrew Reilly

unread,
Dec 20, 2009, 4:55:30 PM12/20/09
to
On Sun, 20 Dec 2009 15:33:54 -0600, Aaron W. Hsu wrote:

> I vote incremental improvement over radical change.

The grown-up version of the student-radical chant:

"What do we want? Incremental change.
When do we want it? In due course."

You know it's the right answer... ;-)

Cheers,

--
Andrew

Aaron W. Hsu

unread,
Dec 21, 2009, 12:05:02 AM12/21/09
to
Aaron W. Hsu <arc...@sacrideo.us> writes:

>Peter Keller <psi...@merlin.cs.wisc.edu> writes:

>>Multimethods and metadata (on any object, instead of just the couple
>>that Clojure performs). Those are the two that I've actually wished for
>>in Scheme when writing it.

>Scheme has always had property lists on symbols, but I do believe these
>features are available in many Schemes, and there might even be a
>portable interface for this stuff.

I should also mention a feature that is in the current pre-release of
(Petite) Chez Scheme 7.9.4 called Compile-time Properties and Values.
Quoting from the Release Notes:

Two mechanisms are now available for attaching information to
identifiers in the compile-time environment: compile-time values and
compile-time properties. These mechanisms are useful for communicating
information between macros. For example, a record-definition macro
might use one of these mechanisms to store information about the
record type in the compile-time environment for possible use by record
definitions that define subtypes of the record type.

This, combined with normal Scheme property lists, could be what you want,
but use with caution. These are not the sort of things that you use
every day. :-)

Grant Rettke

unread,
Dec 21, 2009, 3:12:57 PM12/21/09
to
On Dec 20, 3:31 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> What I am suggesting is that people need to stop looking at a standards
> document as the only saving grace and feature set in a language. Other
> languages don't do this when they want to be pragmatic, and Schemers
> certainly should be above this.

Said other languages usually have a single best implementation, so
they don't have the same wants as many Scheme users. By design, they
don't need a standards document; there is only one implementation.

Grant Rettke

unread,
Dec 21, 2009, 3:16:12 PM12/21/09
to
On Dec 20, 3:33 pm, Aaron W. Hsu <arcf...@sacrideo.us> wrote:
> Show me anyone who doesn't do this in the real world. If
> there are any at all, they are a very rare breed.

Show me another language that is 30 years old that just got a 'sort'
procedure added to its spec ;). (I would feel sorry for that one, too)

Aaron W. Hsu

unread,
Dec 21, 2009, 3:52:18 PM12/21/09
to
Grant Rettke <gre...@gmail.com> writes:

Portability is all well and good, and it is something that we can always
improve. But when it comes down to really getting work done, you're
going to want to move outside the standard, and outside of a series of
unfortunate compatibility layers.

In that case, just get it working, and worry about other implementations
and ports when the time comes to it. It is useless to complain about the
standard and then paralyze yourself from writing any code. You need to
write code, and the more people write code, the better.

Aaron W. Hsu

unread,
Dec 21, 2009, 3:53:41 PM12/21/09
to
Grant Rettke <gre...@gmail.com> writes:

Show me a place where this mattered when writing code. Just because it's
not in some primary Scheme reference standard doesn't mean we can't use
it. SRFIs, after all, exist, along with a lot of standard best
practices. Why not take advantage of them?

Benjamin L. Russell

unread,
Dec 22, 2009, 9:28:56 AM12/22/09
to
On Sun, 20 Dec 2009 15:31:20 -0600, Aaron W. Hsu <arc...@sacrideo.us>
wrote:

>The misconception here, I think, is that everyone needs a standard
>before we can have a portable FFI. That's not true. If we want a
>standard FFI, we should start by writing a portable FFI wrapper that
>provides a common interface among all the Scheme implementations.

The problem is that creating "a common interface among all the Scheme
implementations" is a lot of work because there are many important and
subtle differences among "all the Scheme implementations," and most
programmers are too lazy and busy to write more than just one
interface for their own particular Scheme implementation. The
resulting interface is usually incompatible with other Scheme
implementations. It is usually easier to write an
implementation-specific FFI wrapper than a portable one.

What is really needed is not so much a single Scheme implementation as a
single *reference* Scheme implementation. The problem with Scheme is
that there are multiple *reference* Scheme implementations used by
many Schemers; you yourself cited Larceny, PLT Scheme, and (Petite)
Chez Scheme. For Haskell, for example, there is only GHC.

-- Benjamin L. Russell

Aaron W. Hsu

unread,
Dec 22, 2009, 1:13:24 PM12/22/09
to
Benjamin L. Russell <DekuDe...@Yahoo.com> writes:

>What is really needed is not so much a single Scheme implementation as a
>single *reference* Scheme implementation. The problem with Scheme is
>that there are multiple *reference* Scheme implementations used by
>many Schemers; you yourself cited Larceny, PLT Scheme, and (Petite)
>Chez Scheme. For Haskell, for example, there is only GHC.

I for one do *not* want a single reference Scheme implementation. There
is simply too much disagreement on matters that I care about for me to
risk on a single implementation. Implementation details matter, but I
don't want those details hammered into stone. It's really not that bad
to deal with FFIs on different systems. Just look at my sockets library,
and you can see how the translations are pretty straightforward. I plan
to port my sockets library to PLT Scheme soon, and that should give a
real world example.

Andrew Reilly

unread,
Dec 22, 2009, 8:04:10 PM12/22/09
to
On Tue, 22 Dec 2009 12:13:24 -0600, Aaron W. Hsu wrote:
> I for one do *not* want a single reference Scheme implementation. There
> is simply too much disagreement on matters that I care about for me to
> risk on a single implementation. Implementation details matter, but I
> don't want those details hammered into stone.

Agreed. Indeed, it seems to me that research and development of
implementation details is one of the most fertile and interesting aspects
of the scheme landscape.

> It's really not that bad
> to deal with FFIs on different systems. Just look at my sockets library,
> and you can see how the translations are pretty straightforward. I plan
> to port my sockets library to PLT Scheme soon, and that should give a
> real world example.

This is (an example of) one of the things that I find interesting about
the non-shared library landscape: evolution that not only doesn't
converge on a single "best practice", but which seems to actually diverge.

What is it that your sockets library does, (or, more specifically, allows
its clients to do) that the existing PLT scheme/tcp and scheme/udp
libraries don't? OK, I suppose that one might reasonably want to muck
about with ICMP packets or multicast sessions (or anything else that
doesn't fall under the banner of TCP and UDP.) Is that it? In short:
why is porting your sockets library a better option than porting your
application(s) to the existing sockets library? I'm just looking for
motivations, not casting aspersions. I find the detail variation in all
of the records, object systems, system interfaces, hash tables and
what-not quite fascinating, but since I use all of those, I'm sticking to a
single implementation for now.

[I've used the PLT scheme/tcp library in a GUI network client, and it did
everything that I needed, but I guess that wasn't exhaustive...]

Cheers,

--
Andrew

Benjamin L. Russell

unread,
Dec 22, 2009, 9:02:39 PM12/22/09
to
On Tue, 22 Dec 2009 12:13:24 -0600, Aaron W. Hsu <arc...@sacrideo.us>
wrote:

>I for one do *not* want a single reference Scheme implementation. There
>is simply too much disagreement on matters that I care about for me to
>risk on a single implementation....

I agree that "[t]here is simply too much disagreement on matters that
[you] care about...."

Interestingly, this aspect of disagreement in the Scheme/Lisp
community was actually discussed in a poll on opinions concerning the
importance of the "eval" construct, conducted by Kazimir Majorinc [1]
in September of 2009.

In his analysis, Majorinc writes as follows:

>In Lisp community, it is exactly opposite: almost 15% Lisp programmers
>think that eval is essential, and about 5% think eval is evil. It is
>somehow strange that more Lispers than Rubyists are extremely against
>eval. Both extremes are significant, and it guarantees consistent
>disagreements and discussions on the topic. That means, community
>cannot be united - and it is not united, of course. Even in this single
>issue, fragmentation of Lisp community is justified.

His phrase of "consistent disagreements and discussions on the topic"
seems consistent with discussion on many other programming
language-related topics in the Scheme and Lisp communities overall.
That is, Schemers are consistently inconsistent with one another on
most major issues.

This phenomenon is distinctly unlike what I have witnessed in some
other programming circles. For example, the Haskell community tends
to have much more of a consensus on most key issues (even though there
tends to be constant bickering there over the importance of the slogan
"avoid success at all costs" and what logo to choose, as well as
intermittent bickering over semantics).

This phenomenon is simultaneously a good and a bad thing. It is a
good thing in a way because almost anybody who likes Scheme can find
at least some implementation to like and work with, and even if
somebody gets semi-ostracized from some implementation-specific
community, it is easy to find another major implementation and its
related community as an alternative.

However, it is also bad in a way because it means that there is no
such thing as "the Scheme community"; instead, there are multiple
Scheme communities which communicate with one another, and which
sometimes work together temporarily on some issues, but Schemers
overall are a divided group, and they shall probably be forever
divided, just as other Lispers shall probably be so as well.

This aspect impedes the practicality of the language as a whole,
because it means that anybody who writes Scheme code that does almost
anything useful must write it for a specific implementation, and
understand that there will almost always be some other Scheme
implementation for which the code will not work out-of-the-box.

In other words, it is probably impossible for anybody to create a
"Scheme: Batteries Included" version of the language, because any such
project immediately raises the question, "Which implementation of Scheme?"

-- Benjamin L. Russell

[1] Majorinc, Kazimir. "Kazimir Majorinc's Programming Notes.:
Opinions on Eval in Lisp, Python and Ruby - The Results of The Poll."
Online posting. Sep. 2009. 23 Dec. 2009.
<http://kazimirmajorinc.blogspot.com/2009/10/opinions-on-eval-in-lisp-python-and.html>.

Aaron W. Hsu

unread,
Dec 23, 2009, 4:03:01 AM12/23/09
to
Andrew Reilly <areil...@bigpond.net.au> writes:

>On Tue, 22 Dec 2009 12:13:24 -0600, Aaron W. Hsu wrote:
>> It's really not that bad to deal with FFIs on different systems. Just
>> look at my sockets library, and you can see how the translations are
>> pretty straightforward. I plan to port my sockets library to PLT Scheme
>> soon, and that should give a real world example.

[...]

>What is it that your sockets library does, (or, more specifically, allows
>its clients to do) that the existing PLT scheme/tcp and scheme/udp
>libraries don't? OK, I suppose that one might reasonably want to muck
>about with ICMP packets or multicast sessions (or anything else that
>doesn't fall under the banner of TCP and UDP.) Is that it? In short:
>why is porting your sockets library a better option than porting your
>application(s) to the existing sockets library? I'm just looking for
>motivations, not casting aspersions. I find the detail variation in all
>of the records, object systems, system interfaces, hash tables and what-
>not quite fascinating, but since I use all of those, I'm sticking to a
>single implementation for now.

The main motivation behind it is to provide a proof of concept of a
real, practical, full-scale, reliable, and complete non-trivial
low-level library that is easily ported. I really don't care if it runs
on PLT for myself, but it matters to me that others can use it on their
Scheme system without too much hassle. Thus, porting to PLT Scheme
provides an example path by which another user can easily convert or
port my library to their other Scheme.

Nonetheless, there are things about my sockets library that are not
usually found in other libraries. I gained my inspiration from Taylor
Campbell's rant [1], which basically laments the specialization and
over-abstraction of many sockets libraries for no good reason. Since the
BSD sockets library does in fact provide a rather complete abstraction
for network related stuff, it doesn't make sense to intentionally limit
or impair that paradigm. Conveniences on top of it for certain tasks ---
like starting a server loop, for example --- certainly does make sense,
but I aim at a different target. My library is a slightly lower level
library that lightly wraps the BSD sockets paradigm into Scheme
procedures and structures, without removing any of the functionality.
That is, you can extend the library almost trivially to use any socket
type, such as Bluetooth, without requiring modifications of the source
code (I hope; it's a bug if this is required). It also supports both
blocking and non-blocking operations in a nice way, and is lightweight
enough to allow any non-blocking paradigm to be applied on top of the
library to enable integration of non-blocking operations with other
libraries. For example, rather than polling the sockets, one could
utilize the Concurrent ML libraries that exist (I am playing with the
one from Taylor Campbell), or you could use libev, or any other such
library.

Moreover, the library allows you to specialize the sockets library to
utilize features that are specific to your system, while still remaining
portable. So you could utilize Linux sockets features, even though they
won't be portable, and you don't have to muck around with the sockets
library code.

I'm still making modifications to the library, and the Arctic Repository
should have a new version out soon (with Windows support), which has a
significantly altered code organization to make porting even easier.

And of course, having this common low-level API allows more portable
networking paradigms that describe higher level operations. You can
simply use this low level interface as a dependency, and your high level
stuff can be assumed to be reasonably portable. My goal is to make it
easier for me to write and share non-trivial networking Scheme programs.
This means that it's easier for me to write the sockets library and port
it once to many Schemes, than to have to port each of the programs that
utilize networking features to each individual Scheme.

Consider it an essay whose thesis is, "Stop quibbling about
non-portable Scheme code and start writing portable Scheme code!" We'll
see if it convinces anyone.

Benjamin L. Russell

unread,
Dec 23, 2009, 1:02:24 PM12/23/09
to
On Wed, 23 Dec 2009 03:03:01 -0600, Aaron W. Hsu <arc...@sacrideo.us>
wrote:

>I gained my inspiration from Taylor
>Campbell's rant [1], which basically laments the specialization and
>over-abstraction of many sockets libraries for no good reason.

Your footnote corresponding to the reference "[1]" appears to be
missing. I'd be interested in reading the "Taylor Campbell's rant"
that you have mentioned; do you have a reference?

This sounds like a very interesting project with important
implications for cross-Scheme-implementation development!

Incidentally, do you have any plans to create any libraries which can
duplicate the functionality of Croquet and Cobalt [1] in developing a
tool for creating multi-user online virtual world-based collaborative
development tools? Such tools would be very interesting in using
reflectivity, since they would enable users to alter the world (or
language) itself in real-time from within a running application. If
you watch the "Croquet Software Demo Movie (August 2007)" in the
upper-right corner of the "Main Page - Croquet Consortium" site (see
http://www.opencroquet.org/index.php/Main_Page), you should get an
idea of the implications of the project, which would allow avatars to
point, click, and, most importantly, program in adding functionality
to a virtual space. If this functionality could be duplicated with a
basis in a pseudo-functional programming language such as Scheme,
rather than with a basis limited to a message-passing-paradigm based
language such as Squeak (an implementation of Smalltalk), that would
be a potentially fascinating project.

-- Benjamin L. Russell

[1] Russell, Benjamin L. "How difficult would creating a collaborative
multi-user online virtual world application be in PLT Scheme?" Online
posting. 11 June 2009. 24 Dec. 2009. <news://gmane.lisp.scheme.plt>.
Also available at
<http://groups.google.com/group/plt-scheme/msg/c03178e7d1537b3b>.

arc

unread,
Dec 23, 2009, 6:10:04 PM12/23/09
to
On Dec 21, 7:31 am, Benjamin L. Russell <DekuDekup...@Yahoo.com>
wrote:

> On Sat, 19 Dec 2009 19:00:24 -0800 (PST), arc <a...@stuff.gen.nz>
> wrote:
>
> >Peter,
>
> >This is sounding like a classic case of schemer malaise to me.
>
> It is interesting that you should use the term "schemer malaise."  I
> think that you are actually describing a specific instance of a more
> general malaise, which should be more properly referred to as "bipolar
> lisp-dialect programmer malaise" (my term, based on the term "bipolar
> lisp programmer" by Mark Tarver [1]).  In particular, your description
> reminds me of the one in the article "The Bipolar Lisp Programmer" [1]
> (see http://www.lambdassociates.org/blog/bipolar.htm?repost) (cached
> at http://74.125.153.132/search?q=cache:NcGBpP8WxmwJ:www.lambdassociates...).

>
> Although the article was originally written to describe the "Lisp
> character" (Tarver's term), the points raised seem common in at least
> some ways to the "scheme malaise" as well.  This point seems
> especially true regarding the aspect of depression that often comes
> with using a language which is in many ways at the pinnacle of
> elegance, while continually carrying a dilemma between elegance and
> usefulness in standardization.
>
> There is probably such as thing as a "Scheme character," which is
> almost identical to the following "Lisp character" described by
> Tarver:
>
> >Generally what we're talking about here is a student of outstanding brilliance.  Someone who is used to acing most of his assignments; of doing things at the last minute but still doing pretty well at them.    At some level he doesn't take the whole shebang all that seriously; because, when you get down to it, a lot of the rules at school are pretty damned stupid.  In fact a lot of the things in our world don't make a lot of sense, if you really look at them with a fresh mind.  
>
> >So we have two aspects to this guy; intellectual acuteness and not taking things seriously.  The not taking things seriously goes with finding it all pretty easy and a bit dull.  But also it goes with realising that a lot of human activity is really pretty pointless, and when you realise that and internalise it then you become cynical and also a bit sad - because you yourself are caught up in this machine and you have to play along if you want to get on.  Teenagers are really good at spotting this kind of phony nonsense.  Its also the seed of an illness; a melancholia that can deepen in later life into full blown depression.
>
> >Another feature about this guy is his low threshold of boredom. He'll pick up on a task and work frantically at it, accomplishing wonders in a short time and then get bored and drop it before its properly finished.  He'll do nothing but strum his guitar and lie around in bed for several days after. That's also part of the pattern too; periods of frenetic activity followed by periods of melancholia, withdrawal and inactivity.   This is a bipolar personality.
>
> When I read his description, I felt that I was reading my own
> description in many ways.

Interesting. It also sounds more than a bit like me, although
particularly the bits that you didn't quote :] (although I'm not much
of a programmer at all, let alone a brilliant one).

However, while I'm sure Tarver is on to something with his description
of a 'LISP type', and I'm sure that this is one source of what I've
called schemer malaise, I'm not sure that I'd want to tie this to a
specific personality type. I think frustration with lack of coherence
and a perceived lack of progress with the scheme community could
easily stem from much more grounded, practical sorts of people, or
maybe even people who are more sensitive to social interaction than
technical issues and just want everyone to get along :].

-A.

Aaron W. Hsu

unread,
Dec 23, 2009, 10:39:09 PM12/23/09