
Alternatives to values/call-with-values

Category 5

Aug 2, 2003, 9:57:15 AM

In a series of threads back in January 1995, Matthias Blume argued
passionately against the presence of values/call-with-values in Scheme
on the grounds that they add nothing to the language as a language -
that is, they grant no additional expressiveness beyond what is already
possible with list and apply, and in fact detract from expressiveness by
forcing us to write things like function composition as

(define (compose f g)
  (lambda args
    (call-with-values
      (lambda () (apply g args))
      f)))

rather than in the simpler and more obvious way. Matthias felt that the
introduction of these constructs was a divergence from Scheme's spirit
of orthogonality and challenged Schemers to come up with an argument
justifying their inclusion in the language. The sparks flew, and the
results appear to have been that most people agreed with Matthias Blume,
with the notable exceptions of William Clinger and Kent Dybvig. The
primary arguments in favour of values/call-with-values were that they
allow implementors to optimise generated code in ways that are
impossible or more difficult in the list/apply case.
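
The "simpler and more obvious way" presumably amounts to something
like the following sketch, assuming the list/apply convention (a
procedure returns its several results as an ordinary list, which
apply then splices back into an argument list):

(define (compose f g)
  (lambda args
    (apply f (apply g args))))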

Matthias (and many others) seemed to feel that this was an extremely
suspect reason for altering the foundations of Scheme, and that it
amounted to sacrificing the spirit of the language for a "performance
hack."

Another justification given for values/call-with-values was that they
create a symmetry with the multiple arguments accepted by procedures.
Matthias countered by stating that there was no need to have procedures
accept multiple arguments at all and that Scheme could learn from
languages like ML in this respect.
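
For illustration, the curried, one-argument style Blume alludes to is
already expressible in today's Scheme; a minimal sketch:

(define (adder x)
  (lambda (y)
    (+ x y)))

((adder 1) 2) ; => 3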

William Clinger argued in favour of values/call-with-values for some
time. But in February 1995, in <3gr4ls$2...@narnia.ccs.neu.edu>, he
appeared to have second thoughts of a sort. He writes:

> I have misunderstood several of the points that Matthias Blume has
> been trying to make. As I now understand his arguments, Matthias has
> not been criticizing the multiple values proposal. Instead he has
> been criticizing various misfeatures of Scheme that make multiple
> values appear necessary.
> [...]
> I will then outline some changes to Scheme that would make multiple
> values unnecessary. These changes are radical, yet they would not
> break any code written in R4RS Scheme.
> [...]
> This proposal would address all three of the above criticisms without
> breaking *any* portable Scheme code. On the other hand, this proposal
> would require changes to all existing implementations of Scheme.

The proposal involves the introduction of a new data type called a
'sequence'. With this approach,

> [E]very procedure takes exactly one argument and returns exactly one
> result. The argument and the result will always be a sequence. [...]
> An expression in tail-recursive position returns a one-element
> sequence. Continuations accept sequences containing an arbitrary
> number of elements as results.
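
One attraction of such a regime is that composition collapses to the
textbook one-argument case; the following is ordinary single-value
composition in current Scheme, but under the proposal it would be
fully general:

(define (compose f g)
  (lambda (x) (f (g x))))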

To my surprise, this proposal generated almost no discussion. In fact,
it seemed to mark the end of the original one, and since then people
seem to have resigned themselves to values/call-with-values, occasional
fulminations aside.

My questions:

1. Is this proposal still viable for Scheme, as a language, today?

2. What are the thoughts of implementors on the proposal?

3. Have people really resigned themselves to values/call-with-values,
both
a) as a specific means of handling multiple return values, and
b) as the philosophical departure from the roots of the language
they (arguably) represent?

4. Does anyone have any other ideas for dealing with the problem of
multiple return values that haven't been raised yet?

--

cr88192

Aug 2, 2003, 11:31:23 AM

>
> To my surprise, this proposal generated almost no discussion. In fact,
> it seemed to mark the end of the original one, and since then people
> seem to have resigned themselves to values/call-with-values, occasional
> fulminations aside.
>
> My questions:
>
> 1. Is this proposal still viable for Scheme, as a language, today?
>
> 2. What are the thoughts of implementors on the proposal?
>
> 3. Have people really resigned themselves to values/call-with-values,
> both
> a) as a specific means of handling multiple return values, and
> b) as the philosophical departure from the roots of the language
> they (arguably) represent?
>
> 4. Does anyone have any other ideas for dealing with the problem of
> multiple return values that haven't been raised yet?
>
as of yet I did not bother to implement these. if I did it would probably
just be in the form of macros:

(defmacro values x
  `(list ,@x))

(defmacro call-with-values (f g)
  `(apply ,f ,g))

yes, working hygienic macros were another thing I never got to...
instead I had a half-working macro system that I just used to implement
defmacro...

cr88192

Aug 2, 2003, 11:40:47 AM

>
> (defmacro call-with-values (f g)
> `(apply ,f ,g))
>
I was stupid, probably should have been:

(defmacro call-with-values (f g)
  `(apply ,g (,f)))
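
;; An illustrative expansion trace (assuming the list-based `values'
;; macro from the previous post):
;;   (call-with-values (lambda () (values 1 2)) +)
;;   expands to (apply + ((lambda () (values 1 2))))
;;   => (apply + (list 1 2)) => 3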

this is the problem with not remembering exactly what they do and not
bothering to look at r5rs for the answer...

Matthias Radestock

Aug 2, 2003, 12:21:00 PM

Category 5 wrote:

>Will Clinger wrote (a long time ago):


>>[E]very procedure takes exactly one argument and returns exactly one
>>result. The argument and the result will always be a sequence. [...]
>>An expression in tail-recursive position returns a one-element
>>sequence. Continuations accept sequences containing an arbitrary
>>number of elements as results.
>
>
> To my surprise, this proposal generated almost no discussion. In fact,
> it seemed to mark the end of the original one, and since then people
> seem to have resigned themselves to values/call-with-values, occasional
> fulminations aside.
>
> My questions:
>
> 1. Is this proposal still viable for Scheme, as a language, today?

I certainly believe that values/call-with-values are an area that ought
to be looked at for R6RS.

> 2. What are the thoughts of implementors on the proposal?

I prefer it to values/call-with-values but share the concerns expressed
by Henry Baker: sequences are second class, and a = [a] is troublesome.

> 3. Have people really resigned themselves to values/call-with-values,
> both
> a) as a specific means of handling multiple return values, and

No. See answer #1.

> b) as the philosophical departure from the roots of the language
> they (arguably) represent?

No, but see answer #2 - if values/call-with-values are a "departure from
the roots of the language" then so are sequences (because of their
second-class nature).

> 4. Does anyone have any other ideas for dealing with the problem of
> multiple return values that haven't been raised yet?
>

See the "splicing values" idea I posted a while back:

http://groups.google.com/groups?selm=3CB962FD.4030208%40sorted.org

It has no observable second-class values and is based around the notion
that [a ... [b ...] c ...] = [a ... b ... c ...]
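
A non-runnable illustration of the splicing rule (the concrete syntax
here is invented for exposition):

;; Since [1 [2 3] 4] = [1 2 3 4], a multi-valued expression in
;; argument position splices its elements into the enclosing call:
;;
;;   (+ 1 (values 2 3) 4)  would denote  (+ 1 2 3 4)  =>  10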


Matthias.

Jens Axel Søgaard

Aug 2, 2003, 2:57:30 PM

Category 5 wrote:
> In a series of threads back in January 1995, Matthias Blume argued
> passionately against the presence of values/call-with-values in Scheme
> on the grounds that they add nothing to the language as a language -
> that is, they grant no additional expressiveness beyond what is already
> possible with list and apply, and in fact detract from expressiveness by
> forcing us to write things like function composition as
>
> (define (compose f g)
>   (lambda args
>     (call-with-values
>       (lambda () (apply g args))
>       f)))
>
> rather than in the simpler and more obvious way. Matthias felt that the
> introduction of these constructs was a divergence from Scheme's spirit
> of orthogonality and challenged Schemers to come up with an argument
> justifying their inclusion in the language. The sparks flew, and the
> results appear to have been that most people agreed with Matthias Blume,
> with the notable exceptions of William Clinger and Kent Dybvig. The
> primary arguments in favour of values/call-with-values were that they
> allow implementors to optimise generated code in ways that are
> impossible or more difficult in the list/apply case.
...

> 4. Does anyone have any other ideas for dealing with the problem of
> multiple return values that haven't been raised yet?

I'm not quite sure whether this counts, but it's somewhat related:

<http://www.cs.indiana.edu/~dyb/papers/mrvs.ps.gz>


* J. Michael Ashley and R. Kent Dybvig. "An Efficient Implementation of
Multiple Return Values in Scheme". 1994 ACM Conference on LISP and
Functional Programming

--
Jens Axel Søgaard

Alex Yakovlev

Aug 2, 2003, 9:42:34 PM

Hello,

"Category 5" <catfi...@chaosnet.org> wrote in message
news:yy10d6fo...@chaosnet.org...


> In a series of threads back in January 1995, Matthias Blume argued
> passionately against the presence of values/call-with-values in Scheme

[...]
> My questions:

I'd like to add another question. R5RS states (1) that values can be
defined as:

(define (values . things)
  (call-with-current-continuation
    (lambda (cont) (apply cont things))))

but also (2) that "[e]xcept for continuations created by the
call-with-values procedure, all continuations take exactly one value."

May I ask: why is statement (2) needed?

Wouldn't it be convenient for any continuation to accept multiple values?
for example:

(+ (values 1 2 3)) => 6

it's shorter than:

(call-with-values (lambda () (values 1 2 3)) +)

and that's how I implemented values in jsScheme.

Alex


usenet

Aug 2, 2003, 9:51:47 PM

Matthias Radestock wrote:

> I prefer it to values/call-with-values but share the concerns expressed
> by Henry Baker: sequences are second class, and a = [a] is troublesome.

The XQuery language (now being standardized by W3C - see
http://www.w3.org/XML/Query) makes sequences first class.
It makes for a very useful and interesting language.
However, functions still take multiple parameters (each of
which can be a sequence), and return a single result (which
can be a sequence). It would have been interesting if they'd
gone further, and had a single parameter sequence, which you
could take apart using pattern matching. I'd like to work
out the pragmatics of such a design.

(I'm in the middle of implementing XQuery using the Kawa
framework - see http://www.gnu.org/software/qexo/).

David Rush

Aug 5, 2003, 10:36:31 AM

Category 5 <catfi...@chaosnet.org> writes:
> William Clinger argued in favour of values/call-with-values for some
> time. But in February 1995, in <3gr4ls$2...@narnia.ccs.neu.edu>, he
> appeared to have second thoughts of a sort. He writes:
>
> > [...]
> > This proposal would address all three of the above criticisms without
> > breaking *any* portable Scheme code. On the other hand, this proposal
> > would require changes to all existing implementations of Scheme.
>
> The proposal involves the introduction of a new data type called a
> 'sequence'. With this approach,

> My questions:


>
> 1. Is this proposal still viable for Scheme, as a language, today?

Probably not, although, not having looked at the idea *at all*, I find
it interesting. I expect that the big implementations (PLT, Bigloo)
have too much invested in their current architectures to embrace such
a fundamental change.

> 3. Have people really resigned themselves to values/call-with-values,
> both
> a) as a specific means of handling multiple return values, and
> b) as the philosophical departure from the roots of the language
> they (arguably) represent?

No, but I've been whingeing about it in the last year or so. As far as
your a) goes, I no longer use call-with-values/values in my own code,
I pass the multiple-valued continuation directly. This works out
better for a lot of reasons, most notably that frequently you use
multiple values to return both a condition and a value. In those
cases, you simply turn the original function call into (effectively)
a conditional construct where you can have *different* sets of values
available in different cases. In other multiple-valued scenarios, the
syntactic overhead of constructing a lambda expression and passing it
down the call chain is, to my tastes, no worse than that involved in
using the call-with-values construct in the first place.
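
To make that concrete, here is a minimal sketch of the style (divmod
is a hypothetical name, not a standard procedure); the receiver is
passed in directly rather than routed through values/call-with-values:

(define (divmod n d k)
  (k (quotient n d) (remainder n d)))

(divmod 17 5 list) ; => (3 2)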

I agree completely with Matthias Blume on b).

> 4. Does anyone have any other ideas for dealing with the problem of
> multiple return values that haven't been raised yet?

Just drop the construct from the language. It's completely unnecessary
since you can use explicit CPS. I have also posted a different
approach to call/values which considers it as a continuation injection
operator (as opposed to call/cc, which is a continuation-extraction
operator); however, Matthias Blume pointed out to me some type
difficulties with that which prevented the implementation of
call-with-values as a pure source-code transformation. I believe those
could be resolved by changing the semantics of call/values. Of course
the only standards body that is likely to ratify *my* suggestions is
the Nepalese Journal for Deconstructionist Sociologists and
Sheep-herders.

david rush
--
He who would make his own liberty secure, must guard even his enemy
from repression
-- Thomas Paine

Anthony Carrico

Aug 6, 2003, 1:01:18 PM

David Rush <dr...@aol.net> wrote:
> No, but I've been whingeing about it in the last year or so. As far as
> your a) goes, I no longer use call-with-values/values in my own code,
> I pass the multiple-valued continuation directly.

One point that doesn't get much play in the multiple values debates is
the zero values case. When I write a side effecting procedure, I end
it with a (values) call. If multiple values were banned, this would be
the situation when I would first notice the pinch.
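
For instance (log-line is a made-up name, purely illustrative):

(define (log-line msg)
  (display msg)
  (newline)
  (values)) ; return zero values to the continuation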

Using CPS seems unreasonably heavy-handed in this simple situation. An
oxymoronic "no-value" value to satisfy "functional" semantics for a
non-function seems stilted compared to zero values, which seem to
exactly capture the situation. How did you handle this case after you
jettisoned multiple values from your life?

--
Anthony Carrico

Joe Marshall

Aug 6, 2003, 2:11:04 PM

David Rush <dr...@aol.net> writes:

>> 3. Have people really resigned themselves to values/call-with-values,
>> both
>> a) as a specific means of handling multiple return values, and
>> b) as the philosophical departure from the roots of the language
>> they (arguably) represent?
> No, but I've been whining about it in the last year or so. As far as
> your a) goes, I no longer use call-with-values/values in my own code,
> I pass the multiple-valued continuation directly. This works out
> better for a lot of reasons, most notably that frequently you use
> multiple values to return both a condition and a value. In those
> cases, you simply turn the original function call into (effectively)
> a conditional construct where you can have *different* sets of values
> available in different cases. In other multiple-valued scenarios, the
> syntactic overhead of constructiona lambda expression and passing it
> down the call chain is, to my tastes, no worse that that involved in
> using the call-with-values construct in the first place.

I rarely use values/call-with-values and often go to explicit CPS.
One reason is all the damn syntax:

(call-with-values (lambda () (produce-values x))
  (lambda (val1 val2)
    body))

is equivalent to:

(cps-produce-values x
  (lambda (val1 val2)
    body))

The CPS version is more concise than the call-with-values version!

(Yes, I could write a macro, but then I'd have to ensure it's
available, and it won't match other people's macros, etc. etc.)
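
The macro in question would presumably look much like SRFI 8's
`receive', sketched here:

(define-syntax receive
  (syntax-rules ()
    ((_ formals expr body ...)
     (call-with-values (lambda () expr)
       (lambda formals body ...)))))

;; With it, the first example above reads:
;; (receive (val1 val2) (produce-values x) body)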

And while I understand why it is an error to return multiple values to
a continuation expecting a single value, it sure is convenient. With
the current system, if you wish to add an extra return value, you have
to adjust the code at *every* return site (provided you can enumerate
all the return sites!). In Common Lisp, unexpected return values are
silently discarded, so modifying a function to return extra
information is a local change. Discarding the extra return values is
somewhat sloppy semantically, but far more convenient from a coding
standpoint.

Jens Axel Søgaard

Aug 6, 2003, 3:31:43 PM

Jens Axel Søgaard wrote:
[in another thread]

>Ashley Yakeley wrote:
>>Is this a valid R5RS program?

>> (call-with-values (lambda () (list (values 1 2) (values 3 4))) list)

>No.

> When an argument of a call is evaluated only one return value is
> expected. Thus the evaluation contexts of (values 1 2) and (values 3 4)
> expect only one value but receive two - and an error should be
> signaled.

Perhaps I was too fast.

Jens Axel:


>> 4. Does anyone have any other ideas for dealing with the problem of
>> multiple return values that haven't been raised yet?
>
>
> I'm not quite sure whether this counts, but it's somewhat related:
>
> <http://www.cs.indiana.edu/~dyb/papers/mrvs.ps.gz>
>
>
> * J. Michael Ashley and R. Kent Dybvig. "An Efficient Implementation of
> Multiple Return Values in Scheme". 1994 ACM Conference on LISP and
> Functional Programming

There's an interesting footnote in that paper. It says that the text
of R5RS does not specify what an implementation is to do when a
continuation receives too many values, but that the formal semantics
gives an error. However, it says there's consensus that it's ok to
ignore extra values.

Will's version 1 of the wording for R5RS supports this:

<http://zurich.ai.mit.edu/pipermail/rrrs-authors/1988-June/000976.html>


But later in
<http://zurich.ai.mit.edu/pipermail/rrrs-authors/1988-October/001055.html>:

3.8. Multiple return values. Not adopted. Debate centered on
whether returning a single "multiple value" should be equivalent to
returning that value normally. A secondary issue concerned whether
extra return values should be ignored or an error. We approached
consensus on this.

After much debate it was decided to go on to other issues and let
Dybvig and Hanson discuss the matter in private. Their discussion
later that night led to a conclusion that we don't understand the
issues well enough to charge ahead with either proposal 3.7 or 3.8
or a variation.


Where was the final decision made?

--
Jens Axel Søgaard

felix

Aug 7, 2003, 1:29:03 AM

Anthony Carrico <acar...@memebeam.org> wrote in message news:<yvaYa.210$Ps.1...@newsread2.prod.itd.earthlink.net>...

>
> One point that doesn't get much play in the multiple values debates is
> the zero values case. When I write a side effecting procedure, I end
> it with a (values) call. If multiple values were banned, this would be
> the situation when I would first notice the pinch.
>

This does not necessarily work on R5RS-compliant implementations
if the call site does not expect multiple values.


cheers,
felix

David Rush

Aug 8, 2003, 2:43:13 PM

Anthony Carrico <acar...@memebeam.org> writes:
> David Rush <dr...@aol.net> wrote:
> > No, but I've been whingeing about it in the last year or so. As far as
> > your a) goes, I no longer use call-with-values/values in my own code,
> > I pass the multiple-valued continuation directly.
>
> One point that doesn't get much play in the multiple values debates is
> the zero values case. When I write a side effecting procedure, I end
> it with a (values) call. ... Using CPS seems unreasonably
> heavy-handed in this simple situation.

ISTM that (values) is similarly heavy-handed.

> An
> oxymoronic "no-value" value to satisfy "functional" semantics for a
> non-function seems stilted compared to zero values which seems to
> exactly capture the situation. How did you handle this case after you
> jetisoned multiple values from your life?

I actually try to make sure that all functions have a meaningful
return value, even if their primary purpose in life is
side-effecting. I found that anything else started causing me major
performance problems on type-sensitive compilers (read Stalin). The
big problem with (values) is that it doesn't have a well-defined type,
although I will grant you that it is better than my occasional lazy
practice of not even paying attention to the return value of a
side-effecting procedure.

Anthony Carrico

Aug 8, 2003, 3:24:22 PM

felix <fe...@proxima-mt.de> wrote:
> This is not necessarily working on R5RS compliant implementations,
> if the call-site does not expect multiple values.

Ya, I know I'm getting a little off the R5RS track with this.

It seems like in a language embracing multiple values (and clearly
r5rs isn't firmly in that category), it would make most sense for the
continuations of the leading body statements to take an arbitrary
number of values. I think that expecting zero values would actually
make more sense than expecting one value, since any leading body
statements are obviously being used purely for their side effects, not
their value(s).
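
A small sketch of the case in question:

(define (f x)
  (display x) ; continuation of this expression discards its value(s)
  (newline)   ; likewise: evaluated purely for effect
  (* x x))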

--
Anthony Carrico

Anthony Carrico

Aug 8, 2003, 3:47:26 PM

David Rush <dr...@aol.net> wrote:
>> Using cps seems unreasonably heavy handed in this simple situation.
>
> ISTM that (values) is similiarly heavy-handed.

Um, not in terms of syntax at the call site! But perhaps you are
talking about the 'burden' of (values) itself.

> I actually try to make sure that all functions have a meaningful
> return value, even is their primary purpose in life is
> side-effecting. I found that anything else started causing me major
> performance problems on type-sensitive compilers (read Stalin).

Hmm, I don't know much about Stalin, but if that is the case, then
perhaps Stalin isn't modeling procedure return values' types
correctly. How is {[int-t int-t] -> [bogus-t]} any worse than {[int-t
int-t] -> []}?

(That wasn't a rhetorical question, I'd be interested to know why
multiple result types would be any different than multiple parameter
types. Is it something to do with the fact that procedures have only
one entry signature, but possibly many continuation signatures? If so,
how would single return values simplify that situation for the
analysis?)

--
Anthony Carrico

David Rush

unread,
Aug 9, 2003, 5:14:46 AM8/9/03
to
Anthony Carrico <acar...@memebeam.org> writes:
> David Rush <dr...@aol.net> wrote:
> > I actually try to make sure that all functions have a meaningful
> > return value, even if their primary purpose in life is
> > side-effecting. I found that anything else started causing me major
> > performance problems on type-sensitive compilers (read Stalin).
>
> Hmm, I don't know much about Stalin, but if that is the case, then
> perhaps Stalin isn't modeling procedure return values' types
> correctly. How is {[int-t int-t] -> [bogus-t]} any worse than {[int-t
> int-t] -> []}?

Well, for one, Stalin only supports R4RS. Secondly, the R5RS
denotational semantics has two bottom values specified in it,
#unspecified and #undefined. The set! operator (and possibly all
of the mutation operators, IIRC) has type

location * object -> #unspecified

Stalin just treats those bottom types as _|_ for the purposes of type
(and representation) inference. When they get unified with some other
type they just vanish. Actually, I'm probably wrong, but that's how
I'd do it. (Actually, I wouldn't, I'd reify #unspecified, but that's
another rant...)

> (That wasn't a retorical question, I'd be interested to know why
> multiple result types would be any different than multiple parameter
> types.

We need to be clearer about the terms of this discourse. Does "multiple
parameter types" mean

type1 * type1 -> type3

or

(con1 type1 | con2 type2) -> type3

?

If it's the first, there is no difference, hence the proposal which
has been re-opened about sequence types. In the second case, there is
a mandatory type dispatch that's going to occur somewhere. This type
dispatch causes extra overhead in terms of both storage and CPU
cycles.

> Is it something to do with the fact that procedures have only
> one entry signature, but possible many continuation signatures?

Indirectly. The real difficulty is that you can't know the return arity
of a procedure without traversing the entire call-graph of the procedure.
For a whole-program compiler, this is just another increment to the
compile-time in order to achieve (ISTM) very little added
expressivity. For separate compilation, it is a significant problem
which creates run-time overhead. Others have argued that the overhead
can be minimized, but I don't see enough value in the construct in the
first place to care.

Anthony Carrico

Aug 10, 2003, 12:44:36 PM

David Rush <dr...@aol.net> wrote:
> Anthony Carrico <acar...@memebeam.org> writes:
>> David Rush <dr...@aol.net> wrote:
>> > I actually try to make sure that all functions have a meaningful
>> > return value, even if their primary purpose in life is
>> > side-effecting. I found that anything else started causing me major
>> > performance problems on type-sensitive compilers (read Stalin).
>>
>> Hmm, I don't know much about Stalin, but if that is the case, then
>> perhaps Stalin isn't modeling procedure return values' types
>> correctly. How is {[int-t int-t] -> [bogus-t]} any worse than {[int-t
>> int-t] -> []}?
>
> Well, for one Stalin only supports R4RS.

If I remember correctly, there aren't multiple values in R4RS, so
compatibility with R4RS would be a very good reason to avoid multiple
values.

>> I'd be interested to know why multiple result types would be any
>> different than multiple parameter types.
>
> we need to be clearer about the terms of this discourse. Does multiple
> parameter types mean
>

> type1 * type1 -> type3 {A}
>
> or
>
> (con1 type1 | con2 type2) -> type3 {B}

Umm... {A}, but actually I don't understand {B}. What are "con1" and
"con2"? I could understand "type1 | type2 -> type3", that is how I
would describe the situation created by "many continuation signatures"
(with or without multiple values).

> If it's the first, there is no difference,

Great: if there is no difference then it shouldn't be any more trouble
for the analysis to do-what-it-does for "multiple values" as it does
for "multiple parameters".

> hence the proposal which has been re-opened about sequence types.

No. Only if the values (and parameters, I gather, for the ML fans)
were exclusively sequences, but if we had the option of passing (aka
returning) sequences and non-sequences, then it would be a different
thing.

That's why this talk about sequences scares me enough to speak
up. Think about the continuations for the operator and arguments of a
call. I'm fine in the multiple-values-world where one value must be
returned/passed to these. I'm fine in the nice-sequences-world where a
sequence of length one must be returned/passed to these (but I think
the syntax would suck). I'm freaked out in the
whacked-sequences-world where one of anything is fine, sequence or not
-- I just can't make any sense of a type system like that.

The whacked-sequences-world seems like the warped appropriation of
some functional principle into the wrong context, like somebody skipped
some level of abstraction, like a subtle but deadly mistake, like
something that would be impossible to analyze or compile
efficiently. I'm confused, since over the years I've seen the position
advocated by respected people who are orders of magnitude more
in-the-know than myself; I must be in the wrong, but I can't figure
out where I'm in the wrong!

> ... I don't see enough value in the construct in the first place to
> care.

And so it's back to syntax. In engineering terms I see a lot of
value. With multiple parameters and multiple values you are making a
distinction between objects and control flow that you aren't making
without them. It would be good news if that valuable distinction
didn't make any difference to the math and the compiler.

--
Anthony Carrico

Anton van Straaten

Aug 10, 2003, 6:28:09 PM

Anthony Carrico wrote:
> That's why this talk about sequences scares me enough to speak
> up. Think about the continuations for the operator and arguments of a
> call. I'm fine in the multiple-values-world where one value must be
> returned/passed to these. I'm fine in the nice-sequences-world where a
> sequence of length one must be returned/passed to these (but I think
> the syntax would suck) . I'm freaked out in the
> whacked-sequences-world where one of anything is fine, sequence or not
> -- I just can't make any sense of a type system like that.

Isn't this exactly what ML does with tuples, though? Or are you saying that
you think that ML can get away with that because of static typing? Still,
given the ML function "fun foo x = x", we can do any of these:

foo 4 => 4 : int
foo (4) => 4 : int
foo (4,5) => (4,5) : int * int
foo "bar" => "bar" : string
foo ("bar") => "bar" : string
foo ("bar", "baz") => ("bar", "baz") : string * string

Does that qualify as "whacked-tuples" in your book? Or do you see a
difference in the Scheme case?

The proposal as Clinger described it actually sounded pretty attractive to
me. I avoid using multiple values as much as possible, partly because I
dislike their non-first-class-ness and find them unwieldy, but I would use
first-class tuples/sequences if they were available.

In case it hasn't already been mentioned, sequences as Clinger described
them are already present in the Scheme formal semantics, since they are
needed internally. The proposal would provide a first-class type that
mirrors a type already present in and central to the semantics.

I somehow doubt that you have to be concerned about this getting adopted any
time soon, though...

Anton

David Rush

Aug 11, 2003, 6:46:11 AM

Anthony Carrico <acar...@memebeam.org> writes:
> David Rush <dr...@aol.net> wrote:
> > Anthony Carrico <acar...@memebeam.org> writes:
> >> David Rush <dr...@aol.net> wrote:
> > Well, for one Stalin only supports R4RS.
>
> If I remember correctly, there aren't multiple values in R4RS, so
> compatibility with R4RS would be a very good reason to avoid multiple
> values.

Gambit is also only R4RS, which means two of the top four most efficient
code-generating Scheme compilers don't support it directly. This was
the reason why I started writing call/values-free code. Then I found
out that it wasn't painful, and that it was in some ways better.

> >> I'd be interested to know why multiple result types would be any
> >> different than multiple parameter types.
> >
> > we need to be clearer about the terms of this discourse. Does multiple
> > parameter types mean
> >
> > type1 * type1 -> type3 {A}
> > or
> > (con1 type1 | con2 type2) -> type3 {B}
>
> Umm... {A}, but actually I don't understand {B}. What are "con1" and
> "con2"?

A bad attempt at approximating the SML

datatype Foo = con1 of type1 | con2 of type2

which would result in a type signature of

Foo -> type3

> > If it's the first, there is no difference,
>
> Great: if there is no difference then it shouldn't be any more trouble
> for the analysis to do-what-it-does for "multiple values" as it does
> for "multiple parameters".

I agree completely. In a purely theoretical sense functions have only
one argument type and one return type. Scheme pragmatics are somewhat
different.

> > hence the proposal which has been re-opened about sequence types.
>
> No. Only if the values (and parameters, I gather, for the ML fans)
> where exclusively sequences, but if we had the option of passing (aka
> returning) sequences and non-sequences, then it would be a different
> thing.

I haven't read the actual proposal, only OP's summary thereof. I was
waiting until the heavyweights weighed in with some commentary, since
I am also in the midst of a severe production system crisis at the
moment. Perhaps I'll get a chance to read it this week.

> Think about the continuations for the operator and arguments of a
> call. I'm fine in the multiple-values-world where one value must be
> returned/passed to these. I'm fine in the nice-sequences-world where a
> sequence of length one must be returned/passed to these (but I think
> the syntax would suck). I'm freaked out in the
> whacked-sequences-world where one of anything is fine, sequence or not

That does seem odd; however,

> -- I just can't make any sense of a type system like that.

Well, it makes sense in a backwards-compatibility kind of way. Up
until R[45]RS Scheme seems to have avoided worrying about those sorts
of issues (NIL is not #f! Long live #f!), but in the intervening
years, people have written a fair bit of code. It seems there is less
motivation to change.

The point being that since Scheme is dynamically typed, type checks
are delayed until (primitive) function application. An arity 1
function, be it a normal application or a continuation, is still an
arity 1 function. If you apply a non-sequence operator to a sequence
value you've written an erroneous program; however, you may write a
program that has a single interface which has both a sequence and
non-sequence calling convention (perhaps conditioned on some global
state). That's pretty icky, but the Scheme car doesn't actually cons
with seatbelts, so...

> I'm confused, since over the years I've seen the position
> advocated by respected people who are orders of magnitude more
> in-the-know than myself; I must be in the wrong, but I can't figure
> out where I'm in the wrong!

Not necessarily. They may just be willing to make a different set of
trade-offs than you.

> > ... I don't see enough value in the construct in the first place to
> > care.
>
> And so it's back to syntax. In engineering terms I see a lot of
> value. With multiple parameters and multiple values you are making a
> distinction between objects and control flow that you aren't making
> without them.

Assuming I understand you (which I'm not entirely sure about),
yes. In engineering terms, R4RS Scheme already had multiple values,
you just had to explicitly inject the continuation. R5RS added a crude
syntax (over which people have built much nicer constructs) which
actually hides the fundamental operation.

> It would be good news if that valuable distinction
> didn't make any difference to the math and the compiler.

I don't think it makes a lot of difference to the math. Practically,
it can mean a fair amount of difference to the compiler and run-time
systems.

Matthias Blume

Aug 11, 2003, 10:42:01 AM

David Rush <dr...@aol.net> writes:

> I haven't read the actual proposal, only OP's summary thereof. I was
> waiting unutil the heavyweights weighed in with some commentary,

I'm not the heavyweight you are looking for, but since the "OP"
specifically mentioned me, I want to thank him (her?) for giving an
excellent summary of my original complaints.

To admit the truth, back in '95 I did not fully appreciate (or
understand?) Will Clinger's proposal. As usual, when you don't bother
to understand something reasonable, you are bound to reinvent it:

http://groups.google.com/groups?q=g:thl776716467d&selm=fo1ydnbnx8.fsf%40trex10.cs.bell-labs.com

I still don't have any hope that something like this will get adopted
into Scheme anytime soon (nor do I really care). But I thought that
the reference might be interesting to some...

Cheers,
Matthias

Category 5

Aug 11, 2003, 12:32:01 PM

Matthias Blume <matt...@shimizu-blume.com> writes:

> I'm not the heavyweight you are looking for, but since the "OP"
> specifically mentioned me, I want to thank him (her?) for giving an
> excellent summary of my original complaints.

You're welcome. =)

My motivation was simply that you gave exceptionally clear expression to
my own rather inchoate misgivings about the call-with-values approach.
The only missing piece of the puzzle was your initial lack of response
to William Clinger's subsequent proposal. The pieces are starting to
come together, though.

> To admit the truth, back in '95 I did not fully appreciate (or
> understand?) Will Clinger's proposal. As usual, when you don't bother
> to understand something reasonable, you are bound to reinvent it:
>

> [ <fo1ydnb...@trex10.cs.bell-labs.com> ]

I found this a day or two after my original post, thanks to a reference
into the thread cited by Matthias Radestock. That thread was most
enlightening and tackled a lot of thorny issues.

However, as a result of reading Clinger's original proposal, Matthias
Radestock's proposal in <3CB962FD...@sorted.org>, and yours, I'm
left with a picture of their relative differences that's less than
clear. There seem to be concerns about first-classness and [x] = x
winding through the various arguments people have put forward, and I'm
quite curious to see where things stand now. Do any of these proposals
seem more attractive to people than the others? Are they really just
different ways of presenting the same well-defined notion?

Most importantly, can everyone agree on the satisfactoriness of this (or
one of these) ideas *in principle* as a sounder alternative to R5RS
values/call-with-values?

I realise this kind of broad agreement is a very elusive goal in the
Scheme world, especially for such a radical (to use Will Clinger's word)
proposal. But after seeing people beat on the question and the various
answers (for more than eight years!), it really seems like we have
something good here, and I didn't want to see it vanish into the
bit-bucket because we've all since poured enough syntactic sugar on top
of call-with-values to avoid thinking too much about how unpleasant it
is to have such an artifact in the Revised Report.

Sometimes radical corrections are appropriate when a freak storm throws
the ship off course. I use the analogy advisedly given what seems to be
the very questionable history of how values/call-with-values actually
made it into the language.

I also think we're very lucky to have in this proposal a radical
correction which doesn't break any existing code.

If philosophical agreement can be reached, the only outstanding question
is how implementors feel about it. I was hoping some of them would
chime in to say that they find the idea totally unworkable, utterly
trivial, or something in between.

Although the proposal requires "changes to all existing implementations
of Scheme"---how serious are those changes, really?

> I still don't have any hope that something like this will get adopted
> into Scheme anytime soon (nor do I really care).

Thanks for reaffirming your support for this idea. I know you don't
care, but I do. It will be the death of me, no doubt. =)

--

Matthias Blume

Aug 11, 2003, 1:14:05 PM

Category 5 <catfi...@chaosnet.org> writes:

> However, as a result of reading Clinger's original proposal, Matthias
> Radestock's proposal in <3CB962FD...@sorted.org>, and yours, I'm
> left with a picture of their relative differences that's less than
> clear. There seem to be concerns about first-classness and [x] = x
> winding through the various arguments people have put forward, and I'm
> quite curious to see where things stand now. Do any of these proposals
> seem more attractive to people than the others? Are they really just
> different ways of presenting the same well-defined notion?

I think that everything is well-defined. Having said this, however, I
do understand some of the misgivings that people have about the
approach. Everything really comes down to the question of whether it
is acceptable that "[x] = x".

A few remarks about this:

0. There are no philosophical problems with [x] = x. After all, [x]
is certainly isomorphic to x, so we might as well identify the
two. Also, notice that even if we don't have tuples with such
behavior, I can already define (in today's Scheme) a tupling
function with this behavior:

(define (tuple . l)
  (if (= (length l) 1) (car l) l))
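;; e.g. (tuple 1 2 3) => (1 2 3), but (tuple 7) => 7,
;; identifying [x] with x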

1. Contrary to what many Lispers think, I don't think that most
people would have serious problems with having rules such as [x]
= x. After all, we do this all the time in math and in (other)
programming languages where placing extra parentheses does not
alter the semantics. However, in Scheme (and any other Lisp),
this is indeed rather unnatural given that (x) and x are
certainly not identical, neither as data nor as code.

2. On the practical side, whether or not having such things is
problematic is certainly in the eye of the beholder. For most
programming tasks, the issue will not be noticable at all.
However, if tuples end up being used like lists or vectors
(meaning: people write code that iterates over the elements,
etc.), then the "inconsistency" will be very annoying. Given that
there are already perfectly good sequence types without this
problem, I don't see this as too much of an issue, though.

I would not want to have this behavior for lists, vectors, or
strings. Modula-2, iirc, made the mistake of using "c" as a
notation for the character c while string literals of any other
length (0, >1) were taken as notation for a different type.
During my brief encounters with the language, I found this very
disturbing. Similarly bad, SML'90 did not have a char type and
used single-char strings in their place. (In all fairness, at
least the types were the same for all string literals, though.)

Also, it is worth remembering the point I made in point 0: people
can already create all kinds of "inconsistent" behavior in their
code, even for the set of currently available sequence types, so
100% "generic" is not possible for those either.

3. Meta-programming (including macros) can, as some have pointed
out, get tricky if the meta-level code wants to do something
generic for argument lists of various lengths. (This is simply
an instance of the "treat tuples as vectors or lists" scenario.)

Personally, I am not a huge fan of meta-programming in general or
macros in particular, so I don't see this as a problem for me.
But since macros are a big deal in Scheme now, it will probably
get in some people's way.

In many cases there will be easy workarounds. Example: My
"ml-nlffigen" program generator tool (generating SML code for
interfacing C libraries) uses a hack where information about C
function types is encoded in SML (phantom-)types. The arguments
are encoded as an SML tuple type. To avoid the pathological
tuple-of-one case, I simply added an otherwise unused element
("unit") to the encoding to make sure that there are always at
least two elements... (Of course, a hack like this is only
possible because the interface in question is not exposed to
anyone other than the program generator tool.)

Matthias

Anton van Straaten

Aug 11, 2003, 3:24:03 PM

"Category 5" <catfi...@chaosnet.org> wrote:
> Most importantly, can everyone agree on the satisfactoriness of this (or
> one of these) ideas *in principle* as a sounder alternative to R5RS
> values/call-with-values?

My $0.02, as a data point: the idea appeals to me, and certainly sounds
preferable to the current multiple values mechanism. But because it has
broader implications, I'd want to play with it in a toy implementation
before I would check a box that said "switch Scheme to use tuples". While I
like the way ML works in this area, I wouldn't trust my ability to map that
onto Scheme, in my head.

> I also think we're very lucky to have in this proposal a radical
> correction which doesn't break any existing code.

I agree, with the caveat above. Which brings me to a point about
implementing this: given the nature of this change, couldn't a
Scheme-with-tuples be used fairly "easily" to implement a perfectly standard
R5RS Scheme, with parameter lists and R5RS multiple values implemented in
terms of tuples? If so, such an implementation would essentially appear to
provide an extension to R5RS in the form of first-class tuples, even though
internally, tuples would be the more fundamental feature.

If this is possible, it also means that the proposal could begin life as a
SRFI, providing a possible way to migrate in this direction.

> Although the proposal requires "changes to all existing implementations
> of Scheme"---how serious are those changes, really?

If you're not implementing it, you shouldn't downplay it. I suspect this
would be a fairly serious change in any real Scheme implementation.

Anton

Anthony Carrico

Aug 11, 2003, 11:28:42 PM

Anton van Straaten <an...@appsolutions.com> wrote:
> Isn't this exactly what ML does with tuples, though?
...

> foo 4 => 4 : int
> foo (4) => 4 : int

If you say so, it must be true (I'm not an ML programmer, I only know
what I picked up from Appel's "Compiling with Continuations", and
these threads). Thanks for clearing that up. I went back and read
Clinger's 2 Feb 1995 proposal; it uses this same idea. I don't know if
Clinger was being serious or sarcastic, but I'll take it seriously for
a minute.

From Clinger:
(eq? '[[[[foo]]]] 'foo) => #t

> Does that qualify as "whacked-tuples" in your book?

Yes it does (but I'm still listening).

I'm currently in the camp that thinks R5RS multiple values already
make perfect sense. They seem to jibe 100% with my intuition that
Scheme is really just a convenient direct style syntax for CPS. From
that point of view continuations can have multiple results because
continuations are just lambdas and lambdas can have multiple
parameters. How could this be controversial?

The only value I see in the "whacked-tuples" model is it makes things
nice for people who happen to want to write a procedure whose mode of
operation is to pass along an arbitrary number of results from one
procedure to the corresponding parameters of another. This particular
control flow isn't even possible in the functional world, so why is
anyone interested in calling it "compose"? It seems to me that people
are trying to generalize something simple (compose in the world of
functions with one parameter and one result) into a context where it
doesn't make any sense. So let's call that compose-with-a-twist. Now,
what makes compose-with-a-twist so important that we should worry
about it?

The logic:
1. Scheme has multiple values.
2. Let's extend compose to multiple values.
3. Hmm... this seems a little strange.
4. Let's redesign Scheme's call mechanism.

I submit that the error may have been made in step 2!

If people still want to continue with step 4, at least consider the
following: Clinger showed that there is no price to be payed for
multiple values (when you aren't using them). To my knowledge he
didn't show that there is no price to be payed for sequences (aka
tuples).

In particular, whose job is it to unwind all those tuples (aka
sequences) of one? The tuple constructors? The accessors? During the
normal course of program flow, whose job is it to make sure they don't
build up in the first place? Are they a space leak in certain
unfortunate code?

The other thing I find strange about tuples is that tuple constructors
must be "special" procedures since they take more than one argument in
a one argument only world.

Thanks everyone for an interesting and thoughtful thread.

--
Anthony Carrico

be...@sonic.net

Aug 12, 2003, 12:16:19 AM

Anthony Carrico wrote:
[...]
> It seems to me that people are trying to generalize something
> simple (compose in the world of functions with one parameter
> and one result) into a context where it doesn't make any sense.
> So let's call that compose-with-a-twist. Now, what makes
> compose-with-a-twist so important that we should worry about it?

Three words: data directed programming.

If you have a DD problem, using an OO language is a pain in the butt
(it always wants an inheritance hierarchy which is seldom mirrored by
anything intrinsic to the problem) and using a functional language is
okay but suboptimal. So DD programming is usually done in imperative
languages like C or Pascal. But the ability to pass arbitrary
multiple results directly in as parameters to another function would
make scheme into a better DD language than any now extant.

The basic problem when you do DD programming in an imperative
language is that you wind up having to manage composition of
procedures with scores of parameters and scores of return values
(which, in C, you implement as parameters passed by reference).
Structurally speaking, it's not particularly hard to grasp, but it's
so damn easy to get lost in the parameter lists and so damn easy to
get off by one when you're counting arguments, that anything you
could use to elide the lists (and also efficiently pass the extra
parameters on the stack) would be deeply welcome and liberating to
people doing DD programming.

I know, it's one layer lower than OO and functional languages by
nature can go a couple layers higher; but some problems really are
best solved a level below OO, just as some are best solved using OO
methods and some are best solved using Unification methods.

Bear

Anton van Straaten

Aug 12, 2003, 3:07:51 AM

Anthony Carrico wrote:
> From Clinger:
> (eq? '[[[[foo]]]] 'foo) => #t

SML/NJ says: ((((4)))) = 4 => true : bool

> I'm currently in the camp that thinks R5RS multiple values already
> make perfect sense. They seem to jive 100% with my intuition that
> Scheme is really just a convenient direct style syntax for CPS. From
> that point of view continuations can have multiple results because
> continuations are just lambdas and lambdas can have multiple
> parameters. How could this be controversial?

The first controversial issue there is "lambdas can have multiple
parameters". You suggest (later in your post) that an error might have been
made extending compose to multiple values - but perhaps the error is made in
the way multiple values are thought about and handled in the first place.

Anyway, by the same token, how could it be controversial to treat a sequence
of multiple values as a first-class entity? The R5RS formal semantics does
it internally:

If you look at the definition for 'cwv' in the R5RS formal semantics (end of
sec 7.2.4), translated into something more Scheme-like, and factoring out
some of the CPS and helper procedures like twoarg (which checks arg count)
and applicate (essentially 'apply'), it looks like the code below (note if
you compare to R5RS, you might notice an extra 'k' in my version, because
R5RS has an omission in the definition of cwv - there should be a kappa
after the last occurrence of e*, otherwise call-with-values would terminate
a program):

(define call-with-values
  (lambda (producer consumer k)
    ((lambda (e*) (k (consumer e*))) (producer))))

Now, e* is a "sequence" in the formal semantics. All that's being done here
is that the sequence of multiple values produced by the producer function,
is passed to an anonymous procedure which passes them to the consumer
function. In fact, if you factor out all the CPS except for the provided
continuation k, it looks like this:

(define call-with-values
  (lambda (producer consumer k)
    (k (consumer (producer)))))

So one question is, if this is what's happening in the formal semantics,
mightn't it be a good idea to be able to do something similar in Scheme?
Why are we forced to do something so simple using a very special procedure,
call-with-values, and if we write code like the above, it is an error?

From my own perspective, I've always found multiple values not to fit
intuitively with the rest of Scheme. This has nothing to do with any of the
political history or the fact that they didn't exist in earlier versions of
Scheme: I learned Scheme post-R5RS, so to me, multiple values have always
been there. But they never really seemed to fit.

It was about a year ago that I first read the 1995 discussion about this
first-class sequence proposal. I found Matthias B's arguments convincing,
and I had enough familiarity with ML at that point that it was instantly
attractive to me.

The first-class sequence proposal seems to provide at least three benefits
which I consider worthwhile:

1. Allow a multiple-value return result to be treated as a first-class
value. While I understand that Scheme is not necessarily about making
everything first class, I find the non-first-classness of multiple-values
unpleasant. It limits how functions which return multiple values can be
invoked. I find that to be against the spirit of Scheme. I could expand on
that, but in brief, the inability to reliably use an expression of the form
(let ((x (f ...))... for any function f is hard for me to understand,
although I do understand the continuation justification. The fact that the
primary motivating factor seems to be efficiency/performance for
multiple-value return only makes things worse, for me - surely there are
ways to achieve performance without requiring special and somewhat unwieldy
non-first-class constructs.

2. Expose a more efficient structure for argument-passing than lists.
Smart compilers already do this under the hood. The formal semantics
already treats arguments as "sequences", for some of the same reasons as
compilers do, afaict: because lists are an unnecessarily complex &
inefficient structure for argument passing. But at the Scheme level, if you
don't bind all a function's arguments to individual variables, you get a
list, which isn't always what's wanted. The same argument that argues for
multiple value return for performance/efficiency reasons, argues for
something other than lists as a way to package arguments.

3. Create consistency between argument passing and multiple value return.
Repeating part of point 2, right now we have what looks like real lists for
arguments at the Scheme level, which can be too heavy; and we have values
which can't be dealt with directly and have to be funnelled through a
procedure's argument list via call-with-values (talking about features
defined by R5RS). The first-class sequence proposal seems to provide a
middle ground which cleans things up and improves flexibility.

> The only value I see in the "whacked-tuples" model is it makes things
> nice for people who happen to want to write a procedure whose mode of
> operation is to pass along an arbitrary number of results from one
> procedure to the corresponding parameters of another. This particular
> control flow isn't even possible in the functional world, so why is
> anyone interested in calling it "compose"?

The end result of the first-class sequence proposal would be something very
similar to ML, so it's hardly at variance with the functional world,
depending on what you mean by "functional world". One definition of the
functional world has lambdas only taking one parameter...

> It seems to me that people are trying to generalize something
> simple (compose in the world of functions with one parameter
> and one result) into a context where it
> doesn't make any sense. So let's call that compose-with-a-twist. Now,
> What makes compose-with-a-twist so important that we should worry
> about it?
>
> The logic:
> 1. Scheme has multiple values.
> 2. Let's extend compose to multiple values.
> 3. Hmm... this seems a little strange.
> 4. Let's redesign Scheme's call mechanism.
>
> I submit that the error may have been made in step 2!

Or step 1! My logic doesn't start with step 1. It starts with wanting a
consistent and usable solution to the issue of passing multiple values
around: whether to functions, or from functions.

> If people still want to continue with step 4, at least consider the
> following: Clinger showed that there is no price to be paid for
> multiple values (when you aren't using them).

But some people find them to be a less-than-useful or attractive mechanism,
whereas the problem they are supposed to solve does exist. The sequence
proposal (arguably) provides a more complete & elegant solution.

> To my knowledge he didn't show that there is no price
> to be paid for sequences (aka tuples).

True. That would need to be investigated, as would the usability within
Scheme of the proposal (IMO).

> In particular, whose job is it to unwind all those tuples (aka
> sequences) of one? The tuple constructors? The accessors? During the
> normal course of program flow, whose job is it to make sure they don't
> build up in the first place? Are they a space leak in certain
> unfortunate code?

Maybe Matthias B. could describe that. Appel's book says (ch 5.3), talking
about primops: "Each [primop] category is converted into [CPS] in one of two
ways, depending on whether the operator takes one argument or more than
one." It makes sense that it would happen at function call sites and other
tuple construction locations.

> The other thing I find strange about tuples is that tuple constructors
> must be "special" procedures since they take more than one argument in
> a one argument only world.

Every procedure call would implicitly construct a tuple, wouldn't it? An
explicit tuple constructor would only be needed in certain situations. In
many respects, it simply seems like exposing something that already happens
internally. Multiple values are already very "special". One way to look at
this proposal is it makes them less special.

Note that I'm not claiming I'm "right" about all this. I don't think my
understanding is in-depth enough to make that judgement. But the proposal
sounds worthwhile to me, and I haven't yet seen any reasons that, to me,
argue for rejecting it.

Anton

David Rush
Aug 12, 2003, 6:22:52 AM
"Anton van Straaten" <an...@appsolutions.com> writes:
> Note that I'm not claiming I'm "right" about all this. I don't think my
> understanding is in-depth enough to make that judgement. But the proposal
> sounds worthwhile to me, and I haven't yet seen any reasons that, to me,
> argue for rejecting it.

Actually, I think you've argued the point very well. Indeed, it cuts
to the heart of my difficulties with call/values (aside from R4RS
incompatibility). I certainly could live with Matthias Blume's
proposal. Will Clinger's proposal seems to be very concerned with the
eq?-ness of sequence/tuple objects which Matthias Blume seems to have
engineered out of the system by making tuples immutable.

So what are we mere users missing here?

Michael Sperber
Aug 12, 2003, 9:01:37 AM
>>>>> "bear" == Ray Dillinger <be...@sonic.net> writes:

bear> But the ability to pass arbitrary multiple results directly in
bear> as parameters to another function would make scheme into a better
bear> DD language than any now extant.

Huh? That's *exactly* what CALL-WITH-VALUES does. Could you clarify
what you mean?

--
Cheers =8-} Mike
Peace, international understanding, and all that blabla

Lauri Alanko
Aug 12, 2003, 9:09:08 AM
Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:

> >>>>> "bear" == Ray Dillinger <be...@sonic.net> writes:
>
> bear> But the ability to pass arbitrary multiple results directly in
> bear> as parameters to another function would make scheme into a better
> bear> DD language than any now extant.
>
> Huh? That's *exactly* what CALL-WITH-VALUES does. Could you clarify
> what you mean?

I'd hazard a guess that the essential difference is in the convenience
between:

(call-with-values (lambda () e) f)

and

(f e)


Lauri Alanko
l...@iki.fi

Matthias Blume
Aug 12, 2003, 10:35:37 AM
Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:

> >>>>> "bear" == Ray Dillinger <be...@sonic.net> writes:
>
> bear> But the ability to pass arbitrary multiple results directly in
> bear> as parameters to another function would make scheme into a better
> bear> DD language than any now extant.
>
> Huh? That's *exactly* what CALL-WITH-VALUES does. Could you clarify
> what you mean?

"directly" as opposed to "via CALL-WITH-VALUES"

(Just a guess.)

Matthias

Matthias Blume
Aug 12, 2003, 10:41:28 AM
"Anton van Straaten" <an...@appsolutions.com> writes:

> Anthony Carrico wrote:
> > From Clinger:
> > (eq? '[[[[foo]]]] 'foo) => #t
>
> SML/NJ says: ((((4)))) = 4 => true : bool

Yes. And any mathematician will agree that

(x) = x

and therefore, by extension,

((((x)))) = x

I really don't see why people would get bent out of shape by something
like that. Also, notice that (as I mentioned before several times)
you can already define such a bracketing construct (call it TUPLE)
with these properties in RnRS Scheme, so it can't be all that "strange":

(define (tuple . l)
  (if (= (length l) 1) (car l) l))
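
For instance, with that definition (plain RnRS, mirroring Clinger's
bracket example):

    (tuple 1 2 3)                             ; => (1 2 3)
    (tuple 'foo)                              ; => foo
    (eq? (tuple (tuple (tuple 'foo))) 'foo)   ; => #t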

> > In particular, whose job is it to unwind all those tuples (aka
> > sequences) of one? The tuple constructors? The accessors? During the
> > normal course of program flow, whose job is it to make sure they don't
> > build up in the first place? Are they a space leak in certain
> > unfortunate code?
>
> Maybe Matthias B. could describe that.

I already did.

> > The other thing I find strange about tuples is that tuple constructors
> > must be "special" procedures since they take more than one argument in
> > a one argument only world.
>
> Every procedure call would implicitly construct a tuple, wouldn't it? An
> explicit tuple constructor would only be needed in certain situations.

Exactly. In fact, you can define your explicit tuple constructor
(assuming my proposal) as, the rest of the machinery is in how LAMBDA
and function application works:

(define (tuple l) l)

Now, is that neat, or what? :-)

Matthias

Matthias Blume
Aug 12, 2003, 10:44:06 AM
Matthias Blume <matt...@shimizu-blume.com> writes:

> Exactly. In fact, you can define your explicit tuple constructor
> (assuming my proposal) as, the rest of the machinery is in how LAMBDA
> and function application works:

Oops, this sentence got garbled. Read as:

"In fact, you can define your explicit tuple constructor

(assuming my proposal) as:

(define (tuple l) l)

(The rest of the machinery is in how LAMBDA and function application works.)"
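
Tracing a call under the proposed application rule shows why (a
sketch, not R5RS):

    ;; (tuple 1 2 3)
    ;;   = (tuple [1 2 3])   ; the call site packages the three arguments
    ;;   = [1 2 3]           ; and TUPLE returns its argument unchanged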

Matthias

Michael Sperber
Aug 12, 2003, 10:47:05 AM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:

>> >>>>> "bear" == Ray Dillinger <be...@sonic.net> writes:
>>
bear> But the ability to pass arbitrary multiple results directly in
bear> as parameters to another function would make scheme into a better
bear> DD language than any now extant.
>>
>> Huh? That's *exactly* what CALL-WITH-VALUES does. Could you clarify
>> what you mean?

Matthias> "directly" as opposed to "via CALL-WITH-VALUES"

Matthias> (Just a guess.)

Probably right, though.

I don't see a totally baby-bottom-smooth way to paper over the
distinction between procedures and continuations, I guess. Scheme
uses CALL-WITH-VALUES to paper over the gap, ML uses tuples. Both are
cool in some contexts, awkward in others.

be...@sonic.net
Aug 12, 2003, 11:17:36 AM

Yeah. Or more to the point being able to say

(f (e (j (i param1 param2 param3 .... param20))))

to compose a bunch of procedures that each take 20 arguments
and return 20 results.
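
For comparison, a sketch of the same pipeline in R5RS, truncated to
the first two compositions and three parameters (i, j, e as above):

    (call-with-values
      (lambda ()
        (call-with-values
          (lambda () (i param1 param2 param3))
          j))
      e)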

Bear

Anthony Carrico
Aug 12, 2003, 3:03:28 PM
Michael Sperber <spe...@informatik.uni-tuebingen.de> wrote:
> I don't see a totally baby-bottom-smooth way to paper over the
> distinction between procedures and continuations, I guess. Scheme
> uses CALL-WITH-VALUES to paper over the gap, ML uses tuples. Both are
> cool in some contexts, awkward in others.

Amen brother! The only baby-bottom-smooth resolution is to prevent
functions from returning at all. Too bad it's such a pain in the
baby's bottom.

Second class parameter/values are a control construct; that's
Scheme. Ignoring parameters/values and just passing/returning a single
first-class vector/list/sequence/tuple and calling its elements your
"parameters/results" is also an option inside of Scheme.

If you ditch the second class parameters/values, you no longer have
Scheme. In that case, the lambda-the-ultimate goto-with-parameters
mental model is gone. Let's just call it "MLisp" and see if it can
prove itself on its own merits (rather than just downplaying the
merits of the Scheme model).

I'll certainly admit that the Scheme rest parameter is an artifact of
some by-gone interpreter's implementation, so allow the Scheme rest
parameter to be bound to something opaque which can be implemented as
a list of arguments, set of registers, a tuple, etc. Now, with some
functions and macros, it ought to be possible to define a Scheme in
native MLisp, and an MLisp in native Scheme. Then we get some
answers.

I'd welcome the chance to learn about ML concepts with a lisp syntax
and syntax-case macros, I just wouldn't call it a logical extension of
what I understand Scheme to be.

--
Anthony Carrico

Matthias Blume
Aug 12, 2003, 4:57:30 PM
Anthony Carrico <acar...@memebeam.org> writes:

> If you ditch the second class parameters/values, you no longer have
> Scheme.

I don't see why that would be so. In fact, most (all?) Scheme
programs would still run, so what's the problem?

> In that case, the lambda-the-ultimate goto-with-parameters
> mental model is gone.

Again, I don't understand this statement at all. What does one have
to do with the other?

> Let's just call it "MLisp" and see if it can
> prove itself on its own merits (rather than just downplaying the
> merits of the Scheme model).

Remind me: what are those merits of the "Scheme model" (whatever
*that* is), and how do they rely on the second-class-ness of
parameters and values?

I always thought that if there is a "Scheme model", then it is all
about making as many things as possible first-class. (Not that that's
necessarily a good thing. In many cases I'd argue that it is not.
Parameters and values, though, aren't one of these cases.)

Matthias

Tom Lord
Aug 12, 2003, 5:00:27 PM

I'm with Anthony Carrico, and then some:

Anthony:

    Second class parameter/values are a control construct; that's
    Scheme. [....]

    If you ditch the second class parameters/values, you no longer
    have Scheme. In that case, the lambda-the-ultimate
    goto-with-parameters mental model is gone. Let's just call it
    "MLisp" and see if it can prove itself on its own merits
    (rather than just downplaying the merits of the Scheme model).


I'm not so sure about this tuple stuff fitting in with "the spirit of
Scheme" -- or even making much sense.

I'm looking at this form of proposal:

http://groups.google.com/groups?q=g:thl776716467d&selm=fo1ydnbnx8.fsf%40trex10.cs.bell-labs.com


I've divided this possibly too long post into three parts:


* questions unanswered in the proposal
* some surprising behaviors implied by the proposal
* the spirit of scheme and its implications here

* questions unanswered in the proposal

Matthias wrote the proposal in a very informal style which makes it
hard to tell precisely what it says. Informally, I get the gist, but
when I try to imagine how the details should be filled in, I don't
quite see it.

One example: the proposal tells us that

(lambda (x1 ... xN) <body>)

is syntactic sugar for:

(lambda (arg)
  (let ((x1 (project arg 1))
        ...
        (xN (project arg N)))
    <body>))

where `(project arg <N>)' is a "form expressing projection for tuples".

I've long regarded `let' as syntactic sugar for lambda. That is,
in the manner of rabbit or R5RS (7.3), one can define:

(let ((<name1> <exp1>)
      ...
      (<nameN> <expN>))
  <body1>
  <body2> ...)

to be syntactic sugar for:

((lambda (<name1> ... <nameN>) <body1> <body2>...) <exp1> ... <expN>)

and so I don't see how Matthias' definition isn't circular. Is part
of the proposal here that `let' is no longer what R5RS calls a
"Derived expression type"? Does this proposal entail an extensive
rewrite of the definition? Can we see that report?

People in this thread have been asking questions which I'll paraphrase
as "Why not do this? What's the impact?" I think that to really
answer those questions, minimally, one ought to produce the proposed
changes for R6RS in detail; ideally, work out its implications
for various compilation and interpretation strategies.

* some surprising behaviors implied by the proposal

The proposal tells us that:

(f 1 2 3)

is just syntax for:

(f (tuple 1 2 3))

and also that tuples are to be "good old-fashioned first-class values
like everything else in Scheme".

Taking these proposals at their word, and putting them together, we
get some surprises, like:


(length (list (tuple 1 2 3))) => 3

and so, for example, unlike R5RS, it is no longer certain that (absent
errors):

(let ((x <any value>))
  (eq? 1 (length (list x))))
=> #t


I suppose that one could _partly_ "rescue" the tuple proposal by
asserting that:

(let ((x (tuple 1 2 3)))
  <body>)

is an error and that only a multi-binding construct such as:

(let ((x y z (tuple 1 2 3)))
  <body>)

is permissible, but then in what sense is it that tuples are
"first class values" again?

* the spirit of scheme and its implications here

I don't think there's any real definition of the "spirit of scheme"
but I do think it's a topic that can be discussed meaningfully.

I look for evidence of "the spirit of Scheme" in a few places: the
rabbit thesis, the "Lambda: The Ultimate..." papers, the RnRS series,
my own experience as implementor and user, the experience of other
implementors and users. I also like to look at some of the influences
floating around in the early days of Scheme, such as work of
Christopher Strachey, et al.

I seem to recall seeing many spirit-oriented discussions on this list
focus intensely on just one sentence which can be found in R5RS, the
first sentence of the introduction:

    Programming languages should be designed not by piling feature
    on top of feature, but by removing the weaknesses and
    restrictions that make additional features appear necessary.

That's an important sentence, but I like the next one too (emphasis
added):

    Scheme demonstrates that a very small number of rules for
    forming expressions, with no restrictions on how they are
    composed, suffice to form a _practical_ and efficient
    programming language that is flexible enough to support most
    of the major programming paradigms in use today.


And these two from the summary:


    It was designed to have an exceptionally _clear_and_simple_
    semantics and few different ways to form expressions. A _wide_
    variety_of_programming_paradigms_, including _imperative_,
    functional, and _message_passing_styles_, _find_convenient_
    expression_ in Scheme.


So the first observation I want to make is that the tuples proposal,
at least arguably, removes a clear and simple semantics for lambda
and procedure application and replaces them with a somewhat opaque
and complicated semantics. It's not at all obvious to me that
tuples really make the math of the semantics any easier -- it is clear
to me that tuples and single-argument-lambda introduce a serious
indirection between procedures and procedure application on the one
hand, and that other important view on the other: that procedure
application corresponds to a machine-language "goto while passing
parameters".

And the second observation I want to make is that it is not obviously
in keeping with the spirit of scheme to try to turn it into a
particular lambda calculus of interest to ML fans. You'd want to at
least demonstrate, say, a really simple yet kick-ass compiler and
interpreter grounded in that approach first. (I'm using "not
obviously" advisedly -- I'd also say "not obviously not". I'm just
trying to say that saying "call-with-values doesn't _feel_ right to
me; tuples feel better" isn't enough. It was the surprising
harmonization of theory with practice that caused Scheme to resonate
so loudly when it appeared.)

I'll dig a little deeper into two points in particular. First, the
"goto with parameters" view; second, the question "what is the
scheme-spirit take on functional languages".


** goto with parameters

I thought that part of the "spirit of scheme" is that one
interpretation of function application is that it is a "goto with
parameters".

Another part of the "spirit of scheme" is that "returning a value"
is an example of function application.

A third part of the "spirit of scheme" is to avoid unnecessary
restrictions on how expressions may be composed.

Putting those three parts of the "spirit of Scheme" together, it is
clear that R4RS was missing some generality. It contained the
arbitrary restriction that one could pass multiple parameters in most
function applications, but not in the function applications implied by
"returning". All continuations took precisely one parameter.

The "goto with parameters" idea makes sense at several levels.
Operationally, it's a vague but useful way to characterize some
important properties of the kinds of computers we know how to build best
-- so when we're programming in those terms we are, in some sense,
programming "close to the hardware". Pragmatically, the goto idea
leads to very simple compilation and interpretation strategies --
strategies very clearly related to the surface language.
Mathematically, the goto idea is expressible in a minimally augmented
example of an applicative-order lambda calculus. The goto idea, along
with first-class continuations, very concisely provides, explains, and
generalizes all important control flow constructs found in similar
languages. Those four levels can all be grasped comparatively easily
by intuition. Physics, the practice of programming, math, and
explicative power all converging and harmonizing over a very intuitive
abstract structure --- with five-part harmonies like that, do you
really wonder why I sometimes say "space aliens program in scheme"?

The tuples idea doesn't exactly abandon the "goto with parameters"
idea -- but it does throw in some new layers of
indirection there. Intuition is told, "don't worry, this all gets
put back together in a reasonable way by the compiler". I think
something is lost in this proposal.

One way to view the idea of "first class tuples" is that it is the
idea of reifying the parameter passing conventions as values which
represent the intention to pass a particular set of parameters.

One way to view the idea of "call-with-values" and "values" is that it
is based on exactly the opposite idea: of avoiding creating values
which represent the intention to pass a particular set of parameters.

Paradoxes like the `length' examples shown above illustrate that there
is a distinction to be made between applying a procedure to a single
first-class value which can be interpreted as a reified "intention to
pass a particular list of parameters" and a procedure application to
that list of parameters. Scheme already has a mechanism for making
that distinction: `apply'.

** the scheme-spirit take on functional languages

The interesting, foundational work on implementing functional
languages bears a lot of resemblance to the rabbit thesis. A
compiler for a tiny core language is exhibited, and then many
high-level concepts are expressed as transformations into that
language.

The static-typing work done by functional language researchers clearly
does not apply to Scheme as a whole. Certainly it could be
reinterpreted as a compilation/linting strategy for a subset of Scheme
programs. (R5RS 1.1: "Scheme has latent as opposed to manifest
types." and R5RS Summary: "Scheme is [...] a dialect of the Lisp
program language [....]")

Where I think Scheme research _might_ pay off with language
changes/extensions is from consideration of implementations of the
normal-order lambda calculus. Does graph reduction usefully fit into
Scheme?

-t

Anton van Straaten
Aug 12, 2003, 6:27:54 PM
Anthony Carrico wrote:
> Second class parameter/values are a control construct; that's
> Scheme.

Although, until R5RS, there was no issue about multiple values because they
didn't exist. Then multiple values were added, and as a result, that part
of Scheme became something that's not considered completely Scheme by some
people - a view I have some sympathy with.

> If you ditch the second class parameters/values, you no longer have
> Scheme.

I'm sure we all have different ideas about what makes Scheme Scheme. That
one hasn't ever occurred to me, except as a sort of restriction - minor in
the case of parameters, but rearing its head more with multiple value
return. "Ditching" second class parameters/values makes it sound as though
something is lost - but what? I listed a number of things I saw as lost
with multiple values, including the very Scheme-like ability to reliably
store the results of any function in a variable.

> In that case, the lambda-the-ultimate goto-with-parameters
> mental model is gone.

How so? Can you give an example of a description of some operation, from
the "lambda-the-ultimate goto-with-parameters" model perspective, which is
somehow unreasonable or broken with first-class sequences?

If we want to relate this discussion to lambda-the-ultimate-ness, one
problem is that real lambdas only take one parameter (that should be read in
the same way as "real men don't eat quiche".) So where do we look for
models of how to handle lambdas with multiple parameters? I don't know, and
would appreciate any pointers. Scheme provides one model, but as I've
argued, to me its multiple values handling seems out of place. ML provides
another model, which *in ML* I certainly like, and don't have that out of
place feeling about. I'm less familiar with Haskell, but its model seems
quite similar to ML's - in particular, (4) == 4, and f (x,y) passes a single
tuple (x,y) to the function f.

Given this, the idea that we're talking about the ML'izing of Scheme here
seems invalid. We may be talking about bringing Scheme more in line with
other lambda-based functional languages with pattern-matched argument
binding, though.

> Let's just call it "MLisp" and see if it can
> prove itself on its own merits (rather than just downplaying the
> merits of the Scheme model).

I see the proposal as essentially a kind of extension, although one that
removes the need for some existing core constructs. In a Scheme with
sequences, you'll be able to use (f (g)) where g returns multiple values,
instead of (call-with-values g f). The former seems much more like "the
Scheme model" to me. call-with-values can be implemented in terms of that,
but not vice versa.

So, I'm not understanding the big problem here, and I don't see which merits
of the Scheme model are being downplayed. I'm willing to try to understand,
given or pointed to an explanation.

> I'll certainly admit that the Scheme rest parameter is an artifact of
> some by-gone interpreter's implementation, so allow the Scheme rest
> parameter to be bound to something opaque which can be implemented as
> a list of arguments, set of registers, a tuple, etc. Now, with some
> functions and macros, it ought to be possible define a Scheme in
> native MLisp, and a Mlisp in native Scheme. Then we get some
> answers.

Something that addressed the concerns of both sides would obviously be
ideal. RnRS seems capable of straddling almost any line, what's one more?
;)

> I'd welcome the chance to learn about ML concepts with a lisp syntax
> and syntax-case macros, I just wouldn't call it a logical extension of
> what I understand Scheme to be.

I certainly don't see the proposal as a way to migrate Scheme towards ML. I
wouldn't be in favor of that. I like ML for some kinds of applications, but
I value the flexibility that Scheme provides. I'm just not seeing how this
proposal takes anything away from that flexibility.

Instead of "MLisp", may I suggest "2Scheme", as a contraction of
TupleScheme. This also has parallels with Lisp1 & Lisp2, which may be the
kind of split we're beginning to create here... :^}

Anton

Anthony Carrico
Aug 12, 2003, 10:08:43 PM
Wow! All I can say is thank you Tom Lord for doing such a great job of
expressing my concern. Most of the questions Matthias Blume and Anton
van Straaten asked back to me are best answered in your post.

I think Tom did a great job of expressing how (people like me)
understand the Scheme model in terms of goto with parameters, and how
(for people like me) the "tuple" model seems one step removed from the
machine. If I may argue-by-irony, in "Compiling with Continuations",
Andrew Appel takes the output from the ML front end and maps it to
Scheme's goto with parameters model. A lot of the optimizations he
describes involve cleaning up all those tuple accesses!

Anton van Straaten <an...@appsolutions.com> wrote:

> Although, until R5RS, there was no issue about multiple values because they
> didn't exist. Then multiple values were added, and as a result, that part
> of Scheme became something that's not considered completely Scheme by some
> people - a view I have some sympathy with.

I feel just the opposite way. Until R5RS, it felt like there was a hole in
Scheme because you couldn't inject a continuation with an
arbitrary signature. R5RS didn't add something new to the picture, it
filled that curious hole in the existing mechanics. In fact, it made
the kind of function composition that you desire possible.

> How so? Can you give an example of a description of some operation, from
> the "lambda-the-ultimate goto-with-parameters" model perspective, which is
> somehow unreasonable or broken with first-class sequences?
>
> If we want to relate this discussion to lambda-the-ultimate-ness, one
> problem is that real lambdas only take one parameter (that should be read in
> the same way as "real men don't eat quiche".) So where do we look for
> models of how to handle lambdas with multiple parameters? I don't know, and
> would appreciate any pointers.

Ah. I apologize, as you say we are coming from very different points
of view. I am not talking about those "real lambdas". Not at
all. Those lambdas have little to do with what I'm talking about, it's
a different mechanism, they just happen to have the same alias by some
accident of history. I'm talking about the Steele/Sussman
ultimate-lambdas. See "The Original 'Lambda Papers' by Guy Steele and
Gerald Sussman" at http://library.readscheme.org/page1.html. Sorry for
the confusion.

> In a Scheme with sequences, you'll be able to use (f (g)) where g
> returns multiple values, instead of (call-with-values g f). The
> former seems much more like "the Scheme model" to me.
> call-with-values can be implemented in terms of that, but not vice
> versa.
>
> So, I'm not understanding the big problem here, and I don't see which merits
> of the Scheme model are being downplayed. I'm willing to try to understand,
> given or pointed to an explanation.

Ok, the big problem is that, in my world, "(f (g)) where g returns
multiple values" is an error, not a virtue! This is what I was trying
to point out when I said that maybe this whole discussion fell off the
Scheme wagon back at "compose". On the other hand, if in some
circumstance I do want f to take all of g's values as parameters, I
have an easy way to do it: make f into the continuation of g. How do I
do that? The call-with-values procedure.

There is a big huge difference between "(g) initializing the first
argument of f" and "f being the continuation of (g)". The proponents
of this idea conflate the two situations: "what's the big deal? it's
the same thing!". In my world, it's not the same thing, it's very
different!

In your world, the operator is indeed the continuation of the first
and only argument. It sort of makes every call into a
call-with-values.

I understand that you all aren't in my world, and there is probably
some value in your world, but I hope you can understand that it
sometimes seems like you guys are trying to sweep some very
fundamental issues under the rug. Again, read Tom's post (well, except
the final paragraph :). I'm sure it is more coherent than mine.

--
Anthony Carrico

ol...@pobox.com
Aug 12, 2003, 10:49:15 PM
The Glasgow Haskell Compiler GHC has something _very_ like Scheme multiple
values: unboxed tuples. Unsurprisingly, they are second class. Here's
an excerpt from GHC documentation:

<blockquote>
7.2.2. Unboxed Tuples

Unboxed tuples aren't really exported by GHC.Exts, they're available by default
with -fglasgow-exts. An unboxed tuple looks like this:

(# e_1, ..., e_n #)

Unboxed tuples are used for functions that need to return multiple values, but
they avoid the heap allocation normally associated with using fully-fledged
tuples. When an unboxed tuple is returned, the components are put directly into
registers or on the stack; the unboxed tuple itself does not have a composite
representation. Many of the primitive operations listed in this section return
unboxed tuples.

- Unboxed tuples may only be constructed as the direct result of a function,
and may only be deconstructed with a case expression. eg. the following are
valid:

f x y = (# x+1, y-1 #)
g x = case f x x of { (# a, b #) -> a + b }

but the following are invalid:

f x y = g (# x, y #)
g (# x, y #) = x + y

The IO and ST monads use unboxed tuples to avoid unnecessary allocation during
sequences of operations.

</blockquote>

I would not be surprised if Simon Peyton-Jones got that idea from
Scheme.


I must say that I use multiple values and multiple-valued functions a
great deal. OTOH, I hardly ever mention call-with-values directly.
Multiple-valued functions _greatly_ help in writing purely functional
code that involves the threading of a state:

(define (open:input-buffer-layer schema url-rest properties)
  (let*-values
      (((properties parent-duct)
        (find-prop-in-open-properties properties
                                       'parent-duct #f))
       ((properties buffer buffer-length curr-pos end-pos)
        (let*-values
            (((properties buffer-size)
              (find-prop-in-open-properties properties 'buffer-size #f)))
          (if buffer-size
              (if (not (and (integer? buffer-size) (>= buffer-size 0)))
                  (open-error error:bad-args "buffer-size: bad type or range"
                              buffer-size)
                  (values properties
                          (gvector-make-like parent-unit buffer-size)
                          buffer-size 0 0))
              (let*-values
                  (((properties buffer read-pos available-read)
                    (find-prop-in-open-properties properties
                                                  'buffer #f
                                                  'read-pos #f
                                                  'available-read #f))
                   ...
                   ))))))))

(or (null? properties)
    (open-error error:excessive-open-properties
                "for input-buffer:" properties))

As you can see, 'properties' is the list of properties that gets
threaded through. Functions like find-prop-in-open-properties search
the list for some keys, return the found values (or the default
values), and the remainder of the list. At the end, the code checks
that all properties given by the user have been taken care of.
The code is purely functional and referentially transparent. This
code should be familiar to some people on this newsgroup. And yes, it
can run on Gambit.


I also found it quite convenient to write

(let*-values (((v1 v2 v3) (apply values lst))
              ((x) (+ v1 v2)))
  ...)

For one thing, the code looks like pattern-matching on the list. For
another, the above expression frees me from explicitly checking that
the list lst indeed has exactly three elements. I get deconstruction,
binding, and error checking in one line. Here's a similar example from
the real code:

(let*-values
    (((prev-bbox areas . products) (apply values seed))
     ((nlat wlon slat elon) (apply values (string-split prev-bbox))))

Michael Sperber
Aug 13, 2003, 11:05:09 AM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

Matthias> Remind me: what are those merits of the "Scheme model" (whatever
Matthias> *that* is), and how do they rely on the second-class-ness of
Matthias> parameters and values?

The merits are that there is a symmetry between parameters and return
values.

If you're asking for "merits over the ML model"---here are some of my
gripes with it:

- There are two ways of doing multi-parameter functions---via currying
or tuples, with no clear indication of which one is superior. (I
haven't seen a clear discipline written up when to use one or the
other.) With currying, "parameter lists" aren't first-class,
either.

- Zero-parameter functions are not really a problem in practice, but
they break my notion of baby-bottom-smoothness.

- I'd say implementing tuples without excessive allocation through
boxing is more difficult than implementing the Scheme model. (In
fact, I remember ML implementations that didn't bother in the first
place. The result was worse than Emacs.)

Matthias Blume
Aug 13, 2003, 11:38:20 AM

I am no longer able (nor eager) to invest too much of my time and my
emotional energy in arguing these things, so I will (try to) limit
myself to this one last article on the topic.

At the time it was written, my proposal was not meant as a serious
suggestion for changing the definition of Scheme. It merely served to
show that there are ways of getting the benefits of multiple return
values without the ballast of having to deal with contortions such as
call-with-values.

Having said this, I will answer your three areas of concern in order.
I'll keep it as brief as I can.

1. It is, obviously, true that one cannot explain the semantics of
LAMBDA with those of LET if at the same time one is to explain LET
in terms of LAMBDA.

My attempt at explaining my new LAMBDA semantics using LET was not
meant to be formal in any sense. (Why do people always take usenet
articles and treat them as if they were POPL submissions? They are
not.)

Anyway, I also explained the semantics of my new LAMBDA quite
clearly without using LET, I think. Here is a brief summary:

- all functions (including continuations) take precisely one argument
- there is a distinguished value, call it "[]", which should be
  thought of as an "empty tuple"
- given two or more values v_1,...,v_k (k >= 2), there is a "k-tuple value"
  containing these values (call it "[v_1 ... v_k]")
- LAMBDA forms:
  * (LAMBDA () ...) evaluates to a procedure whose argument must be
    the empty tuple []. Passing any other argument is considered a
    runtime error.
  * (LAMBDA (x) ...) evaluates to a procedure whose single argument
    will be bound to the formal parameter x for the purpose of
    evaluating the body.
  * (LAMBDA (x_1 ... x_k) ...) evaluates to a procedure whose single
    argument must be a k-tuple [v_1 ... v_k]. For the purpose
    of evaluating the body, each v_i will be bound to the corresponding
    x_i.
  * (LAMBDA x ...) evaluates to a procedure which inspects its single
    argument. If the argument is the empty tuple [], then x will
    be bound to the empty list; if the argument v is not a tuple, then
    x will be bound to the freshly allocated singleton list (v);
    if the argument is a k-tuple [v_1 ... v_k], then x will be bound
    to a freshly allocated list (v_1 ... v_k).
- Application forms:
  * (f) calls the procedure that f evaluates to and passes []
    as the single argument.
  * (f arg) calls the procedure that f evaluates to and passes
    the result of evaluating arg as its single argument.
  * (f arg_1 ... arg_k) with k >= 2 calls the procedure that f
    evaluates to and passes the tuple [v_1 ... v_k] as its single
    argument where each v_i is obtained by evaluating the
    corresponding arg_i.

A suitable modification of the denotational semantics given in
RnRS is easy (or so I believe). I leave it as an exercise to the
interested reader. :-)
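
An illustration of the rules above (proposal semantics, not R5RS):

    ;; ((lambda (x y) (+ x y)) 1 2)
    ;;   = ((lambda (x y) (+ x y)) [1 2])   ; the call site builds [1 2]
    ;;   = (+ 1 2)                          ; the 2-parameter LAMBDA
    ;;   = 3                                ; destructures the tuple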

Remarks:

* I claim that under the above rules, all Scheme programs that
currently do not exhibit runtime errors will continue to run and
behave as they did before. This should be fairly straightforward
to prove by a simple case analysis, matching applications with
corresponding lambdas.

* One can quibble about the equality properties of tuples. I
leave this open here. If we allow EQ? on tuples, then one
obvious axiom should be:

If t is a tuple [v_1 ... v_k] and (EQ? t u) ==> #t, then
u is a tuple [w_1 ... w_k] and \forall i . (EQ? v_i w_i) ==> #t.

I would not want to place many (if any) more restrictions than
that on implementations (see next paragraph).

* In my original article I also sketched out an implementation
strategy which could succinctly be described as "lazy tupling".
In all Scheme programs that are currently legal there would never
be any tuple that gets physically constructed, so there is no inherent
cost in using first-class tuples. (Current implementations already
have to check for arity mismatches, and tupling can quite easily be
piggy-backed on these arity tests. One can optimize common cases
by having multiple entry points for each function: one for the
1-argument case, one for the 0-or-k-argument case.)

* The R5RS procedure VALUES can be defined as the identity function.
(This is the same as an explicit tuple constructor.)

* The R5RS procedure CALL-WITH-VALUES can be defined as

(define call-with-values (LAMBDA (p c) (c (p ()))))

* Projections from tuples can be defined as

(define project_i (LAMBDA (x_1 ... x_{i-1} x_i . rest) x_i))

2. Variable-arity procedures (such as LIST):

Tom has a point here: It may come as a surprise to some that it is no
longer true that, regardless of what the value of x is, we have

(length (list x)) ==> 1

In fact, if LIST is defined as

(define LIST (lambda l l))

then it acts as a converter from tuples to lists. This might even be
more useful than trying to preserve the "expected" behavior above.
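
Tracing Tom's LENGTH example with this definition (proposal semantics):

    ;; (define LIST (lambda l l))
    ;; (list (tuple 1 2 3))            ; the single argument is the
    ;;                                 ; 3-tuple [1 2 3], so l is bound
    ;;                                 ; to the list (1 2 3)
    ;; (length (list (tuple 1 2 3)))   ; => 3, not 1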

In any case, I personally do not see variable-arity functions as
more than gratuitous cuteness. They are never really necessary. In
my code, when I use LIST I really want to use records (preferably with
named fields). Modern dialects of Scheme offer those. When lists are
used the way they were meant to be used (i.e., as an inductive data
structure), LIST is seldom useful as a function. Most of the other
built-in variable argument functions (+, *, ...) are even less useful,
in my view. In my code I rarely use them with anything other than
exactly two arguments. Having the obvious fold built into the
operator is what I consider "cute" but not terribly helpful.

The bottom line is that I can envision a Scheme which does not rely so
heavily on variable argument lists, so if there is any problem with
those in relation to tuples, it might be a non-issue.

(The syntactic convenience provided by variable-arity functions could
easily be recovered by macros. Procedure values, however, do not need
to be able to accept a variable number of arguments. In cases where
one really wants to pass a statically unknown number of values to a
function, one can do that already by explicitly passing a list of
these values.)
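
A minimal sketch of such a macro in R5RS SYNTAX-RULES (MY-LIST is a
made-up name; it recovers variable-arity list syntax on top of a
fixed-arity CONS):

    (define-syntax my-list
      (syntax-rules ()
        ((_) '())
        ((_ x y ...) (cons x (my-list y ...)))))

    (my-list 1 2 3)   ; => (1 2 3)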

3. The "spirit of Scheme". I will keep this very short since it is
mostly mumbo-jumbo anyway.

I have no idea what this lambda-as-goto has to do with tuples. The
two things are completely separate issues. The existence of
variable-argument lists in RnRS is proof enough that if there is
any such problem at all, it already exists in present Scheme.

In any case, this thread is testimony to the fact that different
people have quite different ideas of what the "spirit of Scheme" is.
Moreover, they seem to be able to arrive at opposite ends of the
design spectrum by following their intuition about this "spirit".
This, if nothing else, shows that "spirit of Scheme" is a
wishy-washy concept at best.

Matthias

Matthias Blume
Aug 13, 2003, 11:46:15 AM
Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:

> >>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:
>
> Matthias> Remind me: what are those merits of the "Scheme model" (whatever
> Matthias> *that* is), and how do they rely on the second-class-ness of
> Matthias> parameters and values?
>
> The merits are that there is a symmetry between parameters and return
> values.

But there isn't, and that's the exact problem.

> If you're asking for "merits over the ML model"---here are some of my
> gripes with it:
>
> - There are two ways of doing multi-parameter functions---via currying
> or tuples, with no clear indication of which one is superior. (I
> haven't seen a clear discipline written up when to use one or the
> other.) With currying, "parameter lists" aren't first-class,
> either.

The same is true for Scheme (although its awkward syntax will most
likely discourage most programmers from using curried functions).

> - Zero-parameter functions are not really a problem in practice, but
> they break my notion of baby-bottom-smoothness.

Sorry, but I have no idea of what your "notion of
baby-bottom-smoothness" is. (Nor do I want to find out. :-)

> - I'd say implementing tuples without excessive allocation through
> boxing is more difficult than implementing the Scheme model.

No, it is not. See my other articles on this topic. In fact, it is
fairly easy to implement the tuple model in such a way that *no*
tuples get constructed for currently legal Scheme programs. (In other
words, tuples would only get constructed in programs that take
advantage of the new semantics.)

> (In fact, I remember ML implementations that didn't bother in the first
> place. The result was worse than Emacs.)

That may very well be so. But what does this have to do with the
current debate? Just because someone somewhere sometime in some
completely different context did a bad job at implementing a feature
does not mean that the feature is bad.

Matthias

Michael Sperber
Aug 13, 2003, 12:58:38 PM
>>>>> "Matthias" == Matthias Blume <matt...@shimizu-blume.com> writes:

>> If you're asking for "merits over the ML model"---here are some of my
>> gripes with it:
>>
>> - There are two ways of doing multi-parameter functions---via currying
>> or tuples, with no clear indication of which one is superior. (I
>> haven't seen a clear discipline written up when to use one or the
>> other.) With currying, "parameter lists" aren't first-class,
>> either.

Matthias> The same is true for Scheme (although its awkward syntax will most
Matthias> likely discourage most programmers from using curried functions).

You're not getting my point: the syntactic overhead in Scheme of
currying over multiple parameters implies a clear discipline. In ML,
the notational costs are very similar (slightly lower for currying),
which muddies the water and forces people to put protocol conversions
in their program where the essence of what they're doing doesn't imply
them.

>> - I'd say implementing tuples without excessive allocation through
>> boxing is more difficult than implementing the Scheme model.

Matthias> No, it is not. See my other articles on this topic. In fact, it is
Matthias> fairly easy to implement the tuple model in such a way that *no*
Matthias> tuples get constructed for currently legal Scheme programs.

I guess we differ on assessing the relative costs of implementing
these things.

>> (In fact, I remember ML implementations that didn't bother in the first
>> place. The result was worse than Emacs.)

Matthias> That may very well be so. But what does this have to do with the
Matthias> current debate?

Not much, but at least it shows that somebody else didn't find it
quite as easy as you do.

Matthias> Just because someone somewhere sometime in some completely
Matthias> different context did a bad job at implementing a feature
Matthias> does not meant that the feature is bad.

I never attributed "badness" to any feature that's part of the
discussion---neither in words nor in intentions.

Note that I also never said I like the "Scheme model" significantly
better than the "ML model." I like both about the same. (I *do*
dislike the R4RS model.) I sure don't see a clear advantage in the ML
model, though.

David Rush
Aug 13, 2003, 1:17:40 PM
Michael Sperber <spe...@informatik.uni-tuebingen.de> writes:
> Note that I also never said I like the "Scheme model" significantly
> better than the "ML model." I like both about the same. (I *do*
> dislike the R4RS model.) I sure don't see a clear advantage in the ML
> model, though.

Well, IMNSHO, R5RS is the worst of the solutions. I think I like
Blume/Clinger better than R4RS though.

David Rush
Aug 13, 2003, 1:36:33 PM
Anthony Carrico <acar...@memebeam.org> writes:
> Ok, the big problem is that, in my world, "(f (g)) where g returns
> multiple values" is an error, not a virtue! This is what I was trying
> to point out when I said that maybe this whole discussion fell off the
> Scheme wagon back at "compose". On the other hand, if in some
> circumstance I do want f to take all of g's values as parameters, I
> have an easy way to do it: make f into the continuation of g.

I'm sorry but you have just restated the definition of
composition. In (f (g)), f *is* the continuation of (g). Just what is
it that you consider an error?

> How do I do that? The call-with-values procedure.

Which is arguably a much clunkier syntax than the natural expression
of composition.

> There is a big huge difference between "(g) initializing the first
> argument of f" and "f being the continuation of (g)".

There isn't once you apply the CPS-transform.

> The proponents
> of this idea conflate the two situations, "what's the big deal?, its
> the same thing!". In my world, it's not the same thing, it's very
> different!

I don't understand your world then. In a fully CPS-converted program,
all tail calls are continuations. Passing a parameter to a
continuation is no different from passing a parameter to a `regular'
function. FWIW, this actually causes some difficulties in
CPS-oriented compilers and has given rise to the A-Normal transform,
but even there the main (handwave) difference is in the number of
continuation functions introduced. All functions still use the same
argument application mechanism.

The point is that whatever you think about your world, they really
*are* the same, mathematically. Practically, compilers use various
implementation techniques to optimize function calls depending on the
context. The beauty of Blume's proposal is that it has a minimal
(he asserts zero) run-time cost for existing RnRS programs.

> In your world, the operator is indeed the continuation of the first
> and only argument. It sort of makes every call into a
> call-with-values.

Precisely. This is the basic theory of the _Lambda, the Ultimate_
papers, actually.

Lauri Alanko
Aug 13, 2003, 1:53:33 PM
Matthias Blume <matt...@shimizu-blume.com> writes:

> * The R5RS procedure CALL-WITH-VALUES can be defined as
>
> (define call-with-values (LAMBDA (p c) (c (p ()))))
                                               ^
I presume this should be (p).


Lauri Alanko
l...@iki.fi

Anthony Carrico
Aug 13, 2003, 1:59:50 PM
Matthias Blume <matt...@shimizu-blume.com> wrote:
> I am no longer able (nor eager) to invest too much of my time and my
> emotional energy in arguing these things, so I will (try to) limit
> myself to this one last article on the topic.

I think that you are correct. We have fully explored both sides of the
debate. If some language designers can find users willing to swallow
an application rule for k=0, 2, 3, 4... with an exception for k=1,
and a paradoxical tuple definition to cover the exception up, then you
have a winning proposal. It's fun to explore crazy ideas, otherwise
we'd make no progress. A successful crazy idea has a big payoff, I
just haven't seen it yet for this one. Turning k=1 into a "free"
call-with-values doesn't seem like any kind of payoff to me, certainly
not something to trade a uniform application rule for. Maybe it will
make sense to me in another context someplace down the road. Thank
you for the time invested, I always learn a lot from your work.

--
Anthony Carrico

Anthony Carrico
Aug 13, 2003, 2:57:39 PM
David Rush <dr...@aol.net> wrote:
> Anthony Carrico <acar...@memebeam.org> writes:
>> Ok, the big problem is that, in my world, "(f (g)) where g returns
>> multiple values" is an error, not a virtue! This is what I was trying
>> to point out when I said that maybe this whole discussion fell off the
>> Scheme wagon back at "compose". On the other hand, if in some
>> circumstance I do want f to take all of g's values as parameters, I
>> have an easy way to do it: make f into the continuation of g.
>
> I'm sorry but you have just restated the definition of
> composition. In (f (g)), f *is* the continnuation of (g). Just what is
> it that you consider an error?

Assume f has two parameters and g has two results.

Consider a loose interpretation of the "what happens to extra values"
question. The error is that the application fails to initialize f's second
argument.

Consider a strict resolution of the "what happens to extra values"
question. The error is that g passes two values to the continuation that
initializes f's first argument.

See the CPS below for precise details.

>> How do I do that? The call-with-values procedure.
>
> Which is arguably a much clunkier syntax than the natural expression
> of composition.

The syntax you call the "natural expression of composition" isn't an
expression of composition. G's continuation is the initializer of f's
first argument, not f itself. It is only equivalent if (a) f has only
one parameter, and (b) g has only one result.

>> There is a big huge difference between "(g) initializing the first
>> argument of f" and "f being the continuation of (g)".
>
> There isn't once you apply the CPS-transform.

Sure there is. Watch:

(define f-ds (lambda (x y) ...))
-CPS>
(define f-cps (lambda (K x y) ...))

(define g-ds (lambda () (values a b)))
-CPS>
(define g-cps (lambda (K) (K a b)))

(f-ds (g-ds))
-CPS>
(g-cps
 (lambda (x-tmp)
   ((lambda (y-tmp)
      (f-cps K x-tmp y-tmp))
    ***MISSING***)))

***MISSING*** is our first error. To see the other error, just substitute g-cps
into that expression.

((lambda (K) (K a b))
 (lambda (x-tmp)
   ((lambda (y-tmp)
      (f-cps K x-tmp y-tmp))
    ***MISSING***)))

and reduce:

((lambda (x-tmp)
   ((lambda (y-tmp)
      (f-cps K x-tmp y-tmp))
    ***MISSING***))
 a b***EXTRA***)

and so b***EXTRA*** is our second error. Obviously b***EXTRA*** was supposed
to end up where ***MISSING*** is, so the CPS transform very clearly shows
both of the problems that result from the "compose" confusion.

>> In your world, the operator is indeed the continuation of the first
>> and only argument. It sort of makes every call into a
>> call-with-values.
>
> Precisely. This is the basic theory of the _Lambda, the Ultimate_
> papers, actually.

No, the lambda described in the ultimate papers doesn't work that way, and
it doesn't require magic tuples to fix the mistakes introduced by the
feature. Call-with-values introduces the same feature without any mistakes,
so I can't understand why a few people seem to hate it.

--
Anthony Carrico

Matthias Blume
Aug 13, 2003, 4:26:52 PM
Lauri Alanko <l...@iki.fi> writes:

> Matthias Blume <matt...@shimizu-blume.com> writes:
> > * The R5RS procedure CALL-WITH-VALUES can be defined as
> >
> > (define call-with-values (LAMBDA (p c) (c (p ()))))
>                                                ^
> I presume this should be (p).

Oops, yes. Of course.

Matthias

Anton van Straaten
Aug 13, 2003, 8:57:30 PM
Anthony Carrico wrote:
> The syntax you call the "natural expression of composition" isn't an
> expression of composition. G's continuation is the initializer of f's
> first argument, not f itself. It is only equivalent if (a) f has only
> one parameter, and (b) g has only one result.

With the current proposal, that last statement is true in all cases, so
there's no problem there.

<caveat>
Before I continue, I want to say that I'm interested in this discussion
independent of the idea of changing Scheme. A big part of my interest in
Scheme comes from an interest in languages. We're talking about a Scheme
variation here, one which could probably be implemented and used to
implement a standard R5RS Scheme. So as far as I'm concerned, there's no
need to frame the discussion in terms of changing the Scheme standard. If
others want to do that, I can't stop them, but there would clearly be
opposition to such an idea, and this is not the RRRS authors list anyway.
</caveat>

Discussing composition has helped me pin down something I like about the
tuple proposal.

With the proposal, multiple-argument applications like (f x y z) are just
syntactic sugar for (f (tuple x y z)). The different rules for application
in the multiple-argument case are only at the syntactic level. If you add
to that the idea of continuing to pass these tuples around in unboxed form
unless they're required in first-class form, then the above equivalence can
be thought of as a conceptual model for standard Scheme, a way to think
about argument passing. However, in a Scheme which supported first-class
tuples, the model could actually be exploited if needed.

One advantage of the ability to manipulate argument tuples would be that
neither 'apply' nor 'call-with-values' would be needed. Here are the two
ways in current Scheme to invoke a multiple argument function when we have
the arguments as a list, or as a function which produces multiple values
(ignoring 'apply' with multiple arg parameters, for simplicity):

(apply f arg-list)
(call-with-values arg-list-producer f)

In tuple-Scheme, these two statements become, respectively:

(f (list->tuple arg-list))
(f (arg-list-producer))

I find that an attractive consistency. Application is application, you
don't need multiple magic functions to perform application in different
circumstances.
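
For a concrete R5RS rendering of the two "magic functions" (F here is
a made-up two-argument procedure):

    (define (f a b) (+ a b))
    (apply f '(1 2))                                ; => 3
    (call-with-values (lambda () (values 1 2)) f)   ; => 3

In tuple-Scheme, both collapse into the plain applications shown above.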

Anton

Ray Blaak
Aug 14, 2003, 1:40:13 AM
Anthony Carrico <acar...@memebeam.org> writes:

> Matthias Blume <matt...@shimizu-blume.com> wrote:
> > I am no longer able (nor eager) to invest too much of my time and my
> > emotional energy in arguing these things, so I will (try to) limit
> > myself to this one last article on the topic.
>
> I think that you are correct. We have fully explored both sides of the
> debate. If some language designers can find users willing to swallow
> an application rule for for k=0, 2, 3, 4... with an exception for k=1,
> and a paradoxical tuple definition to cover the exception up, then you
> have a winning proposal.

This has been fun, but I am not at all convinced that the tuple proposal is a
good idea.

What is the problem that it is trying to solve? Essentially the composability
of functions; that the (arbitrary) results of any function can be readily
passed as parameters to another.

The tuple proposal seems like a heavy-handed solution if what you are really
after is composability. Why must we change the very meaning of application?
Composability can be improved with a few good macros, I would expect, e.g.

(compose f g arg1 ... argN)

expands to:

(call-with-values
  (lambda () (g arg1 ... argN))
  f)
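
A minimal SYNTAX-RULES sketch of the macro Ray describes (COMPOSE-CALL
is a made-up name, chosen so as not to shadow a procedure named
compose):

    (define-syntax compose-call
      (syntax-rules ()
        ((_ f g arg ...)
         (call-with-values (lambda () (g arg ...)) f))))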

Scheme is an exercise in simplicity, not for simplicity's sake, but to try and
capture the essential aspects of programming.

Scheme's function application semantics is more "direct" than ML's,
conceptually simpler.

Viewing Scheme as a building-blocks language, it is better to have the
essentials in place first, and then build upon them.

Also, history matters. Scheme supports functions with varying numbers of
arguments. Multi-valued results are dropped in a single-valued context. (list
V) always returns a single element list for any V.

Existing code relies on these things, and fundamental changes like this simply
mean that we would no longer have Scheme, but another language.

> It's fun to explore crazy ideas, otherwise we'd make no progress.

That's true. I have certainly enjoyed this thread.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rbl...@STRIPCAPStelus.net The Rhythm has my soul.

Anthony Carrico

unread,
Aug 14, 2003, 2:50:31 AM8/14/03
to
Anton van Straaten <an...@appsolutions.com> wrote:
> Anthony Carrico wrote:
>> The syntax you call the "natural expression of composition" isn't an
>> expression of composition. G's continuation is the initializer of f's
>> first argument, not f itself. It is only equivalent if (a) f has only
>> one parameter, and (b) g has only one result.
>
> With the current proposal, that last statement is true in all cases, so
> there's no problem there.

I think that David Rush and I decided that it is probably possible to
have the current Scheme semantics with the traditional multiple
parameters/results syntax, or an alternate single parameter/result +
tuple syntax. I believe my point stands in either syntax (I am most
comfortable in the parameters/results syntax, so I used it for the
example).

I know that my cps doesn't capture the proposal's extra machinery. It
does point out where that machinery would have to go to make things
right. One of my misgivings about the proposal is that I can't figure
out how to express it in the cps transform without resorting to
conditionals. Can you?

> Discussing composition has helped me pin down something I like about the
> tuple proposal.

...


> I find that an attractive consistency.

Where you see syntactic consistency, I see semantic inconsistency.

> Application is application, you don't need multiple magic functions
> to perform application in different circumstances.

You are correct. The proposal adds a nifty rule which overloads some
Scheme error cases with the apply and call-with-values features:

1. Upon encountering a "too few arguments" error, the machinery backs
up and tries an apply.

2. Upon encountering a "too many arguments" error, the machinery backs
up and tries passing a tuple.

3. Upon encountering a "too many results" error, the machinery backs
up and tries a call-with-values.

4. Upon encountering a "too few results" error, the machinery backs up
and tries an apply + call-with-values.

This does indeed result in a compact syntax for apply and
call-with-values. My claim is just that it also has some collateral
damage:

a. It aliases, and therefore masks, those four "too (few, many) x
(arguments, results)" errors.

b. Apply and call-with-values are no longer neatly packaged up; you
have to work out dynamically when you are in those situations based on
what is actually passed.

c. While all of the machinery that gets invoked in these four
situations has counterparts in the R5RS model, the machinery that
makes the decisions to do so automatically does not. Corollary: this
is not a simplification, this is extra complexity beyond the R5RS
model.

d. The rule turns three things with simple semantics (application,
apply, call-with-values) into a single thing with tricky case-by-case
semantics.

There may be advantages to the proposal for certain applications, and
I know you are excited about these advantages, but please do not deny
the collateral damage which I list unless you can show that it really
doesn't exist. I know Matthias Blume is listening, so I'm desperately
trying to avoid mentioning the "Spirit of Scheme" :).

--
Anthony Carrico

Anton van Straaten

unread,
Aug 14, 2003, 9:12:59 AM8/14/03
to
Ray Blaak wrote:
> What is the problem that it is trying to solve? Essentially the
> composability of functions; that the (arbitrary) results of any function
> can be readily passed as parameters to another.

...and also that the (arbitrary) results of any function can be readily
stored in a variable, e.g. via define or set! So the problem that's being
solved could be described as ensuring that the results of any function are
always first class.

> Viewing Scheme as a building blocks language it is better to have the
> essentials in place first, and then build upon them.

I agree with this. In this case, though, there seems a limit to the degree
to which the "problem" can be papered over and built upon.

> Also, history matters. Scheme supports functions with varying
> numbers of arguments. Multi-valued results are dropped in a
> single-valued context.

In the not-so-distant history of Scheme, it was once the case that the
results of any function were always first class.

> (list V) always returns a single element list for any V.

I agree that's a disturbing aspect of the tuple proposal as it currently
stands.

Anton

Anton van Straaten

unread,
Aug 14, 2003, 9:13:59 AM8/14/03
to
Anthony Carrico wrote:
> I know that my cps doesn't capture the proposal's extra machinery.
> It does point out where that machinery would have to go to make
> things right. One of my misgivings about the proposal is that I can't
> figure out how to express it in the cps transform without resorting
> to conditionals. Can you?

To quote Lord Sainsbury of Turville, "My Lords, when I have a moment I shall
bend my mind to that question."

I'm off on vacation...

Anton

Matthias Blume

unread,
Aug 14, 2003, 10:32:24 AM8/14/03
to
Ray Blaak <bl...@telus.net> writes:

>
> What is the problem that it is trying to solve? Essentially the composability
> of functions; that the (arbitrary) results of any function can be readily
> passed as parameters to another.

More generally, it is there to actually restore the symmetry between
call and return. (You have to be kidding if you claim that
CALL-WITH-VALUES achieves this. In fact, it makes the lack of
symmetry painfully obvious.)

> The tuple proposal seems like a heavy-handed solution if what you are really
> after is composability.

No, I'm after simplicity.

> Why must we change the very meaning of application?

We don't.

> Composability can be improved with a few good macros, I would expect, e.g.
>
> (compose f g arg1 ... argN)
>
> expands to:
>
> (call-with-values
>   (lambda () (g arg1 ... argN))
>   f)

Composability is just one example. The problem stands for many other
higher-order functions. Moreover, I don't see where it is
"heavy-handed". If anything, I find CALL-WITH-VALUES "heavy-handed".

> Scheme is an exercise in simplicity, not for simplicity's sake, but
> to try and capture the essential aspects of programming.

Right. And that's why, IMNSHO, some abomination like CALL-WITH-VALUES
should have no place there.

> Scheme's function application semantics is more "direct" than ML's,
> conceptually simpler.

Huh? Can you, please, elaborate on this (unfounded) claim? Having
worked on implementations of either language, I find Scheme's
(current) semantics far more difficult and indirect.

> Also, history matters. Scheme supports functions with varying numbers of
> arguments. Multi-valued results are dropped in a single-valued context. (list
> V) always returns a single element list for any V.

This sounds confused.

a) All current Scheme programs run unchanged, so nothing is lost.
b) (list ...) returns a list of as many elements as there are
   arguments in "...". If "..." is V but denotes a tuple of k
   elements, then, since tuples are first-class multiple arguments,
   we really have k arguments.
   (I know, what you want is that if ... contains k *lexical* arguments,
   then (list ...) should return a list of k elements. Right?
   But why? Is there any reason to believe that the world will stop
   once this is no longer so?)
   (The fact that it is so in current Scheme is merely a consequence
   of the absence of first-class multiple arguments.)

> Existing code relies on these things,

No, it does not. As I have said many times by now, no existing Scheme
code will break.

> and fundamental changes like
> this simply mean that we would no longer have Scheme, but another
> language.

Ok, we are back to the "spirit of Scheme" mumbo-jumbo. Call it what
you will, but I find it far superior in expressiveness, in
cleanliness, and in simplicity to CALL-WITH-VALUES.

Matthias

Ray Blaak

unread,
Aug 14, 2003, 1:24:45 PM8/14/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:
> > Why must we change the very meaning of application?
>
> We don't.

I have just been reading about how to map application to the tuple approach in
the 0, 1, and n-args cases. Is that not a change?

> > Scheme's function application semantics is more "direct" than ML's,
> > conceptually simpler.
>
> Huh? Can you, please, elaborate on this (unfounded) claim? Having
> worked on implementations of either language, I find Scheme's
> (current) semantics far more difficult and indirect.

I find Scheme's existing approach to be a simpler mental model. Whether you
as an implementor found Scheme's semantics complex to implement is another
matter. It is the "clean" programming model that I am after.

I of course realize that this kind of preference is ultimately a religious
matter.

> b) (list ...) returns a list of as many elements as there are
> arguments in "...". If "..." is V but denotes a tuple of k
> elements, then, since tuples are first-class multiple arguments,
> we really have k arguments.

So you want tuples to be first class, but spread on function application? This
is asking for problems.

Consider:

(define f (lambda () (values 1 2)))
(define g (lambda (x y) (+ x y)))
(define tuple-list (lambda tuple (cons tuple '())))

(define result (f))
(g result)
(tuple-list result)

Currently, of course, (g result) is an error. What is it under the tuple
proposal? If tuples are properly first class, then they should be assignable,
of course. Can you construct a list of them? How? Does TUPLE-LIST work?
Every time you try to pass them as an argument to CONS or LIST they will
"spread".

Consider assigning them to a record field. Field assignments are commonly
implemented as macros expanding into helper functions. Tuples can be passed
in to a function only if they are the only argument to a function of the form
(lambda v ...), no? If there are other arguments specified, then the tuple's
contents are spread and bound to the arguments.

Perhaps it is possible through some contorted way to assign tuples to data
structures, but it seems difficult.

There is an inherent contradiction. Tuples are first class, can be assigned
with set! or define, but cannot be usefully passed as a parameter, precisely
because tuples have a special meaning in function application.

I think that the tuple approach just trades one pain (call-with-values) for
another (first class confusions).

> > Existing code relies on these things,
>
> No, it does not. As I have said many times by now, no existing Scheme
> code will break.

The approach of having multiple values that are dropped unless you need them
is considered (at least by some) to be useful. The canonical example is a
division operator that returns both the quotient and remainder. Most people
are interested in the quotient only. The remainder is calculated anyway so it
is cheap to return as well in case anyone is interested.


--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@telus.net The Rhythm has my soul.

Anthony Carrico

unread,
Aug 14, 2003, 2:11:22 PM8/14/03
to
Matthias Blume <matt...@shimizu-blume.com> wrote:

> Ray Blaak <bl...@telus.net> writes:
> More generally, it is there to actually restore the symmetry between
> call and return. (You have to be kidding if you claim that
> CALL-WITH-VALUES achieves this. In fact, it makes the lack of
> symmetry painfully obvious.)

There was "originally" no symmetry to restore. The similarity between
call and return is that a return is a call to a continuation. That
similarity fell short of symmetry in that, unlike procedures,
R4RS continuations never had arbitrary signatures, so call-with-values
introduced that symmetry. I'm not kidding: call-with-values EXACTLY
achieves this, no more, no less.

>> The tuple proposal seems like a heavy-handed solution if what you are really
>> after is composability.
>
> No, I'm after simplicity.

No, you are only after syntactic conciseness for apply and
call-with-values; you are consistently neglecting the complexity
introduced by the change.

The only reason things still work in your proposal is that tuples are
given a paradoxical behavior. Without that magic the proposal would be
equivalent (mod syntax) to the status quo, so clearly there is
something new going on.

>> Scheme's function application semantics is more "direct" than ML's,
>> conceptually simpler.
>
> Huh? Can you, please, elaborate on this (unfounded) claim? Having
> worked on implementations of either language, I find Scheme's
> (current) semantics far more difficult and indirect.

This claim is not unfounded, it has already been shown. Excerpts from
my previous post:

(define f-cps (lambda (K x y) ...))

(define g-cps (lambda (K) (K a b)))

((lambda (x-tmp)
   ((lambda (y-tmp)
      (f-cps K x-tmp y-tmp))
    ***MISSING***))
 a b***EXTRA***)

You introduced some magic that gets b***EXTRA*** over to
***MISSING***. That magic is definitely nifty, but it is not
conceptually simpler. You are wrong to tell Ray Blaak that your magic
doesn't add extra complexity to the "very meaning of application". It
adds all the complexity of (a) apply, (b) call-with-values, and (c) a
mechanism to decide when to use them.

> No, it does not. As I have said many times by now, no existing
> Scheme code will break.

The opposite isn't true: broken Scheme code won't break properly, and
if you don't think that is important, then you are not a working
programmer. Of course you can add some extra complexity to a system
and say "look the system still works", but that in-and-of-itself is no
argument for the extra complexity!

I would say the proposal qualifies as a hack since it is something
clever that has some nasty side effects, and complicates system
semantics, but nonetheless introduces an exploitable loophole with
some interesting properties.

Again, I'm not knocking the merits of your proposal. I'm calling
attention to your failure to acknowledge the demerits. Can you debunk
them? Are you actually blind to them? Or do you just wish to deny
them?

--
Anthony Carrico

be...@sonic.net

unread,
Aug 14, 2003, 2:16:01 PM8/14/03
to
Anton van Straaten wrote:
>
> Ray Blaak wrote:
> > What is the problem that it is trying to solve? Essentially the
> > composability of functions; that the (arbitrary) results of any function
> > can be readily passed as parameters to another.
>
> ...and also that the (arbitrary) results of any function can be readily
> stored in a variable, e.g. via define or set! So the problem that's being
> solved could be described as ensuring that the results of any function are
> always first class.

Well.... there is another solution, of course.

What I have in mind involves "cells", so I'll start by
explaining what I mean by that word. A "cell" is basically
what's abstracted in scheme's idea of a "memory location."
A cons has two cells, a named variable has one, and so on.

But a cell could be abstracted differently. We could make
cells able to hold an arbitrary number of values. Then
you'd have a language where lambdas are "real lambdas" that
take one argument and return one result - just that both
argument and result might have multiple values. And you
could store any function result in any variable, just as
you expect.

But what we are getting at here is full circle - the basic
design of LISP used conses to achieve precisely this effect.
Our "multiple argument functions" were originally implemented
as functions of a single list argument - in fact the ancient
simplest syntax for lambda, (lambda arg expr) still means
exactly that - it takes one argument, arg. And arg is a list
of values. and it returns one result, which is what expr
evaluates to. And expr may evaluate to a list of values.
And expr's result may be stored in any variable, etc.
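
For instance, in plain R5RS, runnable today (sum and div+mod are just
example names):

;; A lambda whose single argument is the whole argument list:
(define sum (lambda args (apply + args)))
(sum 1 2 3)                ; => 6

;; And the symmetric style on the return side: return a list,
;; then apply the next function to it.
(define (div+mod a b) (list (quotient a b) (remainder a b)))
(apply sum (div+mod 17 5)) ; => 5, i.e. 3 + 2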

So.... I don't think (values) and (call-with-values) added
anything important to the language. We were led to thinking
we needed them because we quit thinking about arguments and
results in terms of being lists. Around LISP 1.5, I think, we
buried the syntax for cons cell allocation for argument lists
to allow implementors to use more efficient schema to pass
arguments. And I think returning lists of results was more-or-
less common practice then, which was completely symmetrical
with how functions were called. But later on, we then wanted
our return values to be lists again, and we invented the values
statement and call-with-values. And it seems to not fit.

I think the only reason values and call-with-values appear
to be necessary is because the cons cell model of lists is
constrained in some way we're finding unacceptable or
inconvenient for the purpose of passing arguments and results.
What's the limitation we're trying to overcome here? Is it
just a performance hack? Or is it something else?

Bear

Anton van Straaten

unread,
Aug 14, 2003, 3:04:34 PM8/14/03
to
Ray Blaak wrote:
> Consider:
>
> (define f (lambda () (values 1 2)))
> (define g (lambda (x y) (+ x y)))
> (define tuple-list (lambda tuple (cons tuple '())))
>
> (define result (f))
> (g result)
> (tuple-list result)
>
> Currently, of course, (g result) is an error. What is it under the tuple
> proposal? If tuples are properly first class, then they should be
> assignable, of course. Can you construct a list of them? How? Does
> TUPLE-LIST work? Every time you try to pass them as an argument to CONS
> or LIST they will "spread".

My understanding was that the spreading only happens when a single argument
is being applied. Multiple-argument application is syntactic sugar for (f
(tuple ...)). Multiple-element tuples can contain other tuples, so there's
no problem passing boxed tuples ("unspread") via a multiple-argument
application. Given this, the result of (g result) would be 3, and
(tuple-list result) would be ({1 2}) where {} denotes a tuple.

> There is an inherent contradiction. Tuples are first class, can be
> assigned with set! or define, but cannot be usefully passed as a
> parameter, precisely because tuples have a special meaning in function
> application.

I agree that being unable to control spreading - even if only in the
single-argument case - is not completely satisfactory. Maybe there could be
either a way to guard tuples from being unboxed/spread, or the opposite:
default to not spreading the tuple if a single tuple is provided as an
argument, and provide a way to specify when spreading is desired. The
latter would amount to the tuple version of an 'apply' function. This
actually ends up relating to what Bear just posted, about this all coming
full circle, except using tuples instead of lists to represent arguments.
I'm not quite sure how this would all fit in with the (x)=x rule, though -
maybe that wouldn't be needed, or not in the same form.

Anton

Matthias Blume

unread,
Aug 14, 2003, 4:47:44 PM8/14/03
to
Anthony Carrico <acar...@memebeam.org> writes:

> The opposite isn't true: broken Scheme code won't break properly, and
> if you don't think that is important, then you are not a working
> programmer.

There are no guarantees in any Scheme standard for broken Scheme code
to "break properly" in any sense of the word. (This is one reason why
I, as a working programmer, do not use Scheme anymore.)

> Of course you can add some extra complexity to a system
> and say "look the system still works", but that in-and-of-itself is no
> argument for the extra complexity!

If there were any extra complexity, and if that extra complexity were
all there is to it, then I could agree with you. But I don't see it
as extra complexity. In fact, I see it as a simplification and
generalization. And it has the benefit of removing the need for the
bad hack known as CALL-WITH-VALUES, and with it the need for all kinds
of other code contortions which I do not want to rehash here again. So
there are clear benefits.

> I would say the proposal qualifies as a hack since it is something
> clever that has some nasty side effects, and complicates system
> semantics, but nonetheless introduces an exploitable loophole with
> some interesting properties.

Hmm. I don't see any "nasty side effects". I don't see that it
complicates any semantics. I also don't see any loopholes. (And what
would qualify as an "expoit" of said loophole?) So what are you
talking about?

> Again, I'm not knocking the merits of your proposal. I'm calling
> attention to your failure to acknowledge the demerits. Can you debunk
> them? Are you actually blind to them? Or do you just wish to deny
> them?

All the so-called "demerits" basically appeal to some wishy-washy
concept of "the spirit of Scheme". I think we all agree on what the
concrete properties of the proposal are (that isn't so hard), so the
rest of the argumentation boils down to emotions and other
not-so-well-defined things. It is very hard if not impossible to
argue about aesthetics; beauty, as always, is in the eye of the
beholder. If you find the proposal "ugly" or a "hack", then there is
no way for me to "debunk" this. I am aware of the fact that you find
it ugly -- and that's all I can do.

I will overlook your accusations (denial, blindness, failure to
acknowledge whatever) for the time being. Just look up some of my
earlier articles in this thread and be reminded that I was the one who
listed some of the issues that might bite people under the proposal.

Matthias

PS: As far as the problem with the CPS transform is concerned, here is
how I would approach it:

First it is worth remembering that under the proposal all functions
have precisely one argument. Any other number of lexical arguments is
syntactic sugar for tupling. Likewise, lambdas with anything but
precisely one formal parameter are syntactic sugar for a
single-argument lambda which is expected to be a tuple to be
deconstructed into separate variables.

Since this is so, it is helpful to explicitly de-couple lambda/apply
from the orthogonal issue of tuple construction/deconstruction.
Concretely, it is helpful to use a CPS-level operator for tuple
construction (TUPLE) and a CPS-level syntactic form for tuple
deconstruction (MATCH-TUPLE) so that:

(TUPLE) evaluates to the empty tuple
(TUPLE x) evaluates to x
(TUPLE x1 ... xk) (k > 1), evaluates to a k-tuple

(MATCH-TUPLE Y (x) ...) evaluates ... with x bound to the value of Y
(MATCH-TUPLE Y () ...) tests that Y evaluates to the empty tuple,
    then runs ...
(MATCH-TUPLE Y (x1 ... xk) ...) (k > 1): Y must evaluate to a k-tuple;
    binds the i-th element of the tuple to xi, resp., and
    evaluates ... in the resulting environment

Then the CPS conversion becomes extremely simple. All direct-style
functions turn into a 2-argument counterpart in CPS, and all
continuations are 1-argument functions. Moreover, it is guaranteed
that each CPS counterpart of a direct-style function gets called with
precisely two arguments, and each continuation gets called with
precisely one argument:

(define f-ds (lambda (x y) ...))
-CPS>

(define f-cps (lambda (k arg) (match-tuple arg (x y) (k ...))))

(define g-ds (lambda () (values-ds a b)))
-CPS>
(define g-cps (lambda (k arg) (match-tuple arg ()
                                (values-cps k (tuple a b)))))

Where we have:

(define values-ds (lambda (x) x))
-CPS>
(define values-cps (lambda (k arg) (match-tuple arg (x) (k x))))


And finally:

(f-ds (g-ds))
-CPS>
(g-cps (lambda (g-tmp)
         (f-cps K (tuple g-tmp)))
       (tuple))

or, simplified:

(g-cps (lambda (g-tmp) (f-cps K g-tmp)) (tuple))

If we substitute the definitions of g-cps we get:

((lambda (k arg) (match-tuple arg () (values-cps k (tuple a b))))
 (lambda (g-tmp) (f-cps K g-tmp))
 (tuple))

and reduce:

(match-tuple (tuple) ()
  (values-cps (lambda (g-tmp) (f-cps K g-tmp))
              (tuple a b)))

and further:

(values-cps (lambda (g-tmp) (f-cps K g-tmp))
            (tuple a b))

Now let's substitute the definition of values-cps:

((lambda (k arg) (match-tuple arg (x) (k x)))
 (lambda (g-tmp) (f-cps K g-tmp))
 (tuple a b))

and reduce:

(match-tuple (tuple a b) (x) ((lambda (g-tmp) (f-cps K g-tmp)) x))

and further:

((lambda (g-tmp) (f-cps K g-tmp)) (tuple a b))

and further:

(f-cps K (tuple a b))

Now substitute f-cps:

((lambda (k arg) (match-tuple arg (x y) (k ...))) K (tuple a b))

and reduce:

(match-tuple (tuple a b) (x y) (K ...))

which means that ... will be evaluated with x bound to a and y bound to b.

Notice that the CPS transform is completely uniform and simple.
Roughly speaking we have the single uniform rule for converting lambdas:

C[(lambda <arglist> <body>)]K
==
(K [(lambda (k arg) (match-tuple arg <arglist> C[<body>]\b.[(k b)]))])

([] denotes syntax; C[]K is the CPS-transform in context K; \ is the
meta-level lambda.)

Matthias Blume

unread,
Aug 14, 2003, 5:20:58 PM8/14/03
to
Ray Blaak <bl...@telus.net> writes:

> Matthias Blume <matt...@shimizu-blume.com> writes:
> > > Why must we change the very meaning of application?
> >
> > We don't.
>
> I have just been reading about how to map application to the tuple approach in
> the 0, 1, and n-args cases. Is that not a change?

The point is that old and new rule sets coincide for all currently
legal Scheme code. Yes, there is a change, but the change is
backward-compatible. I would not call this "changing the very meaning
...".

> I find Scheme's existing approach to be a simpler mental model. Whether you
> as an implementor found Scheme's semantics complex to implement is another
> matter. It is the "clean" programming model that I am after.
>
> I of course realize that this kind of preference is ultimately a religious
> matter.

Indeed.

> > b) (list ...) returns a list of as many elements as there are
> > arguments in "...". If "..." is V but denotes a tuple of k
> > elements, then, since tuples are first-class multiple arguments,
> > we really have k arguments.
>
> So you want tuples to be first class, but spread on function application? This
> is asking for problems.

That's right. I am not sure why it is "asking for problems", though.

> Consider:
>
> (define f (lambda () (values 1 2)))
> (define g (lambda (x y) (+ x y)))
> (define tuple-list (lambda tuple (cons tuple '())))
>
> (define result (f))
> (g result)
> (tuple-list result)
>
> Currently, of course, (g result) is an error. What is it under the tuple
> proposal?

3

> If tuples are properly first class, then they should be assignable,
> of course.

Yes.

> Can you construct a list of them?

Yes.

> How?

Using CONS. (Like all lists should be constructed.)

> Does TUPLE-LIST work?

Probably not as you intended. I guess you wanted:

(define tuple-list (lambda (tuple) (cons tuple '())))

For your amusement, here is a little sample interaction with a
modified Scheme implementation to which I added tuple support today.
(This is a toy implementation without a REPL, so I have to call
"print" explicitly. Tuple syntax looks like this: #{...}. This is a
cut-n-paste from a real, running system.)

[blume@tti5 SepComp]$ sml @SMLload=scm
> (define (values x) x)
> (define (f) (values 1 2))
> (define (g x y) (+ x y))
> (define (tuple-list tuple) (cons tuple '()))
> (define result (f))
> (print (g result))
3
> (print (tuple-list result))
(#{1 2})
>

Cool, huh?

> Every time you try to pass them as an argument to CONS or LIST they will
> "spread".

No. If you provide two arguments to cons, the first argument will not
spread even if it is a tuple. And in currently legal Scheme programs,
all calls of CONS will have two arguments.
(More precisely, if you provide two arguments, you are passing a 2-tuple
of these two arguments. It is perfectly ok to have one or more of the tuple
elements be tuples themselves.)

The difference in behavior comes in when you call CONS with a single
argument -- in which case this argument must evaluate to the 2-tuple
that CONS expects.
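
In other words, using the #{...} tuple syntax from the session above
(illustrative comments only; values here is the identity function defined
in that session):

;; (cons 1 2)              ; two lexical arguments: the pair (1 . 2)
;; (define p (values 1 2)) ; p is the 2-tuple #{1 2}
;; (cons p)                ; one argument, a 2-tuple: spreads to (1 . 2)
;; (cons p '())            ; two arguments: no spreading, gives (#{1 2})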

> Consider assigning them to a record field. Field assignments are commonly
> implemented as macros expanding into helper functions. Tuples can be passed in
> to a function only if they are the only argument to a function form of
> (lambda v ...)
> no?

No. Tuples are first-class. They can be passed like any old
argument.

> If there are other arguments specified then the tuple's contents are
> spread and bound to the arguments.

Only if the tuple is the only lexical argument given, and only if the
function being called expects something other than a single argument.
(This means that such spreading can never occur in currently legal
Scheme code since the situation is considered a runtime error there.)

> Perhaps it is possible through some contorted way to assign tuples to data
> structures, but it seems difficult.

No, it is completely straightforward. It will even work
"out-of-the-box" as long as all those macros expand into currently
legal Scheme code.

> There is an inherent contradiction. Tuples are first class, can be assigned
> with set! or define, but cannot be usefully passed as a parameter, precisely
> because tuples have a special meaning in function application.

This is a misunderstanding on your part.

> The approach of having multiple values that are dropped unless you need them
> is considered (at least by some) to be useful. The canonical example is a
> division operator that returns both the quotient and remainder. Most people
> are interested in the quotient only. The remainder is calculated anyway so it
> is cheap to return as well in case anyone is interested.

First of all, the implicit dropping of all but the first value is not
guaranteed in Scheme (at least it was not last time I looked). Moreover,
things like this can (and should!) be done explicitly:

(define (fst x y) x)
(define (snd x y) y)

... (fst (divide ...)) ... (snd (divide ...)) ...

This works and has the added benefit of clearly documenting that
losing the other value was intended.

Matthias

Ray Blaak

unread,
Aug 14, 2003, 5:39:28 PM8/14/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> Ray Blaak <bl...@telus.net> writes:
> > (print (tuple-list result))
> (#{1 2})
> >
>
> Cool, huh?

Yes indeed. That addresses my main objection.

Erann Gat

unread,
Aug 14, 2003, 6:00:43 PM8/14/03
to

> I think the only reason values and call-with-values appear
> to be necessary is because the cons cell model of lists is
> constrained in some way we're finding unacceptable or
> inconvenient for the purpose of passing arguments and results.
> What's the limitation we're trying to overcome here? Is it
> just a performance hack? Or is it something else?

I think you've hit the nail on the head, and I would like to float a
trial-balloon theory of why people try to avoid lists/cons cells, and a
straw-man proposal. Note that I do not necessarily believe that this
theory is true (or that the proposal is a good idea), so please don't
stomp on me if you disagree with it. I don't necessarily believe this
either, but I think it's plausible, and I have not seen it discussed
before.

In a nutshell: the reason Schemers are uncomfortable with
cons-cell-based-lists is that they are a hack. A tremendously useful hack
to be sure, but nonetheless a hack, one that is so deeply ingrained in the
roots of Lisp that it is no longer perceived as a hack but rather as an
essential part of the Way The World Is.

The hack is the transformation of the tree structure:

(x . (y . (z . ())))

into the (apparently) linear structure:

(x y z)

This is one of the most useful and devilishly clever hacks of all time,
but it is nonetheless a hack. The reason it's a hack is that there is no
inherent reason why the abstract data type "list" must be (or even should
be) represented as a linked list. Like all representations it has
advantages and disadvantages. The main advantage of linked lists is that
it's easy to traverse the CDR chain (in fact, a linked list is essentially
a list where all of the "rests" of the list are in some sense
"precomputed". This is why a linked list requires twice as much storage
as a vector or "cdr-coded" list, because you need a place to store all the
precomputed CDRs. And because lists-as-linked-lists are permanently
enshrined in the Lisp canon, you have to have this whether or not you
actually use all the precomputed CDRs or not.) But linked lists have all
sorts of disadvantages, not least of which is the fact that LENGTH and NTH
are O(n), as is checking to see if a list is a proper or dotted list.
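
To make the cost difference concrete (my-length is just the textbook
definition, in plain Scheme):

;; LENGTH on cons-cell lists must walk the entire cdr chain: O(n).
(define (my-length lst)
  (if (null? lst)
      0
      (+ 1 (my-length (cdr lst)))))

;; A vector stores its size alongside its elements, so this is O(1):
(vector-length (vector 'x 'y 'z)) ; => 3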

How else could things be? There's nothing particularly magical about
linked lists. We could easily design a language that looked just like
Scheme but where (x y z) denoted a vector of three elements instead of a
linked list. In other words, instead of the old equivalence of:

(x y z) == (x . (y . (z . ())))

we instead have:

[x y z] == #(x y z)

or perhaps

[x y z] == (x . y . z)

I'll use square brackets to denote these "neo-lists" to avoid confusion
between them and old-style "classic" linked lists.

Neo-scheme (or perhaps we should call it V-scheme, for vector-scheme) has
two important differences from classic Scheme (L-scheme). First, there is
no more dot syntax. [x y . z] is non-sensical. So you can still write:

[lambda x ...]

(in which case X is bound to a vector of all the arguments) but you can no
longer write:

[lambda [x y . z] ...]

Second, CDRs are no longer precomputed. This means that the
representation of code and argument lists now consumes half as much memory
as they did before, but you have to iterate over vector-based "lists" by
indexing rather than by CDR-chaining. (Since the CDRs are no longer
precomputed, calling CDR would cons.) But the overall efficiency should
remain the same since LENGTH and NTH would now be O(1) instead of O(n).

Having done this, we could define:

(x . y) == [x y]

and

(x y z) == [x [y [z ()]]]

or perhaps

(x y z) == [x [y [z]]]

Now we have some potentially very interesting language constructs, like:

(lambda [x y z] ...) == [lambda [[x y z] [...]]]
[lambda (x y . z) ...] == [lambda [x [y z]] ...]

in addition to the familiar:

(lambda (x y . z) ...) == [lambda [[x [y z]] [...]]]

and the new:

[lambda [x y z] ...] and [lambda x ...]

I have not yet worked out all the consequences of this design, but it
seems like one might be able to work out evaluation rules for these
various new lambda constructs so that 1) various combinations of square
and round brackets would correspond to "classic" and "multiple value"
calls, and 2) that using neo-lists for return values could be much more
efficient than returning linked lists, and so they would make an adequate
first-class "tuple" that people wouldn't shy away from.

Anyway, before I put a whole lot of effort into this I thought I'd ask for
some preliminary feedback. In particular I'm curious if this idea has
been explored and found wanting for some reason.

Thanks,
E.

Ray Blaak

unread,
Aug 15, 2003, 1:26:32 AM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:
> > (define (values x) x)
> > (define (f) (values 1 2))

This bothers me, enough for me to eject the entire proposal.

My top two run time errors that I want detected are a) unknown bindings and b)
arity mismatches.

It is critical, if I am saying:

(define foo (lambda (arg) ...))

that

(foo 'valid-arg 'extra-arg)

gives an error.

Given that Scheme does not have static type checking, I *need* the run time
checking. It is not enough that we get the arity error detection for all
functions with an arity of other-than-one arguments. 1-arg functions are very
very common, and we need the error detection there too.

Perhaps the special case of allowing tuple arguments for the 1-arg case, being
special, can have a special syntax. E.g.

(define values (lambda (&tuple args) args))

vs

(define foo (lambda (single-arg) ...))

which would let (values 1 2) work as expected, yet still making (foo 1 2) be
an error (and, I suppose, making (foo (values 1 2)) pass in a tuple as a single
argument).

(Didn't someone similarly suggest the rest keyword?)

Or simply keep (values ...) as a special form that constructs tuples.

Another area of confusion for me: how would multi-dispatch mechanisms work
with tuple function application semantics? That is, if everything is a single
tuple argument under the hood, doesn't that defeat multi-dispatching, which
needs to explicitly *not* do that?

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

Ray Blaak

unread,
Aug 15, 2003, 1:56:19 AM8/15/03
to
g...@jpl.nasa.gov (Erann Gat) writes:
> How else could things be? There's nothing particularly magical about
> linked lists. We could easily design a language that looked just like
> Scheme but where (x y z) denoted a vector of three elements instead of a
> linked list.

Interesting idea. Note, though, that there is no reason to eliminate linked
lists. Already today we have the choice to use lists or vectors as our
data structure, allowing us to make the speed/convenience/memory tradeoff as
desired.

> (x . y) == [x y]

We already have notation to deal with vectors. Why introduce another? Just use
#(x y).

Argument passing is independent of lists except in the (lambda (x . y) ...)
and (lambda x ...) cases.

Note that binding parameter values to formal arguments has currently nothing
to do with lists or with vectors. Scheme also does not (officially) internally
represent code as a linked list.

What I would reuse from your post is to allow all or the rest of the arguments
to be accessed as a vector of arguments, in addition to the list possibility.
E.g.

(define foo (lambda (#args) args))
(foo 1 2 3) ==> #(1 2 3)

(define bar (lambda (x y . #z) (list x y z)))
(bar 1 2 3 4) ==> (1 2 #(3 4))

or perhaps (to be consistent with my &tuple suggestion elsewhere):

(define foo (lambda (&vector args) args))
(define bar (lambda (x y &vector z) (list x y z)))

as well as

(define values (lambda (&tuple all-args) all-args))
(values 1 2) ==> #{1 2} ; #{...} denotes a tuple

Examples like these:

> (lambda [x y z] ...) == [lambda [[x y z] [...]]]
> [lambda (x y . z) ...] == [lambda [x [y z]] ...]

> [lambda [x y z] ...] and [lambda x ...]

don't make sense to me, since explicit binding has nothing to do with lists or
vectors. Only with the vector form of (lambda (x y . z) ...) and
(lambda x ...) do I see anything of interest.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

Matthias Blume

unread,
Aug 15, 2003, 10:58:10 AM8/15/03
to
Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:

> Matthias Blume <matt...@shimizu-blume.com> writes:
> > > (define (values x) x)
> > > (define (f) (values 1 2))
>
> This bothers me, enough for me to eject the entire proposal.
>
> My top two run time errors that I want detected are a) unknown bindings and b)
> arity mismatches.
>
> It is critical, if I am saying:
>
> (define foo (lambda (arg) ...))
>
> that
>
> (foo 'valid-arg 'extra-arg)
>
> gives an error.

I can feel your pain. I personally want arity errors along with most
other type errors detected -- the earlier the better -- which is
another reason why I do not use Scheme for my programming anymore.

Anyway, by definition the above is not an arity error. There are no
arity errors under the proposal anymore since all functions
uniformely have one formal parameter, and all function calls uniformely pass
one argument. Arity errors turn into type errors involving tuple
types. Surely, if the above function foo relies on arg not being a
tuple, this will show up somewhere down the line as a type error.
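
Concretely (illustrative comments; #{...} is the tuple syntax used earlier
in this thread):

;; (define foo (lambda (arg) (+ arg 1)))
;; (foo 1 2)   ; arg is silently bound to the tuple #{1 2};
;;             ; the mistake surfaces later, when + is applied
;;             ; to a tuple instead of a number.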

> Given that scheme does not have static type checking, I *need* the run time
> checking. It is not enough that we get the arity error detection for all
> functions with an arity of other-than-one arguments. 1-arg functions are very
> very common, and we need the error detection there too.

You don't lose runtime checking. It just happens at a different time.

> Perhaps the special case of allowing tuple arguments for the 1-arg
> case, being special, can have a special syntax. E.g.

> (define values (lambda (&tuple args) args))
>
> vs
>
> (define foo (lambda (single-arg) ...))
>
> which would let (values 1 2) work as expected, yet still making (foo 1 2) be
> an error (and, I suppose, making (foo (values 1 2)) pass in a tuple as a single
> argument).

How about

(define-syntax lambda1
  (syntax-rules ()
    ((_ (arg) body ...)
     (lambda (arg) (if (tuple? arg) (error "...")) body ...))))

?
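
Used like this (a sketch, assuming the proposal supplies a TUPLE? predicate):

;; (define square (lambda1 (x) (* x x)))
;; (square 3)    ; => 9
;; (square 3 4)  ; x arrives as the tuple #{3 4}, so the guard
;;               ; signals an error immediately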

> Another area of confusion for me: how would multi-dispatch mechanisms work
> with tuple function application semantics? That is, if everything is a single
> tuple argument under the hood, doesn't that defeat multi-dispatching, which
> needs to explicitly *not* do that?

I don't understand what you are saying. Where is the problem?

Matthias

Joe Marshall

unread,
Aug 15, 2003, 12:17:06 PM8/15/03
to
Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:

> Scheme also does not (officially) internally represent code as a linked list.

On the other hand, whatever means Scheme uses to represent code
internally must be isomorphic to linked lists at the time macros are
expanded.

Joe Marshall

unread,
Aug 15, 2003, 12:29:24 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:


>> If there are other arguments specified then the tuple's contents are
>> spread and bound to the arguments.
>
> Only if the tuple is the only lexical argument given, and only if the
> function being called expects something other than a single argument.
> (This means that such spreading can never occur in currently legal
> Scheme code since the situation is considered a runtime error there.)

Ugh. What about zero- and one-arity tuples? Are they illegal to
create? Can I differentiate between passing a one-tuple as spread or
not?

Joe Marshall

unread,
Aug 15, 2003, 12:23:34 PM8/15/03
to
g...@jpl.nasa.gov (Erann Gat) writes:

> In a nutshell: the reason Schemers are uncomfortable with
> cons-cell-based-lists is that they are a hack.

I disagree. First, I'm not uncomfortable with cons-cell-based-lists.
Second, cons cell based lists have a recursive structure which is key
to their power. I wouldn't call them a hack.

> Anyway, before I put a whole lot of effort into this I thought I'd ask for
> some preliminary feedback. In particular I'm curious if this idea has
> been explored and found wanting for some reason.

Check out REBOL. The primitive construct there is similar to what
you are describing. You sort of can use vectors, but they are awfully
clunky and you find people creating two-element vectors and chaining
them together all the time. You simply can't `splice' or `prepend'
without a *lot* of copying.

Matthias Blume

unread,
Aug 15, 2003, 12:38:31 PM8/15/03
to
Joe Marshall <j...@ccs.neu.edu> writes:

There is no way of creating a one-tuple. Zero-tuples are just fine.

Matthias

Ray Blaak

unread,
Aug 15, 2003, 1:00:18 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:
> > It is critical, if I am saying:
> >
> > (define foo (lambda (arg) ...))
> >
> > that
> >
> > (foo 'valid-arg 'extra-arg)
> >
> > gives an error.
>
> Anyway, by definition the above is not an arity error. There are no
> arity errors under the proposal anymore since all functions
> uniformly have one formal parameter, and all function calls uniformly pass
> one argument.

No fair to fix problems by changing the definition. It is an error -- at least I
want it to be.

Existing Scheme behaviour has it as an error. You will get more trouble than
benefit by making it not an error.

> > Perhaps the special case of allowing tuple arguments for the 1-arg
> > case, being special, can have a special syntax.

[...]


> How about
>
> (define-syntax lambda1
>   (syntax-rules ()
>     ((_ (arg) body ...)
>      (lambda (arg) (if (tuple? arg) (error "...")) body ...))))

What I don't like about it is that the error case should be the default. You
now have to do something different to get "contemporary" 1-arg behaviour.

Passing in multiple args to get a single tuple is the new idea, the newly
introduced mechanism. Make new syntax to take explicit advantage of it, and
leave the old syntax alone.

The need to do explicit tuple manipulation is (at least for now) less common
than the existing 1-arg function behaviour, and syntax convenience should
favor the common case.

Existing code will be less disturbed (it is important to be able to rely on
errors occurring as early as possible).

If nothing else, the tuple proposal will have an easier time of being accepted
if this issue is properly dealt with.

> > Another area of confusion for me: how would multi-dispatch mechanisms work
> > with tuple function application semantics? That is, if everything is a single
> > tuple argument under the hood, doesn't that defeat multi-dispatching, which
> > needs to explicitly *not* do that?
>
> I don't understand what you are saying. Where is the problem?

I was imagining compiled code, which ultimately converts all functions to true
1-arg functions. If original signature information is not kept,
multi-dispatching does not have the means to decide which generic function to
invoke. Or since tupling is "lazy", signature information is still necessarily
present?

Joe Marshall

unread,
Aug 15, 2003, 1:00:26 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

So TUPLE? would have to be a special form, or take extra arguments?

Anthony Carrico

unread,
Aug 15, 2003, 1:06:09 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> wrote:
> If there were any extra complexity, and if that extra complexity were
> all there is to it, then I could agree with you. But I don't see it
> as extra complexity. In fact, I see it as a simplification and
> generalization. And it has the benefit of removing the need for the
> bad hack known as CALL-WITH-VALUES, and with it the need for all kinds
> of other code contortions which I do not want to rehash here again. So
> there are clear benefits.

and

Anthony Carrico <acar...@memebeam.org> wrote:
> I know that my cps doesn't capture the proposal's extra machinery. It
> does point out where that machinery would have to go to make things
> right. One of my misgivings about the proposal is that I can't figure
> out how to express it in the cps transform without resorting to
> conditionals. Can you?

Thank you for the work you put into your version of cps transform. It
seems correct, and does a good job of capturing your proposal's
semantics.

Note that both of your additional primitives (TUPLE and MATCH-TUPLE)
expand into expressions which include three low level cps primitives:
LAMBDA, IF, and application.

I reassert that the traditional version of application (cps
transforming into a simple expression composed of two cps primitives:
LAMBDA and application) has strictly less complexity than the proposal
version (cps transforming into an expression composed of three cps
primitives: LAMBDA, IF, and application).

The proposal's mechanism dynamically decides where to insert APPLY,
VALUES, and CALL-WITH-VALUES by inserting conditionals into the
application mechanism. In the traditional model, the programmer
statically decides where to use them (or he can build his own dynamic
mechanism) by explicitly using those three orthogonal special purpose
primitive functions (again, APPLY, VALUES and CALL-WITH-VALUES).

> Hmm. I don't see any "nasty side effects". I don't see that it
> complicates any semantics. I also don't see any loopholes. (And what
> would qualify as an "expoit" of said loophole?) So what are you
> talking about?

The exploitable loophole is that various combinations of APPLY and
CALL-WITH-VALUES, and a mechanism to decide when to use them are
rolled into application. An example of an exploit is that
compose-with-a-twist (sending all the results of one procedure to the
parameters of the next) has a very clean syntax: application.

By the way, I'm not someone who thinks "hack" is a dirty word. The
proposal is very clever. I acknowledge that it is backwards compatible
with Scheme to the extent that a person is willing to give up their
compiler's call and return arity checking (yes, I know R5RS is very
loose on this point, that there is disagreement about what return
arity checking is desirable, and that, in the proposal's case, arity
errors may show up as type or bounds errors down the line but perhaps
out of context).

> I will overlook your accusations (denial, blindness, failure to
> acknowledge whatever) for the time being.

I'm sorry. I meant them as genuine questions rather than
accusations. I really had the feeling that you weren't catching my
points even after I explained them several different ways. It is my
responsibility to be clear, so surely this is my own shortfall, not
yours.

--
Anthony Carrico

Anthony Carrico

unread,
Aug 15, 2003, 1:18:10 PM8/15/03
to
Ray Blaak <rAYb...@stripcapstelus.net> wrote:
> Passing in multiple args to get a single tuple is the new idea, the newly
> introduced mechanism. Make new syntax to take explicit advantage of it, and
> leave the old syntax alone.

There is already an explicit traditional syntax: (lambda args body). I
agree with you that it is better to let the programmer decide statically
than to base the choice on the argument's runtime type.

--
Anthony Carrico

Matthias Blume

unread,
Aug 15, 2003, 1:40:48 PM8/15/03
to
Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:

> No fair to fix problems changing the definition. It is an error -- at least I
> want it to be.

"is an error" and "an error is signaled" are two different things.
"is an error" is worthless information to the user. (In fact, R5RS
seems to be silent on the issue of what happens when there is an arity
mismatch -- except in the denotational semantics section where the
implementation-specific "wrong" is being invoked in this case.)

> What I don't like about it is that the error case should be the default. You
> now have to do something different to get "contemporary" 1-arg behaviour.

"Contemporary behavior" for the multi-arg/one-formal case is
implementation-specific (see above).

> Existing code will be less disturbed (it is important to be able to rely on
> errors occuring as early as possible).

For the last time: existing code will not be disturbed at all!

[about multi-dispatch]


> I was imagining compiled code, which ultimately converts all
> functions to true 1-arg functions. If original signature information
> is not kept, multi-dispatching does not have the means to decide
> which generic function to invoke. Or since tupling is "lazy",
> signature information is still necessarily present?

????

Good compiled code most certainly does not convert all functions to
true 1-arg functions. True "one-arg" semantics is what the programmer
sees. Compilers are free to choose whatever representation is most
efficient, including spreading tuples across registers etc., as long
as this is not detectable by the program.

Guessing what you might mean, I see absolutely no problem with
multi-dispatch, and I am not sure where you might be hung up.

Matthias

Thant Tessman

unread,
Aug 15, 2003, 1:46:56 PM8/15/03
to
Matthias Blume wrote:
> Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:
>

[...]

>>Existing code will be less disturbed (it is important to be able to rely on
>>errors occuring as early as possible).
>
>
> For the last time: existing code will not be disturbed at all!

Existing *working* code won't be disturbed.

Existing *busted* code on the other hand...

:-)

-thant


Ray Blaak

unread,
Aug 15, 2003, 1:49:52 PM8/15/03
to
Anthony Carrico <acar...@memebeam.org> writes:
> Ray Blaak <rAYb...@stripcapstelus.net> wrote:
> > Passing in multiple args to get a single tuple is the new idea, the newly
> > introduced mechanism. Make new syntax to take explicit advantage of it, and
> > leave the old syntax alone.
>
> There is already an explicit traditianal syntax: (lambda args body).

But that creates a list, not a tuple. I don't think we want to abandon that.

As I mentioned elsewhere, we in fact probably want to allow gathering
arguments as a list, a tuple, or even a vector.

Matthias Blume

unread,
Aug 15, 2003, 1:50:32 PM8/15/03
to
Joe Marshall <j...@ccs.neu.edu> writes:

In the proposal, as it stands, there is no need for TUPLE. TUPLE can be
defined as:

(define (TUPLE x) x)

The rest follows from the semantics of LAMBDA and application.

Matthias

Ray Blaak

unread,
Aug 15, 2003, 1:56:15 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:
> Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:
> > What I don't like about it is that the error case should be the default. You
> > now have to do something different to get "contemporary" 1-arg behaviour.
>
> "Contemporary behavior" for the multi-arg/one-formal case is
> implementation-specific (see above).

Technically yes. Practically? All systems I have ever used prevent
such functions from even being invoked. I will only use those systems where I
can rely on that.

> For the last time: existing code will not be disturbed at all!

Correct code, yes. Tests in regression suites? Framework code? Anything that
makes use of error signalling to affect its behaviour (reporting to the
user, loading alternative modules, etc.)?

> [about multi-dispatch]


>
> Guessing what you might mean, I see absolutely no problem with
> multi-dispatch, and I am not sure where you might be hung up.

I am not actually hung up. It simply occurred to me to wonder how it might
interact with the tuple mechanism, if at all.

Erann Gat

unread,
Aug 15, 2003, 1:13:08 PM8/15/03
to
In article <uznibe...@STRIPCAPStelus.net>, Ray Blaak
<rAYb...@STRIPCAPStelus.net> wrote:

> > (x . y) == [x y]
>
> We already have notation to deal with vectors. Why introduce another? Just use
> #(x y).

Two reasons: first, I thought the code looked better without all those #
signs, and second, because #(x y) is not the same as (x . y), but I wanted
[x y] to be the same thing as (x . y) (though I now think that was a
mistake, and that distinguishing cons cells from two-vectors really is the
right thing after all).

E.

Erann Gat

unread,
Aug 15, 2003, 1:26:27 PM8/15/03
to
In article <65ky6g...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu> wrote:

> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > In a nutshell: the reason Schemers are uncomfortable with
> > cons-cell-based-lists is that they are a hack.
>
> I disagree. First, I'm not uncomfortable with cons-cell-based-lists.
> Second, cons cell based lists have a recursive structure which is key
> to their power. I wouldn't call them a hack.

Well, perhaps "hack" is a bit strong. The point is, yes, cons cells have
a recursive structure, but the recursive structure is different from what
is lexically apparent. The apparent recursive structure of, e.g.:

(a (b c) d)

is that it is a list of three elements, one of which is a list of two
elements. The actual recursive structure is that it's a cons cells whose
CAR is a symbol and whose CDR is a cons cell whose CAR is a cons cell...
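
For instance (runnable Scheme), the datum above really is built as:

(equal? '(a (b c) d)
        (cons 'a (cons (cons 'b (cons 'c '()))
                       (cons 'd '()))))
; => #t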

It's not clear that the recursive structure of cons cells is inherently
more powerful than the recursive structure of nested vectors.

> > Anyway, before I put a whole lot of effort into this I thought I'd ask for
> > some preliminary feedback. In particular I'm curious if this idea has
> > been explored and found wanting for some reason.
>
> Check out REBOL. The primitive construct there is similar to what
> you are describing. You sort of can use vectors, but they are awfully
> clunky and you find people creating two-element vectors and chaining
> them together all the time. You simply can't `splice' or `prepend'
> without a *lot* of copying.

Yes, but splicing and prepending tend to happen mostly during macro
expansions, and so the extra cost would be incurred mainly during
compilation, not execution, which might be an acceptable tradeoff
(assuming that there are some actual advantages, which I admit is not
altogether clear).

In any case, thanks for the pointer to REBOL. It looks interesting
(though at first glance not something I'd be ready to give up Lisp for).

E.

Matthias Blume

unread,
Aug 15, 2003, 2:16:23 PM8/15/03
to
Anthony Carrico <acar...@memebeam.org> writes:

> Note that both of your additional primitives (TUPLE and MATCH-TUPLE)
> expand into expressions which include three low level cps primitives:
> LAMBDA, IF, and application.

I don't understand what you mean. TUPLE does not need any runtime
tests at all, and the tests performed by MATCH-TUPLE are the very
same as the arity tests performed by contemporary Scheme implementations.

Anyway, those extra IFs etc. that you complain about are the same IFs
that are currently hidden in LAMBDA. The LAMBDA in my CPS does not
need to do any testing at all: there are only two kinds of LAMBDAs:
those with one formal parameter and those with two. And it is
guaranteed (without runtime tests) that these two always get the correct
number of parameters.
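
As an illustration of the two shapes (a sketch only -- tuples are
played here by plain lists, and cps-add and halt are made-up names,
not the proposal's actual CPS):

;; User procedures take exactly two formals: an argument tuple and a
;; continuation. Continuations take exactly one formal.
(define (cps-add args k)           ; two formals: tuple + continuation
  (k (+ (car args) (cadr args))))
(define (halt v) v)                ; one formal: a continuation
(cps-add (list 3 4) halt)          ; => 7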

> I reassert that the traditional version of application (cps
> transforming into a simple expression composed of two cps primitives:
> LAMBDA and application) has strictly less complexity than the proposal
> version (cps transforming into an expression composed of three cps
> primitives: LAMBDA, IF, and application).

See above. The complexity merely lies in different places (and
conflates two orthogonal issues).

> The proposal's mechanism dynamically decides where to insert APPLY,
> VALUES, and CALL-WITH-VALUES by inserting conditionals into the
> application mechanism.

No, this is not so. The number of expressions in a TUPLE form and the
number of pattern variables in a MATCH-TUPLE form are statically known
at the time of CPS-conversion. If you insist on thinking about how to
expand these forms into other forms, then the structure of the
expansion is statically determined by those numbers. The expansion
does not depend on runtime values.

Matthias

Joe Marshall

unread,
Aug 15, 2003, 2:24:27 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

Sorry, I should have been more clear. The predicate for deciding
tupleness `TUPLE?' would have to be special and could not be
abstracted over:

(define (my-tuple? x)
  (tuple? x))



Matthias Blume

unread,
Aug 15, 2003, 3:17:28 PM8/15/03
to
Joe Marshall <j...@ccs.neu.edu> writes:

Oops, sorry. Missed the question mark in "TUPLE?" in your original
question.

In any case, TUPLE? could just be a function, and it can also be
abstracted over. Why couldn't it?

Matthias

Joe Marshall

unread,
Aug 15, 2003, 3:57:23 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

I think this question arose in my mind before you said that
there were no 1-tuples. Obviously, a 1-tuple would be spread before
tuple? could see it.

I'm still not convinced that a reasonably complete and coherent
calculus of tuples is possible, though.

Anthony Carrico

unread,
Aug 15, 2003, 4:27:25 PM8/15/03
to
Matthias Blume <matt...@shimizu-blume.com> wrote:
> I don't understand what you mean. TUPLE does not need any runtime
> tests at all,

and

Matthias Blume <matt...@shimizu-blume.com> also wrote:
> (define (tuple . l)
>   (if (= (length l) 1) (car l) l))

Which is it? If you don't do the dynamic test in the tuple constructor
(where it makes sense, to avoid a space leak), then you must do it in
the tuple accessor.

> and the tests performed by MATCH-TUPLE are the very
> same as the arity tests performed by contemporary Scheme implementations.

I agree with that statement, but

(a) MATCH-TUPLE's conditionals have different functionality than
LAMBDA's. LAMBDA's don't dynamically select "spread vs. flat calling
convention". MATCH-TUPLE's don't catch the arity mismatch errors.

(b) When efficiency is an issue, LAMBDA's conditionals can be dropped
from correct code (since they are a debugging tool), MATCH-TUPLE's
conditionals can't (since they are necessary for correct dynamic
operation).

(c) We had been arguing the complexity of the mental model, not the
implementation: programmers can drop LAMBDA's arity tests from their
mental models of application. They can't drop MATCH-TUPLE's tests,
since it is the substance of the proposal to overload application with
these features. For example, in the traditional model, I need not
think about apply unless I'm using the APPLY operator, but in
the proposal I must: I should consider both call semantics in the
light of my argument's dynamic type.

>> The proposal's mechanism dynamically decides where to insert APPLY,
>> VALUES, and CALL-WITH-VALUES by inserting conditionals into the
>> application mechanism.
>
> No, this is not so. The number of expressions in a TUPLE form and
> the number of pattern variables in a MATCH-TUPLE form are statically
> known at the time of CPS-conversion. If you insist on thinking about
> how to expand these forms into other forms, then the structure of
> the expansion is statically determined by those numbers. The
> expansion does not depend on runtime values.

I agree that the number of expressions in a TUPLE form and the number
of pattern variables in a MATCH-TUPLE form are statically known.
However, I was trying to imply these two dynamic properties:

(1) The code path which determines which TUPLE forms match up with
which MATCH-TUPLE forms (if it isn't dynamic, then Scheme compilers
could statically detect all call and return arity mismatch errors;
clearly they cannot).

(2) the type being passed, in particular, the arity of the tuple.

These dynamic choices are equivalent to the decisions a programmer
makes when using various combinations of application, APPLY, VALUES,
CALL-WITH-VALUES, and lambda. The fact that he is not supplying this
information leaves the application mechanism to do the choosing for
him. I don't know how you can say "this is not so", since folding
these features into application is the substance of your proposal.

--
Anthony Carrico

Matthias Blume

unread,
Aug 15, 2003, 5:02:22 PM8/15/03
to
Anthony Carrico <acar...@memebeam.org> writes:

> Matthias Blume <matt...@shimizu-blume.com> wrote:
> > I don't understand what you mean. TUPLE does not need any runtime
> > tests at all,
>
> and
>
> Matthias Blume <matt...@shimizu-blume.com> also wrote:
> > (define (tuple . l)
> >   (if (= (length l) 1) (car l) l))

You are confusing two completely separate issues. We are talking
about two different TUPLE constructors here. The one whose code is
given is actually constructing a list, and I gave this example only to
show that such "paradoxical" constructors can already be defined.
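
For concreteness, the quoted list-based constructor is plain, runnable
Scheme, and its "paradoxical" behavior looks like this:

(define (tuple . l)
  (if (= (length l) 1) (car l) l))

(tuple 1 2)   ; => (1 2)  -- a two-element list
(tuple 1)     ; => 1      -- the bare value, not a one-element list
(tuple)       ; => ()     -- the empty list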

The TUPLE constructor that I use to explain the proposed semantics of
LAMBDA does not have to test dynamically how many arguments it receives.
Instead, view it as a special form.

> Which is it? If you don't do the dynamic test in the tuple constructor
> (where it makes sense, to avoid a space leak), then you must do it in
> the tuple accessor.

No, the test is done "at compile time". Notice that the surface
language that I propose does not even need TUPLE. TUPLE is just used
as a way of explaining the new semantics of application.

> > and the tests performed by MATCH-TUPLE are the very
> > same as the arity tests performed by contemporary Scheme implementations.
>
> I agree with that statement, but
>
> (a) MATCH-TUPLE's conditionals have different functionality than
> LAMBDA's. LAMBDA's don't dynamically select "spread vs. flat calling
> convention". MATCH-TUPLE's don't catch the arity mismatch errors.

MATCH-TUPLE does not catch arity mismatches in one case (i.e., if
there is one variable in the pattern). Of course, this is not a
surprise since this is the WHOLE POINT OF THE EXERCISE.

> (b) When efficiency is an issue, LAMBDA's conditionals can be dropped
> from correct code (since they are a debugging tool), MATCH-TUPLE's
> conditionals can't (since they are necessary for correct dynamic
> operation).

When efficiency is an issue, the conditional can be implemented by
having multiple entry points. In fact, that (combined with the "lazy
tupling" approach) is exactly how I would go about implementing all
this. Once you do that, there are still a few arity tests that for
safety need to be done, but which could be sacrificed on the altar of
efficiency just like it can be done now. (I am very much against
turning off safety in favor of efficiency, btw. But that's a different
topic.)
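
One way to picture the multiple-entry-point idea at the source level
is with case-lambda (SRFI 16); this is only a sketch -- a pair stands
in for a 2-tuple, and a real compiler would emit separate code entry
points rather than dispatch on arity at run time:

(define add
  (case-lambda
    ((x y) (+ x y))                   ; "spread" entry: two values given
    ((p)   (add (car p) (cdr p)))))   ; "packed" entry: unpack and retry

(add 3 4)          ; => 7, via the spread entry
(add (cons 3 4))   ; => 7, via the packed entry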

> (c) We had been arguing the complexity of the mental model, not the
> implementation: programmers can drop LAMBDA's arity tests from their
> mental models of application. They can't drop MATCH-TUPLE's tests,
> since it is the substance of the proposal to overload application with
> these features. For example, in the traditional model, I need not
> think about apply unless I'm using the APPLY operator, but in
> the proposal I must: I should consider both call semantics in the
> light of my argument's dynamic type.

???

> [...]

> (1) The code path which determines which TUPLE forms match up with
> which MATCH-TUPLE forms (if it isn't dynamic, then Scheme compilers
> could statically detect all call and return arity mismatch errors;
> clearly they cannot).

See comment about multiple entry points.

> (2) the type being passed, in particular, the arity of the tuple.

The type of the argument being passed is already dynamic in Scheme,
given that Scheme is a dynamically typed language. So I am not sure I
get your point.

> These dynamic choices are equivalent to the decisions a programmer
> makes when using various combinations of application, APPLY, VALUES,
> CALL-WITH-VALUES, and lambda. The fact that he is not supplying this
> information leaves the application mechanism to do the choosing for
> him. I don't know how you can say "this is not so", since folding
> these features into application is the substance of your proposal.

Frankly, I don't follow. We seem to have very different mental models
of how things work because I don't even see the problem that you are
so upset about. (Or, more precisely: some of the problems I do not see
at all, and the remaining ones I do not consider problems.)

Matthias

be...@sonic.net

unread,
Aug 15, 2003, 6:00:24 PM8/15/03
to
Erann Gat wrote:
>
> In article <65ky6g...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu> wrote:
>
> > g...@jpl.nasa.gov (Erann Gat) writes:
> >
> > > In a nutshell: the reason Schemers are uncomfortable with
> > > cons-cell-based-lists is that they are a hack.
> >
> > I disagree. First, I'm not uncomfortable with cons-cell-based-lists.
> > Second, cons cell based lists have a recursive structure which is key
> > to their power. I wouldn't call them a hack.
>
> Well, perhaps "hack" is a bit strong. The point is, yes, cons cells have
> > a recursive structure, but the recursive structure is different from what
> is lexically apparent. The apparent recursive structure of, e.g.:
>
> (a (b c) d)
>
> is that it is a list of three elements, one of which is a list of two
> > elements. The actual recursive structure is that it's a cons cell whose
> CAR is a symbol and whose CDR is a cons cell whose CAR is a cons cell...
>
> It's not clear that the recursive structure of cons cells is inherently
> more powerful than the recursive structure of nested vectors.

It is, however, certainly *differently* powerful.

Vectors and lists are different things, and they're good at
different things. LISP was originally based strictly on
lists -- that was how arguments were passed, how results
were returned, how processing happened, what recursion worked
on, etc... It's a useful and general construct, and I don't
think it's a hack at all. It's genuinely beautiful and
elegant.

But, even though we now use list *syntax* to express argument
passing, most serious implementations actually pass arguments
in some kind of unboxed form - effectively as a tuple. And
vectors have been introduced into LISP/Scheme, sort of as an
add-on, with a somewhat hacky-looking syntax, just because
they're too useful to live without.

I like the idea of square braces denoting vectors. I think
that puts them on a lexically-equivalent footing with lists.
And I also think that it *is* worthwhile to let lambda
specify what form a function wants its arguments in - as a
list or as a tuple, for example, and how to break them out.

So, my intuition here for "tuple scheme" would be something
like this:

(lambda [x y z] expr)

would denote a function that takes a 3-tuple as an argument,
binding the first value to the local name x, the second to the
local name y, and the third to the local name z. Note that
this is the same semantics most current implementations now
use for the standard (lambda (x y z) expr) form. You could
call this either as

(foo xval yval zval)
or as
(foo expr-that-returns-a-3tuple)
or as
(foo expr-that-returns-a-2tuple zval)
or whatever.


The most disturbing part of this, to me, is that expressions or
variables that have tuple values would bind values to more
than one local name when calling functions like this. The
entire tuple is the (sole) argument, so it would not be
correct to refer to them as being "multiple arguments in a
single expression", but binding multiple local names would
tend to break the heads of people who are thinking of them
as multiple individual arguments.

Also, you'd have to quote vectors if you didn't want them
to bind values to multiple local variables. For example
(foo [1 2] 'a) would call the function with x bound to 1,
y bound to 2, and z bound to 'a. If you wanted to pass
a vector as a value, you'd need to do something like
(foo '[1 2] 'a 'b).


(lambda (x y z) expr) would expect as an argument a list of
three elements, giving the values of the local variables in
order the same way, although wherever the compiler could
prove that it didn't matter, it could still optimize the code
to pass the args as a tuple. The places where it matters are
when you're using apply with lists or acting as the
continuation for a function that returns a list.

(lambda (x y . z) expr) expects a list of two or more
values as an argument and binds the third local variable
to the "rest" of the list.

(lambda x expr) would get an arity error if called with
more or less than one argument value. You could call it
with that value being a list or a tuple or a character
or whatever else, but you could not call it with more
or less than one argument value. This would break existing
code, but it's the only clean (unambiguous, symmetric)
solution.

I'm not entirely comfortable with this as it introduces
a wart in function application. This means that
(foo expr-that-returns-a-3tuple) would work fine because
the 3tuple is the argument, bound locally to x. But
(foo val1 val2 val3), which is valid for other functions
that expect a 3tuple and want to unbox it (because application
implicitly creates a 3tuple of the values), wouldn't be
a valid call here because this function couldn't accept
the implicitly-created 3tuple that's created during the
ordinary function application on its own call. It would
require a 3tuple that was created explicitly, either by
user code or by apply during some other function application.

(lambda () expr) and (lambda [] expr) would be equivalent,
specifying functions of zero arguments.
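
The vector-destructuring flavor of this can be prototyped today with
syntax-rules (a sketch: lambda/vec is a made-up name, and the bare
[x y z] reader syntax would additionally need reader support):

(define-syntax lambda/vec
  (syntax-rules ()
    ((_ (x ...) body ...)
     (lambda (v)                          ; one argument: a vector
       (apply (lambda (x ...) body ...)   ; destructure into x ...
              (vector->list v))))))

(define foo (lambda/vec (x y z) (list z y x)))
(foo (vector 1 2 3))   ; => (3 2 1)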

Things I'm not sure about:

(lambda [x y . z] expr) should allow calling this function
with a tuple argument of three or more values. But what ought
z be bound to? A newly-constructed tuple containing the
rest of the values? A newly-constructed list?

[x y . z] would indicate a 3tuple with some kind of pointer
between the second and third element. With a few support
functions an array pointer would facilitate some kinds of
recursive programming.

This is interesting.... It wouldn't be Scheme as we now
understand it, but it could be an interesting, capable, and
very efficient language, possibly cleaving even closer to
the ancestral lambda calculus and pure design than Scheme
itself, and completely cutting around the call-with-values
and values misfeatures while eliminating none of their
functionality.

It would *completely* restore the symmetry between function
return and function application, for starters, and encourage
the use of more efficient data structures.

Bear

Matthias Blume

unread,
Aug 15, 2003, 6:29:11 PM8/15/03
to
be...@sonic.net writes:

> So, my intuition here for "tuple scheme" would be something
> like this:
>
> (lambda [x y z] expr)
>
> would denote a function that takes a 3-tuple as an argument,
> binding the first value to the local name x, the second to the
> local name y, and the third to the local name z. Note that
> this is the same semantics most current implementations now
> use for the standard (lambda (x y z) expr) form. You could
> call this either as
>
> (foo xval yval zval)
> or as
> (foo expr-that-returns-a-3tuple)
> or as
> (foo expr-that-returns-a-2tuple zval)
> or whatever.

Just to be sure this is not being confused with the proposal that I
brought forward:

Under my proposal the third form above would be a runtime error since
the single argument passed to foo would be a 2-tuple containing
another 2-tuple as its first element.

I think that a scheme under which tuples automatically "splice" into
other tuples (as seems to be implied by the above) is asking for
serious trouble.

Matthias

Eli Barzilay

unread,
Aug 16, 2003, 1:13:23 AM8/16/03
to
Matthias Blume <matt...@shimizu-blume.com> writes:

> For the last time: existing code will not be disturbed at all!

Let me get this straight -- you want first-class tuples, you make this
happen at the cost of

(cons x '())

not being the same thing as

(list x)

and if I complain about this you'll shout back that existing code will
not be disturbed -- that is, existing code that is at the same level
of tuple-ignorance.

If I have this correctly, then you have a proposal for a system with
first-class tuples that works fine with existing code only if it
doesn't use first-class tuples. That's a really good argument for
call-with-values. I'd even prefer to use ML -- and I can certainly
imagine the alternative universe where Scheme adopts your thing:

| Newsgroups: blume-universe.comp.lang.scheme
| Subject: Re: Why does my code break?
|
| Eli Barzilay <e...@barzilay.org> writes:
|
| > Matthias Blume <matt...@shimizu-blume.com> writes:
| >
| > > Eli Barzilay <e...@barzilay.org> writes:
| > > > Why does the following code breaks?
| > > > [...]
| > >
| > > Of course it breaks, you use tuples in <...> so (list x) doesn't
| > > do what you want it to do.
| >
| > So I also have to change the definitions of <...>, <...>, and
| > <...>??? Actually, I think that I need to walk over all of my
| > code. That's going to be messy.
|
| Well, that's the price you have to pay for using tuples. You can
| avoid using them and everything will be fine, or you can switch to a
| language that was designed with tuples in mind (like <...>).

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!

Matthias Blume

unread,
Aug 16, 2003, 1:51:21 AM8/16/03
to
On Sat, 16 Aug 2003 01:13:23 -0400, Eli Barzilay wrote:

> Matthias Blume <matt...@shimizu-blume.com> writes:
>
>> For the last time: existing code will not be disturbed at all!
>
> Let me get this straight -- you want first-class tuples, you make this
> happen at the cost of
>
> (cons x '())
>
> not being the same thing as
>
> (list x)

Yes. If x evaluates to a tuple, then the two will not evaluate to the
same result. I have explained that already. I have also said that
I can see that some people will gripe about it. I also explained why
I don't think that the situation is all that serious.
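
Spelled out (a pseudo-session under the proposed semantics -- no
current Scheme runs this, and the printed forms are informal):

;; Suppose t is a first-class 2-tuple, say (tuple 1 2).
;; (list t) passes the 1-tuple t -- i.e., t itself -- so LIST sees
;; two components:
(list t)       ; => (1 2)    -- a two-element list
;; (cons t '()) passes the 2-tuple (t, ()), so t stays boxed up:
(cons t '())   ; => ((1 2))  -- a one-element list containing t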

> and if I complain about this you'll shout back that existing code will
> not be disturbed -- that is, existing code that is at the same level
> of tuple-ignorance.

I meant it in the sense of "existing programs". The denotational meaning
of code, of course, differs. However, the difference can only be exposed
by feeding first-class tuples into the code -- something that obviously
does not (because it cannot) happen in existing programs.

> If I have this correctly, then you have a proposal for a system with
> first-class tuples that works fine with existing code only if it
> doesn't use first-class tuples.

Indeed. And, as I said, if only existing code is involved, then there
can be no first-class tuples, so the difference is not observable.
Moreover, the system that I proposed (and implemented for the heck of it
yesterday) works "fine" even in the presence of first-class tuples.
Of course, you have to use a suitable definition of "fine".

> That's a really good argument for call-with-values.

How so?

> I'd even prefer to use ML

Sure. So do I. :-)

> -- and I can certainly
> imagine the alternative universe where Scheme adopts your thing:
>
> | Newsgroups: blume-universe.comp.lang.scheme
> | Subject: Re: Why does my code break?
> |
> | Eli Barzilay <e...@barzilay.org> writes:
> |
> | > Matthias Blume <matt...@shimizu-blume.com> writes:
> | >
> | > > Eli Barzilay <e...@barzilay.org> writes:
> | > > > Why does the following code breaks?
> | > > > [...]
> | > >
> | > > Of course it breaks, you use tuples in <...> so (list x) doesn't
> | > > do what you want it to do.
> | >
> | > So I also have to change the definitions of <...>, <...>, and
> | > <...>??? Actually, I think that I need to walk over all of my
> | > code. That's going to be messy.
> |
> | Well, that's the price you have to pay for using tuples. You can
> | avoid using them and everything will be fine, or you can switch to a
> | language that was designed with tuples in mind (like <...>).

Actually, what I would tell you (in this or any other universe where you
and I could simultaneously exist) is to not use LIST (except for the
specific purpose of turning a k-tuple into a k-element list). But I did
explain that already elsewhere.

Matthias

PS: Sorry, I have to tune out from this discussion now. I haven't seen
any rational arguments against the proposal, and it is good to know
that a number of rationally arguing people seem to be with me. That's
probably all I can hope for, so I quit now.

Eli Barzilay

unread,
Aug 16, 2003, 3:58:54 AM8/16/03
to
"Matthias Blume" <fi...@my.email.address.elsewhere> writes:

> Yes. If x evaluates to a tuple, then the two will not evaluate to the
> same result. I have explained that already. I have also said that
> I can see that some people will gripe about it. I also explained why
> I don't think that the situation is all that serious.

`Seriousness' is clearly subjective; as someone who doesn't use
Scheme, I see little point in raising subjective arguments.


> I meant it in the sense of "existing programs". The denotational
> meaning of code, of course, differs. However, the difference can
> only be exposed by feeding first-class tuples into the code --
> something that obviously does not (because it cannot) happen in
> existing programs.

Sure. But -- don't you see any ridiculousness in a proposal that is
supposed to enhance the language by some feature in a way that doesn't
have any effect on code as long as it doesn't use that feature?
(Where `code' is in the global sense, not like you can do something on
a local module or something.)

Here's a proposal -- ML lacks a few features that are found in Scheme,
and this proposal shows how ML can get all of these missing features
without losing anything or changing existing code. The
implementation: whenever you want to get the new features, use Scheme.
Now I can also shout: existing ML code will not be disturbed at all!


> > That's a really good argument for call-with-values.
>
> How so?

Because it implies that the Scheme world is now absolutely split into
code that uses tuples and code that doesn't. And because the whole
tuple thing is slapped on artificially, in a worse way than OOP is
slapped onto C in C++ (IMO). Because compared to the inconvenience of
using `call-with-values', this looks like a debugging nightmare.


> > I'd even prefer to use ML
>
> Sure. So do I. :-)

So what's the point of all this? Wanting tuples badly enough to use
your thing (again, IMO) is radical enough that I would just use ML
before I do that.


> Actually, what I would tell you (in this or any other universe where
> you and I could simultaneously exist) is to not use LIST (except for
> the specific purpose of turning a k-tuple into a k-element list).
> But I did explain that already elsewhere.

Of course you did, I just couldn't believe this is serious. I did try
a quick implementation of this, which shouldn't be too hard, but
verbose since `lambda', `tuple', and function applications should all
have a special treatment for the k=1 case. This is (IMO) inelegant.
Of course this could be excused by just being stuff-under-the-hood,
but this special case thing leaks into actual code, in the form of
destroying the usual semantics of variable-arity functions, and
similar points like the arity error problem. Now there is no need to
repeat -- I know what your replies to these points are; they are just
different from the average reply of a *Scheme programmer*, which makes
this whole exercise pointless (IMO, for the last time).


> PS: Sorry, I have to tune out from this discussion now. I haven't
> seen any rational arguments against the proposal, and it is good to
> know that a number of rationally arguing people seem to be with me.
> That's probably all I can hope for, so I quit now.

Clarification -- I have no objection to tuples: if I had a way that
would give me first-class Scheme tuples which was elegant and cheap,
then I wouldn't even blink before using it, and I wouldn't have
problems giving up `call-with-values' or any other unnecessary
values-related stuff. But I would not use something that makes me
give up what you propose I give up.

If you think that this or anything else I wrote is irrational, please
say so.

Matthias Blume

unread,
Aug 16, 2003, 1:17:02 PM8/16/03
to
On Sat, 16 Aug 2003 03:58:54 -0400, Eli Barzilay wrote:

> "Matthias Blume" <fi...@my.email.address.elsewhere> writes:
>
>> Yes. If x evaluates to a tuple, then the two will not evaluate to the
>> same result. I have explained that already. I have also said that
>> I can see that some people will gripe about it. I also explained why
>> I don't think that the situation is all that serious.
>
> `Seriousness' is clearly subjective; as someone who doesn't use
> Scheme, I see little point in raising subjective arguments.

As someone who has a) used and b) implemented Scheme, I think I don't
have to even listen to this. But you are right, seriousness is clearly
subjective. I just can't believe that the "spirit of Scheme" would
hinge on the issue of whether

(cons x '())

and

(list x)

are equivalent in all contexts.

>> I meant it in the sense of "existing programs". The denotational
>> meaning of code, of course, differs. However, the difference can
>> only be exposed by feeding first-class tuples into the code --
>> something that obviously does not (because it cannot) happen in
>> existing programs.
>
> Sure. But -- don't you see any ridiculousness in a proposal that is
> supposed to enhance the language by some feature in a way that doesn't
> have any effect on code as long as it doesn't use that feature?
> (Where `code' is in the global sense, not like you can do something on
> a local module or something.)

No, I don't see that ridiculousness. Obviously, if you add a new feature
to a language, then -- by definition -- code that uses this new feature
will behave differently than code that doesn't. That's why it is called
a new feature.

Here is another way of looking at it:

In contemporary Scheme there are two kinds of programs: correct ones
and incorrect ones. Any language change must fall into one of three
categories:

1. All changes affect only previously incorrect programs.
2. All changes affect only previously correct programs.
3. There are changes that affect previously correct programs and
   there are changes that affect previously incorrect programs.

There is absolutely no way around being in one of these three categories,
otherwise all programs are unaffected, so there wouldn't be a language
change at all. Now, the most benign kind of change is one that falls
under case 1., and my proposal is such a change.

> Here's a proposal -- ML lacks a few features that are found in Scheme,
> and this proposal shows how ML can get all of these missing features
> without losing anything or changing existing code. The
> implementation: whenever you want to get the new features, use Scheme.
> Now I can also shout: existing ML code will not be disturbed at all!

You are now being ridiculous. Obviously, if I use Scheme, my existing
ML programs will not run at all, so they are clearly affected by the
change.

> And because the whole
> tuple thing is slapped on artificially, in a worse way than OOP is
> slapped onto C in C++ (IMO).

I don't know how to answer this one since it does not make any sense
at all. You just seem to be trying very hard to insult me or the proposal.
But that's no rational argument.

> Because compared to the inconvenience of
> using `call-with-values', this looks like a debugging nightmare.

If you say so. To me it does not look any worse than the debugging
nightmare that we have already.

>> Actually, what I would tell you (in this or any other universe where
>> you and I could simultaneously exist) is to not use LIST (except for
>> the specific purpose of turning a k-tuple into a k-element list).
>> But I did explain that already elsewhere.
>
> Of course you did, I just couldn't believe this is serious. I did try
> a quick implementation of this, which shouldn't be too hard, but
> verbose since `lambda', `tuple', and function applications should all
> have a special treatment for the k=1 case. This is (IMO) inelegant.
> Of course this could be excused by just being stuff-under-the-hood,
> but this special case thing leaks into actual code, in the form of
> destroying the usual semantics of variable-arity functions, and
> similar points like the arity error problem.

The arity error problem is a non-issue since Scheme does not even define
what's supposed to happen in that case. Arguably the semantics of
variable-argument functions has not changed at all. It just so happened
that until now you couldn't observe the difference in meaning between
(list x) and (cons x '()) because there were no first-class tuples.

Matthias

Eli Barzilay

unread,
Aug 16, 2003, 6:32:23 PM8/16/03
to
"Matthias Blume" <fi...@my.email.address.elsewhere> writes:

> On Sat, 16 Aug 2003 03:58:54 -0400, Eli Barzilay wrote:
>
> > `Seriousness' is clearly subjective; as someone who doesn't use
> > Scheme, I see little point in raising subjective arguments.
>
> As someone who has a) used and b) implemented Scheme, I think I
> don't have to even listen to this.

You forgot `c', which I mention above. Your subjective arguments now
come as external arguments, whether you implemented your own Scheme or
not.


> But you are right, seriousness is clearly subjective. I just can't
> believe that the "spirit of Scheme" would hinge on the issue of
> whether
>
> (cons x '())
>
> and
>
> (list x)
>
> are equivalent in all contexts.

It's not -- but that's just a side effect of destroying variable-arity
functions as we know them now, which is closer to some "spirit"
concept. (BTW, note that I never raised any spiritual issues; I like
hacking and I like having more language power.)


> > Sure. But -- don't you see any ridiculousness in a proposal that
> > is supposed to enhance the language by some feature in a way that
> > doesn't have any effect on code as long as it doesn't use that
> > feature? (Where `code' is in the global sense, not like you can
> > do something on a local module or something.)
>
> No, I don't see that ridiculousness. Obviously, if you add a new
> feature to a language, then -- by definition -- code that uses this
> new feature will behave differently than code that doesn't. That's
> why it is called a new feature.

But the problem is that when you do want to use the new feature,
*existing* code *will* need to change. For example, I might imagine
something like this

(define (foo x)
  (...do something with (cdr x)...))
(let ((x (list blah)))
  (foo x))

This is perfectly fine now, but if I want to use your tuples, then I
have to change this (or make sure tuples can never end up in `blah').
This means that either I do have to scan and modify existing code, or
I can avoid it by not using your tuples. This is why I think that
this is not a good solution for getting tuples into Scheme.
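
Spelling the failure mode out (again a pseudo-session under the
proposed semantics, with blah imagined to evaluate to a 2-tuple):

;; Today (list blah) always builds a one-element list, so inside foo
;; (cdr x) is always (). Under the proposal, a 2-tuple's components
;; would become the list's elements instead:
(let ((x (list blah)))   ; x = (1 2), a TWO-element list
  (cdr x))               ; => (2), not () -- foo's assumption breaks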


> Here is another way of looking at it:
>
> In contemporary Scheme there are two kinds of programs: correct
> ones and incorrect ones. Any language change must fall into one
> of three categories:
>
> 1. All changes affect only previously incorrect programs.
> 2. All changes affect only previously correct programs.
> 3. There are changes that affect previously correct programs
> and there are changes that affect previously incorrect
> programs.
>
> There is absolutely no way around being in one of these three
> categories, otherwise all programs are unaffected, so there wouldn't
> be a language change at all. Now, the most benign kind of change is
> one that falls under case 1., and my proposal is such a change.

See the above code bit -- is there anything wrong with it now?
Should it be modified to accommodate `blah' being a tuple?


> > Here's a proposal -- ML lacks a few features that are found in
> > Scheme, and this proposal shows how ML can get all of these
> > missing features without losing anything or changing existing
> > code. The implementation: whenever you want to get the new
> > features, use Scheme. Now I can also shout: existing ML code will
> > not be disturbed at all!
>
> You are now being ridiculous. Obviously, if I use Scheme, my
> existing ML programs will not run at all, so they are clearly
> affected by the change.

Of course I'm being ridiculous -- but again, clarifying the above
point would help: if I do need to modify that *existing* code, then
this is (to me) heading in a similarly ridiculous direction, and I
can claim that your existing ML programs will run just fine as long as
you don't use my new proposal.


> > And because the whole tuple thing is slapped on artificially, in a
> > worse way than OOP is slapped onto C in C++ (IMO).
>
> I don't know how to answer this one since it does not make any sense
> at all. You just seem to be trying very hard to insult me or the
> proposal. But that's no rational argument.

I'm sorry if this comes out as an insult (my excuse is that I really
don't see any insult there). My point is that C++ was basically
slapping OOP on top of C, and the resulting OOP system feels very
awkward at times, especially when compared to a language that was
pre-designed with OOP in mind. In a very similar way, your proposal
seems to suffer from the "special k=1 case" in enough places that it is
easy to see that it is a slap-on feature, and I think that many people
would agree that in ML tuples are much better than in your version of
Scheme.


> The arity error problem is a non-issue since Scheme does not even
> define what's supposed to happen in that case. [...]

I am not a "Scheme programmer" in the R5RS sense -- I use tons of
non-R5RS features all the time; one of them is arity errors.

Erann Gat

unread,
Aug 16, 2003, 8:48:41 PM8/16/03
to
In article <3F3D5C3F...@sonic.net>, be...@sonic.net wrote:

> Erann Gat wrote:
>
> > It's not clear that the recursive structure of cons cells is inherently
> > more powerful than the recursive structure of nested vectors.
>
> It is, however, certainly *differently* powerful.
>
> Vectors and lists are different things

Yes, but lists and cons cells are also different things. It's not that
lists are a hack; it's that lists are *required* to be implemented as
linked lists, and that's the hack. (And hack is really the wrong word. It's
a little too pejorative. "Inappropriately low level of abstraction" is
closer to what I have in mind, but that's too wordy.)

> , and they're good at
> different things. LISP was originally based strictly on
> lists -- that was how arguments were passed, how results
> were returned, how processing happened, what recursion worked
> on, etc... It's a useful and general construct, and I don't
> think it's a hack at all. It's genuinely beautiful and
> elegant.

Yes, but it also leads to some unintuitive behavior, like having result
lists come out in the reverse order from what one would naively expect
when writing iterative algorithms, and the existence of "improper" lists,
which are a conceptually separate type, but can only be distinguished from
"proper" lists in O(n) time.

Also...

> I like the idea of square braces denoting vectors. I think
> that puts them on a lexically-equivalent footing with lists.

Yes, I agree. I think that people's infatuation with the elegance of
lists is partially to blame for vectors being second-class citizens (even
if they are first-class data structures).

> Things I'm not sure about:
>
> (lambda [x y . z] expr)

[x y . z] doesn't make syntactic sense. (x y . z) is shorthand for (x .
(y . z)), but it's not clear what having a dot inside a vector would
mean. This does not mean that (lambda [x y . z] expr) couldn't be given a
coherent meaning, but then you could no longer represent code as data
unless you gave an account of what [x y . z] means when read as data.

> [x y . z] would indicate a 3tuple with some kind of pointer
> between the second and third element.

That's a little too vague IMO.

E.

David Rush

unread,
Aug 16, 2003, 6:24:11 AM8/16/03
to
Anthony Carrico <acar...@memebeam.org> writes:
> David Rush <dr...@aol.net> wrote:
> > I'm sorry but you have just restated the definition of
> > composition. In (f (g)), f *is* the continuation of (g). Just what is
> > it that you consider an error?
>
> Assume f has two parameters and g has two results.

For the moment let's assume it doesn't. Then my statement is
true. It would be syntactically and semantically clean if it were
*also* true in the case where you have multiple arguments/values.

> Consider a loose interpretation of the "what happens to extra values"
> question. The error is that the application fails to initialize f's second
> argument.

In LC we don't initialize arguments, right? We apply functions to
values. Bear with me, I'm building up to something here.

> >> There is a big huge difference between "(g) initializing the first
> >> argument of f" and "f being the continuation of (g)".
> >
> > There isn't once you apply the CPS-transform.
>
> Sure there is. Watch:
> (define f-ds (lambda (x y) ...))

AHA! We are back to the basic problem with multiple function
arguments. LC doesn't have those, so the whole problem boils down to
what is the meaning of

(lambda (x y) (expression in x, y))

when expressed in LC? There are two competing models here,

1) \x.\y.(expression in x, y)

or

2) \(x,y).(expression in x, y)

With model 2, (f (g)) is an error iff the arity of the tuple
returned by g doesn't match the arity of the tuple required by f. This
arity matching is *already* performed by every Scheme implementation
in existence, and in fact is very similar to the pattern-matching
done in SML/Haskell/your-favorite-functional-language-here.

Currently Scheme doesn't really support either model very well. Model
1 implies that too few arguments is simply a partial application (I'm
not too sure what too many arguments would be - it seems like it would
almost have to be an error). Scheme does not currently support such
implicit currying. Model 2 implies complete symmetry between multiple
args and multiple returns. R5RS takes a middle position, where

(lambda x ...)

corresponds to 2,

(lambda (x) ...)

corresponds to 1,

(lambda (x y) ...)

is like model 2 *except* in the case of function composition, and

(lambda (x y . args) ...)

is an unholy (but useful) hybrid of 1 & 2.
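
The middle position is easy to see in runnable R5RS Scheme; APPLY is
the explicit bridge between a gathered list and spread arguments:

(define collect (lambda args args))   ; gathers all arguments as a list
(define just-x  (lambda (x) x))       ; exactly one argument

(collect 1 2 3)            ; => (1 2 3)
(apply + (collect 1 2 3))  ; => 6 -- APPLY spreads the list back out
(just-x (collect 1 2))     ; => (1 2), passed as one value, never spread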

> >> In your world, the operator is indeed the continuation of the first
> >> and only argument. It sort of makes every call into a
> >> call-with-values.
> >
> > Precisely. This is the basic theory of the _Lambda, the Ultimate_
> > papers, actually.
>
> No, the lambda described in the ultimate papers doesn't work that
> way,

Actually, I think it does.

> it doesn't require magic tuples to fix the mistakes introduced by the
> feature.

The semantics presented in the _Lambda_ papers isn't terribly
rigorous; they are written very much from an actual implementation
point of view. The set of locations used to pass parameters can easily
be viewed as a tuple, and (as Anton van Straaten points out) it is
modelled that way in R5RS.

> Call-with-values introduces the same feature without any mistakes,
> so I can't understand why a few people seem to hate it.

Because it's an irregular model of the lambda calculus, as I explained
above. Now if you want to introduce call/values and implicit currying,
I think there'd be a stronger case in favor of it. But the tuple model
*is* consistent, it is very simple conceptually, and is widely adopted
in FP languages generally. eq? and eqv?, and the mutability question
add some interesting twists to the concept for Scheme, but I don't
think that there is anything fundamentally broken there.

david rush
--
In a tight spot, you trust your ship or your rifle to get you through,
so you refer to her affectionately and with respect. Your computer? It
would just as soon reboot YOU if it could. Nasty, unreliable,
ungrateful wretches, they are. -- Mike Jackmin (on sci.crypt)

Sander Vesik

unread,
Aug 18, 2003, 5:13:36 PM8/18/03
to
Erann Gat <g...@jpl.nasa.gov> wrote:
> In article <3F3D5C3F...@sonic.net>, be...@sonic.net wrote:
>
>> Erann Gat wrote:
>>
>> > It's not clear that the recursive structure of cons cells is inherently
>> > more powerful than the recursive structure of nested vectors.
>>
>> It is, however, certainly *differently* powerful.
>>
>> Vectors and lists are different things
>
> Yes, but lists and cons cells are also different things. It's not that
> lists are a hack; it's that lists are *required* to be implemented as
> linked lists, and that's the hack. (And hack is really the wrong word. It's
> a little too pejorative. "Inappropriately low level of abstraction" is
> closer to what I have in mind, but that's too wordy.)

Well, in many contexts that wordy thing would get shortened to ... 'a hack'
or possibly even 'a bad hack'. The immediate result is that any structural
optimisation to lists ends up being seriously bad black voodoo.

>
>> , and they're good at
>> different things. LISP was originally based strictly on
>> lists -- that was how arguments were passed, how results
>> were returned, how processing happened, what recursion worked
>> on, etc... It's a useful and general construct, and I don't
>> think it's a hack at all. It's genuinely beautiful and
>> elegant.
>
> Yes, but it also leads to some unintuitive behavior, like having result
> lists come out in the reverse order from what one would naively expect
> when writing iterative algorithms, and the existence of "improper" lists,
> which are a conceptually separate type, but can only be distinguished from
> "proper" lists in O(n) time.
>

Nothing compared to appending and querying for length taking O(n). OK,
maybe not nothing.

>
> E.

--
Sander

+++ Out of cheese error +++
