Explanation about call-with-values


Brent Benson

Jan 4, 1995, 8:14:45 PM
dedi...@emi.u-bordeaux.fr (Gabriel De Dietrich) writes:
#
# Can anyone explain the meaning of the function call-with-values as
# explained in the June 1992 Meeting? I cannot see any difference
# between these two expressions:
# (call-with-values thunk receiver)
# and
# (apply receiver (thunk))

In the first expression, THUNK is meant to return multiple values
using the function VALUES. These values are not necessarily consed
into a list, but rather can be returned in a more efficient manner
like on the stack or in registers.

On the other hand, you could always implement CALL-WITH-VALUES using
lists if your implementation doesn't provide them (no error checking):

(define-syntax call-with-values
  (syntax-rules ()
    ((call-with-values thunk receiver)
     (apply receiver (thunk)))))

(define (values . vals) vals)
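The list-based shim can be exercised directly (renamed with a `my-` prefix here so it doesn't shadow the built-ins; the receiver simply gets the listed values as its arguments):

```scheme
;; list-based stand-ins for the real primitives, as in the post above
(define (my-values . vals) vals)

(define (my-call-with-values thunk receiver)
  (apply receiver (thunk)))

;; the receiver gets the three values as three separate arguments
(my-call-with-values (lambda () (my-values 1 2 3)) +)   ; => 6
```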

--
Brent Benson
Harris Computer Systems

Gabriel De Dietrich

Jan 4, 1995, 1:30:34 PM
Can anyone explain the meaning of the function call-with-values as explained in
the June 1992 Meeting? I cannot see any difference between these two expressions:
(call-with-values thunk receiver)
and
(apply receiver (thunk))

Thanks.

Gabriel de Dietrich
dedi...@emi.u-bordeaux.fr

Brent Benson

Jan 4, 1995, 8:24:07 PM
Brent....@mail.csd.harris.com (Brent Benson) writes:
#
# On the other hand, you could always implement CALL-WITH-VALUES using
# lists if your implementation doesn't provide them (no error checking):

There was no need for syntax:

(define (call-with-values thunk receiver)
  (apply receiver (thunk)))

(define values list)

Axel Wienberg

Jan 6, 1995, 4:15:57 AM

In article <3eepga$a...@serveur.cribx1.u-bordeaux.fr> dedi...@emi.u-bordeaux.fr (Gabriel De Dietrich) writes:

Can anyone explain the meaning of the function call-with-values as
explained in the June 1992 Meeting? I cannot see any difference
between these two expressions:

(call-with-values thunk receiver)
and
(apply receiver (thunk))

You may have confused multiple values with lists of values: The thunk
in the (apply...) example has to return one value which is a list of
arguments for the receiver, while the thunk in the (c-w-v...) example
has to return multiple values, via (values ...).
For example:

(let ((thunk (lambda () (values 1 2 3)))
      (receiver (lambda (x y z) (+ x (* y z)))))
  (call-with-values thunk receiver))
=> 7

and

(let ((thunk (lambda () (list 1 2 3)))
      (receiver (lambda (x y z) (+ x (* y z)))))
  (apply receiver (thunk)))
=> 7

Conversely,

(let ((thunk (lambda () (list 1 2 3)))
      (receiver (lambda (x y z) (+ x (* y z)))))
  (call-with-values thunk receiver))
=> error: wrong number of arguments to <receiver> (expected 3, got 1,
   namely (1 2 3))

(let ((thunk (lambda () (values 1 2 3)))
      (receiver (lambda (x y z) (+ x (* y z)))))
  (apply receiver (thunk)))
=> error: multiple return values are not allowed in argument expressions

Continuations inside call-with-values are the only ones that can take
multiple return values.

Hope this helped.
--
I know a mouse \ 2wi...@rzdspc2.informatik.uni-hamburg.de
And he hasn't got a house \ Axel Wienberg, Hinzeweg 9, 21075 Hamburg
I don't know why I call him Gerald \ ja Nein!

Brian Harvey

Jan 6, 1995, 11:12:10 AM
2wi...@rzdspc43.informatik.uni-hamburg.de (Axel Wienberg) writes:
>You may have confused multiple values with lists of values: The thunk
>in the (apply...) example has to return one value which is a list of
>arguments for the receiver, while the thunk in the (c-w-v...) example
>has to return multiple values, via (values ...).

I think what he really wants explained is why using a list of values
wouldn't be just as good. Someone else posted saying that multiple
values can be implemented more efficiently, e.g., as a vector -- but
that still leaves us with the question of why a vector of values
wouldn't be just as good!

Erik Naggum

Jan 7, 1995, 9:23:26 AM
[Brian Harvey]

| I think what he really wants explained is why using a list of values
| wouldn't be just as good. Someone else posted saying that multiple
| values can be implemented more efficiently, e.g., as a vector -- but
| that still leaves us with the question of why a vector of values
| wouldn't be just as good!

the standard argument is that it may be possible to pass those values in
registers. another standard argument is that the values you receive,
except for the first, are merely informative, not essential, and that you
would have to write extra code to handle the first, essential value if that
is the only one you want.
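Erik's second point can be seen with a list-based convention (a small sketch with a hypothetical div-mod; the names are invented): a caller who wants only the first, essential value must still write extra code to unpack it.

```scheme
;; hypothetical: return quotient and remainder packaged as a list
(define (div-mod n d)
  (list (quotient n d) (remainder n d)))

;; a caller who only wants the quotient must still take the list apart
(car (div-mod 17 5))   ; => 3
```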

#<Erik>
--
guvf vf abg n synzr

Matthias Blume

Jan 18, 1995, 10:28:04 AM
In article <3fh4v5$a...@apple.com> Ken Dickey <ke...@newton.apple.com> writes:

   > From: b...@anarres.CS.Berkeley.EDU (Brian Harvey)
   > I think what he really wants explained is why using a list of values
   > wouldn't be just as good.  Someone else posted saying that multiple
   > values can be implemented more efficiently, e.g., as a vector -- but
   > that still leaves us with the question of why a vector of values
   > wouldn't be just as good!

   The point is that with call-with-values, the compiler can simply transfer
   values directly (e.g. stuff the registers) rather than creating *any*
   data structure.  Less memory traffic => better performance.

This seems like a pretty pointless minor optimization, which doesn't
justify the introduction of a new language element. ``Programming
languages should be designed not by...'' -- Rings a bell?

Given that Scheme's current design makes it pretty hard to optimize
even (usually) trivial things for speed (such as arithmetic, because of
the very general numeric tower), this kind of optimization is
hardly worth the paper it is proposed on.  Has anybody measured the
benefits from using call-with-values?

On top of that, the use of call-with-values is so awkward that it
wouldn't be much more complicated to do things by hand: simply pass
an explicit multi-argument continuation procedure and call it instead
of returning some `(values ...)'!
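Matthias's by-hand alternative can be sketched with a hypothetical div-mod that passes its two results to an explicit continuation k instead of returning them via values (names invented for illustration):

```scheme
;; instead of (values q r), call an explicit continuation with q and r
(define (div-mod n d k)
  (k (quotient n d) (remainder n d)))

;; the caller supplies the multi-argument continuation directly
(div-mod 17 5 (lambda (q r) (+ (* 10 q) r)))   ; => 32
```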

   The use of call-with-values enables comparatively simple compiler
   analysis.

I seriously doubt that, given the fact that it is possible to pass
both `values' and `call-with-values' as arguments to higher-order
functions. The compiler must be prepared for the call to `f' (or `h'
for that matter) in

(define (g f h x)
  (f (h x)))

really being a call to `values'. This can't be determined at compile
time (at least not in general).

I go with Brian here: an explicitly constructed vector (or cons cell)
of values does the job nearly as well as `values'. And if it really
becomes a bottleneck (but *only* then!) then convert it to CPS! In
addition, I would venture to say that in places where the compiler can
benefit from `values' and `call-with-values' it wouldn't be too hard
to find corresponding `vector' and `vector-ref' calls (and to optimize
them away) either.

[As an aside: even if the compiler sees an explicitly written down
`call-with-values' -- how does it know that nobody re-defines this
identifier later on? Back to the old module-system debate... :)]

Regards,
--
-Matthias

J. Michael Ashley

Jan 19, 1995, 5:33:55 PM
My paper with Kent Dybvig in the 1995 ACM Conference on Lisp and
Functional Programming discusses some of the issues raised in this
thread. In particular, there is a comparison of our implementation
against both consing and converting to CPS.

An on-line copy of the paper can be found at
ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/mrvs.ps.gz

Mike

Darius Bacon

Jan 21, 1995, 2:40:50 AM
Call-with-values has a couple advantages that I haven't seen mentioned
yet.

It's more expressive than passing an explicit multi-argument continuation.
Let's say foo calls bar, which makes a tail call to baz. If we've decided
to return multiple values by calling an explicit continuation, then bar has
to know whether or not baz returns multiple values. Call-with-values
avoids that problem.
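Concretely (a small sketch): because bar merely tail-calls baz, the values pass straight through without bar ever mentioning them.

```scheme
(define (baz) (values 1 2))

;; bar tail-calls baz and needn't know how many values baz returns
(define (bar) (baz))

(call-with-values bar +)   ; => 3
```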

It's safer than returning a vector or list, because the language
implementation can check that the right number of values were returned, and
raise an error immediately. (But you can't depend on that, because the
behavior is unspecified.)

I'm not convinced that either reason is sufficient, but presumably the
Knights of the Lambda Calculus know best. :-)

Matthias Blume

Jan 21, 1995, 1:14:42 PM
In article <3fqdq2$9...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

   Call-with-values has a couple advantages that I haven't seen mentioned
   yet.

   It's more expressive than passing an explicit multi-argument continuation.
   Let's say foo calls bar, which makes a tail call to baz.  If we've decided
   to return multiple values by calling an explicit continuation, then bar has
   to know whether or not baz returns multiple values.

Wrong.  If it is a tail-call then bar will just pass its own
continuation (which is multi-arg) along to baz.

   Call-with-values avoids that problem.

No.  There is no problem.

   It's safer than returning a vector or list, because the language
   implementation can check that the right number of values were returned, and
   raise an error immediately.

Wouldn't -- (lower your voice to a whisper) -- *typechecking* do the
same thing for you, only even better and earlier (at compile time)?
--- Ok, ok, you don't need to yell at me! :)

   (But you can't depend on that, because the behavior is unspecified.)

Exactly.

--
-Matthias

Darius Bacon

Jan 22, 1995, 2:20:39 AM
bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:

>In article <3fqdq2$9...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

> Call-with-values has a couple advantages that I haven't seen mentioned
> yet.

> It's more expressive than passing an explicit multi-argument continuation.
> Let's say foo calls bar, which makes a tail call to baz.  If we've decided
> to return multiple values by calling an explicit continuation, then bar has
> to know whether or not baz returns multiple values.

>Wrong.  If it is a tail-call then bar will just pass its own
>continuation (which is multi-arg) along to baz.

One of us must misunderstand the other, I think. I was saying multi-arg
implicit continuations should be nice to have because they make your
statement true.

In more detail: when we want baz to return a single value, we normally write
something like

(define (bar) (baz))
(define (baz) whatever)

But if baz returns multiple values, and we don't have call-with-values, and
we've settled on the convention of passing a multi-arg continuation in such
cases, we write

(define (bar k) (baz k))
(define (baz k) (k whatever))

Since the calling convention is different, all calls have to be different.
This makes code less easily composable -- in fact, the compose procedure is
a good example:

(define (compose f g)
  (lambda (x) (f (g x))))

What if f returns multiple values? Okay, then the body is

(lambda (x k) (f (g x) k))

But wait! G has to return one value, it's true, but what if it takes a
continuation anyway, to conform to our CPS calling convention? Then

(lambda (x k) (g x (lambda (y) (f y k))))

This recalls the situation with infinite streams in SICP, where they needed
multiple versions of the same procedure, such as interleave and
interleave-delayed.
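For contrast, a compose that works when g returns multiple values can be written once with call-with-values, with no change to anyone's calling convention (a sketch):

```scheme
;; g may return any number of values; they become f's arguments
(define (compose f g)
  (lambda args
    (call-with-values
      (lambda () (apply g args))
      f)))

((compose list (lambda (x) (values x (* x x)))) 3)   ; => (3 9)
```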

> It's safer than returning a vector or list, because the language
> implementation can check that the right number of values were returned,
and
> raise an error immediately.

>Wouldn't -- (lower your voice to a whisper) -- *typechecking* do the
>same thing for you, only even better and earlier (at compile time)?
>--- Ok, ok, you don't need to yell at me! :)

Oh, you're absolutely right! Heh.

> (But you can't depend on that, because the behavior is unspecified.)

>Exactly.

And that's annoying.  The above makes a good case (IMO) for call-with-values
merely rounding out an asymmetry in R4RS Scheme, by allowing implicit
continuations to take multiple arguments just like ordinary procedures --
but then they go and underspecify it so badly that it can be implemented
directly in R4RS Scheme.  Are there any good arguments for the Common Lisp
behavior when the parameters don't match the arguments?  Is there any other
reasonable behavior?

Matthias Blume

Jan 22, 1995, 1:38:47 PM
In article <3ft107$f...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

   bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:

   >In article <3fqdq2$9...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

   > Call-with-values has a couple advantages that I haven't seen mentioned
   > yet.

   > It's more expressive than passing an explicit multi-argument continuation.
   > Let's say foo calls bar, which makes a tail call to baz.  If we've decided
   > to return multiple values by calling an explicit continuation, then bar has
   > to know whether or not baz returns multiple values.

   >Wrong.  If it is a tail-call then bar will just pass its own
   >continuation (which is multi-arg) along to baz.

   One of us must misunderstand the other, I think.  I was saying multi-arg
   implicit continuations should be nice to have because they make your
   statement true.

You seem to be the one misunderstanding the other. If there are no implicit
multi-arg continuations then in order for bar to return multiple values it
must receive an explicit multi-arg continuation as an argument and can pass
it on to baz. Since it is never calling this continuation itself it doesn't
care how many arguments it takes.

Anyway, I would prefer passing multiple return values in a vector and let
the compiler figure out the details.  I also doubt that this part of the
language constitutes a performance bottleneck at all, so why take the
trouble?

   And that's annoying.  The above makes a good case (IMO) for call-with-values
   merely rounding out an asymmetry in R4RS Scheme, by allowing implicit
   continuations to take multiple arguments just like ordinary procedures --
   but then they go and underspecify it so badly it can be implemented directly
   in R4RS Scheme.  Are there any good arguments for the Common Lisp behavior
   when the parameters don't match the arguments?  Is there any other
   reasonable behavior?

Are there any *good* arguments for having call-with-values at all?  I
think this is the real question, and so far nobody has been able to
convince me that the answer is `yes'.

--
-Matthias

Darius Bacon

Jan 23, 1995, 5:24:14 AM
bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:

>Anyway, I would prefer passing multiple return values in a vector and let
>the compiler figure out the details.  I also doubt that this part of the
>language constitutes a performance bottleneck at all, so why take the
>trouble?
> [...]
>Are there any *good* arguments for having call-with-values at all?  I
>think this is the real question, and so far nobody has been able to
>convince me that the answer is `yes'.

Since you never really answered the first time: call-with-values can add
something to Scheme, besides the alleged efficiency.  A properly defined
call-with-values would add safety.  (That was a good point about catching
errors with static typechecking, but I'm not sure I want to give up dynamic
typing -- probably not, from my limited experience with ML.)

Now, is this a good *enough* argument?  As I said the first time, I'm
inclined against it.  If I ended up using it often, I'd want the extra
safety.  On the other hand, there appears to be a good reason not to define
call-with-values this way: it could slow down even code that doesn't use
it.  Any time you're calling an unknown (implicit) continuation -- one that
you don't have any static information about -- you'd have to check that the
argument count is right.

One way around that is dynamic typechecking: define a new type that
packages together a set of return values, and that only call-with-values
and values know how to handle.  That wouldn't catch all errors immediately,
but at least it would keep you from doing a vector-ref or a car on the
result returned by values.

If Scheme let us define new types disjoint from all built-in types, we
wouldn't even have to make call-with-values primitive.  Would that satisfy
R5RS?  I haven't seen it.  (So what am I doing here?  I only meant to
inject a simple point into this discussion, then shut up -- but nothing is
simple in Usenet.)

Now about who misunderstood whom...
(skip the rest of this post if you don't care)

>In article <3ft107$f...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

>   bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:

>   >In article <3fqdq2$9...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

>   > Call-with-values has a couple advantages that I haven't seen mentioned
>   > yet.

>   > It's more expressive than passing an explicit multi-argument continuation.
>   > Let's say foo calls bar, which makes a tail call to baz.  If we've decided
>   > to return multiple values by calling an explicit continuation, then bar has
>   > to know whether or not baz returns multiple values.

>   >Wrong.  If it is a tail-call then bar will just pass its own
>   >continuation (which is multi-arg) along to baz.

>   One of us must misunderstand the other, I think.  I was saying multi-arg
>   implicit continuations should be nice to have because they make your
>   statement true.

>You seem to be the one misunderstanding the other.  If there are no implicit
>multi-arg continuations then in order for bar to return multiple values it
>must receive an explicit multi-arg continuation as an argument and can pass
>it on to baz.  Since it is never calling this continuation itself it doesn't
>care how many arguments it takes.

I was wrong -- we both misunderstood.  I thought you were talking about the
implicit continuation the first time.  In turn, I said "bar needs to know
that baz returns multiple values" where it would be more precise to say
"bar needs to know that baz follows the multiple-value return convention of
taking an explicit continuation".  My point is, if different parts of your
code follow different calling conventions, the pieces no longer go together
without fuss.  Procedures not directly involved with returning multiple
values must act as if they were.  See the examples you snipped.  I wrote
that in response to your earlier suggestion that it's just as easy to pass
an explicit continuation as it is to use call-with-values.
Maybe if I'd quoted you originally we could've avoided this comedy of errors.
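Darius's proposal of a return-values package that only values and call-with-values understand can be sketched with a record type (using the modern define-record-type syntax, which wasn't yet standard in 1995; all names here are invented):

```scheme
;; a disjoint type holding a packaged set of return values
(define-record-type <values-box>
  (make-values-box vals)
  values-box?
  (vals values-box-vals))

;; a lone value passes through unboxed, so ordinary code is unaffected
(define (my-values . vals)
  (if (and (pair? vals) (null? (cdr vals)))
      (car vals)
      (make-values-box vals)))

(define (my-call-with-values thunk receiver)
  (let ((v (thunk)))
    (if (values-box? v)
        (apply receiver (values-box-vals v))
        (receiver v))))

(my-call-with-values (lambda () (my-values 1 2 3)) +)   ; => 6
;; (car (my-values 1 2)) is now a type error, not a silent car of a list
```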

Matthias Blume

Jan 23, 1995, 2:03:14 PM
In article <3g004e$7...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

The other problem I see with `values' and `call-with-values' is that
it asymmetrically duplicates functionality.  No matter how it is
implemented, the result of `values' can be understood as having a
special `tuple' type. `values' is the constructor for objects of this
type, and `call-with-values' is the corresponding deconstructor: it
takes the tuple apart and spreads its contents over the argument list
of a given function.

But we already *have* this pair of operators: they are called `list'
and `apply'. The only remaining argument in favor of `values' seems
to be safety -- something that doesn't sound overly convincing given
the rest of the language. The difference in usefulness between a
scheme where `wrong-number-of-values' errors are caught at the time
those values arrive at the receiving function or at the time this
function actually tries to access them seems to be marginal at most.

SML really got this part right: *all* functions take *one* argument
and produce *one* result. If you want to pass more than that in
either direction you must use a structured type like a tuple or a
record.  It is just as convenient to use as ordinary function-call
syntax (SML's pattern matching helps quite a bit here, though), but
the compiler must do some work to get it efficient.  But so what?
It's the compiler doing the work, not me...

I would be surprised if Andrew Wright's `soft typing' couldn't point
out occurrences of `values' and `call-with-values' disguised as `list'
and `apply'.  A compiler could easily optimize this kind of construct
and even issue `wrong number of arguments' errors at compile-time.
And the amount of effort appears to be precisely the same as if we
used the special `values' instead.

If you want early error checking, then, please, make sure it can be
done at compile time. The only reason for waiting until runtime seems
to be taking advantage of non-strictness, and there you really want to
delay errors as much as possible.

   [ ... ] That wouldn't catch
   all errors immediately, but at least it would keep you from doing a
   vector-ref or a car on the result returned by values.

In what sense is a `wrong number of arguments' type of error
preferable over `wrong argument to CAR'? I really fail to see the
point here.

   I was wrong -- we both misunderstood.

Yes, I figured that out myself after I sent my last article.

   I thought you were talking about the
   implicit continuation the first time.  In turn, I said "bar needs to know
   that baz returns multiple values" where it would be more precise to say
   "bar needs to know that baz follows the multiple-value return convention of
   taking an explicit continuation".  My point is, if different parts of your
   code follow different calling conventions, the pieces no longer go together
   without fuss.  Procedures not directly involved with returning multiple
   values must act as if they were.

That is certainly true.  What I was saying was that multi-argument
implicit continuations don't buy you much, because they can easily be
simulated by making them explicit.

However, by the same token we don't need implicit continuations and
call/cc, because by passing explicit continuations *everywhere* we
could simulate that, too.  The fine point here is that IMO multi-arg
continuations are rare, and they don't blend in with the rest of the
language very well, so they don't get used very often.  (Normal
function calls always create one-arg continuations; we must use a
special construct to create a multi-arg continuation.  Therefore,
one-arg continuations are ubiquitous, even if we never invoke call/cc
to reify them.)

We can afford passing explicit continuations in places where this
appears to be unavoidable.  (VSCM's first compiler pass is written this
way mainly because I felt I needed multiple return values.  After the
first two functions written this way it begins to become second
nature.)  Moreover, the use of implicit multi-arg continuations is so
awkward in itself that I don't see much benefit in programmer's
convenience either.

--
-Matthias

D. Richter-susers

Jan 24, 1995, 1:44:36 AM
In article <BLUME.95J...@dynamic.cs.princeton.edu> bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:
|>In article <3fqdq2$9...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:
|>> Call-with-values has a couple advantages that I haven't seen mentioned yet.
|>> It's safer than returning a vector or list, because the language
|>> implementation can check that the right number of values were returned, and
|>> raise an error immediately.
|>Wouldn't -- (lower your voice to a whisper) -- *typechecking* do the
|>same thing for you, only even better and earlier (at compile time)?
|>--- Ok, ok, you don't need to yell at me! :)

Matthias, I'm surprised you'd say this. It's safer because it is typechecked.
If you want compile-time typechecking instead of runtime typechecking, say so.
Better yet, look at extending the soft-typing systems for R4RS scheme to
handle multiple return values.

David

Darius Bacon

Jan 24, 1995, 7:08:31 AM
bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:

Thanks for the thoughtful reply. I agree, mostly.

>The difference in usefulness between a
>scheme where `wrong-number-of-values' errors are caught at the time
>those values arrive at the receiving function or at the time this
>function actually tries to access them seems to be marginal at most.

More than marginal, IMO; it's common for the receiving procedure to pass
the value on or store it in a data structure, making the damage show up far
from its source. My intuition here may be suspect, since I get paid to
program in C and even worse languages rather than Scheme.

> [ ... ] That wouldn't catch
> all errors immediately, but at least it would keep you from doing a
> vector-ref or a car on the result returned by values.
>
>In what sense is a `wrong number of arguments' type of error
>preferable over `wrong argument to CAR'? I really fail to see the
>point here.

If a values-record happens to be represented by a pair, then applying CAR
to it is perfectly legal.
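The point in concrete form, using the list-based shim from earlier in the thread (renamed here to avoid shadowing the built-in):

```scheme
;; list-based values: the "values package" is just an ordinary list
(define (list-values . vals) vals)

;; CAR happily takes the package apart -- no error is signalled,
;; even though the caller almost certainly didn't mean to do this
(car (list-values 1 2 3))   ; => 1
```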

>If you want early error checking, then, please, make sure it can be
>done at compile time. The only reason for waiting until runtime seems
>to be taking advantage of non-strictness, and there you really want to
>delay errors as much as possible.

I don't understand that last sentence. If you'd care to post an expansion
or a reference, I'd be interested.

>Moreover, the use of implicit multi-arg continuations is so
>awkward in itself, that I don't see much benefits in programmer's
>convenience either.

One reason I decided they're worth considering was Olin Shivers's
awk-in-Scheme package, posted here recently. I don't remember exactly,
but I think it lets you use VALUES to specify a process with N variables
conveniently, without having to bind and call a continuation. A common
case is N=1. (Of course, quite possibly there's another way to do it
that's just as good.) So, I wondered, might the single/multiple value
distinction have inclined us to miss a lot of useful abstractions? -- since
the more general case is also more awkward. Probably not, I guess -- if
there were any rich bonanza there, the Common Lispers would have unearthed
it. Or the ML community, though they don't enjoy Lisp's wild freedom with
regard to syntactic extension. Oh, well.

Matthias Blume

Jan 24, 1995, 11:41:28 AM
In article <3g2qjv$g...@nkosi.well.com> dje...@well.sf.ca.us (Darius Bacon) writes:

>The difference in usefulness between a
>scheme where `wrong-number-of-values' errors are caught at the time
>those values arrive at the receiving function or at the time this
>function actually tries to access them seems to be marginal at most.

   More than marginal, IMO; it's common for the receiving procedure to pass
   the value on or store it in a data structure, making the damage show up far
   from its source.

Huh? How can this happen? There are only two things you can possibly
do with the result of `values': you can pass it on to your caller by
having it in tail position or you can take it apart. Passing it on
will never trigger an error, neither in the `values' scheme nor in the
`list' scheme. Taking it apart will *always* trigger the error: both
in the `call-with-values' scheme and in the `apply' scheme.

BTW, even though values/call-with-values is so similar to list/apply
it has one major disadvantage: the thing constructed by `values' is
only second-class. This is clearly against the spirit of Scheme, and
on this basis alone I find it despicable. What I mean is this:

With list/apply I can say:

(define (g ...)
  (let ((r (f ...)))
    <do something else>
    r))

where `f' can return `multiple' values by passing them as a list.  `g'
will return just the same set of multiple values, and it doesn't even
need to know about this fact.  But with `values' this wouldn't be
possible, because the continuation of the call to `f' only takes one
argument.  In this case `g' needs to be aware of the fact that `f'
returns multiple values and explicitly turn them into a temporary
data structure in order to deal with this fact.  If `f' is a parameter
to `g', then you arrive at the same kind of asymmetry you complained
about earlier in the context of my suggestion to use explicit
continuations.  Here you would have to write:

(define (g ...)
  (call-with-values
    (lambda () (f ...))
    (lambda r
      <do something else>
      (apply values r))))

which clearly takes away any advantages there might have been to using
`values' in the first place while also obfuscating the code.
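A concrete instance of that wrapper (with a hypothetical f returning two values; a sketch):

```scheme
(define (f) (values 1 2))

;; g must capture f's values, do its extra work, then re-emit them
(define (g)
  (call-with-values
    f
    (lambda r
      ;; <do something else> would go here
      (apply values r))))

(call-with-values g +)   ; => 3
```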

`values' is clearly a symptom of `feature cancer', because its
rationale is the opportunity for some optimization.  On top of that it
is un-schemely, the merits of the optimization it might enable are
questionable at best, and if they are worth something then they can
easily be achieved without a special new language feature.

>In what sense is a `wrong number of arguments' type of error
>preferable over `wrong argument to CAR'? I really fail to see the
>point here.

   If a values-record happens to be represented by a pair, then applying CAR
   to it is perfectly legal.

But if there aren't enough values in the list then CAR will be applied
to ().

>If you want early error checking, then, please, make sure it can be
>done at compile time. The only reason for waiting until runtime seems
>to be taking advantage of non-strictness, and there you really want to
>delay errors as much as possible.

   I don't understand that last sentence.  If you'd care to post an expansion
   or a reference, I'd be interested.

`Non-strictness' refers to languages where a non-terminating or
erroneous argument doesn't necessarily make the function call fail.
Strict languages correspond to `applicative order evaluation' in
lambda calculus, which is how most people intuitively understand
function calls, but which also fails to compute certain normal forms
even if they exist.  Lazy languages use `normal order evaluation' and
compute the normal form of an expression whenever it exists.  To do
this they inherently take advantage of non-strictness (this is what it
is all about) by avoiding computations which might cause an error
unless they turn out to be absolutely necessary.

Most `strict' languages must include some non-strict special forms in
order to make the language usable. For example, the expression

(if (zero? y) 0 (/ x y))

takes advantage of the non-strictness of `if'.  If `if' were
strict, it would evaluate (/ x y) regardless of the outcome of the
(zero? y) test and trigger an error in the case of `y' being zero.

What I was saying was that delaying errors is most useful if you want
to take advantage of non-strictness. In this context you want errors
to happen AS LATE AS POSSIBLE, because this way they might actually be
avoided altogether.

   Or the ML community, though they don't enjoy Lisp's wild freedom with
   regard to syntactic extension.  Oh, well.

That's my only complaint about ML, too. Not enough consideration for
syntax.

--
-Matthias

Alan Bawden

Jan 25, 1995, 3:43:55 AM, to sch...@mc.lcs.mit.edu
Date: Tue, 24 Jan 1995 16:41:28 GMT
From: Matthias Blume <bl...@dynamic.cs.princeton.edu>
...

   With list/apply I can say:

   (define (g ...)
     (let ((r (f ...)))
       <do something else>
       r))

   where `f' can return `multiple' values by passing them as a list.  `g'
   will return just the same set of multiple values, and it doesn't even
   need to know about this fact.  But with `values' this wouldn't be
   possible, because the continuation of the call to `f' only takes one
   argument....

I stopped reading here. And I wasn't even paying very close attention
before I got here. These arguments so rapidly degenerate into spaghetti
that only the participants can unknot -- I lost track of this one days ago.
I'm not disagreeing with Matthias by what I'm about to write here.  (In
truth I suspect I do disagree with his position -- but I haven't been
following the argument with enough attention to want to get involved.) I'm
simply reacting to a single phrase in the quoted text above:

"the continuation of the call to `f' only takes one argument"

I hear this a lot when people are discussing multiple values. But I don't
think this assertion is justified. Consider the following simple, but
essentially complete, Scheme evaluator:

(define (evaluate expr env cont)
  (cond ((symbol? expr)
         (cont (lookup expr env)))              ; [A]
        ((not (pair? expr))
         (cont expr))                           ; [B]
        ((eq? (car expr) 'quote)
         (cont (cadr expr)))                    ; [C]
        ((eq? (car expr) 'lambda)
         (cont                                  ; [D]
          (lambda (cont args)
            (evaluate (caddr expr) (bind (cadr expr) args env) cont))))
        ((eq? (car expr) 'begin)
         (let loop ((exprs (cdr expr)))
           (evaluate (car exprs) env
                     (if (null? (cdr exprs))
                         cont
                         (lambda (val)          ; [1]
                           (loop (cdr exprs)))))))
        ((eq? (car expr) 'if)
         (evaluate (cadr expr) env
                   (lambda (val)                ; [2]
                     (evaluate ((if val caddr cadddr) expr) env cont))))
        (else
         (let loop ((exprs expr)
                    (vals '()))
           (if (null? exprs)
               (let ((vals (reverse vals)))
                 ((car vals) cont (cdr vals)))
               (evaluate (car exprs) env
                         (lambda (val)          ; [3]
                           (loop (cdr exprs) (cons val vals)))))))))

(The implementations of `lookup' and `bind' are left to the reader.)

In particular note that I have included every place where a continuation is
ever created -- these are the three lines [1], [2] and [3]. There might be
others if we start adding new features to the language (such as
`call-with-values'), but I'm not going to consider that possibility because
I'm only interested in reasoning about the basic Scheme language. I've
also marked some lines where continuations are invoked ([A], [B], [C],
etc.). There probably -are- other places where continuations are invoked,
in particular I haven't shown how primitive procedures (`cons', `car',
`eq?', `+', etc.) return values, but they all look pretty much the same:
`(cont <expression>)'.
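For instance, a primitive in the style of this interpreter could be
written as follows (a sketch; `prim-car' is an invented name):

```scheme
; A primitive takes the continuation and the list of evaluated
; arguments, and returns by invoking the continuation:
(define prim-car
  (lambda (cont args)
    (cont (car (car args)))))   ; the usual `(cont <expression>)' shape

; Installed in the initial environment, evaluating (car '(1 2))
; ends up calling (prim-car k '((1 2))), which invokes (k 1).
```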

(I'm ignoring `set!', but its inclusion doesn't significantly change the
argument that follows.)

OK, so now think about the assertion:

"the continuation of the call ... only takes one argument"

How could I disagree with that? After all, I just showed you an
interpreter in which this assertion is clearly true. Ah, but that isn't
the only interpreter I could have showed you! For example, consider line
[1]. I could have written that line in any one of three different ways:

(lambda (val) ; [1a]
(lambda (val . ignore) ; [1b]
(lambda ignore ; [1c]

How can you tell from the language itself which way the interpreter is
written? You can't! Since (in the unaugmented language) every place that
invokes a continuation -always- supplies exactly one argument, there is no
way to test to see if perchance some continuations are capable of taking
more. And as long as they all take at -least- one, the interpreter is free
of number-of-argument errors.

There are also two different ways to write line [2]:

(lambda (val) ; [2a]
(lambda (val . ignore) ; [2b]

and two different ways to write line [3]:

(lambda (val) ; [3a]
(lambda (val . ignore) ; [3b]

Now I've never heard anyone advocate an interpreter where line [2] and line
[3] actually differ, but I have (I think) heard five of the six other
possible combinations advocated:

1a,2a,3a -- what most people seem to assume must be the case
1b,2a,3a
1c,2a,3a
1b,2b,3b -- second most common belief among multiple value advocates
1c,2b,3b -- most common belief among multiple value advocates

(OK, not all multiple value advocates think multiple values have anything
to do with the arity of continuations. But among those who do, I think my
comments hold.)

So before we even start arguing about multiple values, we'd better decide
which kind of interpreter we all believe in. It hasn't mattered until now;
indeed, there wasn't any way to tell the difference. But we're contemplating
new language features where it -may- make a difference, so we need to be
clear about it. Sloppy language like:

"the continuation of the call ... only takes one argument"

can get us in trouble because it -assumes- a particular kind of interpreter
before we even get started.

Matthias Blume

Jan 25, 1995, 12:16:52 PM
to
In article <25Jan1995....@LCS.MIT.EDU> Al...@lcs.mit.EDU (Alan Bawden) writes:

I'm not disagreeing with Matthias by what I'm about to write here. (In
truth I suspect I do disagree with his position -- but I haven't been
following the argument with enough attention to want to get involved.) I'm
simply reacting to a single phrase in the quoted text above:

"the continuation of the call to `f' only takes one argument"

[ ... goes on by showing different interpreters with different behaviors,
in some of which my statement is true, in some of which it would be
false ... ]

So before we even start arguing about multiple values, we'd better decide
which kind of interpreter we all believe in. It hasn't mattered until now;
indeed, there wasn't any way to tell the difference. But we're contemplating
new language features where it -may- make a difference, so we need to be
clear about it. Sloppy language like:

"the continuation of the call ... only takes one argument"

can get us in trouble because it -assumes- a particular kind of interpreter
before we even get started.

I always thought that stuff like this is a matter of the language
*definition*. (While giving a `reference' interpreter might be one way
to do this, there are some things to watch out for when using
metacircular evaluators.)

Anyway, this is not the point. We do have (sort of) a definition of
what is supposed to happen. It was written down in the famous `the
june 1992 meeting' paper. R5RS (as far as I know) doesn't exist, and
IEEE Scheme doesn't mention `values' either. So let's look at what
the `june' paper says... My copy has it on page 3 (at the very top):

``... Except for continuations created by the call-with-values
procedure, all continuations TAKE EXACTLY ONE value, as now; the
effect of passing no value or more than one value to continuations
that were not created by call-with-values is unspecified (as indeed
it is unspecified now). ...'' (emphasis mine)

This sounds fairly clear to me, so all this discussion of different
interpreters is moot. If I want to write portable Scheme code (under
the new rules, against which I tried to argue), I have to use

(define (g x)
  (call-with-values
    (lambda () (f x))
    (lambda r
      <do something else>
      (apply values r))))

instead of the straightforward

(define (g x)
  (let ((r (f x)))
    <do something else>
    r))

unless I know exactly how many values `f' is going to return.

--
-Matthias

William D Clinger

Jan 25, 1995, 2:10:38 PM
to
"J. Michael Ashley" <jas...@cs.indiana.edu> writes:
My paper with Kent Dybvig in the 1995 ACM Conference on Lisp and
Functional Programming discusses some of the issues raised in this
thread. In particular, there is a comparison of our implementation
against both consing and converting to CPS.

He means the 1994 ACM LFP.

bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:
Has anybody measured the benefits from using call-with-values?

One of the results reported by Ashley and Dybvig is that three passes
of the Chez Scheme compiler ran 20% faster when rewritten to use
call-with-values, implemented as described in their paper, instead of
returning a vector.

This is a fairly impressive speedup for such a simple change to such
a large program.

To measure small differences between implementations of multiple
values, you need micro-benchmarks that exercise the implementation
of multiple values while performing almost no other computation.
For a "split" micro-benchmark (see their paper for the code), Ashley
and Dybvig reported the timings shown below for Chez Scheme, which
are normalized to the time required by their implementation of
values and call-with-values.

I also ran this micro-benchmark in MacScheme, which has no built-in
support for values or call-with-values. These MacScheme timings
are normalized to the time required when cons was used to return
a pair instead of multiple values, which I normalized to the timing
reported by Ashley and Dybvig for that same program on a different
machine running a different implementation of Scheme.

The "unoptimized, no run-time support" line is for a portable
implementation of values and call-with-values written in
R4RS/IEEE/ANSI Scheme. The "optimized, no run-time support" line
is for that same implementation after values and call-with-values
have been made into integrable procedures that are recognized by
the compiler.

                                        Chez Scheme   MacScheme
call-with-values
  optimized, run-time support              1.00          ----
  unoptimized, run-time support            1.94          ----
  optimized, no run-time support           ----          2.07
  unoptimized, no run-time support         ----          9.43
cps                                        1.21          2.51
cons                                       1.26          1.26
vector                                     ----          6.47
byref                                      1.20          1.54
reverse                                    1.53          1.30

bl...@dynamic.cs.princeton.edu (Matthias Blume) also writes:
I go with Brian here: an explicitly constructed vector (or cons cell)
of values does the job nearly as well as `values'. And if it really
becomes a bottleneck (but *only* then!) then convert it to CPS!

As can be inferred from the timings reported above, multiple values
are likely to be faster than using vectors or CPS in any implementation
that provides even modest support for them. If multiple values become
a bottleneck because an implementation doesn't support them, then
(but *only* then!) it is reasonable to resort to CPS or vectors.
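To make the comparison concrete, here is a small sketch (mine, not from
Clinger's post) of the same two-value return written in three of the
styles being measured:

```scheme
; 1. Multiple values: no heap allocation in an implementation
;    with even modest support.
(define (divmod-values n d)
  (values (quotient n d) (remainder n d)))

(call-with-values
  (lambda () (divmod-values 17 5))
  (lambda (q r) (+ q r)))                    ; => 5

; 2. CPS: the receiver is passed explicitly.
(define (divmod-cps n d k)
  (k (quotient n d) (remainder n d)))

(divmod-cps 17 5 (lambda (q r) (+ q r)))     ; => 5

; 3. Consing: a pair is allocated on every return.
(define (divmod-cons n d)
  (cons (quotient n d) (remainder n d)))

(let ((qr (divmod-cons 17 5)))
  (+ (car qr) (cdr qr)))                     ; => 5
```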

bl...@dynamic.cs.princeton.edu (Matthias Blume) continues:
In addition, I would venture to say that in places where the compiler can
benefit from `values' and `call-with-values' it wouldn't be too hard
to find corresponding `vector' and `vector-ref' calls (and to optimize
them away) either.

....the result of `values' can be understood as having a
special `tuple' type. `values' is the constructor for objects of this
type, and `call-with-values' is the corresponding deconstructor....

....I would be surprised if Andrew Wright's `soft typing' couldn't point
out occurrences of `values' and `call-with-values' disguised as `list'
and `apply'. A compiler could easily optimize this kind of construct....

Not so. Lists and vectors are first class. Furthermore they have
individual personalities that can be tested using eqv? and side
effects. A global analysis would therefore be required to eliminate
calls to vector and vector-ref, or list and apply, which means the
optimization would fail when lists or vectors are passed across the
boundary of a compilation unit.

Multiple values are not a data structure at all, much less first
class. There are only two things an implementor must do to make
multiple values fast, even when passed between compilation units:

1. Make values and call-with-values into integrable procedures.
2. Add just a little bit of run-time support.

Implementors that want to detect the error of returning multiple
values to a single-value continuation, or want the best possible
efficiency for multiple values, should read the paper by Ashley
and Dybvig.

Incidentally, the error of returning multiple values to a
single-value continuation would be easier to detect if it were
also an error to pass the wrong number of values to a statement
continuation. A technique not listed by Ashley and Dybvig is to
make statement continuations look different at run-time from
expression continuations.

bl...@dynamic.cs.princeton.edu (Matthias Blume) also writes:
[As an aside: even if the compiler sees an explicitly written down
`call-with-values' -- how does it know that nobody re-defines this
identifier later on? Back to the old module-system debate... :)]

By using a crock such as integrable procedures, which most Scheme
compilers are using already because we don't have a module system. :(

bl...@dynamic.cs.princeton.edu (Matthias Blume) also writes:
Given that Scheme's current design makes it pretty hard to optimize
even (usually) trivial things for speed (such as arithmetic because of
the very general number type tower) this kind of optimization is

hardly worth the paper it is proposed on...

Inexact arithmetic is slow in most implementations of Scheme
because a straightforward and general implementation of structural
polymorphism requires all values to be represented using the same
number of bits, which leads to excessive boxing of flonums. The
tower of numeric types has little to do with it. Anyone who doubts
this is invited to benchmark Standard ML against Scheme.

It takes two cycles to add small exact integers in Larceny on a SPARC.
Better optimization would reduce this to one cycle for most additions.
Don't blame Scheme if your workstation is so much slower than this
that optimization isn't worthwhile. :)

bl...@dynamic.cs.princeton.edu (Matthias Blume) also writes:
SML really got this part right: *all* functions take *one* argument

and produce *one* result....

With the addition of values and call-with-values to Scheme, all
Scheme procedures take zero or more arguments and return zero or
more results.

Since tuples are first class in Standard ML, my remarks above
might seem to imply that the compiler must perform a global
optimization to avoid boxing tuples that are just being used to
pass arguments or to return values. In fact it doesn't, because
Standard ML doesn't have any destructive operations or eq?
predicate that is defined on tuples. Thus it is ok for the
compiler to pass an unboxed tuple in registers, boxing and
unboxing it dynamically as many times as necessary. Since
tuples passed as arguments or returned as results will seldom
be boxed at all, this works well.

Tuples would be a legitimate extension to Scheme. If you add
them without defining any side effects on tuples, and you make
eq? and eqv? always return #f when one or both of their
arguments is a tuple, then you could use tuples to pass
arguments and to return results exactly as in Standard ML.
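Ignoring the eqv? behaviour, which cannot be expressed portably, such an
extension might look like this sketch (make-tuple, tuple?, and tuple-ref
are invented names, not part of any Scheme standard):

```scheme
(define tuple-tag (list 'tuple))   ; a fresh, unforgeable tag

(define (make-tuple . elements)
  (cons tuple-tag elements))

(define (tuple? x)
  (and (pair? x) (eq? (car x) tuple-tag)))

(define (tuple-ref t i)
  (list-ref (cdr t) i))

; With no side effects defined on tuples, a compiler would be free
; to keep them unboxed in registers, as in Standard ML.
(tuple-ref (make-tuple 'a 'b 'c) 1)   ; => b
```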

On the other hand, you would have a problem with efficiency
unless things like

(let ((x (make-tuple 1 2 3)))
  (eqv? x x))

evaluate to #f, which runs counter to a prejudice that is
widely shared in the Scheme community. Note that I myself
do not share that prejudice, and have argued against it with
very limited success.

William Clinger

Barry Margolin

Jan 25, 1995, 4:20:51 PM
to
In article <25Jan1995....@LCS.MIT.EDU> Al...@lcs.mit.EDU (Alan Bawden) writes:
> "the continuation of the call to `f' only takes one argument"
>
>I hear this a lot when people are discussing multiple values. But I don't
>think this assertion is justified.

[He then goes on to illustrate possible implementations where normal
continuations accept an arbitrary number of arguments.]

It's true, the Scheme definition allows such implementations. But a
program can't tell whether it's being run in such an implementation, so it
can't assume that it's safe to call it that way. The Scheme spec doesn't
say what happens when you call a continuation that wasn't created by
CALL-WITH-VALUES with other than one argument. So, for all intents and
purposes, such continuations "only take one argument." CALL-WITH-VALUES is
the only portable way to create a continuation that is known to accept
other than one argument.

To further justify this, take a look at the specification of
CALL-WITH-CURRENT-CONTINUATION in R4RS. It says, "The escape procedure is
a Scheme procedure of one argument...." Presumably when this paragraph is
updated to account for CALL-WITH-VALUES it will say something like,
"unless the current continuation was created by CALL-WITH-VALUES, the
escape procedure is a Scheme procedure of one argument...."

The language in the description of the multiple value mechanism allows for
implementation freedom in handling a mismatch in the number of values
passed to a continuation. So, if an implementor wished to emulate Common
Lisp's defaulting and dropping of values when there's a mismatch, it can.
But a portable Scheme program can't depend on this. According to the
current spec, the following is undefined:

(let ((v (values 1 2 3)))
  #f)
--

Barry Margolin
BBN Internet Services Corp.
bar...@near.net

Matthias Blume

Jan 25, 1995, 4:54:00 PM
to
In article <3g67ne$e...@narnia.ccs.neu.edu> wi...@ccs.neu.edu (William D Clinger) writes:

bl...@dynamic.cs.princeton.edu (Matthias Blume) writes:
Has anybody measured the benefits from using call-with-values?

One of the results reported by Ashley and Dybvig is that three passes
of the Chez Scheme compiler ran 20% faster when rewritten to use
call-with-values, implemented as described in their paper, instead of
returning a vector.

This is a fairly impressive speedup for such a simple change to such
a large program.

Hmm, but it is against the spirit of Scheme. If we pile feature on
top of feature just to gain a few percent here and there, then we will
soon end up with something like Common Lisp, PL/1, C, C++, or worse.
Why not add some `(asm ...)' directive for inline assembly
language? This would certainly beat the hell out of every Scheme
compiler... (Ok, ok, I exaggerate...)

If Ashley and Dybvig were talking about a special kind of analysis
that makes multiple values (as vectors/lists/...) fast, or if they
pointed out the things that need to be removed from the language,
without sacrificing expressiveness, to enable such optimizations --
well, *that* would be an interesting result.

Even if values/call-with-values takes zero time, their numbers seem to
indicate that the original program spent 20% of its time returning
from functions having multiple return values! This sounds fairly
unbelievable to me. Or did they rewrite those passes to take
advantage of values/call-with-values? (In this case the numbers don't
mean anything, because we are not comparing the same program anymore.)

bl...@dynamic.cs.princeton.edu (Matthias Blume) continues: In
addition, I would venture to say that in places where the compiler can
benefit from `values' and `call-with-values' it wouldn't be too hard
to find corresponding `vector' and `vector-ref' calls (and to optimize
them away) either.

....the result of `values' can be understood as having a
special `tuple' type. `values' is the constructor for objects of this
type, and `call-with-values' is the corresponding deconstructor....

....I would be surprised if Andrew Wright's `soft typing' couldn't point
out occurences of `values' and `call-with-values' disguised as `list'
and `apply'. A compiler could easily optimize this kind of contruct....

Not so. Lists and vectors are first class.

Exactly. It is against the spirit of Scheme -- as I understand it,
and as far as the language still has one -- to have a procedure
(`values') to construct second-class objects. I still don't
understand why first-class-ness prevents us from taking advantage of
Wright's soft typing or similar kinds of analyses.

Furthermore they have
individual personalities that can be tested using eqv? and side
effects.

If you can prove that they never participate in such tests...

A global analysis would therefore be required to eliminate
calls to vector and vector-ref, or list and apply, which means the
optimization would fail when lists or vectors are passed across the
boundary of a compilation unit.

Correct. But so what? What is a compilation unit in Scheme, anyway?
This doesn't make sense to me at all. We are dealing with a language
that doesn't even have a notion of `compilation' or `compilation
unit', no sound module system, and no sound definition of the behavior
of its interactive top-level -- and yet we are talking/arguing about
cross-module optimizations?!

Multiple values are not a data structure at all, much less first
class.

But that's a pity! And what a pity!

There are only two things an implementor must do to make
multiple values fast, even when passed between compilation units:

1. Make values and call-with-values into integrable procedures.

Something the language doesn't give you a portable handle on.

bl...@dynamic.cs.princeton.edu (Matthias Blume) also writes:
Given that Scheme's current design makes it pretty hard to optimize
even (usually) trivial things for speed (such as arithmetic because of
the very general number type tower) this kind of optimization is
hardly worth the paper it is proposed on...

In fact it doesn't, because
Standard ML doesn't have any destructive operations or eq?
predicate that is defined on tuples.

I would like to see eq? go, and set-car!/set-cdr! as well... Things
would be so much simpler. But hey, what's wrong with global analysis?
Your `tuple' type sounds like a good alternative, too...

The bottom line is that I am still unconvinced about the merits of
`values'. Their second-class-ness is utterly despicable, and there
seems to be no reason why they should be faster than explicit CPS.

I do not understand why we suddenly give up on the goal of conceptual
simplicity, and the goal of power through orthogonality. In my eyes,
`values' is a performance hack, and as such it doesn't belong in a
language like Scheme.

There are many other regrettable decisions that have been made in the
past -- decisions which not only lead to difficulties for compilers,
but also to difficulties with the cleanliness of the language as a
whole. They should be cleaned up first. (»... but by removing the
weaknesses and restrictions that made additional features appear
necessary.«) `values' is a blatant symptom of feature cancer. It is
easy to define in other terms:

(define values list)
(define (call-with-values thunk receiver)
  (apply receiver (thunk)))

The properties of this pair of functions contain those of the proposed
values/call-with-values, granted -- they are not that easy to make
fast, but I don't count this as an argument -- they fit right in with
the rest of the language, communicate via first-class objects and are
therefore more convenient to use, easy to combine, etc. etc.

Maybe it is time to ask ourselves: »Why do we still need Scheme at
all?« Performance is clearly not one of the reasons, and language
design becomes less and less of a reason. So what is left? Probably,
it is time to get out of here...

I'm sorry that I came to having to paint such a gloomy picture of the
state of my once-favorite language. But looking at what's going on
(Guile/`R5RS'/values/eval/dynamic-wind/no module system/...) I can't
help it anymore. If there only was a good alternative... :(

--
-Matthias

William D Clinger

Jan 26, 1995, 11:58:08 AM
to
Al...@lcs.mit.EDU (Alan Bawden) writes:
So before we even start arguing about multiple values, we'd better decide
which kind of interpreter we all believe in.

By a happy accident, the semantics that has been agreed upon for multiple
values matches that of the denotational semantics in R4RS, read as an
interpreter. In that semantics the 'single' help function, which checks
for the wrong number of return values, is used to create the continuation
for the test part of a conditional, for the right hand side of an assignment,
and for a subexpression of a call, but the 'single' help function is not
used to create the continuation for a command. Using Alan's notation,
the agreed-upon semantics and the R4RS denotational semantics both imply

1c,2a,3a

Note also, however, that while it is an error to return the wrong number
of values to certain continuations, the multiple values proposal does not
require those errors to be signalled. Among other things, this makes it
possible to write values and call-with-values entirely in IEEE Scheme.
Using Alan's notation, we can say that implementations with the following
more permissive behaviors are permitted, but are discouraged because they
fail to detect non-portable uses of multiple values:

1c,2a,3a
1c,2a,3b
1c,2b,3a
1c,2b,3b -- most common belief among multiple value advocates
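For example, a portable version along the following lines is possible
precisely because the errors need not be signalled (this sketch is mine,
not from the paper, and does no error checking):

```scheme
(define multiple-values-tag (list 'values))   ; unforgeable tag

(define (values . vals)
  (if (and (pair? vals) (null? (cdr vals)))
      (car vals)                         ; one value: returned as-is
      (cons multiple-values-tag vals)))  ; zero or several: tagged list

(define (call-with-values thunk receiver)
  (let ((v (thunk)))
    (if (and (pair? v) (eq? (car v) multiple-values-tag))
        (apply receiver (cdr v))
        (receiver v))))
```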

dje...@well.sf.ca.us (Darius Bacon) would like for those errors to be
signalled, but is worried about the cost of checking for them:

On the other hand, there appears to be a good reason not to define
call-with-values this way: it could slow down even code that doesn't use
it. Any time you're calling an unknown (implicit) continuation -- one
that you don't have any static information about -- you'd have to check

It is true that call-with-values could be implemented in such a way as
to affect the performance of code that doesn't use it. This might
even be a reasonable thing to do in an interpreted implementation, but
not in a medium or high-performance implementation. Fortunately, it is
possible to implement call-with-values in such a way that

1. Returning a single value is just as fast as ever.
and 2. Returning the wrong number of values to a continuation signals
an error.
and 3. Returning multiple values is as fast as in Standard ML.

bl...@dynamic.cs.princeton.edu (Matthias Blume), trying to deny that
multiple values offer the potential for better error checking than
returning a list, writes:

Huh? How can this happen? There are only two things you can possibly
do with the result of `values': you can pass it on to your caller by
having it in tail position or you can take it apart. Passing it on
will never trigger an error, neither in the `values' scheme nor in the
`list' scheme....

This is not true. The buggy-mv program below contains an error that
implementations are encouraged (but not required) to detect and report.
The buggy-list program has the same conceptual error, but no
IEEE-conforming implementation is allowed to report it.

William Clinger


(define (buggy-mv ints)

  ; Given a list of exact integers greater than 1,
  ; this program is supposed to print the sublist of prime integers.
  ; But it contains a bug.

  (define (split p? x)

    ; Given a predicate p? and a list x, splits x into the sublist
    ; that satisfies p? and the sublist that doesn't.
    ; Returns both lists as multiple values.

    (cond ((null? x) (values '() '()))
          ((p? (car x))
           (call-with-values (lambda () (split p? (cdr x)))
                             (lambda (satisfies dont)
                               (values (cons (car x) satisfies) dont))))
          (else
           (call-with-values (lambda () (split p? (cdr x)))
                             (lambda (satisfies dont)
                               (values satisfies (cons (car x) dont)))))))

  (display "Primes: ")
  (write (split prime? ints))
  (newline))

(define (buggy-list ints)

  ; Given a list of exact integers greater than 1,
  ; this program is supposed to print the sublist of prime integers.
  ; But it contains a bug.

  (define (split p? x)

    ; Given a predicate p? and a list x, splits x into the sublist
    ; that satisfies p? and the sublist that doesn't.
    ; Returns a list of both lists.

    (cond ((null? x) (list '() '()))
          ((p? (car x))
           (let* ((t (split p? (cdr x)))
                  (satisfies (car t))
                  (dont (cadr t)))
             (list (cons (car x) satisfies) dont)))
          (else
           (let* ((t (split p? (cdr x)))
                  (satisfies (car t))
                  (dont (cadr t)))
             (list satisfies (cons (car x) dont))))))

  (display "Primes: ")
  (write (split prime? ints))
  (newline))

(define (prime? n)
  (define (loop i)
    (cond ((> (* i i) n) #t)
          ((zero? (remainder n i)) #f)
          (else (loop (+ i 1)))))
  (loop 2))

Brian Harvey

Jan 26, 1995, 12:07:06 PM
to
bl...@atomic.cs.princeton.edu (Matthias Blume) writes:
>I do not understand why we suddenly give up on the goal of conceptual
>simplicity, and the goal of power through orthogonality.

For what it's worth, I agree completely. (If I have seen almost as far
as others, it's because I have ridden in the pockets of giants, so I'm
diffident about having an opinion about this at all, but I *do* have one
just the same!) But really my reason for posting is to reply to this:

>Maybe it is time to ask ourselves: "Why do we still need Scheme at
>all?" Performance is clearly not one of the reasons, and language
>design becomes less and less of a reason. So what is left? Probably,
>it is time to get out of here...

Although I've felt that way myself occasionally, I think it's worth
restating what everyone, including Matthias, already knows. The technical
report that introduced Scheme in 1975 begins this way:

Inspired by ACTORS, we have implemented an interpreter
for a LISP-like language, SCHEME, based on the lambda calculus,
but extended for side effects, multiprocessing, and process
synchronization. **THE PURPOSE OF THIS IMPLEMENTATION IS
TUTORIAL.**

-- "SCHEME: An Interpreter for Extended Lambda Calculus"
Gerald Jay Sussman and Guy Lewis Steele Jr.
AI Memo No. 349, December 1975
Massachusetts Institute of Technology
Artificial Intelligence Laboratory
[emphasis added]

When I'm trying to get practical work done on a computer, as often as not it
turns out that I can get the job done soonest in awk. But when I'm trying to
help my students understand some algorithm, Scheme is unbeatable.

From this point of view, it's a strength, not a weakness, that Scheme supports
sequential as well as functional programming techniques. It's a strength that
the language can be implemented in a manner transparent to freshmen. It's
certainly a strength that the language has hardly any features, not only
because it makes Scheme easy to learn, but also because the ideas one learns
using Scheme can be carried over to any other language, even if one later finds
oneself programming in C--.

So, please, for the sake of us teachers, let's continue to have a Scheme for
which people can post an interpreter in a couple of screenfuls!

Jeffrey Mark Siskind

Jan 26, 1995, 2:51:03 PM
to
First of all I would like to say that I agree completely with Matthias on this
issue. But that is not the purpose of this message. This message is also not
about multiple values. It is about the role(s) that Scheme currently does play
and might play in the future, and what role standardization should play.

Standards and innovation are mutually exclusive. Neither is good or bad. They
just are important at different stages in the life cycle of a technology.
Standardization is important when
a. the technology is sufficiently mature that there is reasonable general
consensus on what the standard should be and
b. there is a sufficiently large community of users who would rely upon the
standard for interoperability.
Thus standards for line voltage and frequency, nut and bolt dimensions, and
widely used programming languages like C are appropriate. Standardizing Scheme
is probably not.

Today Scheme serves three purposes for the most part:
a. A teaching language
b. A language used by researchers to conduct research in areas not directly
related to Scheme or programming languages
c. A language used by researchers where Scheme itself is the object of study
My guess is that members of groups (b) and (c) care very little about what is
and isn't standard. In fact, members of group (c), if they thought about it,
would probably not want Scheme standardized. And members of group (a) are not
concerned about supporting products on multiple platforms. They are happy so
long as there is a single good implementation that they can use that is
compatible with whatever teaching material that is available.

Maybe some day Scheme will have a sufficiently large community of commercial
users to warrant standardization. And maybe by then we will reach consensus on
what the standard should be. That would be nice. But it isn't necessary for
Scheme to be successful or to play an important role. I, for one, use Scheme
for all three of the above purposes. And after much analysis have come to the
opinion that no other existing language would serve any of those purposes
better.

So what role does a standard like R4RS play? We don't want to treat it like a
holy canonized document. There is really little benefit in touting one
implementation as R4RS-compliant and another as non-compliant. Rather, R4RS
should be a source of guidance and inspiration to researchers. It reflects
(albeit somewhat imperfectly) the collective wisdom and the state of the field
when it was written. Someone starting to build a new implementation or
starting to study some aspect of programming languages will do well to use
R4RS as a starting point. And then it is important to reserve the right to
deviate from that standard when one deems that necessary to further the goals
(a), (b), or (c) above. Sometimes such deviations will turn out to be `the
wrong thing'. And sometimes they will turn out to be `the right thing'. And
sometimes a community of many researchers will synthesize `the right thing'
out of many `almost but not quite right things' that were pursued by individual
researchers.

I think that if we realize this, we will be better off.

--

Jeff (home page http://www.cdf.toronto.edu/DCS/Personal/Siskind.html)

R. Kent Dybvig

unread,
Jan 26, 1995, 5:44:27 PM1/26/95
to
bl...@atomic.cs.princeton.edu (Matthias Blume) writes:

>Even if values/call-with-values takes zero time their numbers seem to
>indicate that the original program spent 20% of its time returning
>from functions having multiple return values! This sounds fairly
>unbelievable to me. Or did they rewrite those passes to take
>advantage of values/call-with-values? (In this case the numbers don't
>mean anything, because we are not comparing the same program anymore.)

Unbelievable as it may be, it was the case. We were using macros for
call-with-values and values. The macros chose between a list or vector
representation depending upon the number of values to be returned or
accepted. (The macro for call-with-values required that the consumer
be a lambda expression.) The only rewriting we did was to eliminate
the macros and use the new built-in versions of call-with-values and
values.

Since most of the computation in the concerned passes was performed in
registers, the memory operations involved in creating the lists and
vectors and taking them apart added up to a significant portion of the
total run time. And with our implementation call-with-values and
values now do take essentially zero time (for integrable calls where
the consumer is a lambda expression).

> Not so. Lists and vectors are first class.

>Exactly. It is against the spirit of Scheme -- as I understand it,
>and as far as the language still has one -- to have a procedure
>(`values') to construct second-class objects. I still don't
>understand why first-class-ness prevents us from taking advantage of
>Wright's soft typing or similar kinds of analyses.

It isn't against the spirit or reality of Scheme. The values are still
first class, we just aren't committing to any sort of container for
them. Single return values don't come in a first-class container
either, but you don't complain that they aren't first class. Arguments
don't come in first-class containers, although the syntax of lambda
allows us to put them in one (a list). And as you point out you can do
the same thing with call-with-values (and with macros, you can make
doing so very concise).
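
For instance, a sketch of such a macro (the name `with-values' is invented
here for illustration; it is not part of any proposal):

```scheme
;; A binding macro over call-with-values; the consumer's formals
;; bind the producer's values directly.
(define-syntax with-values
  (syntax-rules ()
    ((with-values (formals producer) body ...)
     (call-with-values (lambda () producer)
       (lambda formals body ...)))))

(with-values ((q r) (values (quotient 17 5) (remainder 17 5)))
  (list q r))                     ; => (3 2)
```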

>I do not understand why we suddenly give up on the goal of conceptual
>simplicity, and the goal of power through orthogonality. In my eyes,
>`values' is a performance hack and as such it doesn't belong into a
>language like Scheme.

Perhaps. I fought against multiple return values along these lines,
but I was finally convinced to agree to them based on arguments of
portability and readability of code. Multiple return values do make
some code much more readable, and there's no doubt that they can
eliminate a lot of consing if implemented properly.

> ... `values' is a blatant symptom of feature cancer. It is
>easy to define in other terms:

> (define values list)
> (define (call-with-values thunk receiver)
>   (apply receiver (thunk)))

>The properties of this pair of functions contain those of the proposed
>values/call-with-values. ...

It's not this simple, since this doesn't handle reified continuations
of arities other than one. You need something like the following (from
the LFP paper):

(define values)
(define call-with-values)
(let ((magic (cons 'multiple 'values)))
  (define magic?
    (lambda (x)
      (and (pair? x) (eq? (car x) magic))))

  (set! call-with-current-continuation
    (let ((primitive-call/cc
            call-with-current-continuation))
      (lambda (p)
        (primitive-call/cc
          (lambda (k)
            (p (lambda args
                 (k (apply values args)))))))))

  (set! values
    (lambda args
      (if (and (not (null? args)) (null? (cdr args)))
          (car args)
          (cons magic args))))

  (set! call-with-values
    (lambda (producer consumer)
      (let ((x (producer)))
        (if (magic? x)
            (apply consumer (cdr x))
            (consumer x))))))
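
A quick sanity check of these definitions (a sketch; assumes they have been
loaded over an R4RS base):

```scheme
;; Multiple and single values through the list-based implementation:
(call-with-values (lambda () (values 'a 'b)) list)   ; => (a b)
(call-with-values (lambda () (values 7)) list)       ; => (7)

;; Reified continuations now accept arities other than one:
(call-with-values
  (lambda ()
    (call-with-current-continuation
      (lambda (k) (k 1 2))))
  +)                                                 ; => 3
```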

Matthias Blume

unread,
Jan 27, 1995, 11:45:24 AM1/27/95
to
In article <1995Jan26....@news.cs.indiana.edu> "R. Kent Dybvig" <d...@cs.indiana.edu> writes:

[ ... ] We were using macros for
call-with-values and values. The macros chose between a list or vector
representation depending upon the number of values to be returned or
accepted. (The macro for call-with-values required that the consumer
be a lambda expression.) The only rewriting we did was to eliminate
the macros and use the new built-in versions of call-with-values and
values.

Since most of the computation in the concerned passes was performed in
registers, the memory operations involved in creating the lists and
vectors and taking them apart added up to a significant portion of the
total run time. And with our implementation call-with-values and
values now do take essentially zero time (for integrable calls where
the consumer is a lambda expression).

Ok, so I eat my words. 20%. Still, have you compared these numbers
to explicit CPS? That should give you the same benefits...

> Not so. Lists and vectors are first class.

>Exactly. It is against the spirit of Scheme -- as I understand it,
>and as far as the language still has one -- to have a procedure
>(`values') to construct second-class objects. I still don't
>understand why first-class-ness prevents us from taking advantage of
>Wright's soft typing or similar kinds of analyses.

It isn't against the spirit or reality of Scheme. The values are still
first class, we just aren't committing to any sort of container for
them.

No, they are not `first class'. I cannot call the `values' procedure
in most contexts where I can call any other function. If it was
first-class, then I could say:

(define x (values 1 2 3))

and then

(call-with-values (lambda () x) list) -> (1 2 3)

Single return values don't come in a first-class container
either, but you don't complain that they aren't first class.

Sorry, but that is a silly argument. If a single value needs a
container, which is first-class and therefore a single value in
itself, we would need a container for the container, and a container
for the container of the container, ...

This whole discussion shows that multiple values are not just a
straight-forward extension to single values. They are something
conceptually different. They are containers! As a matter of fact, I
would prefer (eq? (values 2) 2) to be #f, just like (eq? (list 2) 2)
is #f.

Arguments
don't come in first-class containers, although the syntax of lambda
allows us to put them in one (a list). And as you point out you can do
the same thing with call-with-values (and with macros, you can make
doing so very concise).

Actually, I would like to have all functions take just exactly one
argument. If you want to pass more, then pass a container! (Witness SML!)
This would make things symmetrical, and it would also settle your
complaint about list/apply not properly taking care of call/cc's behavior.

>I do not understand why we suddenly give up on the goal of conceptual
>simplicity, and the goal of power through orthogonality. In my eyes,
>`values' is a performance hack and as such it doesn't belong into a
>language like Scheme.

Perhaps. I fought against multiple return values along these lines,
but I was finally convinced to agree to them based on arguments of
portability and readability of code. Multiple return values do make
some code much more readable, and there's no doubt that they can
eliminate a lot of consing if implemented properly.

I maintain my assertion: with sufficient work in the area of compilers
we can avoid the same (or nearly the same) amount of consing. I do
not see the justification for the `portability' claim. And, last but
not least, `readability' -- that's a matter of taste. To me,
call-with-values is utterly unreadable, but perhaps that's just me.
And since you can achieve the same `beauty' or `ugliness' without
having a magic built-in `values' procedure this can hardly count as
an argument.

> ... `values' is a blatant symptom of feature cancer. It is
>easy to define in other terms:

> (define values list)
> (define (call-with-values thunk receiver)
>   (apply receiver (thunk)))

>The properties of this pair of functions contain those of the proposed
>values/call-with-values. ...

It's not this simple, since this doesn't handle reified continuations
of arities other than one.

I see what you mean. But, IMO, this is a good thing -- and it fits
right in with my argument that *all* procedures should take exactly
one argument. If we make the container first-class (and therefore
distinguish between the container and the things contained therein),
then returning many values means returning *one* thing: the container!
Therefore, the continuation *does* take only one argument (the
container).

Sometimes I might want to say:

(call/cc
  (lambda (k)
    (let ((r (f ...)))
      <do something else>
      (if (panic? ...) (k r))
      <do even more>
      r)))

I.e., I want to capture the return value(s) of the call to `f' and
pass them on to my continuation, and this should look the same no
matter whether my continuation takes one or many (or no) values.
(This is obviously a silly example, and it could easily be written
without call/cc, but you get the idea...)

--
-Matthias

Matthias Blume

unread,
Jan 27, 1995, 12:12:17 PM1/27/95
to
In article <3g8kb0$1...@narnia.ccs.neu.edu> wi...@ccs.neu.edu (William D Clinger) writes:

bl...@dynamic.cs.princeton.edu (Matthias Blume), trying to deny that
multiple values offer the potential for better error checking than
returning a list, writes:
Huh? How can this happen? There are only two things you can possibly
do with the result of `values': you can pass it on to your caller by
having it in tail position or you can take it apart. Passing it on
will never trigger an error, neither in the `values' scheme nor in the
`list' scheme....

This is not true. The buggy-mv program below contains an error that
implementations are encouraged (but not required) to detect and report.
The buggy-list program has the same conceptual error, but no
IEEE-conforming implementation is allowed to report it.

[ goes on posting a program, which takes the `list'-substitute for
`values' apart using `car' and `cdr' ]

I was referring to an implementation of values and call-with-values as in:

(define values list)
(define (call-with-values thunk receiver) (apply receiver (thunk)))

which makes the container carrying multiple values first-class. In
this case (and provided containers are printable) the `bug' in your
program (not explicitly ignoring the second value from `split' in the
initial call to split) is not an error at all -- because there is only
one value (the container). I strongly believe that containers ought
to be first class, and that this is the correct behavior for them
(i.e. The Right Thing). I agree that this doesn't precisely match the
specifications for values/call-with-values as given in the `June 1992'
paper. But to me this is actually a good thing.

---

With this let me conclude my tirade against the newest trends in
Scheme. I actually have other things to do, and it is unlikely, that
I can make even just a dent in what has been proposed by others.

--
-Matthias

R. Kent Dybvig

unread,
Jan 29, 1995, 10:24:30 PM1/29/95
to
bl...@atomic.cs.princeton.edu (Matthias Blume) writes:

>This whole discussion shows that multiple values are not just a
>straight-forward extension to single values. They are something
>conceptually different. They are containers! As a matter of fact, I
>would prefer (eq? (values 2) 2) to be #f, just like (eq? (list 2) 2)
>is #f.

But in fact they are not containers. There is no more a container for
multiple than for single return values, just as there is no more a
container for multiple than for single arguments. This is a point that
you seem to have missed all along, and in missing it you have convinced
yourself that multiple values are conceptually different from single
values, even though we can write a cps conversion or denotational
semantics (such as the one in the revised report) that shows that they
are no more different than are multiple arguments from single arguments.

> Arguments
> don't come in first-class containers, although the syntax of lambda
> allows us to put them in one (a list). And as you point out you can do
> the same thing with call-with-values (and with macros, you can make
> doing so very concise).

>Actually, I would like to have all functions take just exactly one
>argument. If you want to pass more, then pass a container! (Witness SML!)
>This would make things symmetrical, and it would also settle your
>complaint about list/apply not properly taking care of call/cc's behavior.

So you would like multiple return values to be packaged in a container
of some sort, and you would like the same for multiple arguments. I
don't like the idea, and in fact think it goes in exactly the wrong
direction for reasons I don't care to enumerate right now. (Some of my
reasons were given in a September 1990 LASC article Bob Hieb and I
wrote, called ``A new approach to procedures with variable arity'', in
which we proposed an alternative to the current Scheme "rest" interface
and, incidentally, a different multiple return values interface that I
still prefer.) Regardless, we aren't going to change the language to
eliminate multiple-argument procedures (or the list-based rest
interface, for that matter), and the containerless multiple return
values of the call-with-values/values interface is symmetric with the
containerless multiple argument values.

>Sometimes I might want to say:

> (call/cc
>   (lambda (k)
>     (let ((r (f ...)))
>       <do something else>
>       (if (panic? ...) (k r))
>       <do even more>
>       r)))

>I.e., I want to capture the return value(s) of the call to `f' and
>pass them on to my continuation, and this should look the same no
>matter whether my continuation takes one or many (or no) values.

This is trivially accomplished with call-with-values and values:

(call/cc
  (lambda (k)
    (let ((r (call-with-values (lambda () (f ...)) list)))
      <do something else>
      (if (panic? ...) (apply k r))
      <do even more>
      (apply values r))))

This isn't quite as concise, but then in my experience it isn't often
necessary to do this.

Juliusz Chroboczek

unread,
Jan 30, 1995, 7:58:49 AM1/30/95
to
In article <1995Jan29.2...@news.cs.indiana.edu>,

R. Kent Dybvig <d...@cs.indiana.edu> wrote:
> the containerless multiple return
>values of the call-with-values/values interface is symmetric with the
>containerless multiple argument values.

That's an argument we often hear. I'm not quite sure whether
multiple arguments in a function call and multiple values from a
function are symmetric (whatever that means). In particular, there
is, to my knowledge, no equivalent of currying on the return side. In
other words, there is an isomorphism

A*B -> C ~~~ A -> B -> C

but no isomorphism between A -> B*C and a function space (the only
space isomorphic to A -> B*C I know is (A -> B) * (A -> C) -- clearly
not what we want).

J. Chroboczek

Matthias Blume

unread,
Jan 29, 1995, 12:39:17 PM1/29/95
to

I wrote:

>I'm sorry that I came to having to paint such a gloomy picture of the
>state of my once-favorite language. But looking at what's going on
>(Guile/`R5RS'/values/eval/dynamic-wind/no module system/...) I can't
>help it anymore. If there only was a good alternative... :(

In article <3gbl6h$i...@crl2.crl.com> jeng...@crl.com (Joe English) answers:

Wait! Don't give up! There is an alternative...
you could start with an R4RS implementation,
get rid of 'values', don't add 'eval' or 'dynamic-wind',
and add a module system. Let it diverge from R5RS --
it'd be a new dialect. You could call it Scheme V,
or maybe VSCM for short. Surely at least a few
people would like it better (I know I would).

Thank you! You just made my day. This is exactly what I would like
to do. The only trouble is: I don't have time. But if somebody
volunteers to come over and write my thesis... :)

--
-Matthias

Axel Wienberg

unread,
Jan 30, 1995, 9:54:21 AM1/30/95
to

In article <1995Jan29.2...@news.cs.indiana.edu> "R. Kent Dybvig" <d...@cs.indiana.edu> writes:

> Arguments
> don't come in first-class containers, although the syntax of lambda
> allows us to put them in one (a list). And as you point out you can do
> the same thing with call-with-values (and with macros, you can make
> doing so very concise).

bl...@atomic.cs.princeton.edu (Matthias Blume) writes:

>Actually, I would like to have all functions take just exactly one
>argument. If you want to pass more, then pass a container! (Witness SML!)
>This would make things symmetrical, and it would also settle your
>complaint about list/apply not properly taking care of call/cc's behavior.

So you would like multiple return values to be packaged in a container
of some sort, and you would like the same for multiple arguments. [...]

What would really be interesting:

(apply (lambda args args) 7) ==> 7
(apply (lambda args args) '#(a b c)) ==> #(a b c)

See what I mean? The lambda interpreter in the R4RS uses lists for
arguments and return values. If you'd circumvent this with apply, the
`single argument' style would become visible, and the implicit consing
for arguments would be just a matter of convenience, not unlike ML's
pattern matching. You could even do the following:

(call-with-values
  (lambda () (apply values 42))
  (lambda args args)) ==> 42

Not that it's much use...
--
I know a mouse \ 2wi...@rzdspc2.informatik.uni-hamburg.de
And he hasn't got a house \ Axel Wienberg, Hinzeweg 9, 21075 Hamburg
I don't know why I call him Gerald \ ja Nein!

Matthias Blume

unread,
Jan 30, 1995, 11:00:05 AM1/30/95
to

Originally I didn't intend to prolong this thread any further. But
since I have been accused of `missing the point' I have to answer.

In article <1995Jan29.2...@news.cs.indiana.edu> "R. Kent Dybvig" <d...@cs.indiana.edu> writes:

bl...@atomic.cs.princeton.edu (Matthias Blume) writes:

>This whole discussion shows that multiple values are not just a
>straight-forward extension to single values. They are something
>conceptually different. They are containers! As a matter of fact, I
>would prefer (eq? (values 2) 2) to be #f, just like (eq? (list 2) 2)
>is #f.

But in fact they are not containers. There is no more a container for
multiple than for single return values, just as there is no more a
container for multiple than for single arguments.

And that's a pity. ML (and other newer languages) get it right: they
make every function take exactly one argument and produce exactly one
result. Multiple arguments and multiple values are always passed in
containers, and we rely on good compilers to get rid of the overhead.
This allows for maximum orthogonality, and therefore for maximum
convenience. `compose' in ML can be written as

fun compose (f, g) = fn x => f (g x)

while in Scheme, in order to be fully general, we must write awkward
things like

(define (compose f g)
  (lambda args
    (call-with-values
      (lambda () (apply g args))
      f)))
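
For instance (a sketch; `div-mod' is a name invented here, and this assumes
the proposed built-in `values'/`call-with-values' are in scope):

```scheme
;; A producer of two values, composed with a two-argument consumer:
(define (div-mod a b)
  (values (quotient a b) (remainder a b)))

((compose list div-mod) 17 5)    ; => (3 2)
```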

And, btw, all `efficiency' benefits are lost too, because we had to
resort to a `catch-all' restlist argument `args' and `apply', which in
effect turns your `containerless' multiple arguments into an explicit
container.

This is a point that
you seem to have missed all along,

No, I have NOT missed that point all along. I forgot who I replied to
with exactly what I wrote above, but I did so before. Sorry, I just
didn't want to `shock' you with an even more `outrageous' demand like
getting rid of multiple arguments...

and in missing it you have convinced
yourself that multiple values are conceptually different from single
values, even though we can write a cps conversion or denotational
semantics (such as the one in the revised report) that shows that they
are no more different than are multiple arguments from single arguments.

This is indeed true (not that I convinced myself about such a
difference, but that there is no such difference). I was always very
much aware of this.

> Arguments
> don't come in first-class containers, although the syntax of lambda
> allows us to put them in one (a list). And as you point out you can do
> the same thing with call-with-values (and with macros, you can make
> doing so very concise).

>Actually, I would like to have all functions take just exactly one
>argument. If you want to pass more, then pass a container! (Witness SML!)
>This would make things symmetrical, and it would also settle your
>complaint about list/apply not properly taking care of call/cc's behavior.

Ahh -- there we go. I did say it before, so please don't accuse me of
missing any points here! Thanks.

So you would like multiple return values to be packaged in a container
of some sort, and you would like the same for multiple arguments.

Yes. Other languages do it to very good effect. SML/NJ manages to
pass almost all arguments in registers, even though conceptually there
are no `multiple' arguments in ML. With a good compiler it can be
done -- we don't need it at the language level.

If we want arbitrary restrictions and ad-hoc performance hacks then we
know where to find them. They simply don't belong into the world of
Scheme or ML. What we should thrive for is a proof, that conceptually
clean, simple, and powerful languages can be compiled efficiently
*despite* of their high-level nature. By adding performance hacks like
`values' we are conceding defeat.

I
don't like the idea, and in fact think it goes in exactly the wrong
direction for reasons I don't care to enumerate right now.

Oh -- that's great. I completely disagree for reasons I don't care to
enumerate right now.

(Some of my
reasons were given in a September 1990 LASC article Bob Hieb and I
wrote, called ``A new approach to procedures with variable arity'', in
which we proposed an alternative to the current Scheme "rest" interface
and, incidentally, a different multiple return values interface that I
still prefer.)

I think that there is no reason whatsoever to have variable arity
procedures. And I would like to see some of those reasons you don't
care to enumerate.

Regardless, we aren't going to change the language to
eliminate multiple-argument procedures (or the list-based rest
interface, for that matter),

I know. It is unfortunate that we seem to not take the necessary
steps of cleaning up the language while we take the unnecessary steps
of adding needless baggage.

and the containerless multiple return
values of the call-with-values/values interface is symmetric with the
containerless multiple argument values.

Symmetric in its non-orthogonality. What a feat!

>Sometimes I might want to say:

> (call/cc
>   (lambda (k)
>     (let ((r (f ...)))
>       <do something else>
>       (if (panic? ...) (k r))
>       <do even more>
>       r)))

>I.e., I want to capture the return value(s) of the call to `f' and
>pass them on to my continuation, and this should look the same no
>matter whether my continuation takes one or many (or no) values.

This is trivially accomplished with call-with-values and values:

(call/cc
  (lambda (k)
    (let ((r (call-with-values (lambda () (f ...)) list)))
      <do something else>
      (if (panic? ...) (apply k r))
      <do even more>
      (apply values r))))

Oh, sure! I can also `trivially' code this into a Turing machine. We
all know that Scheme with and without `values' is Turing-complete,
therefore you can do in one what you can do in the other and vice
versa. Sorry, but I have to say that this is the most unconvincing
argument I've seen in a long time.

I didn't say it can't be done. I said it can't be done in a
straight-forward and intuitive manner. And even you must resort to
explicitly bundling up the values into a list. If a compiler can
puzzle this apart and keep the values in registers, then it could have
done so in the first place without explicit new language constructs.
As a matter of fact it would be less likely for a compiler to keep all
values in separate registers in *your* example than it would be in
mine, because the implementor in a world with `values' will lazily rely
on the programmer to provide the optimizations. If they are not there
then nothing will happen.

This all very much reminds me of the `register' keyword in C.

This isn't quite as concise, but then in my experience it isn't often
necessary to do this.

This is a self-fulfilling prophecy: because it is so inconvenient we
don't see many uses of it. And because we see so few uses, we
don't need to support it conveniently... That's the type of argument
we usually hear from those who oppose call/cc and such.

I'm very, very sad to see that I have to fight over these issues with
the biggest names in the Scheme community -- those who I would have
thought stand up for the principles layed down in the introduction to
R4RS. But I guess those golden times are over. I suggest to scrap
the first sentence in the next version of the report, because
apparently Scheme is not an example anymore of how languages can be
`designed not by piling feature on top of feature'...

--
-Matthias

Pertti Kellomaki

unread,
Jan 31, 1995, 7:18:30 AM1/31/95
to
In article <QOBI.95Ja...@qobi.ai>, qo...@qobi.ai (Jeffrey Mark Siskind) writes:
|> Today Scheme serves three purposes for the most part:
|> a. A teaching language
|> b. A language used by researchers to conduct research in areas not directly
|> related to Scheme or programming languages
|> c. A language used by researchers where Scheme itself is the object of study
|> My guess is that members of groups (b) and (c) care very little about what is
|> and isn't standard. In fact, members of group (c), if they thought about it,
|> would probably not want Scheme standardized. And members of group (a) are not
|> concerned about supporting products on multiple platforms. They are happy so
|> long as there is a single good implementation that they can use that is
|> compatible with whatever teaching material that is available.

I have to disagree here. I happen to fall into groups (a) and (b). For the
(b) part, I am writing a tool that does a fair amount of symbolic manipulation,
and while portability is not a great concern right now, I would certainly like
not to worry about it.

The real problem, at least in my experience, is the (a) part. Students tend
to have their own machines, so you are faced with at least PCs, Macs and
Amigas. Few teachers are lucky enough to be able to provide the kind of
resources needed in running an 800+ student course.

|> So what role does a standard like R4RS play? We don't want to treat it like a
|> holy canonized document.

When assigning class projects, you may well want to do just that. Give THE
definition of the language.

|> There is really little benefit for touting one
|> implementation as R4RS-compliant and another as non-compliant.

See above. What language research people do is their business, and they should
be given the freedom of doing just what they want. But R4RS compliance is an
extremely useful thing when you want to minimize porting problems, be it a
student project or a piece of useful software.

Actually, what I would really like to see in a document like R4RS is a procedure
(R4RS), which would make sure that an implementation complies with the
document. In a complying implementation this would be a no-op, in an extended
implementation this would turn on some kind of a compatibility mode, and in
a non-complying implementation this would just abort with an error message.
--
pertti

Greg Morrisett

unread,
Jan 31, 1995, 11:23:28 AM1/31/95
to
I'm afraid I have to agree with Matthias Blume on all of this.
If we're going to start making changes to Scheme to support
the generation of "fast" code, then I can think of a lot of changes
that I, as a compiler writer, would rather see than values. For
instance, declaring whether objects are mutable or not gets rid of the
need for a lot of indirection, saving us both space and time. It
also allows us to perform code motion and other optimizations more
readily. Disallowing the redefinition of operations like cons,+,etc.
contributes more to speed than anything I can think of. Type declarations
and pattern matching (a la Andrew Wright) help us get rid of runtime
tag checks. The list goes on and on.

Whether these changes to the language are warranted or not requires a
detailed study of a variety of real programs, not just some programs that
are coded so as to take advantage of these features. Would the addition
of the values construct result in a significant win for a significant
fraction of programs? If so, can an optimizing compiler get the most of
this win? It seems to me that neither of these questions is adequately
addressed by the current study, especially if we are to consider the
other potential changes to the language mentioned above.

So, arguing for values on a strictly performance point of view seems
pretty suspect if you ask me. Arguing that it makes the language
symmetric is just as suspect because, as Matthias has pointed out,
a much easier approach is to limit procedures to taking a single
argument. So, what arguments are there to support values?

-Greg Morrisett
jgmo...@cs.cmu.edu

William D Clinger

unread,
Feb 2, 1995, 12:23:27 PM2/2/95
to
jgmo...@cs.cmu.edu (Greg Morrisett) asks a serious question:

So, what arguments are there to support values?

USEFULNESS
Multiple values are widely used in real programs. They are thus
the sort of thing we would want in...

LIBRARIES
We can add libraries of new procedures without changing the spirit
of Scheme. Libraries help to promote...

RELIABILITY
because a widely used library justifies more effort in design and
implementation, and has been tested by a larger number of users.
Furthermore the library procedures can be designed to detect errors
that might not be detected by the typical implementation-from-scratch.
Widely used libraries lead to...

STANDARDIZATION
to make it easier to recognize the names of the library procedures
that are used in someone else's code. Though standardization is
not essential, standardization and widely used libraries encourage
the development of...

IMPLEMENTATION-SPECIFIC LIBRARIES
in which library procedures that have proved their usefulness can
perhaps be implemented better than in portable Scheme. This is
particularly important for things that are hard to implement
in portable Scheme without sacrificing error checking or efficiency.

SIMPLICITY
We don't want creeping featuritis. Features that can be implemented
in portable Scheme (perhaps at the expense of error checking and
efficiency) are safe, however, because they don't add any complexity
to the language. Such features are consistent with the...

SPIRIT OF SCHEME
in which the core language is very small, and features are
orthogonal.

William Clinger

William D Clinger

unread,
Feb 2, 1995, 12:27:24 PM2/2/95
to
I have misunderstood several of the points that Matthias Blume
has been trying to make. As I now understand his arguments,
Matthias has not been criticizing the multiple values proposal.
Instead he has been criticizing various misfeatures of Scheme
that make multiple values appear necessary.

In this message I will acknowledge the criticisms I feel are
valid, showing why they have nothing to do with the multiple
values proposal.

I will then outline some changes to Scheme that would make multiple
values unnecessary. These changes are radical, yet they would not
break any code written in R4RS Scheme. I believe these changes are
in the spirit of what Blume has been advocating, but no one who has
been following this newsgroup would make the mistake of thinking
that I speak for Matthias Blume.


CRITICISM #1

The following code creates a list, only to take it apart.

(define (compose f g)
  (lambda args
    (f (apply g args))))

Obviously the multiple values proposal is not responsible for
the inefficiency caused by Scheme's rest argument and apply
mechanisms.
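
To make the contrast concrete, here is a sketch (the name COMPOSE-MV
is mine, not part of the proposal) of how the proposed VALUES and
CALL-WITH-VALUES let G hand several results straight to F without
consing them into a result list.  Note the rest-argument list built
by (lambda args ...) and APPLY is still constructed, which is exactly
the inefficiency criticized above:

```scheme
;; Sketch only: assumes an implementation providing the proposed
;; VALUES and CALL-WITH-VALUES.  COMPOSE-MV is an illustrative name.
(define (compose-mv f g)
  (lambda args
    (call-with-values
      (lambda () (apply g args))   ; thunk whose body returns G's values
      f)))                         ; receiver: F gets those values as arguments
```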


CRITICISM #2

Blume is correct when he asserts that Standard ML, by making
"every function take exactly one argument and produce exactly one
result", "allows for maximum orthogonality, and therefore for
maximum convenience".  For example,

fun compose f g = fn x => f (g x)

val h = compose (fn (x, y) => [x, y])
                (fn x => x);

h (3, 4)

works in Standard ML but

(define (compose f g) (lambda (x) (f (g x))))

(define h
  (compose (lambda (x y) (list x y))
           (lambda (x) x)))

(h 3 4)

doesn't work in Scheme. Obviously the multiple values proposal
is not responsible for this problem.
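
For illustration, here is a sketch of the data-structure workaround
available in R4RS Scheme today: make every procedure take one
argument, with a list standing in for an ML tuple.  COMPOSE1 and H
are illustrative names of mine:

```scheme
;; Portable R4RS sketch: one-argument procedures throughout, with a
;; list playing the role of Standard ML's tuple.
(define (compose1 f g) (lambda (x) (f (g x))))

(define h
  (compose1 (lambda (p) (list (car p) (cadr p)))  ; destructure the "tuple"
            (lambda (x) x)))                      ; identity

(h (list 3 4))   ; => (3 4)
```

This works, but at the cost of consing a list for every call, which
is the subject of criticism #3 below.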


CRITICISM #3

Blume thinks we should restrict Scheme to procedures that take
one argument and return one result, using data structures to
get the effect of passing multiple arguments or returning
multiple results. This technique can be implemented efficiently,
as is shown by the performance of Standard ML of New Jersey.

But in Standard ML there are no side effects on tuples, and
the equality operator is defined structurally on tuples (like
Scheme's equal?, but not like eqv? or eq?). Thus a tuple of
Standard ML can be passed (or returned) by passing its elements
in registers. A compiler for Standard ML can do this without
any optimization whatsoever.

A Scheme compiler generally can't do this for lists and vectors
because of procedures such as eqv?, set-car!, and vector-set!.
It has been suggested that a sufficiently clever compiler could
deal with this through optimization, but no such compiler has
been proved to exist, even in the mathematical sense.

Obviously the multiple values proposal is not responsible for
the fact that R4RS Scheme has no immutable data structures that
cannot be tested for identity using eq? and eqv?.


A MODEST PROPOSAL

This proposal would address all three of the above criticisms
without breaking *any* portable Scheme code. On the other hand,
this proposal would require changes to all existing implementations
of Scheme.

Add a new data type, called "sequences", to Scheme. The values
of this data type are sequences of Scheme values. Sequences are
written like lists, but square brackets are used in place of
parentheses:

["hello" "cruel" "world"] ; a sequence of 3 elements

As with the tuples of Standard ML, a sequence with exactly one
element is identical to that element:

(eq? '[[[[foo]]]] 'foo) => #t

Thus every Scheme value is a sequence of one element. For example,
[a b c d] is a sequence of one element as well as a sequence of
four elements.

Sequences are constructed by an n-ary procedure named SEQUENCE:

(sequence) => []
(sequence 'one) => one
(sequence "oh" 'zero (+ 3 4)) => ["oh" zero 7]

The behavior of eq? and eqv? is completely unspecified on sequences
of zero or two or more elements, except that the result is always
a boolean:

(eq? '[] '[]) => unspecified

(let ((x (sequence 4 5 6)))
  (eqv? x x)) => unspecified

(let ((x (make-vector 19)))
  (eq? (sequence x) (sequence x))) => #t

The equal? procedure is defined on sequences the way you might expect:

(equal? '[] '[]) => #t
(equal? '[] '[[]]) => #t ; because [[]] is []
(equal? '[] '[[][]]) => #f
(let ((x (f))
      (y (g)))
  (eqv? (equal? (sequence x) y)
        (equal? x y))) => #t
(let ((x (f))
      (y (g))
      (z (h))
      (w (i)))
  (eqv? (equal? (sequence x y)
                (sequence z w))
        (and (equal? x z)
             (equal? y w)))) => #t

There are no side effects to sequences.

Elements of a sequence are selected by lambda binding, as explained
below.

Change Scheme so that every procedure takes exactly one argument
and returns exactly one result. The argument and the result will
always be a sequence. (But remember that every Scheme value is a
one-element sequence, so the previous sentence doesn't say much.)

An expression in tail-recursive position returns a one-element
sequence. Continuations accept sequences containing an arbitrary
number of elements as results.

A procedure call (proc) passes the empty sequence. A procedure
call (proc arg1) passes a one-element sequence; that is, it passes
the value of arg1. A procedure call (proc arg1 ... argn) passes
an n-element sequence. In general we have the property that

(proc arg1 ... argn) = (proc (sequence arg1 ... argn))

A lambda expression (lambda () body ...) takes a zero-element
sequence as its argument. A lambda expression (lambda (x1) body ...)
takes a one-element sequence as its argument; since all Scheme values
are one-element sequences, this means (lambda (x1) body ...) can be
passed any value whatsoever, including a 17-element sequence. In
general, a lambda expression (lambda (x1 ... xn) body ...) takes
an n-element sequence as its argument.

Introduce a new kind of lambda expression,
(lambda [first rest] body ...), that takes as argument a sequence
of one or more elements and binds first and rest to fresh locations
whose initial contents are the first element and a sequence of the
remaining elements, respectively.

The (lambda x body ...) and (lambda (x1 ... xn . rest) body ...)
syntaxes of R4RS Scheme become derived expressions, because the
second syntax can be defined in terms of the first and the first
syntax is equivalent to

(lambda (x)
  (let ((x (list x)))
    body ...))

None of these changes affect any portable Scheme code. Some code
that is now in error would become legal, however, so implementations
would have to change. Implementations that omit the check for a
wrong number of arguments might become slower, but implementations
that check for that error can probably make these changes without
affecting the performance of existing code.

The following examples illustrate the usefulness of these changes:

; The list procedure can be defined without using the
; (lambda x ...) syntax:

(define list
  (lambda (x)
    (if (equal? x (sequence))
        '()
        ((lambda [first rest]
           (cons first (list rest)))
         x))))

; We can define a non-boxing apply.

(define non-boxing-apply
  (lambda [f args]
    (f args)))

; (non-boxing-apply * 3 4 5) => 60
; (non-boxing-apply * (sequence 3 4 5)) => 60

; We can easily define values and call-with-values.

(define values sequence)

(define call-with-values
  (lambda (thunk f)
    (f (thunk))))
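
For comparison, the purely portable R4RS emulation (essentially
Brent Benson's definitions from earlier in this thread), in which
the returned values really are consed into a list:

```scheme
;; Portable R4RS sketch, no sequences: multiple values are just a list.
(define (call-with-values thunk receiver)
  (apply receiver (thunk)))

(define values list)

(call-with-values (lambda () (values 3 4)) +)   ; => 7
```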


Shall we make these changes in R6RS? ;-)

William Clinger

Brian Harvey

Feb 2, 1995, 2:43:22 PM
wi...@ccs.neu.edu (William D Clinger) writes:
>USEFULNESS
>Multiple values are widely used in real programs. They are thus
>the sort of thing we would want in...
>LIBRARIES
>RELIABILITY
>STANDARDIZATION...

This induction of Good Things reminds me a little of the argument that
all horses are the same color. What I don't see is how we get from step
1 to step 2, because you haven't explained why it wouldn't be just as
useful, portable, reliable, etc., to put those values in a list.
Nobody is suggesting that all functions must return one number or
any such idea!

Greg Morrisett

Feb 2, 1995, 3:12:27 PM
In article <3gr4ef$2...@narnia.ccs.neu.edu>,
William D Clinger <wi...@ccs.neu.edu> wrote:
>jgmo...@cs.cmu.edu (Greg Morrisett) asks a serious question:
> So, what arguments are there to support values?

[A laudable list follows]

>SIMPLICITY
>We don't want creeping featuritis. Features that can be implemented
>in portable Scheme (perhaps at the expense of error checking and
>efficiency) are safe, however, because they don't add any complexity
>to the language. Such features are consistent with the...

Ahh. This is the best reason I've seen yet. So, facilities that
we can code "naturally" (without resorting to Turing-tarpit bickering)
are good candidates for language extension on the grounds of
portability, efficiency, and backwards compatibility. If they
can be coded in the language, then adding them as primitives shouldn't
break existing facilities. And implementors can always punt by
coding them in the language.

This seems like a good guideline for language evolution. I guess
I was thinking of a more "radical" approach where we might go back
and address what I see as root problems in the language. In particular,
pointer equality on every object and the inability to define immutable
data structures seems to prevent a "vanilla" Scheme compiler from
turning functions of one tuple argument into functions of multiple
arguments. (Olin can probably add to the list here.) But this
isn't so much a question of "evolution" as it is "revolution". :-)

-Greg Morrisett
jgmo...@cs.cmu.edu

Henry Baker

Feb 6, 1995, 1:10:44 PM
In article <3gr4ls$2...@narnia.ccs.neu.edu>, wi...@ccs.neu.edu (William D
Clinger) wrote:

> CRITICISM #1
>
> The following code creates a list, only to take it apart.
>
> (define (compose f g)
> (lambda args
> (f (apply g args))))
>
> Obviously the multiple values proposal is not responsible for
> the inefficiency caused by Scheme's rest argument and apply
> mechanisms.

Uh... How do I get the multiple values returned from (apply g args) and
give them to f?? Wouldn't I need something like (sorry about the CL
syntax):

(multiple-value-call f (apply g args))

Haven't you just introduced a new _second class_ datatype? If I
can't detect the difference between the empty sequence and the
sequence of one element which is the empty sequence, then I'm back
into the second class soup, because I can't pass or return empty
sequences as first class values.

I have a better proposal: exchange the roles of '()' and '[]'. Now
every list is immutable, and I have to go to some trouble to make
a mutable list. Eliminate the troublesome and inelegant notion that
[a] = a. Even mathematicians fail to make this distinction rather
frequently, because they are misled by the notation.

Joe English

Feb 6, 1995, 7:20:24 PM
William D Clinger <wi...@ccs.neu.edu> wrote:

>A MODEST PROPOSAL

[ adds immutable sequences as a primitive data type and
redefines the semantics of procedure calls in terms
of sequences (which is close to the denotational semantics
in R4RS anyway), providing for multiple return values
in an elegant way, restoring the symmetry between procedure
calls and other continuations, introducing a data type
that's useful in its own right, and does so in a way
that breaks no existing Scheme code and (I think) requires
only minor changes to existing R4RS implementations -- even
those that already implement call-with-values -- and
generally Making Everybody Happy ]


>Shall we make these changes in R6RS? ;-)

Why wait? Why not R5RS?

values/call-with-values always struck me as syntactically
and semantically ugly, with little practical value.
*This* idea is beautiful and elegant. Plus,
unless I've misunderstood something, it would do
everything call-with-values does and more.


--Joe English

jeng...@crl.com
