Coroutines


John Macdonald

May 21, 2003, 11:01:51 PM5/21/03
to perl6-l...@perl.org
The last weekly perl6 summary listed a discussion
on coroutines, which led me to scan the archive and
to subscribe.

One item that I gather from the discussion is that the
current plan is to have a coroutine be resumed simply
by calling it again. This, I think, is a mistake.
Coroutines should be identified by an object that
represents the execution state of the coroutine.

One of the most useful uses of coroutines I've had
in the past is for chaining together a data flow.
This is similar to a Unix pipeline, except of
course that it is being done inside a single program
instead of with separate programs. Each coroutine
would resume its predecessor coroutine whenever it
needed additional data to process, and would resume
its successor coroutine whenever it had prepared
data ready to be passed on.
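As a rough Perl 5 sketch of that data-flow shape (plain closures
standing in for real coroutines, and the helper names are invented
purely for illustration), each stage here pulls from its predecessor
only when it needs more data:

    use strict;
    use warnings;

    sub make_source {                 # hands out one queued item per call, then undef
        my @items = @_;
        return sub { return shift @items };
    }

    sub make_grep {                   # passes through only the lines matching $pattern
        my ($pattern, $pred) = @_;
        return sub {
            while (defined(my $line = $pred->())) {
                return $line if $line =~ $pattern;
            }
            return undef;             # predecessor exhausted
        };
    }

    my $pipe = make_grep(qr/\S/, make_source("foo\n", "\n", "bar\n"));
    while (defined(my $line = $pipe->())) { print $line }   # prints "foo" and "bar"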

In either a shell pipeline or a coroutine chain,
you might use the same subprocess more than once.
A pipeline might use the same program more than once
(tr or sed or awk or perl are all likely candidates);
a coroutine chain might use a single subroutine more
than once (perhaps a symmetric encryption routine,
a grep routine, or a file insertion routine).

If you resume by simply calling the subroutine again,
which one are you calling? For that matter, why
should one routine necessarily know which routine
will be the next one? If a coroutine is passing
lines of text to its successor, it might wish to
insert a coroutine to read the contents of a file
between itself and its successor (maybe it just found
a #include directive) - why should the successor have
to know to call/resume a different function for a
while until the included file is fully read, and then
go back to call/resuming this original predecessor?

So, using a token (object) to represent the coroutine
both provides the ability to have the same subroutine
active in two coroutines at the same time, and
means that one coroutine need not know the actual
subroutine that it is exchanging data with.

(This idea of a pipeline of data massaging routines
also reminds me of the perl5 Filter package.)

Restricting yourself to an iterator viewpoint of
coroutines tends to emphasize routines that generate
a sequence of values while deemphasizing the equally
useful set of routines that consume a sequence
of values.
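A consumer stage can be sketched the same way in Perl 5 - here as a
closure that values are pushed into (make_sink is an invented name,
and undef as an end-of-stream marker is just an assumption for the
illustration):

    use strict;
    use warnings;

    sub make_sink {
        my ($fh) = @_;
        return sub {
            my ($line) = @_;
            return close($fh) unless defined $line;   # undef marks end of stream
            print {$fh} $line;
            return 1;
        };
    }

    open my $out, '>', 'sink-demo.txt' or die $!;
    my $sink = make_sink($out);
    $sink->($_) for "one\n", "two\n";
    $sink->(undef);                                   # flush and close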

The coroutine package I used was a subroutine library
for the B language (the precursor to C). It used
an explicit routine to create a new coroutine - it
allocated a private stack, remembered the coroutine
that did the creation, and then invoked the top level
subroutine with its initial arguments.

There were two types of relationships between
coroutines managed by the library. A parent
process created and managed a coroutine or group of
coroutines. A group of coroutines acted as siblings.

Between siblings, the resume call:

ret = resume( coroutine, value );

would resume a selected coroutine. The value
parameter passed in by this routine would be the
ret value returned to the coroutine that was being
resumed. (In the B subroutine it was a single scalar
value, but in Perl it could be either a scalar or
a list.)

A coroutine often would use the function caller()

called_from = caller();

to find which coroutine invoked it, so that it would
know which coroutine to resume when it had finished a
unit of processing.

A parent routine used create() and reattach() to manage
its children, while a child used detach() to revert
to its parent.

ret = create( size, function, args... );
child = caller();

ret = reattach( achild, value );

ret = detach( value );
parent = caller(); # can do this right away
...
parent = invoker(); # or can do this later
# possibly in a resume'd sibling

The routines detach() and reattach() were essentially
the same as resume, but had slightly different
side effects. detach() didn't take an argument to
specify the coroutine to be resumed but used the
remembered parent. The reattach() routine would
resume the specified child, setting its remembered
parent to the routine (possibly different from the
original creator) that reattached it. (The resume()
function, as well as remembering the caller's identity
in case the callee uses the caller() function to find
it, also sets the parent of the callee to be the same
as the parent of the caller.) The create() function
would allocate a new stack of the specified size (perl
doesn't believe in such fixed limits, of course), set
up the stack frame to be consistent with a normal call
to the specified function with the given arguments,
and then do the same action as reattach() on that
stack frame.

The function caller() would return the value of
the caller (the last coroutine of any sort to resume
this coroutine in any way - that includes create(),
detach(), or reattach, as well as resume()).
The function invoker() would return the value of
the parent process that directly or indirectly did
a create() or reattach() of this process.

Having separate syntax for the creation and resumption
of a coroutine provides much the same advantage
as the C library's separation of fork and exec.
Keeping them separate allows a much more flexible
set of ways of using them.
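For reference, the fork/exec separation itself, in runnable Perl 5
(Unix only, and purely illustrative): the execution context is created
in one step, and what runs inside it is chosen in a separate step,
which is the same flexibility being claimed here for separate
coroutine creation and resumption:

    use strict;
    use warnings;

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: the new context already exists; exec decides what runs in it
        exec 'echo', 'hello from the child' or die "exec failed: $!";
    }
    waitpid($pid, 0);    # parent waits for the child to finish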

Luke Palmer

May 22, 2003, 7:15:49 AM5/22/03
to j...@perlwolf.com, perl6-l...@perl.org
> The last weekly perl6 summary listed a discussion
> on coroutines; which lead me to scan the archive and
> to subscribe.
>
> One item that I gather from the discussion is that the
> current plan is to have a coroutine be resumed simply
> by calling it again. This, I think, is a mistake.
> Coroutines should be identified by an object that
> represents the execution state of the coroutine.

Hooray! :)

> [snippity snip]


>
> So, using a token (object) to represent the coroutine
> provides both the ability to have the same subroutine
> active in two coroutines at the same time, but also
> means that one coroutine need not know the actual
> subroutine that it is exchanging data with.
>
> (This idea of a pipeline of data massaging routines
> also reminds me of the perl5 Filter package.)
>
> Restricting yourself to an iterator viewpoint of
> coroutines tends to emphasize routines that generate
> a sequence of values while deemphasizing the equally
> useful set of routines that consume a sequence
> of values.

I'm listening...

> [more snipping]


>
> The functions caller() would return the value of
> the caller (the last coroutine of any sort to resume
> this coroutine in any way - that includes create(),
> detach(), or reattach, as well as resume()).
> The function invoker() would return the value of
> the parent process that directly or indirectly did
> a create() or reattach() of this process.
>
> Having separate syntax for the creation and resumption
> of a coroutine provides much the same advantage
> as the C library's separation of fork and exec.
> Keeping them separate allows a much more flexible
> set of ways of using them.

So, basically your coroutines are continuations that have been spiffed
up to allow easier data flow (and store parents automatically, etc.).
I think *most* of us think of coroutines as less than that, but I like
most of the functionality that you've described. The immediate
problem I see is that it might facilitate "spaghetti code" a little
too easily. But then, I haven't actually seen it in practice.

I absolutely agree that a coroutine should be encapsulated in some
kind of object rather than being magically implicit based on your
scope and whether you've called that name before. Although most of
these functions that take a coroutine argument would become methods on
the object... meh, not important.

One thing that's very important for Perl's coroutine method is that it
makes the Easy Things Easy. For instance, it needs to be easy to
return a series of values generated from a recursive function call.
The method you describe easily facilitates that, simply because
coroutine creation is explicit, something that none of the other
proposed methods have thought of yet.

Alright, on to the bad news. There's a big issue here with
encapsulation. Most of the arguments for the other coroutine methods
lie in encapsulation: either "it's encapsulated in the function call
interface" or "it's encapsulated in the iterator interface". This
approach makes an entirely new interface, something that will lose big
time if a battle starts up.

What I'm seeing is two interfaces, actually. There's one that
communicates between the parent and the child, and another that
communicates between the siblings.

So, here's my little preliminary adaptation of your suggestion:

The parent's role in this interaction is to receive data from its
children --- sequentially. But it actually can't be an iterator this
time, because that doesn't provide a way to pass data from the parent
to the children. So I'm going to put it in a function interface, just
a different one from the way people have been thinking. You'd make a
coroutine with one of the following (or some variant thereof):

my &coro := new Coroutine: &foo.assuming(arg => $list, goes => $here);
my &coro := &foo.coroutine($arg, $list);

Either way, it's not a real sub, it's just pretending to be one for
interface's sake. C<reattach> would be spelled just like the function
call operator:

$ret = coro($value);

Siblings are much simpler, actually. One can use raw continuations to
do the C<resume> described above, provided that continuations have a
way to transfer data. If not, sibling communication could be done
through a wrapper around continuations that do transfer data.

And as far as the distinction between C<detach> and C<resume>ing the
parent, C<yield> could have two forms: one as a regular sub which
detaches, and another as a method on a coroutine object (which the
child could obtain likely through some incantation of the C<caller>
function).

Thanks for the suggestion --- I like it better than any others I've
seen yet. :)

Luke

Michael Lazzaro

May 22, 2003, 2:39:26 PM5/22/03
to Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org

On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> I absolutely agree that a coroutine should be encapsulated in some
> kind of object rather than being magically implicit based on your
> scope and whether you've called that name before.

You know, I think this is a very important observation. The suggested
syntaxes so far have all been things that attempt to make coroutines
look almost magically identical to normal functions, but that worries
me, because they're _not_.

I don't mind having a prominent reminder when I'm using a coroutine, vs
a normal sub. But at the same time, I wonder if we're getting far too
complicated, here.

A coroutine is basically a routine that can be yielded out of and
resumed, similar to a thread. Therefore, I would expect the syntax to
be very similar to whatever the thread syntax turns out to be.

Adapting the simple coroutine example from Dan's log
(http://www.sidhe.org/~dan/blog/archives/000178.html), just so people
can refer back to that if they don't know what the hell we're talking
about:

    coroutine foo (int a, int b) {
        yield a;
        yield a + 1;
        yield a + 2;
    }
    print foo(1), "\n";   # (ignore b for now)
    print foo(2), "\n";   # uh-oh, different arg -- what happens?
    print foo(1), "\n";
    print foo(1), "\n";


I keep wondering what's wrong with something simple like:

    sub foo (int $a, int $b) is coroutine {
        yield $a;
        yield $a + 1;
        yield $a + 2;
    }

    my &foo1 := attach &foo(a => 1);
    my &foo2 := attach &foo(a => 2);

    print foo1(b => $n), "\n";
    print foo2(b => $n), "\n";   # different attach, so no problem
    print foo1(b => $n), "\n";
    print foo1(b => $n), "\n";


C<attach> could just do the ".assuming" part, and would return a
wrapped version of &foo. Or, hell, call it C<tie>, since we probably
don't need the C<tie> keyword anymore. :-)

Calling the wrapped coroutine, e.g. C<foo1()>, would resume it. (Note
that since you're attaching it in a separate step, the first resume
actually begins the function.)

Because you've got the explicit C<attach>, I think that's a pretty good
visual cue right there that you're doing a coroutine. (I would go so
far as to say that it's illegal to call foo() without C<attach>ing it,
because it would be too easy for it to be confused with a normal sub.)

So an iterator would be something like:

sub piped_input(...) is coroutine {...}

...etc...

    my &i := attach &piped_input(...);
    for &i(...) {
        ...
    }

That seems simple enough; what am I missing?


OK, there's a few things missing, namely some clarifications:

- We don't need C<.assuming>, because C<attach> implies it.

- It's automatically detached when the var &i goes out of scope or gets
re-bound, so we don't need a C<.detach> or similar on the caller end.

- We'd probably need the ability to differentiate between yield,
yield-and-detach and yield-and-reattach. How about:

    yield    # yields
    detach   # yield-and-detach
    return   # yield-and-reattach

(Calling the last one C<return> would make a yield-less coroutine
behave similarly to a normal sub, in that it'd start over every time.)

- Calling a coroutine after it's been detached performs a noop, and
when in a looping construct, ends the loop.

???

MikeL

Dulcimer

May 22, 2003, 3:04:12 PM5/22/03
to Michael Lazzaro, Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> > I absolutely agree that a coroutine should be encapsulated in some
> > kind of object rather than being magically implicit based on your
> > scope and whether you've called that name before.
>
> You know, I think this is a very important observation.
> The suggested syntaxes so far have all been things that
> attempt to make coroutines look almost magically identical
> to normal functions, but that worries me, because they're
> _not_.
>
> I don't mind having a prominent reminder when I'm using
> a coroutine, vs a normal sub. But at the same time, I
> wonder if we're getting far too complicated, here.

Simple is better, so long as the power and flexibility aren't squashed.

That looks pretty straightforward to me.
It's visually distinct enough to be clear what's going on without being
unreadably obtuse.

As an aside, is this a "core" language feature we're talking about, or
something for a module?



> C<attach> could just do the ".assuming" part, and would return a
> wrapped version of &foo. Or, hell, call it C<tie>, since we probably
> don't need the C<tie> keyword anymore. :-)

lol -- while I personally like the idea, I'd still vote against using
"tie" because of previous expectations.



> Calling the wrapped coroutine, e.g. C<foo1()>, would resume it.
> (Note that since you're attaching it in a separate step, the first
> resume actually begins the function.)

So the first call is the first call. That's a Good Thing.

> Because you've got the explicit C<attach>, I think that's
> a pretty good visual cue right there that you're doing a
> coroutine. (I would go so far as to say that it's illegal
> to call foo() without C<attach>ing it, because it would be
> too easy for it to be confused with a normal sub.)

Agreed. If you've gone to the trouble to write "is coroutine" it
shouldn't be a big deal.

> So an iterator would be something like:
>
> sub piped_input(...) is coroutine {...}
>
> ...etc...
>
> my &i := attach &piped_input(...);
> for &i(...) {
> ...
> }
>
> That seems simple enough; what am I missing?
>
> OK, there's a few things missing, namely some clarifications:
>
> - We don't need C<.assuming>, because C<attach> implies it.

and it's a module thing anyway, isn't it?



> - It's automatically detached when the var &i goes out of scope
> or gets re-bound, so we don't need a C<.detach> or similar on the
> caller end.

another Good Thing, but I'd still want the ability to manually detach
it.



> - We'd probably need the ability to differentiate between yield,
> yield-and-detach and yield-and-reattach. How about:
>
> yield # yields
> detach # yield-and-detach
> return # yield-and-reattach
>
> (Calling the last one C<return> would make a yield-less coroutine
> behave similarly to a normal sub, in that it'd start over every
> time.)

Hm. I'd say no on return. Just don't use a coro for that.
I'm a little fuzzy on reattach anyway, though.

Why not just do it in two steps? Call it for a yield, then reassign for
a new attach? Am I missing something?



> - Calling a coroutine after it's been detached performs a noop, and
> when in a looping construct, ends the loop.

For that my gut says calling after it's been detached should be fatal,
but for the ideal case I'd like to see it no-op as the default, gripe
under warnings, and die screaming horribly under stricture. Since I
live under C<use strict> that'd do me, and I'd still be able to
explicitly add C<no strict coro> if I wanted to perform some
naughtiness.



John Macdonald

May 22, 2003, 1:14:22 PM5/22/03
to Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org
On Thu, May 22, 2003 at 05:15:49AM -0600, Luke Palmer wrote:
> So, basically your coroutines are continuations that have been spiffed
> up to allow easier data flow (and store parents automatically, etc.)
> I think *most* of us think of coroutines as less than that, but I like
> most of the functionality that you've described. The immediate
> problem I see is that it might facilitate "spaghetti code" a little
> too easily. But then, I haven't actually seen it in practice.

Coroutines implicitly add spaghettiness to flow of
control - instead of the pure stack-based subroutine call
flow (which already can jump around a lot), you are
able to get back to one context without terminating
the "subroutine" context that has been in operation,
by using multiple independent stacks.

I'll show below an example of the sort of code I
used in the original B, but translated into a sort
of Perl 6.

Reading through my message a day later, I realize that
I spent a lot of time on history but never really said
how I'd envision that this get translated into Perl 6.

The parent-sibling distinction is useful, but I
don't think it is necessary to require separate
syntax for a Perl 6 case the way it did in B.
Having simply create, resume, and caller capability
is sufficient. The create mechanism would set up the
separate coroutine mechanics (its own coroutine object
encapsulating stack frame and state), and call the
subroutine with the initial arguments. At this point
(at entry to the subroutine) the caller method returns
the coroutine object id of the creating process.
The child coroutine can then resume the parent
coroutine (although it need not actually do that).
As long as the child actually does resume the parent,
the parent can find the object id of the new child
using the caller mechanism.

I don't recall if I said so before, but a return from
the subroutine essentially terminated the coroutine.
It would resume the parent passing the subroutine
return value. If the coroutine was again resumed,
it would immediately resume the parent with a value
of zero. (This is much like an input handle - after
end of file, it returns undef constantly.) This lets
you wrap a standard subroutine up as a coroutine (kind
of a scalar coroutine that only returns one value
before terminating instead of the more usual array
coroutine that returns a series of values). It is
easy to code the top level subroutine to not return
if you don't want this termination activity to happen.
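A plain Perl 5 analogy of that "scalar coroutine" behaviour, using a
closure rather than a real coroutine (the wrapper name is invented for
illustration): the wrapped sub produces its single value once and then
reports exhaustion with undef, just as a filehandle keeps returning
undef after EOF:

    use strict;
    use warnings;

    sub as_one_shot_iterator {
        my ($sub, @args) = @_;
        my $done = 0;
        return sub {
            return undef if $done;   # already "terminated"
            $done = 1;
            return $sub->(@args);
        };
    }

    my $iter = as_one_shot_iterator(sub { 6 * 7 });
    for (1 .. 2) {
        my $v = $iter->();
        print defined $v ? $v : 'undef', "\n";   # 42, then undef
    }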

In the B code, I used a sequence of coroutines in
a pipeline. The parent would build the pipeline
from one end (e.g. front to back). The already
built sibling object would be passed as an argument
to its successor. After the chain was built, it
would start up by reattach()ing the final coroutine,
which would resume its predecessor (who discovered the
more recently created successor using the caller()
function).

Here's an example of connecting together a number
of filter coroutines with source and sink coroutines
into a pipeline. It uses the same subroutine as the
top level routine of two different coroutines, and
does a resume from within a subroutine call (there
should be no limit on this sort of thing - imagine
a data structure traversing routine that passes out
information on each piece in passing).

A filter coroutine (one in the middle of the chain)
might look like this (in Perl 5 syntax - I'm not fluent
enough in Perl 6 yet for it not to be distracting):

    sub decomment {
        my( $predecessor, $pattern ) = @_;
        my $parent = caller;
        my $successor = $parent->resume;
        while(1) {
            # predecessor returns undef at EOF
            my $buffer = $predecessor->resume;
            # skip comment lines, but not EOF
            next if defined($buffer) && $buffer =~ m/$pattern/;
            $successor->resume( $buffer );
        }
    }

    sub cat1 {
        my( $pred, $succ ) = @_;
        my $line;
        $succ->resume( $line ) while defined( $line = $pred->resume );
    }

    sub cat {
        my( @pred ) = @_;
        my $parent = caller;
        my $successor = $parent->resume;
        # concatenate the next pipeline in the list
        while( my $pred = shift @pred ) {
            cat1( $pred, $successor );
        }
        $successor->resume( undef );
    }

The routines to go at the ends of the pipeline chain
have to know their place (there is no coroutine to resume
in one direction).

    sub getfile {
        my( $filename ) = @_;
        my $parent = caller;
        my $status = open my $file, "<", $filename;
        my $successor = $parent->resume($status);
        while( my $line = <$file> ) {
            $successor->resume( $line );
        }
        close $file;
        $successor->resume( undef );
    }

    sub putfile {
        my( $predecessor, $filename ) = @_;
        my $parent = caller;
        my $status = open my $file, ">", $filename;
        $parent->resume( $status );
        while( defined( my $line = $predecessor->resume ) ) {
            print $file $line;
        }
        close $file;
        $parent->resume;
    }

And a parent program to splice them all together:

my @inlist;
my $pipe;

coro::create( &getfile, "file1" )
or die "failed to open file1";
push @inlist, caller;

coro::create( &getfile, "file2" )
or die "failed to open file2";
push @inlist, caller;

# read both file1 and file2 in order
coro::create( &cat, @inlist );
$pipe = caller;

# strip perl comment lines
coro::create( &decomment, $pipe, qr/^\s*#/ );
$pipe = caller;

# strip C++ comment lines
coro::create( &decomment, $pipe, qr#^\s*//# );
$pipe = caller;

# write it all to filestrip
coro::create( &putfile, $pipe, "filestripped" )
or die "failed to open filestripped";
$pipe = caller;

# and start the whole mess going
my $status = $pipe->resume;

Austin Hastings

May 22, 2003, 3:41:39 PM5/22/03
to Michael Lazzaro, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> > I absolutely agree that a coroutine should be encapsulated in some
> > kind of object rather than being magically implicit based on your
> > scope and whether you've called that name before.
>
> You know, I think this is a very important observation. The
> suggested
> syntaxes so far have all been things that attempt to make coroutines
> look almost magically identical to normal functions, but that worries
> me, because they're _not_.

I used to agree with this observation, but now I'm pretty sure it's not
right.

The problem lies in confusing e.g. "iterator" with "coroutine."
There's a level-of-implementation difference.

An "iterator" is a design element.

A "for loop" is an implementation detail.

I think "coroutine" is an implementation detail. Specifically, I think
it's what you do when you can't figure out how to convert your code
into a simple state machine. As a result, I can conceive of coders *not
wanting to know* when they call a coroutine.

    sub foo() {
        state $i = 0;
        return $i++;
    }

    sub foo() {
        my $i = 0;
        loop {
            yield $i++;
        }
    }


What's the difference? None, from the caller's viewpoint. Which means
that when refactoring or extending a stateful subroutine, I can convert
it to a coroutine without telling anyone -- statefulness is an
implementation detail.
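The same point can be made in running Perl 5 today, without coroutines
at all - two subs with identical caller-side behaviour whose state
lives in completely different places (the names are invented for
illustration):

    use strict;
    use warnings;

    {   # state kept in a counter, the `state $i = 0` flavour
        my $i = 0;
        sub foo_counter { return $i++ }
    }

    {   # same interface, state kept as a queue of pending results
        my @pending = 0 .. 1_000;
        sub foo_queue { return shift @pending }
    }

    print foo_counter(), " ", foo_counter(), "\n";   # 0 1
    print foo_queue(),   " ", foo_queue(),   "\n";   # 0 1 -- caller can't tell the difference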

I liked the point about Unix pipes, though -- if you're doing a
translate, or processing a #include, then you may want more than one of
them. But that leads across the design/implementation barrier.

If we're going to iterate, or pipeline, or anything else with
coroutines involved, we need to separate "what's implementation" from
"what's design". Much the same way the we separate "scalar/array/hash"
from "object" -- one is an attribute (implementation detail), the other
is a design item.

So if there is going to be an Iterator class, or a Coroutine class, to
support objectification of the coroutine concept, let them be objects.
And let them be a part of the standard P6 object library. They don't
have to be exactly the same as the implementation details.

Depending on how the continuation interface works, and its interaction
with threads, I think we may just dispose of explicit coroutine support
completely, and rely on continuations plus Coroutines -- the object
interface that will probably use continuations underneath.

=Austin


Piers Cawley

May 22, 2003, 3:49:13 PM5/22/03
to Michael Lazzaro, Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org
Michael Lazzaro <mlaz...@cognitivity.com> writes:

> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
>> I absolutely agree that a coroutine should be encapsulated in some
>> kind of object rather than being magically implicit based on your
>> scope and whether you've called that name before.
>
> You know, I think this is a very important observation. The suggested
> syntaxes so far have all been things that attempt to make coroutines
> look almost magically identical to normal functions, but that worries
> me, because they're _not_.

One of the cute things that Common Lisp has is multiple return values
(and I don't just mean returning a list). In most cases you just use
the 'main' return value, but in others you would do

(multiple-value-list (a-coroutine arg1 arg2))

And get a list back along the lines of

((usual results) #'a-coroutine-handle)

Point is that, unless you *know* you're going to need the secondary
values returned by a function, they're completely invisible to the
caller. It seems to me that this might be a good approach with Perl:

my(@results, &coro_handle) = multi_values(coroutine(@args));

Okay, so that's an ugly name for the function, but we have the
advantage that the Easy case remains easy and the hard case is solved
by a useful general mechanism.

--
Piers

Michael Lazzaro

May 22, 2003, 4:41:38 PM5/22/03
to Austin_...@yahoo.com, perl6-l...@perl.org

On Thursday, May 22, 2003, at 12:41 PM, Austin Hastings wrote:
> I think "coroutine" is an implementation detail. Specifically, I think
> it's what you do when you can't figure out how to convert your code
> into a simple state machine. As a result, I can conceive of coders *not
> wanting to know* when they call a coroutine.
>
> sub foo() {
> state $i = 0;
> return $i++;
> }
>
> sub foo() {
> my $i = 0;
> loop {
> yield $i++;
> }
> }
>
> What's the difference? None, from the caller's viewpoint. Which means
> that when refactoring or extending a stateful subroutine, I can convert
> it to a coroutine without telling anyone -- statefulness is an
> implementation detail.

There's the rub, and that's what's bothering me. In the ideal world,
after you somehow attach a coroutine to make a particular
iterator/whatever, you really want the coroutine to be
indistinguishable from a sub. But in practice, coroutines have a
couple of features that normal subs don't have, which require some form
of invocation be built into the language, which gums everything up.

The two examples above are different in that the first only allows you
to maintain one state, for all callers, whereas the second presumably
allows you to have multiple different states, controlled by the caller
-- *if* you have some language support for invoking those different
states.

The basic problem with coroutines is what happens when you have more
than one invocation of them -- i.e. simultaneous states. Why I'm
musing over the explicit C<attach> step is because it solves that
problem -- completely.

    my &foo1 := attach &piped_input(...);
    my &foo2 := attach &piped_input(...);

That creates two separate subs, with two separate internal states. But
you can only have these separate states if you have some way of
specifying them -- if you just assume that a coro is called like a sub:

    sub foo (...) is coroutine {...}

    foo();
    foo();

...you then have to answer the question of whether the second foo() is
using the state information of the first foo(), or is starting over.
Which is nasty, because you may want it to do either of those things,
depending on individual circumstance.

Typically there may be some method of detaching or restarting the coro,
but I think historically speaking that may be the wrong direction --
perhaps it's the attachment that needs to be explicit, and not the
resuming/restarting. I don't think you _need_ detach, or reattach,
etc., if you assume it all just happens as a natural consequence of var
scope. If you want to restart the coro, just rebind your var to a new
invocation.
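A closure-factory sketch in Perl 5 shows what that buys (make_counter
is an invented name standing in for whatever attach becomes): each
attachment is an independent state, and "restarting" is just rebinding
the variable to a fresh one:

    use strict;
    use warnings;

    sub make_counter {
        my ($start) = @_;
        return sub { return $start++ };
    }

    my $foo1 = make_counter(1);
    my $foo2 = make_counter(100);

    print $foo1->(), " ", $foo1->(), "\n";   # 1 2
    print $foo2->(), "\n";                   # 100 -- state independent of $foo1
    $foo1 = make_counter(1);                 # "restart" by rebinding to a new instance
    print $foo1->(), "\n";                   # 1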

There are other solutions. For example, make a call to a coroutine
automatically "attach" it to a state, if it doesn't already have one,
and have explicit de/reattach on the caller side:

    foo();           # (1) no prev state, so "attaches" one
    foo();           # (2) same state as (1)
    restart foo();   # (3) calls foo() with a newly attached state
    detach foo();    # (4) kills the state associated with foo()

That doesn't quite work, because while it allows you to restart the
coroutine, it doesn't allow you to have more than one state associated
at one time! (It also doesn't take advantage of var scope to
automatically detach your state when you're done with it.) So you need
some explicit method of creating a "stateful" coroutine invocation,
which we could do via another keyword:

my &foo1 = attach &foo(...);

... getting us back to what I was proposing, sort of, but saying that
calling the coroutine directly is OK, if you want to use whatever the
last implicitly attached state is. Dunno, I have concerns about
allowing that, because that's where most people get bitten when using
them. Hmm...

MikeL

Austin Hastings

May 22, 2003, 6:20:18 PM5/22/03
to Michael Lazzaro, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>

Not necessarily. Why shouldn't I want my routine to give a unique
number to all callers? That's what Oracle does when I create an
autoincrement data field.

Anyway, this is focusing at too low a level -- nobody has answered the
bigger questions, except for JMM's attempt: What *SHOULD* the semantics
of coroutines be? I am not sure they should allow separate states --
provided that there is another mechanism (closure or continuation or
object) to handle separate states apart from the coroutine mechanics
itself.

Is there a way to encapsulate the invocations? Does every separate call
produce a separate instance, or do all calls yield up the same
instance? (In which case can you capture separate instances in a
closure?)

A lot of us do our thinking (me included) by scribbling something down
and then optimizing the look and feel of what we're talking about. But
a certain amount of thought has to be given to the "well, what do we
want?" side of the equation.

The coroutine syntax is appealing because it allows reverting a state
based system to a procedural description - the opposite of our normal
behavior. It can be implemented with continuations, of course, but
that's like saying that we can implement a switch statement with
chained if's, or with (cond) && do {...}; - it's equivalent but clunky.

So what do we really want from the coro syntax? Luke Palmer says
"iterators", and I'm sure I disagree with that: iterators are not as
small as that. John Macdonald says they should have separate state, and
you (Michael) have proposed a syntax that provides that. But I don't
like the overhead. Why *shouldn't* coroutines be global by default?

I like the idea of being able to capture a separate coroutine state, so
that I could have two active contexts simultaneously. But to me that
screams either "object" or "closure". Whereas I'd also like to be
trivially able to create a coroutine *without* separable instances, and
without much more syntax than just C<yield>.

So how about a coroutine/Coroutine distinction: using C<yield> does a
global coro thing. Calling C<new Coroutine &Code> creates an object
that can be called, etc as a separate function (like your C<attach>)
and maintains its own state (continuation/closure). Of course, if the
&Code doesn't call C<yield>, then it never acts very interesting
externally.

> The basic problem with coroutines is what happens when you have more
> than one invocation of them -- i.e. simultaneous states. Why I'm
> musing over the explicit C<attach> step is because it solves that
> problem -- completely.
>
> my &foo1 := attach &piped_input(...);
> my &foo2 := attach &piped_input(...);
>
> That creates two separate subs, with two separate internal states.

Remember threads. If all references to the same name produce the same
coro, then two threads can't call the same coroutine-invoking sub at
the same time. And how do you know what uses coroutines internally?

=Austin


Simon Cozens

May 22, 2003, 7:27:45 PM5/22/03
to perl6-l...@perl.org
austin_...@yahoo.com (Austin Hastings) writes:
> Anyway, this is focusing at too low a level -- nobody has answered the
> bigger questions, except for JMM's attempt: What *SHOULD* the semantics
> of coroutines be?
> ...

> So what do we really want from the coro syntax? Luke Palmer says
> "iterators", and I'm sure I disagree with that: iterators are not as
> small as that.

We can play with syntax and semantics until the cows come home. Can I suggest
a way for mere humans to evaluate the lofty design issues you guys are toying
with? (I have to admit that I haven't been able to understand this thread at
all, or understand why I should care about its ramifications in my
programming.)

Here's a good way. I suggest discourse takes one of two forms:

1) The semantics of Perl 6's co-routines will be just like those of
$language_x, in that...
2) Perl 6's co-routines will be different from every other language, and this
is OK, because we're obviously doing something brand new in the areas of
...

Warning against form 2: people don't want or need groundbreaking ideas to get
their jobs done. Experimental languages are cool. In their place.

I'm having a hard enough time working through precisely how coroutines will
make my day to day programming more productive, without having to read through
pages of screeds about what colour and shape they ought to be.

--
Perl 6 will be different from Perl 5, but never gratuitously so. When syntax
or semantics change, it will always be a change for the better: for greater
consistency, for more intuitability, for extra Do-What-I-Meanness.
- Damian Conway

my ($stdout, $stderr) = »<<<« open 'grep foo * |';
- Damian Conway

Damian Conway

May 22, 2003, 8:43:06 PM5/22/03
to perl6-l...@perl.org, la...@wall.org
Didn't we already have this discussion six months ago???
The conclusion of which seemed to be:

http://archive.develooper.com/perl6-l...@perl.org/msg12518.html

However, on revisiting it, I think I have an even simpler, best-of-both-worlds
solution that's also more in line with A6...

==============================================================================

A coroutine is declared with the C<coro> declarator:

    coro fibs (?$a = 0, ?$b = 1) {
        loop {
            yield $b;
            ($a, $b) = ($b, $a+$b);
        }
    }

and is allowed to have zero or more C<yield> statements inside it.

A coroutine is an object of type C<Coro> (just as a subroutine is an object of
type C<Sub>). C<Coro> objects have the following methods:

    next()                  # resumes coroutine body until next C<yield>

    next_args(PARAM_LIST)   # resumes coroutine body until next C<yield>,
                            # rebinds params to the args passed to
                            # C<next_args>.
                            # PARAM_LIST is the same as the parameter list
                            # of the coroutine itself

    each()                  # returns a lazy array, each element of which
                            # is computed on demand by the appropriate
                            # number of resumptions of the coroutine body

A C<Coro> object is therefore a type of iterator.

Calling a coroutine using the normal subroutine call syntax is the same as
calling the C<next> or C<next_args> method on it:

    $next = fibs();        # same as: $next = &fibs.next()

    $next = fibs(10,11);   # same as: $next = &fibs.next_args(10,11)

    while ($next = fibs()) {
        if ($next < 0) {
            fibs() for 1..2;   # skip the next two values
        }
    }


To create multiple independent coroutine iterators using a single coroutine
definition, one can simply use an *anonymous* coroutine:

    sub fibs_iter (?$a = 0, ?$b = 1){
        return coro (?$x = $a, ?$y = $b) {
            loop {
                yield $b;
                ($a, $b) = ($b, $a+$b);
            }
        }
    }

# and later...

$iter = fibs_iter(7,11);

    while ($next = $iter.next()) {
        if ($next < 0) {
            $iter.next() for 1..2;    # skip the next two values...
            $iter = fibs_iter(11,7);  # and change iterator
        }
    }

Additionally, class C<Coro> would have a C<clone> method:

$iter1 = &fibs.clone;
$iter2 = &fibs.clone;

which would return a reference to a new anonymous C<Coro> with the same
implementation as (but distinct state from) the original C<Coro>.

Either way, for iteration that implies:

    <$fh>       # Call $fh.next()   # IO objects are iterators too
    <$iter>     # Call $iter.next()
    coro()      # Call &coro.next()
    <&coro>     # Call &coro.next()
    <coro()>    # Call .next() on iterator returned by call to coro()


Whilst, in a list context:

    <$fh>       # Call $fh.each()
    <$iter>     # Call $iter.each()
    coro()      # Call &coro.each()
    <&coro>     # Call &coro.each()
    <coro()>    # Call .each() on iterator returned by call to coro()


So then:

    for <$fh> {...}      # Build and then iterate a lazy array (the elements
                         # of which call back to the filehandle's input
                         # retrieval coroutine)

    for <$iter> {...}    # Build and then iterate a lazy array (the elements
                         # of which call back to the iterator's coroutine)

    for coro() {...}     # Build and then iterate a lazy array (the elements
                         # of which call back to the C<&coro> coroutine)

    for <&coro> {...}    # Build and then iterate a lazy array (the elements
                         # of which call back to the C<&coro> coroutine)

    for <coro()> {...}   # Call coro(), then build and iterate a lazy
                         # array (the elements of which call back to the
                         # iterator returned by the call to C<coro()>)


==============================================================================

This approach preserves the simple (and I suspect commonest) use of "monadic"
coroutines, but still allows a single coroutine to produce multiple iterators.
It also minimizes additional syntax and semantics.

Damian

Luke Palmer

May 22, 2003, 9:19:02 PM5/22/03
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Damian wrote:

Sweet.

> Either way, for iteration that implies:
>
> <$fh> # Call $fh.next() # IO objects are iterators too
> <$iter> # Call $iter.next()
> coro() # Call &coro.next()
> <&coro> # Call &coro.next()
> <coro()> # Call .next() on iterator returned by call to coro()
>
>
> Whilst, in a list context:
>
> <$fh> # Call $fh.each()
> <$iter> # Call $iter.each()
> coro() # Call &coro.each()

Er, what if it wanted to yield a list?

> <&coro> # Call &coro.each()
> <coro()> # Call .each() on iterator returned by call to coro()
>
>
> So then:
>
> for <$fh> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the filehandle's input
> # retrieval coroutine)
>
> for <$iter> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the iterator's coroutine)
>
> for coro() {...} # Build and then iterate a lazy array (the elements
> # of which call back to the C<&coro> coroutine)
>
> for <&coro> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the C<&coro> coroutine)
>
> for <coro()> {...} # Call coro(), then build and iterate a lazy
> # array (the elements of which call back to the
> # iterator returned by the call to C<coro()>)
>
>
> ==============================================================================
>
> This approach preserves the simple (and I suspect commonest) use of "monadic"
> coroutines, but still allows a single coroutine to produce multiple iterators.
> It also minimizes additional syntax and semantics.

I like most of it. There's one problem: recursion.

    coro count($n) {
        count($n - 1) if $n > 0;
        yield $n;
    }

The call to count() inside count() would either start a new coroutine,
or continue the old one -- neither of which are the desired behavior.
What you want in this case is just to call the sub as usual. This can
be remedied like this:

    sub _count_impl($n) {
        _count_impl($n - 1) if $n > 0;
        yield $n;
    }

    coro count($n) {
        _count_impl($n);
    }

But that seems like an awful lot of work for something this common.
Hmm...

    coro count($n) {
        (sub { _($n - 1) if $n > 0; yield $n }).($n);
    }

Cool, but, eew.

I'm not sure how to solve the recursion problem while appeasing those
who want implicit iteration.

Another detail: is implicit iteration scoped or global?

Luke

Luke Palmer

May 22, 2003, 9:36:15 PM5/22/03
to j...@algate.perlwolf.com, j...@perlwolf.com, perl6-l...@perl.org

The discussion has gone far beyond this message already, but for fun,
I'll show you what this looks like in idiomatic Perl 6 I<without>
coroutines.

    sub decomment($pattern, *@lines) {
        grep { !/<$pattern>/ } @lines
    }

<open "file1">, <open "file2">
==> decomment(/^ \s* \#/)
==> decomment(rx#^ \s* //#)
==> open("> filestripped").print;

Snazzy, no?

It's possible that that will all happen lazily (in other words, like a
real pipeline) because all operators involved are lazy operators.

Luke

Damian Conway

May 22, 2003, 9:41:30 PM5/22/03
to perl6-l...@perl.org, Larry Wall
Luke Palmer observed:

>>Whilst, in a list context:
>>
>> <$fh> # Call $fh.each()
>> <$iter> # Call $iter.each()
>> coro() # Call &coro.each()
>
>
> Er, what if it wanted to yield a list?

A very good question. It would certainly make more sense for:

coro() # Call &coro.next() in list context

and when one wanted the lazy iterative behaviour, one would use:

>> <&coro> # Call &coro.each()

> There's one problem: recursion.
>
> coro count($n) {
> count($n - 1) if $n > 0;
> yield $n;
> }

I'm not sure what that's supposed to do. Are you missing a C<yield> on the
first statement? And maybe a loop?

What was the behaviour you were expecting from calling (say):

$x = count(7);


> I'm not sure how to solve the recursion problem while appeasing those
> who want implicit iteration.

I'm not sure I understand the problem to which you're alluding.
Could you perhaps explain it another way?


> Another detail: is implicit iteration scoped or global?

I must be particularly dense today. I don't understand that question either.
Could you give an example?

Damian

Luke Palmer

May 22, 2003, 9:55:37 PM5/22/03
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> > There's one problem: recursion.
> >
> > coro count($n) {
> > count($n - 1) if $n > 0;
> > yield $n;
> > }
>
> I'm not sure what that's supposed to do. Are you missing a C<yield> on the
> first statement? And maybe a loop?
>
> What was the behaviour you were expecting from calling (say):
>
> $x = count(7);

0, then 1, then 2, up to 7. (count(7) calls count(6) calls count(5)
down to 0, then count(0) yields 0, returns to count(1) which yields 1,
etc.)

It's a simplified version of the traversal problem:

# traverse: return each element arbitrarily deep in @lol,
# depth first.
    coro traverse(@lol) {
        for @lol {
            when List { traverse($_) }
            default   { yield $_ }
        }
    }

(Now that I look at it, it can't get much simpler :-)

Or am I missing something? I thought that one could yield from
arbitrarily deep in the call stack.

>
> > I'm not sure how to solve the recursion problem while appeasing those
> > who want implicit iteration.
>
> I'm not sure I understand the problem to which you're alluding.
> Could you perhaps explain it another way?
>
>
> > Another detail: is implicit iteration scoped or global?
>
> I must be particularly dense today. I don't understand that question either.
> Could you give an example?

Nevermind. It's global (stupid question on my part, sorry).

Luke

John Macdonald

May 22, 2003, 10:42:40 PM5/22/03
to Damian Conway, perl6-l...@perl.org, la...@wall.org
On Fri, May 23, 2003 at 10:43:06AM +1000, Damian Conway wrote:
> Didn't we already have this discussion six months ago???

I wasn't on the list then, but that looks like it was...

Larry's description of having the initial call to a
coro function return an object sounds very much like
the create() call I'm used to except that (as per
Luke's discussion) it doesn't allow a function to
be used as both a coroutine and as a function (e.g.
when it calls itself recursively). Luke's approach
of using two routines, one for the coro and another
for the function:

    coro foo     { ... real_foo( args ) }
    sub real_foo { ... can call real_foo recursively ... }

is manageable, but I prefer to have different syntax
for call and resume rather than have the distinction
intuited from the declaration of the routine.

However, when the function of a coroutine is
sufficiently complicated that it ought to be
decomposed (using additional subroutines that can
each yield some of the total sequence from the
single coroutine process), the argument rebinding
form next_args is inadequate. (Note that this also
strongly says that having a yield operator in its
body is not adequate reason to treat a subroutine as
a coroutine, and automatically turn an invocation of
that subroutine into a new coroutine process.)

> next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
> # rebinds params to the args passed to
> # C<next_args>.
> # PARAM_LIST is the same as the parameter list
> # of the coroutine itself

If the coroutine has called a stackful of functions,
rebinding the args of the top level function is either
invisible to the currently resumed subroutine, or it
affects all of the stacked routines. While you may
wish to pass a value to the code that is resumed,
that value may not have any relationship with the
parameters that were bound when the coroutine was
instantiated. Consider a coroutine that traverses
a parse tree, for example. The initial setup will
be specifying the root of the tree that is being
traversed. The traversal code might call different
subroutines for different types of nodes - some of
those subroutines might yield a number of values;
others might only yield values indirectly (from
subroutines that they call). When the code that is
processing the values coming out of the traversal is
ready to resume this coroutine to get a new value,
it might wish to indicate how the traversal is to
proceed (normal, prune special processing for the
current node, prune all processing for this node and
all of its children, etc.).

This could be handled by allowing the next method to
be given a list of values, and those values are passed
to the coroutine as the return value from the yield
operator. That allows the coroutine to process the
value(s) in a way that is relevant to the current
state of the coroutine.

The next_args approach requires that the entire state
of the coroutine be encapsulated in the argument list.
That will almost always be more than you would want
to change at one time.

Damian Conway

May 23, 2003, 4:32:44 AM5/23/03
to perl6-l...@perl.org, Larry Wall
Many thanks to John for his excellent analyses and especially for reminding us
of the potential use of C<yield>'s return value to pass new values into a
coroutine. We did explore that approach last year but it had completely
slipped my mind since then.

That input, and Luke's important observations about recursion and list
contexts have led to the following re-revised proposal...

==============================================================================

A coroutine is declared with the C<coro> declarator:

    coro fibs (?$a = 0, ?$b = 1) {
        loop {
            ($a, $b) = yield($b) || ($b, $a+$b);
        }
    }

and is allowed to have zero or more C<yield> statements inside it.

A coroutine is an object of type C<Coro> (just as a subroutine is an object of
type C<Sub>). C<Coro> objects have the following methods:

=over

=item C< method Coro::next(*@args) {...} >

which resumes execution of the coroutine object's body until the next C<yield>
or C<return> is executed. The arguments (if any) that are passed to C<next>
become the return value of the previous C<yield>, or are bound directly to the
coroutine's parameters if there was no previous C<yield>.


=item C< method Coro::each(*@args) {...} >

which returns a lazy array, each element of which is computed on demand by the
appropriate number of calls to C<next>. each call to C<next> is passed the
argument list that was passed to C<each>.


=item C<method Coro::clone(*@args) {...}>

which creates and returns a copy of the implementation (but not the current
state) of the C<Coro> object, having first prebound the copy's parameters to
the arguments passed to C<clone>.

=back

A C<Coro> object is therefore a type of iterator.

Calling a coroutine using the normal subroutine call syntax is the same as
calling the C<next> method on it:

    $next = fibs();        # same as: $next = &fibs.next()

    $next = fibs(10,11);   # same as: $next = &fibs.next(10,11)

    while ($next = fibs()) {
        if (abs($next) > 10) {
            fibs(11,7) for 1..2;   # skip the next two values,
                                   # whilst resetting the iterator state
        }
    }


To create multiple independent coroutine iterators, all of which use the same
coroutine definition, define a "factory" subroutine that builds *anonymous*
coroutines:

    sub fibs_iter ($a = 0, $b = 1){
        return coro (?$x = $a, ?$y = $b) {
            loop {
                ($a, $b) = yield($b) || ($b, $a+$b);
            }
        }
    }

# and later...

$iter = fibs_iter(7,11);

# and later still...

    while ($next = $iter.next()) {
        if (abs($next) > 10) {
            $iter.next() for 1..2;    # skip the next two values...
            $iter = fibs_iter(11,7);  # and change iterator state
        }
    }

Note that, since C<$iter> is a coroutine reference, the calls to
C<$iter.next()> can be written more compactly as:

    while ($next = $iter()) {
        if (abs($next) > 10) {
            $iter() for 1..2;         # skip the next two values...
            $iter = fibs_iter(11,7);  # and change iterator state
        }
    }


If the coroutine itself already exists, another solution is simply to clone it:

$iter = &fibs.clone(7,11);

    while ($next = $iter.next()) {
        if ($next < 0) {
            $iter.next() for 1..2;       # skip the next two values...
            $iter = &fibs.clone(11,7);   # and change iterator state
        }
    }


And if only one version of the coroutine will ever be active at any one time,
an even simpler approach is just to use the original coroutine itself:

    while ($next = fibs()) {
        if ($next < 0) {
            $iter.next() for 1..2;        # skip the next two values...
            &fibs := &fibs.clone(11,7);   # and change iterator state
        }
    }


For iteration in a scalar context that implies:

    <$fh>       # Call $fh.iter().next()
    <$iter>     # Call $iter.next()
    coro()      # Call &coro.next()
    <&coro>     # Call &coro.next()
    <func()>    # Call .next() on iterator returned by call to func()
    <coro()>    # Call .next() on iterator returned by call to &coro.next()


Whilst, in a list context:

    <$fh>       # Call $fh.iter().each()
    <$iter>     # Call $iter.each()
    coro()      # Call &coro.next() in a list context
    <&coro>     # Call &coro.each()
    <func()>    # Call .each() on iterator returned by call to func()
    <coro()>    # Call .each() on iterator returned by call to &coro.next()

In other words, a direct call to a coroutine (in any context) always calls its
C<next> method.

Whereas using a coroutine object as the operand to the C<< <...> >> operator
calls the coroutine's C<next> method in a scalar context, and its C<each>
method in a list context.


So then:

    for <$fh> {...}      # Build and then loop over a lazy array, the elements
                         # of which are computed on demand by calling
                         # $fh.iter().next() as required

    for <$iter> {...}    # Build and then loop over a lazy array, the elements
                         # of which are computed on demand by calling
                         # $iter.next() as required

    for func() {...}     # Call func() once, then loop over the list
                         # of values returned by that call

    for coro() {...}     # Call &coro.next() once, then loop over the list
                         # of values returned by that call

    for <&coro> {...}    # Build and then loop over a lazy array, the elements
                         # of which are computed on demand by calling
                         # &coro.next() as required

    for <func()> {...}   # Call func() once, then build and loop over a
                         # lazy array, the elements of which are computed on
                         # demand by calling the .next method of the iterator
                         # returned by the single call to func()

    for <coro()> {...}   # Call &coro.next() once, then build and loop over a
                         # lazy array, the elements of which are computed on
                         # demand by calling the .next method of the iterator
                         # returned by the single call to &coro.next()


==============================================================================

Meanwhile, Luke Palmer wrote:

> It's a simplified version of the traversal problem

Under the above assumptions, here's a traversal coroutine for a HoHoHo...oH
tree:

    coro traverse (Hash $tree) {
        $tree // return;
        if $tree{left} {
            yield $_ for <&_.clone($tree{left})>;
        }
        yield $tree{node};
        if $tree{right} {
            yield $_ for <&_.clone($tree{right})>;
        }
    }

And here's that recursive counter Luke wanted:

    coro count($n) {
        return 1 if $n<=1;
        yield $_ for <&_.clone($n-1)>;
        yield $n;
    }

And here's John's pipelined comment ecdysiast:

    coro getfile(Str $name) {
        yield $_ for <open $name or die $!>
    }

    coro putfile(Str $name, Iter $iter) {
        my $fh = open '>', $name or die $!;
        $fh.print($_) && yield for <$iter>;
    }

    coro decomment(Rule $pattern, Iter $iter) {
        /$pattern/ || yield $_ for <$iter>;
    }

    coro cat(Iter $iter1, Iter $iter2) {
        yield $_ for <$iter1>, <$iter2>;
    }

    @sources = &getfile.clone("file1"),
               &getfile.clone("file2");

    $pipeline = &cat.clone(*@sources);
    $pipeline = &decomment.clone(rx/^ \s* \#/, $pipeline);
    $pipeline = &decomment.clone(rx#^ \s* //#, $pipeline);
    $pipeline = &putfile.clone("filestripped", $pipeline);

    while <$pipeline> {}


This last example suggests that C<.clone> might more readably be named
something like C<.with>:

$pipeline = &cat.with(*@sources);
$pipeline = &decomment.with(/^ \s* \#/, $pipeline);
$pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
$pipeline = &putfile.with("filestripped", $pipeline);

though that doesn't work quite as well when there are no arguments being set up:

$iter = &fibs.with(); # fibs with...nothing?


Note too that the (apparently common) construction:

yield $_ for <&_.clone(@newargs)>;

could probably be condensed to:

»yield« <&_.clone(@newargs)>;

> Or am I missing something? I thought that one could yield from
> arbitrarily deep in the call stack.

That *was* an early idea, but subsequently proved both untenable and unnecessary.

Damian

Luke Palmer

unread,
May 23, 2003, 4:32:42 AM5/23/03
to j...@algate.perlwolf.com, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> On Fri, May 23, 2003 at 10:43:06AM +1000, Damian Conway wrote:
> > Didn't we already have this discussion six months ago???
>
> I wasn't on the list then, but that looks like it was...
>
> Larry's description of having the initial call to a

Hang on... Larry? He's actually said something about this? Where?

> coro function return an object sounds very much like
> the create() call I'm used to except that (as per
> Luke's discussion) it doesn't allow a function to
> be used as both a coroutine and as a function (e.g.
> when it calls itself recursively). Luke's approach
> of using two routines, one for the coro and another
> for the function:
>
> coro foo { ... real_foo( args ) }
> sub real_foo { ... can call real_foo recursively
>
> is manageable, but I prefer to have different syntax
> for call and resume rather than have the distinction
> intuited from the declaration of the routine.

I am certain now that there exists no approach that will allow both:

coro foo {...}
$x = foo(); $x = foo(); # The second yielded value.

And recursive calls from a coroutine to itself, simply because they're
mutually exclusive. If someone does come up with such a way (which
I'm convinced they can't :-), there would be no way to invoke a
separate coroutine instance within the coroutine itself.

> However, when the function of a coroutine is
> sufficiently complicated that it ought to be
> decomposed (using additional subroutines that can
> each yield some of the total sequence from the
> single coroutine process), the argument rebinding
> form next_args is inadequate. (Note that this also
> strongly says that having a yield operator in its
> body is not adequate reason to treat a subroutine as
> a coroutine, and automatically turn an invocation of
> that subroutine into a new coroutine process.)
>
> > next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
> > # rebinds params to the args passed to
> > # C<next_args>.
> > # PARAM_LIST is the same as the parameter list
> > # of the coroutine itself

Agreed. This is silly. Why not just return the args from C<yield>,
as Damian suggested before?

Luke

Luke Palmer

unread,
May 23, 2003, 4:50:34 AM5/23/03
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Damian e-wrote:

> Meanwhile, Luke Palmer wrote:
>
> > It's a simplified version of the traversal problem
>
> Under the above assumptions, here's a traversal coroutine for a HoHoHo...oH
> tree:
>
> coro traverse (Hash $tree) {
> $tree // return;
> if $tree{left} {
> yield $_ for <&_.clone($tree{left})>;
> }
> yield $tree{node};
> if $tree{right} {
> yield $_ for <&_.clone($tree{right})>;
> }
> }

HoHoHoHoHoH, that works for me :-)

> And here's that recursive counter Luke wanted:
>
> coro count($n) {
> return 1 if $n<=1;
> yield $_ for <&_.clone($n-1)>;
> yield $n;
> }

You know, I actually think this is better than yielding from deep in
the call stack. It's clear what's happening when, and will likely be
a lot easier to debug.

> Note too that the (apparently common) construction:
>
> yield $_ for <&_.clone(@newargs)>;
>
> could probably be condensed to:
>
> »yield« <&_.clone(@newargs)>;

Together with something I'll write right off the bat:

    macro cocall(Str $sub, Str $args) {
        "yield \$_ for <($sub).clone($args)>"
    }

However that will be done cleanly.

I also had a thought. If C<sub> is short for subroutine, shouldn't
the coroutine declarator actually be... C<co>? :-P

Well done, Damian! (as if I expected less)

Luke

Piers Cawley

unread,
May 23, 2003, 5:25:55 AM5/23/03
to Damian Conway, perl6-l...@perl.org, Larry Wall
Damian Conway <dam...@conway.org> writes:

> Many thanks to John for his excellent analyses and especially for
> reminding us of the potential use of C<yield>'s return value to pass
> new values into a coroutine. We did explore that approach last year
> but it had completely slipped my mind since then.
>
> That input, and Luke's important observations about recursion and list
> contexts have led to the following re-revised proposal...
>
> ==============================================================================
>
> A coroutine is declared with the C<coro> declarator:

I think you can arrange things so that you don't need the 'coro'
declarator by careful definition of your yield macro and suitable
reflective capabilities.

    macro yield($parse_state : *@args) {
        my $coro = $parse_state.get_enclosing_routine;
        $coro.is_coroutine(1);
        return {
            (sub {
                call-cc -> $cont {
                    $coro.resume_at($cont);
                }
                leave $coro <== *@)
            }).(*@args);
        }
    }


A Coroutine can then be thought of as a wrapped subroutine with a
wrapper that would be declared like:

    &coro.wrap({
        my $resume = $coro.resume_at;
        if($resume) {
            $coro.resume_at(undef);
            $resume.invoke(*@_);
        }
        else {
            call
        }
    });

Assuming a simplistic coroutine invocation scheme.
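
For a concrete (if loose) analogue: a minimal modern-Python sketch of the same
wrapped-subroutine idea, using a decorator invented just for this example -- a
plain call starts the routine, and every later call resumes it from its last
yield:

    import functools

    def as_coroutine(genfunc):
        state = {"gen": None}

        @functools.wraps(genfunc)
        def wrapper(*args, **kwargs):
            if state["gen"] is None:
                state["gen"] = genfunc(*args, **kwargs)   # first call: start it
            try:
                return next(state["gen"])                 # later calls: resume
            except StopIteration:
                state["gen"] = None                       # finished: start over next time
                return None

        return wrapper

    @as_coroutine
    def fibs():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b

    print(fibs(), fibs(), fibs())    # 0 1 1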

--
Piers

Luke Palmer

unread,
May 23, 2003, 6:53:45 AM5/23/03
to pdca...@bofh.org.uk, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> Damian Conway <dam...@conway.org> writes:
>
> > Many thanks to John for his excellent analyses and especially for
> > reminding us of the potential use of C<yield>'s return value to pass
> > new values into a coroutine. We did explore that approach last year
> > but it had completely slipped my mind since then.
> >
> > That input, and Luke's important observations about recursion and list
> > contexts have led to the following re-revised proposal...
> >
> > ==============================================================================
> >
> > A coroutine is declared with the C<coro> declarator:
>
> I think you can arrange things so that you don't need the 'coro'
> declarator by careful definition of your yield macro and suitable
> reflective capabilities.

Uh huh... but what if we I<want> the C<coro> declarator? I do,
personally.

Plus, what about all the methods that coroutine objects have? If you
could implement I<that> without a declarator... it still wouldn't be
used. But I'd be impressed, at least.

Luke

John Macdonald

unread,
May 23, 2003, 6:39:36 AM5/23/03
to Damian Conway, perl6-l...@perl.org, Larry Wall
On Fri, May 23, 2003 at 06:32:44PM +1000, Damian Conway wrote:
> > Or am I missing something? I thought that one could yield from
> > arbitrarily deep in the call stack.
>
> That *was* an early idea, but subsequently proved both untenable and
> unnecessary.

OK.

So, rather than allowing one coroutine to call
subroutines that can yield results on its behalf,
it starts other coroutines and relays their results.

That comes at a cost. When you are getting the next
node from a deeply recursive structure, you do a
next on the root, which does a next on its current
child node, which does a next on its current child
node, ..., which eventually gets down to the bottom
to the coroutine that is executing the code for the
previously returned node, that code determines the next
value (possibly starting a new level) and yields the
value, its parent node yields the value, ..., the
root node yields the value. The act of getting one
next value from the tree traversal, then, involves
N next calls, and N yield returns. Keeping a call
stack for a single coroutine would make that one
next call (which goes directly to the code for the
current node) and one yield. This would turn the
cost of traversing a nicely balanced tree from O(N)
to O(N log(N)), and of course a wildly unbalanced
tree traversal could go from O(N) to O(N**2).
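
For comparison, modern Python generators have exactly this relaying behaviour;
a rough sketch (the dict-based tree layout is just an assumption of the
sketch):

    def traverse(tree):
        if tree is None:
            return
        for node in traverse(tree["left"]):    # relay: re-yield every child value
            yield node
        yield tree["node"]
        for node in traverse(tree["right"]):
            yield node

    tree = {"node": 2,
            "left":  {"node": 1, "left": None, "right": None},
            "right": {"node": 3, "left": None, "right": None}}
    print(list(traverse(tree)))                # [1, 2, 3]

    # Every next() on the outermost generator is forwarded down to the frame
    # doing the real work, and the value is forwarded back up, one frame per
    # level -- the same O(N log N) / O(N**2) arithmetic as above.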

Piers Cawley

unread,
May 23, 2003, 9:51:08 AM5/23/03
to Luke Palmer, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Luke Palmer <fibo...@babylonia.flatirons.org> writes:

>> Damian Conway <dam...@conway.org> writes:
>>
>> > Many thanks to John for his excellent analyses and especially for
>> > reminding us of the potential use of C<yield>'s return value to pass
>> > new values into a coroutine. We did explore that approach last year
>> > but it had completely slipped my mind since then.
>> >
>> > That input, and Luke's important observations about recursion and list
>> > contexts have led to the following re-revised proposal...
>> >
>> > ==============================================================================
>> >
>> > A coroutine is declared with the C<coro> declarator:
>>
>> I think you can arrange things so that you don't need the 'coro'
>> declarator by careful definition of your yield macro and suitable
>> reflective capabilities.
>
> Uh huh... but what if we I<want> the C<coro> declarator? I do,
> personally.
>
> Plus, what about all the methods that coroutine objects have? If you
> could implement I<that> without a declarator... it still wouldn't be
> used. But I'd be impressed, at least.

Easy. You have a method, 'Routine.become_coroutine' which mutates the
Routine into a coroutine (where I've put 'is_coroutine(1)') and then
that has all the Coroutine methods.

--
Piers

Dulcimer

unread,
May 23, 2003, 10:21:08 AM5/23/03
to Damian Conway, perl6-l...@perl.org, Larry Wall
> [...]

> This last example suggests that C<.clone> might more readably be
> named something like C<.with>:
>
> $pipeline = &cat.with(*@sources);
> $pipeline = &decomment.with(/^ \s* \#/, $pipeline);
> $pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
> $pipeline = &putfile.with("filestripped", $pipeline);
>
> though that doesn't work quite as well when there are no arguments
> being set up:
>
> $iter = &fibs.with(); # fibs with...nothing?

Spinning lies from thin air? :)
Ok, Fibonacci wasn't lying....
(it's a "fib" joke, for those whose sense of humor isn't lame enough to
follow that silliness.)

Seriously, though, we have other cases where there's more than one
keyword for the same thing, don't we? Can we afford another for this
case?

Austin Hastings

unread,
May 23, 2003, 12:44:40 PM5/23/03
to Hod...@writeme.com, Damian Conway, perl6-l...@perl.org, Larry Wall

--- Dulcimer <ydb...@yahoo.com> wrote:
> > [...]
> > This last example suggests that C<.clone> might more readably be
> > named something like C<.with>:
> >
> > $pipeline = &cat.with(*@sources);
> > $pipeline = &decomment.with(/^ \s* \#/, $pipeline);
> > $pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
> > $pipeline = &putfile.with("filestripped", $pipeline);
> >
> > though that doesn't work quite as well when there are no arguments
> > being set up:
> >
> > $iter = &fibs.with(); # fibs with...nothing?
>
> Spinning lies from thin air? :)
> Ok, Fibonacci wasn't lying....
> (it's a "fib" joke, for those whose sense of humor isn't lame enough
> to
> follow that silliness.)
>
> Seriously, though, we have other cases where there's more than one
> keyword for the same thing, don't we? Can we afford another for this
> case?

Sure, but the huffman value of C<with> is too great to spend it on
something like this.

=Austin

Austin Hastings

unread,
May 23, 2003, 12:53:36 PM5/23/03
to John Macdonald, Damian Conway, perl6-l...@perl.org, Larry Wall

On the one hand, "that's why TMTOWTDI" -- you can lsearch an array, if
you want, too. Performance versus elegance is a trade you make every
time you choose recursion over iteration.

On the other hand, the decision between "support recursive coroutines"
and "support delegation of C<yield> to subs" is one that Larry may have
some interesting views on.

Or, consider that the coroutine is going to have to have an object of
some kind lying about so that it can store "I am here" data for any
resume calls.

There's nothing which says that yield couldn't update the toplevel
object to contain the bottomlevel continuation, making a resume
operation essentially costless, assuming no parameters are used by the
callee.

That is, if I say: C<yield $x> then I'm not using whatever the
putative return value of yield is. And for obvious cases like
yield-in-loop, the optimizer might note that this:

yield $_ for coro(...);

is basically yield(resume()) after the first pass. Accordingly,
optimizing the CPS flow might essentially treat this as a I<code> snap
and just store the continuation of resume().

=Austin

John Williams

unread,
May 23, 2003, 1:17:41 PM5/23/03
to Damian Conway, perl6-l...@perl.org, Larry Wall
On Fri, 23 May 2003, Damian Conway wrote:
> =item C<method Coro::clone(*@args) {...}>
>
> which creates and returns a copy of the implementation (but not the current
> state) of the C<Coro> object, having first prebound the copy's parameters to
> the arguments passed to C<clone>.
>
> =back

Isn't that essentially what .assuming() does by currying normal
subroutines? And if so, why not use the same name for coroutines?

(Assuming, of course, that assuming nothing will return something which is
the same, only different.)

~ John Williams


John Williams

unread,
May 23, 2003, 1:23:03 PM5/23/03
to Piers Cawley, perl6-l...@perl.org
On Thu, 22 May 2003, Piers Cawley wrote:
> One of the cute things that Common Lisp has is multiple return values
> (and I don't just mean returning a list). In most cases you just use
> the 'main' return value, but in others you would do
>
> (multiple-value-list (a-coroutine arg1 arg2))
>
> And get a list back along the lines of
>
> ((usual results) #'a-coroutine-handle)
>
> Point is that, unless you *know* you're going to need the secondary
> values returned by a function then they're completely invisible to the
> caller. It seems to me that this might be a good approach with Perl:

How about using properties for that?

    sub multi_value_sub
    {
        return @results but also( coro { ... } );
    }

~ John Williams


Luke Palmer

unread,
May 23, 2003, 1:58:57 PM5/23/03
to mlaz...@cognitivity.com, perl6-l...@perl.org
> For example, if you have count(100), you're recursing through ~100
> yields. Which means, unless I am sorely mistaken, that you have 99
> saved coroutine states lying around, never to be used again, but
> sucking up resources? How's it know to release them?

Coroutine states are garbage collected, too. But there's still a lot
of space overhead for currently executing (or could-be-executed)
coroutines. Much more than there need be.

But then there's John's observation that yielding for the deeper calls
actually increases the time complexity of some coroutine algorithms.
That's not cool -- "Perl 6, it's faster; it just makes _your_ code
slower".

I don't think the proposal is acceptable, now. It's getting close,
though :)

Luke

>
> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something
> most people have seriously dealt with, and the name is bloody
> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
> rather use a more descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}
>
> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.
>
> MikeL
>
>
> [*] (I think Simon is right -- we need to better identify what problem
> we think we're solving here, and why 99% of the world should care.)

Michael Lazzaro

unread,
May 23, 2003, 1:49:15 PM5/23/03
to perl6-l...@perl.org
On Friday, May 23, 2003, at 01:32 AM, Damian Conway wrote:
> That input, and Luke's important observations about recursion and list
> contexts have led to the following re-revised proposal...

That's pretty darn slick, I like it. (I'm personally quite satisfied
with the clone and/or factory approach to simultaneous
instances/states/iterators.)

In the spirit of questioning everything, however, I'm going to (mildly)
question some things. [*]

1) My impression is that yielding and resuming a coroutine is by
necessity a relatively expensive operation. As such, given any
arbitrary recursive coroutine:

> coro count($n) {
> return 1 if $n<=1;
> yield $_ for <&_.clone($n-1)>;
> yield $n;
> }

are there really routines in the world for which a recursive
implementation isn't _substantially_ more expensive than a decent
alternative solution?

For example, if you have count(100), you're recursing through ~100
yields. Which means, unless I am sorely mistaken, that you have 99
saved coroutine states lying around, never to be used again, but
sucking up resources? How's it know to release them?

Dulcimer

unread,
May 23, 2003, 1:59:48 PM5/23/03
to perl6-l...@perl.org
> > Seriously, though, we have other cases where there's more than one
> > keyword for the same thing, don't we? Can we afford another for
> > this case?
>
> Sure, but the huffman value of C<with> is too great to spend it on
> something like this.

I did think of that, and I really do agree.
Which brings me to another point.

I understand that you can override even keywords in perl.
For example (and PLEASE correct me on this) I could theoretically
create an operator named "print" that compares operands numerically,
were I sufficiently deranged to believe this to be a good idea.

That being the case, what's the precedence of terms in P6?
I mean, macros are executed immediately, so that puts them up near if
not on the top, right? Then subs before methods? Operators by
whatever you liken them to?

I could macro "with" to "clone", and right now that's not really and
issue since I'm not familiar with a "with" in P6.... If we add one
later it shouldn't matter in that code, because it predates the new
keyword, and the macro likely has higher precedence, but that sort of
thing could bite someone in other circumstances, no?

Or is this a non-issue?

Luke Palmer

unread,
May 23, 2003, 2:05:59 PM5/23/03
to Hod...@writeme.com, perl6-l...@perl.org
> > > Seriously, though, we have other cases where there's more than one
> > > keyword for the same thing, don't we? Can we afford another for
> > > this case?
> >
> > Sure, but the huffman value of C<with> is too great to spend it on
> > something like this.
>
> I did think of that, and I really do agree.
> Which brings me to another point.
>
> I understand that you can override even keywords in perl.
> For example (and PLEASE correct me on this) I could theoretically
> create an operator named "print" that compares operands numerically,
> were I sufficiently deranged to believe this to be a good idea.

You can do it in P5. I hope you can in P6. (There's something
special about C<print> in particular that's keeping me from overriding
it, though :-( )

> That being the case, what's the precedence of terms in P6?
> I mean, macros are executed immediately, so that puts them up near if
> not on the top, right? Then subs before methods? Operators by to
> whatever you liken them?

Uh, what? Macros are subs that execute during parsing. They're
executed as soon as it's determined that they can be.

Nothing else executes during parsing, and it's obvious the order in
which other things execute at run time. Or am I missing your
question?

If both a sub and a multimethod are defined with the same name, yes,
the sub gets executed.

> I could macro "with" to "clone", and right now that's not really and
> issue since I'm not familiar with a "with" in P6.... If we add one
> later it shouldn't matter in that code, because it predates the new
> keyword, and the macro likely has higher precedence, but that sort of
> thing could bite someone in other circumstances, no?
>
> Or is this a non-issue?

What's the issue?

Luke

Michael Lazzaro

unread,
May 23, 2003, 2:14:37 PM5/23/03
to Luke Palmer, perl6-l...@perl.org

On Friday, May 23, 2003, at 10:58 AM, Luke Palmer wrote:

>> For example, if you have count(100), you're recursing through ~100
>> yields. Which means, unless I am sorely mistaken, that you have 99
>> saved coroutine states lying around, never to be used again, but
>> sucking up resources? How's it know to release them?
>
> Coroutine states are garbage collected, too. But there's still a lot
> of space overhead for currently executing (or could-be-executed)
> coroutines. Much more than there need be.

I guess what I'm asking is, how's it know when they can be garbage
collected? If you've just yielded out of a coro, what actually
"detaches" it? Here, for example:

>> coro count($n) {
>> return 1 if $n<=1;
>> yield $_ for <&_.clone($n-1)>;
>> yield $n;
>> }

You're yielding out of each of the cloned states, but not actually
detaching them. Are they detached by the <&_.clone($n-1)> going out of
scope? But since you're yielding from within the C<for>, does it
_ever_ go out of scope?

I suppose it would go out of scope when you finally completed the
loop... hmm...

> But then there's John's observation that yielding for the deeper calls
> actually increases the time complexity of some coroutine algorithms.

Yeah. That's nasty.

MikeL

John Macdonald

unread,
May 23, 2003, 1:11:36 PM5/23/03
to perl6-l...@perl.org
On Fri, May 23, 2003 at 09:53:36AM -0700, Austin Hastings wrote:
> There's nothing which says that yield couldn't update the toplevel
> object to contain the bottomlevel continuation, making a resume
> operation essentially costless, assuming no parameters are used by the
> callee.
>
> That is, if I say: C<yield $x> then I'm not using whatever the
> putative return value of yield is. And for obvious cases like
> yield-in-loop, the optimizer might note that this:
>
> yield $_ for coro(...);
>
> is basically yield(resume()) after the first pass. Accordingly,
> optimizing the CPS flow might essentially treat this as a I<code> snap
> and just store the continuation of resume().

Actually, the way I read the current proposal,
there is something that says you can't.

You can only have yield in a coro, so one coro
can't call out to something that will do a
yield on its behalf, because that function call
will automatically start up a new coroutine. So,
the only nested info in the coroutine continuation
can be sub-blocks in the coroutine, but not subroutine
calls of any sort. That is why Damian uses:

    coro traverse (Hash $tree) {
        $tree // return;
        if $tree{left} {
            yield $_ for <&_.clone($tree{left})>;
        }
        yield $tree{node};
        if $tree{right} {
            yield $_ for <&_.clone($tree{right})>;
        }
    }

Adding a variant of "coro" which I'll call "cosub"
fixes things. "cosub" would generally be the same as
"coro", but it doesn't do the automatic spawning of
a new coroutine if it is called as a subroutine.
It would give a runtime error if it is not being
called from a coro context; otherwise it would act
as a normal function, but it would be able to yield a
value to, and be resumed as part of, the operation of
the original coro.

That would let the traverse be rewritten as:

    cosub travsub (Hash $tree) {
        $tree{left} && travsub( $tree{left} );
        yield $tree{node};
        $tree{right} && travsub( $tree{right} );
    }

    coro traverse (Hash $tree) {
        $tree // return;

        travsub( $tree );
    }

(In this case, when you are 100 levels down in the
tree and next is called, there is only one coroutine
being resumed and it resumes right at the 100-deep
subroutine invocation; rather than there being 100
coroutines in action, 99 of whom are simply relaying
the next request down and the yield response back up.)
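
As a sketch of how the single-stack behaviour can work in practice, here's the
same travsub/traverse split using the third-party Python greenlet package
(assuming it is available); the helper is an ordinary recursive function, and
each value crosses the whole call stack in one switch:

    from greenlet import greenlet, getcurrent

    def travsub(tree):
        # an ordinary recursive helper, not itself a coroutine
        if tree is None:
            return
        travsub(tree["left"])
        getcurrent().parent.switch(tree["node"])   # "yield" from N levels down, in one hop
        travsub(tree["right"])

    def traverse(tree):
        producer = greenlet(travsub)
        node = producer.switch(tree)               # run until the first value comes back
        while not producer.dead:
            yield node
            node = producer.switch()               # resume directly at the suspended frame

    tree = {"node": 2,
            "left":  {"node": 1, "left": None, "right": None},
            "right": {"node": 3, "left": None, "right": None}}
    print(list(traverse(tree)))                    # [1, 2, 3]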

Along the lines of keeping simple things simple and hard things
possible, this retains the auto-coroutineify nature
of coro definitions so that simple iterations will
just work, while still keeping it possible to have
coroutines that deal with more complicated processes.

Just for interest, here is a traversal routine that
uses the bi-directional data transfer of the yield
and next operations. This traversal routine allows
the caller to do selective pruning of the traversal
in progress.

    cosub travsub (Hash $tree) {
        my( $p_left, $p_in, $p_right, $p_post ) = yield( "PRE", $tree{node} );
        $tree{left} && travsub( $tree{left} ) unless $p_left;
        yield( "IN", $tree{node} ) unless $p_in;
        $tree{right} && travsub( $tree{right} ) unless $p_right;
        yield( "POST", $tree{node} ) unless $p_post;
    }

    coro traverse (Hash $tree) {
        $tree // return;

        travsub( $tree );
    }

It can be used to build various other things. For example,
here is a coroutine that iterates over all of the values of a
tree within a specified range - it skips traversing the parts
of the tree that cannot contain an interesting value:

    coro rangeiter (Hash $tree, int $min, int $max ) {
        my $trav = traverse( $tree );
        my $p_post = 1;                             # we never want the POST visits
        my( $type, $node ) = $trav.next;
        while( defined $type ) {
            if( $type eq 'PRE' ) {
                my $p_left  = $node{val} <= $min;   # prune: left subtree lies entirely below the range
                my $p_right = $node{val} >= $max;   # prune: right subtree lies entirely above the range
                my $p_in    = $node{val} < $min || $node{val} > $max;
                ( $type, $node ) =
                    $trav.next( $p_left, $p_in, $p_right, $p_post );
            } elsif( $type eq 'IN' ) {
                yield $node;
                ( $type, $node ) = $trav.next;
            }
        }
    }

(Note that rangeiter resumes the traversal from 3
different places, and only one of them needs to pass
values through next to return from yield. This is
a lot of rope; use it to lasso steers rather than
hanging yourself.)

A similar sort of controlled traversal might be used
to manage something like Data::Dumper, so that it
could skip large chunks of a structure, but give
extremely fine detail on a specific area and coarse
detail on the parts of the data structure that are
more closely connected to it.
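
For what it's worth, this bi-directional next/yield protocol maps quite
directly onto modern Python's generator send(); here's a rough sketch of the
pruned range traversal in those terms (the tree layout and flag order are
assumptions of the sketch, and the POST phase is left out for brevity):

    def traverse(tree):
        if tree is None:
            return
        # report the node before descending; the caller sends back pruning flags
        p_left, p_in, p_right = yield ("PRE", tree)
        if tree["left"] is not None and not p_left:
            yield from traverse(tree["left"])
        if not p_in:
            yield ("IN", tree)
        if tree["right"] is not None and not p_right:
            yield from traverse(tree["right"])

    def rangeiter(tree, lo, hi):
        trav = traverse(tree)
        try:
            phase, node = next(trav)
            while True:
                if phase == "PRE":
                    flags = (node["val"] <= lo,               # prune the left subtree
                             not (lo <= node["val"] <= hi),   # skip this node itself
                             node["val"] >= hi)               # prune the right subtree
                    phase, node = trav.send(flags)            # answer the yield
                else:                                         # "IN": node is in range
                    yield node["val"]
                    phase, node = next(trav)
        except StopIteration:
            return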

Piers Cawley

unread,
May 23, 2003, 2:25:42 PM5/23/03
to John Williams, perl6-l...@perl.org
John Williams <will...@tni.com> writes:

> On Thu, 22 May 2003, Piers Cawley wrote:
>> One of the cute things that Common Lisp has is multiple return values
>> (and I don't just mean returning a list). In most cases you just use
>> the 'main' return value, but in others you would do
>>
>> (multiple-value-list (a-coroutine arg1 arg2))
>>
>> And get a list back along the lines of
>>
>> ((usual results) #'a-coroutine-handle)
>>
>> Point is that, unless you *know* you're going to need the secondary
>> values returned by a function then they're completely invisible to the
>> caller. It seems to me that this might be a good approach with Perl:
>
> How about using properties for that?

Gah! Of course.

--
Piers

Mark J. Reed

unread,
May 23, 2003, 3:00:38 PM5/23/03
to perl6-l...@perl.org

On 2003-05-22, Piers Cawley wrote:
> One of the cute things that Common Lisp has is multiple return values
> (and I don't just mean returning a list).

s/cute/annoying/. The first time I used an application with a
CL scripting API, I found I needed a value that was part of such
a multiple-value-list. Only I'd never heard of such a thing.
I had no idea what it was, and the API doc didn't mention the term
"multiple-value-list" or "multiple value" or anything that might
have pointed me in the right direction. All I knew was that when
I printed the value out interactively, I got each element on a
separate line, but when I used the value in a function I only
got the first one. Much poring over of documentation ensued, and
I eventually found the m-v-l function, but why the function
didn't just return a plain old garden-variety Lisp list is beyond me.

> Point is that, unless you *know* you're going to need the secondary
> values returned by a function then they're completely invisible to the
> caller.

And even if you do know you need them they're invisible until you stumble
across the appropriate "reveal invisible" spell. :)

--
Mark REED | CNN Internet Technology
1 CNN Center Rm SW0831G | mark...@cnn.com
Atlanta, GA 30348 USA | +1 404 827 4754

Austin Hastings

unread,
May 23, 2003, 3:17:03 PM5/23/03
to Michael Lazzaro, Luke Palmer, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Friday, May 23, 2003, at 10:58 AM, Luke Palmer wrote:
>
> >> For example, if you have count(100), you're recursing through ~100
> >> yields. Which means, unless I am sorely mistaken, that you have
> 99
> >> saved coroutine states lying around, never to be used again, but
> >> sucking up resources? How's it know to release them?
> >
> > Coroutine states are garbage collected, too. But there's still a
> lot
> > of space overhead for currently executing (or could-be-executed)
> > coroutines. Much more than there need be.
>
> I guess what I'm asking is, how's it know when they can be garbage
> collected? If you've just yielded out of a coro, what actually
> "detaches" it? Here, for example:

If you're going to allow Dan Sugalski's example (which started this
thread, I think) then you can't GC the context until the thread ends.
After all, the next statement at global scope may reinvoke this
coro-context.

This is a good argument for context-as-object, if we care about that
sort of thing. Frankly, I don't think we do.


> >> coro count($n) {
> >> return 1 if $n<=1;
> >> yield $_ for <&_.clone($n-1)>;
> >> yield $n;
> >> }
>
> You're yielding out of each of the cloned states, but not actually
> detaching them. Are they detached by the <&_.clone($n-1)> going out
> of scope? But since you're yielding from within the C<for>, does it
> _ever_ go out of scope?
>
> I suppose it would go out of scope when you finally completed the
> loop... hmm...

See above.

Also, if you've got a big recursive coro, you don't want the context
for level N to expire until N-1 expires. So we've got some kind of coro
continuations, or maybe closures: exiting a function via C<yield>
cannot permit any of its cocontexts to expire.

Might as well turn it into a feature...

=Austin

Adam D. Lopresto

unread,
May 23, 2003, 1:37:17 PM5/23/03
to perl6-l...@perl.org
I'd like to mention that if yield breaks out of all nested coroutines, then
some very interesting pipeline stuff becomes either impossible or much harder.
Specifically, it means that you can't filter the returned values at all.

    coro odds_only (Iter &i) {
        for i() {
            yield $_ if $_ % 2;
        }
    }

    coro randoms () {
        loop {
            yield int(rand 10);
        }
    }

    for odds_only(&randoms) {
        # $_ is a random odd number
    }

Basically, coros should compose in a way that doesn't damage either of them.
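
In modern Python generator terms the composition looks like this, and it works
precisely because each stage only ever touches its own source:

    import random
    from itertools import islice

    def randoms():
        while True:
            yield random.randint(0, 9)

    def odds_only(source):
        for x in source:
            if x % 2:
                yield x

    for x in islice(odds_only(randoms()), 5):
        print(x)            # five random odd numbers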

--
Adam Lopresto
http://cec.wustl.edu/~adam/

.-""-.
.--./ `. .-""-.
.' `.__,"\ \ ___ .' _ \
: _ _ : \ `" "' ,' \ /
.-| _ _ |-. Y Y `-'
((_| (O)(O) |_)) | _ _ |
`-| .--. |-' | (o)(o) |
.-' ( ) `-. / __ \
/ .-._`--'_.-. \ | /# \ |
( (n uuuu n) )| \__/ |
`.`"=nnnnnn="'.' \ / _
`-.______.-' _ `.____.' _ / )-,
_/\| |/\__ .' `-" "-' (_/ / )
.w'/\ \__/ /\w/ |_/ / )
.-\w(( \/ \/ ))| | `-(_/
/ |ww\\ \ / //w| | | \
/ |www\\/`'\//ww| | |\ \
/ |wwww\\ //www| | | \ \

John Macdonald

unread,
May 23, 2003, 2:29:40 PM5/23/03
to Michael Lazzaro, perl6-l...@perl.org
On Fri, May 23, 2003 at 10:49:15AM -0700, Michael Lazzaro wrote:
> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something
> most people have seriously dealt with, and the name is bloody
> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
> rather use a more descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}
>
> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.

I don't fully agree with this.

Compare coroutine and subroutine.

A subroutine is subordinate - it gets a job, does it,
returns the result and it is done. The "mainline"
caller of a subroutine has no such constraint on its
actions, it can be in the middle of a computation,
possibly in the middle of a number of subroutine
calls when it calls the subroutine. It can call the
subroutine again from different locations during its
operation, while each time the subroutine starts from
scratch and runs to completion.

Coroutines are cooperative - each takes turns doing
a bit of the process and each can treat the other as
a subroutine that can be called from multiple places
within the context of its entire operation. They
have equal status, and each can be written as if it
were the mainline function, invoking the other as
a subroutine. (So they are equal by virtue of each
one believing it is superior. :-)

I do agree with Michael's viewpoint somewhat however
in terms of how coroutines are being designed for
Perl 6.

The design before this discussion started treated
coroutines as being useful primarily for iterators
that could be used from an arbitrary calling context,
and the design was heavily skewed in that direction.
The coroutine was capable of acting as an input
handle that would execute code every time you did a
"read". But coroutines can also be used as data
sinks instead of sources (like an output handle),
and for bi-directional activity.

The use of next and yield keywords as different
operators hides the equal status cooperative potential
of coroutines and emphasises the unidirectional
expectation that was inherent in that design. To a
certain extent, that is all right. Most simple cases,
like iterators, don't need the full cooperative
potential of being mutually superior.

Using a single operator (such as the "resume" that I
am used to) for both sides of the coroutine transfer
of control allows the programmer to more easily view
a collection of coroutines as equal partners, each
of which treats the other as a subroutine that can
be called to handle a function.

This is especially important for larger scale use
of coroutines that really are equal cooperating
partners in the computations. Think of a parser,
an optimizer, a code generator, an interpreter,
and a stack frame register, for example.

The parser generates a chunk of parse tree and calls
a "subroutine" to deal with it. The code generator
calls a "subroutine" to get a parse tree, generates
code for it, and then calls another "subroutine"
to execute that code. The interpreter calls a
"subroutine" to get a block of code, files it and
links it, and executes it. Both the code generator
and the interpreter call a "subroutine" to create
and manage stack and context information for a block
of code. Neither the parser nor the code generator
knows or cares whether the "subroutine" they are
calling is each other or is maybe it is the optimizer
sitting between them.

Now, coroutines are not the only way to do these sorts
of things. The stack frame might be an object with a
variety of methods instead of a coroutine for example.
(We didn't have objects when I was your age, we had
to use coroutines and execute them uphill both ways
in the snow. :-) There are also threads or disjoint
processes, but these have their own overheads;
something in between those chain saws and the iterator
nail file might be useful for certain purposes.

Austin Hastings

unread,
May 23, 2003, 3:32:46 PM5/23/03
to John Macdonald, perl6-l...@perl.org

--- John Macdonald <j...@algate.perlwolf.com> wrote:

The point here is that he is doing a recursive coroutine, which is
creating multiple coroutine contexts - one for each level of descent
into the tree. So you've got:

    traverse.1.yield(
      traverse.2.yield(
        ...
          traverse.N.yield()

N levels of nested coroutine depth, one for each clone.

The object here is to avoid having to pay the price of resuming N
coroutine contexts (essentially N times the cost of a full VM context
switch, plus a surcharge I'm sure).

So if the optimizer can figure this out, and figure out where the other
contexts are, then it could short-circuit the resume process and move
much of the overhead to the yield code.

IOW, the post-cocall processing would look at the context that just got
stored (it has to be accessible to both caller and callee) and set the
resume address of the yield it is preparing to execute to link directly
to the next level down.

If every level does this, then you've effectively snapped the yield
pointers so that you resume directly to the working node of the yield
stack.

Drool, slobber. That's a pretty interesting idea.

=Austin

Simon Cozens

unread,
May 23, 2003, 3:33:19 PM5/23/03
to perl6-l...@perl.org
austin_...@yahoo.com (Austin Hastings) writes:
> If you're going to allow Dan Sugalski's example (which started this
> thread, I think) then you can't GC the context until the thread ends.
...

> Might as well turn it into a feature...

Pardon me for being grouchy again without offering better suggestions (but as
I've mentioned I don't understand the thread well enough to do so) but wasn't
one of the purposes of splitting the language and implementation mailing lists
to discourage people from using implementation arguments to justify design
issues?

Why not let's first work out what the best model is for the language, then
work out how to implement it; not the other way around. I'm sure p6i contains
enough smart folk to be able to support any sane design decision that p6l
comes up with.

--
teco < /dev/audio
- Ignatios Souvatzis

Austin Hastings

unread,
May 23, 2003, 3:38:34 PM5/23/03
to Simon Cozens, perl6-l...@perl.org

Granted, but this is really more of a language issue: "What is the
scope of a coroutine context?" It's just cloaked in keano low-level
terms because we can't help ourselves.

=Austin

Michael Lazzaro

unread,
May 23, 2003, 5:49:52 PM5/23/03
to John Macdonald, perl6-l...@perl.org

On Friday, May 23, 2003, at 11:29 AM, John Macdonald wrote:
> On Fri, May 23, 2003 at 10:49:15AM -0700, Michael Lazzaro wrote:
>> 2) I am going to be a heretic, and state publicly that I have always
>> thought "coroutine" to be a damn unhelpful name for the behavior of
>> "yieldable, resumable subroutines". "Coroutines" are not something
>> most people have seriously dealt with, and the name is bloody
>> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
>> rather use a more descriptive phrase, perhaps something like:
>>
>> sub fibs (...) is reentrant {...}
[snip]

> Using a single operator (such as the "resume" that I
> am used to) for both sides of the coroutine transfer
> of control allows the programmer to more easily view
> a collection of coroutines as equal partners, each
> of which treats the other as a subroutine that can
> be called to handle a function.
>
> Coroutines are cooperative - each takes turns doing
> a bit of the process and each can treat the other as
> a subroutine that can be called from multiple places
> within the context of its entire operation. They
> have equal status, and each can be written as if it
> were the mainline function, invoking the other as
> a subroutine. (So they are equal by virtue of each
> one believing it is superior. :-)

See, this is about where I think most people's brains start to melt.

Thinking aloud: Whereas the name "coroutine" is supposed to invoke
feelings that neither routine is subordinate to the other, you can just
as easily say that _both_ are subordinate to the other, and explain it
equally well that way. A sub is always subordinate to its caller,
even if the caller was called by the subordinate!

And shortening it to "coro" doesn't do a lot to help matters, when it
comes to explaining it to people. :-)

That's why I'm intrigued by having much of the _functionality_ of
coroutines, but not terribly fond of _naming_ that functionality
"coroutine".[*] In actual usage, it's so similar to
continuations/threads that having a separate C<coro> feature seems
grafted on -- what we really would want, in an ideal world, would be to
magically unify *all* of those philosophies, as much as possible.

    sub fibs (...) is reentrant {
        my $v;
        ...
        yield $v;
    }

I mean, when we say "coroutine", what do we really want, here?
Reentrant subroutines? Lower-impact threading?
Iterators/Sources/Sinks? If the word "coroutine" had never been
invented, how would we be describing our end goal?

MikeL


[*] As an example, I've never sat down and said "I wonder how I can
design this feature using a coroutine". I have often, however, said
"gee, what I really want is for this stupid sub to be reentrant, so
that I can use it as an iterator/source/sink." And even in complex
cases -- where the routines really are cooperative amongst themselves
-- coroutines are simply another form of threading, yes?

Damian Conway

unread,
May 23, 2003, 6:34:09 PM5/23/03
to perl6-l...@perl.org
Luke Palmer wrote:

> You know, I actually think this is better than yielding from deep in
> the call stack. It's clear what's happening when, and will likely be
> a lot easier to debug.

I agree, despite John's cogent arguments in favour of action-at-a-distance
yielding and two forms of coroutine.


> Together with something I'll write right off the bat:
>
> macro cocall(Str $sub, Str $args) {
> "yield \$_ for <($sub).clone($args)>"
> }

Nice. But macros can't magically parse their arguments for you like that:
You'd need to write either something like:

    macro yieldall(Str $sub, Str $args)
        is parsed( rx:w/ (<ident>) (<Perl.arglist>)/ )
    {
        "yield \$_ for <($sub).clone($args)>"
    }

or (better still):

    macro yieldall(Coro $sub, *@args)
    {
        { yield $_ for <$sub.clone(*@args)> }
    }

Damian

Damian Conway

unread,
May 23, 2003, 6:38:14 PM5/23/03
to perl6-l...@perl.org
Piers wrote:

>>>A coroutine is declared with the C<coro> declarator:
>>
>>I think you can arrange things so that you don't need the 'coro'
>>declarator by careful definition of your yield macro and suitable
>>reflective capabilities.

To which Luke replied:

> Uh huh... but what if we I<want> the C<coro> declarator? I do,
> personally.

Larry has consistently used explicit declarators for distinct subclasses of
<Code> object with distinct behaviours:

    method foo {...}
    multi foo {...}
    macro foo {...}
    rule foo {...}
    sub foo {...}

While I suspect Piers is right -- that we *could* avoid the extra keyword -- I
think a C<coro> declarator is more in keeping with that tradition.

Damian

Damian Conway

unread,
May 23, 2003, 6:46:47 PM5/23/03
to perl6-l...@perl.org
Dulcimer wrote:

>>>Seriously, though, we have other cases where there's more than one
>>>keyword for the same thing, don't we? Can we afford another for
>>>this case?
>>
>>Sure, but the huffman value of C<with> is too great to spend it on
>>something like this.
>
> I did think of that, and I really do agree.

So do I. Which is why I left it as C<clone>, despite my musings.


> Which brings me to another point.
>
> I understand that you can override even keywords in perl.
> For example (and PLEASE correct me on this) I could theoretically
> create an operator named "print" that compares operands numerically,
> were I sufficiently deranged to believe this to be a good idea.
>
> That being the case, what's the precedence of terms in P6?
> I mean, macros are executed immediately, so that puts them up near if
> not on the top, right? Then subs before methods? Operators by to
> whatever you liken them?

The precedence is similar to that of Perl 5. Macros probably parse at the same
level of precedence as the named unary operators. The precedence of your
putative C<print> operator would be determined by its C<is equiv>, C<is
tighter>, or C<is looser> property.

Damian

Damian Conway

unread,
May 23, 2003, 6:54:21 PM5/23/03
to perl6-l...@perl.org
John Williams wrote:

> On Fri, 23 May 2003, Damian Conway wrote:
>
>>=item C<method Coro::clone(*@args) {...}>
>>
>>which creates and returns a copy of the implementation (but not the current
>>state) of the C<Coro> object, having first prebound the copy's parameters to
>>the arguments passed to C<clone>.
>>
>>=back
>
> Isn't that essentially what .assuming() does by currying normal
> subroutines? And if so, why not use the same name for coroutines?

I had the same thought myself, but my understanding of C<.assuming> is that it
returns an argument-inserting wrapper around the original C<Code> object, not
a separate copy of it.

The other problem is that I would expect C<clone> to pitch an exception if the
argument list supplied to it didn't provide all the necessary parameters of
the coroutine. We're not currying a coroutine during a C<clone>; we're
"initializing" it.

Damian


John Macdonald

unread,
May 23, 2003, 5:53:09 PM5/23/03
to Damian Conway, perl6-l...@perl.org
On Sat, May 24, 2003 at 08:34:09AM +1000, Damian Conway wrote:
> Luke Palmer wrote:
>
> >You know, I actually think this is better than yielding from deep in
> >the call stack. It's clear what's happening when, and will likely be
> >a lot easier to debug.
>
> I agree. Despite John's cogent arguments in favour of action-at-a-distance
> yielding, and two forms of coroutine.

I tend to think of most coroutines as a data source
or a data sink (with the possibility of occasional
control activities using the reverse data channel)
or as a filter (which is simultaneously a data sink to
one coroutine and a data source to another).

The connection between two coroutines is essentially a
read in the data sink being matched up with a write in
the data source. The action-at-a-distance of having
a resume (next/yield) occur in a nested subroutine
is the same as sharing a file handle with a nested
subroutine. I don't see a lot of value in making it
possible for a read handle (next) to be available to an
entire nested subroutine hierarchy, while a write
handle (yield) can only be used in the top level
routine that "open"s it. That's adequate for simple
cases but it makes harder things unworkable.

I don't know about having two forms of coroutine -
I think that having the bi-directional form of next
and yield is sufficient, just confusing. (What does
it mean if one coroutine does a next on another, and
in the course of the second's operation it does a next
on the first instead of a yield? Using two different
operators prevents some things by requiring that there
be a master/slave relationship between each pair -
philosophically if not in fact [that depends on the
answer to the next/next question I just posed].)

Damian Conway

unread,
May 23, 2003, 7:32:29 PM5/23/03
to perl6-l...@perl.org
Michael Lazzaro wrote:

> In the spirit of questioning everything, however, I'm going to (mildly)
> question some things. [*]

> [*] (I think Simon is right -- we need to better identify what problem
> we think we're solving here, and why 99% of the world should care.)

I'm trying to solve problems of iteration, pipelined processing, and
especially of parallel traversal of complex data structures. For example,
John's pipelined decommenter, or the tree-equivalence problem (see below).


> 1) My impression is that yielding and resuming a coroutine is by
> necessity a relatively expensive operation. As such, given any
> arbitrary recursive coroutine:
>
>> coro count($n) {
>> return 1 if $n<=1;
>> yield $_ for <&_.clone($n-1)>;
>> yield $n;
>> }
>
>
> are there really routines in the world for which a recursive
> implementation isn't _substantially_ more expensive than a decent
> alternative solution?

That really depends on how good your implementation of recursion is. You can
convert any recursive problem to an iterative one (and vice versa). The
performance of each depends on whether keeping a context stack manually is
more efficient than having the interpreter keep a call stack automatically.

The real issue as far as I'm concerned is this: for a particular problem, is
recursion or iteration the more "natural" solution?

And the problem with that is that sometimes a *combined* approach is what you
really need, and that's much harder to achieve without iterators of some kind.

For example, consider the problem of determining whether two BST trees store
the same sequence of values. You need recursion to extract the sequences and
then iteration to compare them:

    @seq1 = inorder($tree1);
    @seq2 = inorder($tree2);

    for zip(@seq1,@seq2) -> $node1, $node2 {
        next if $node1 ~~ $node2;
        print "differ\n";
        exit;
    }
    print "same\n";

The problem is that, with ordinary recursive implementations of C<inorder>,
you have to traverse each tree completely before you can start to compare
them. That's horrendously inefficient if each tree has a million nodes, but
the very first nodes differ.

So instead, you use a recursive coroutine, like the C<traverse> coro that John
and I have been batting back and forth. Then your comparison can pull one node
at a time from each tree _whilst_still_using_recursive_traversal_. And your
comparison becomes:

    $iter1 = &traverse.clone($tree1);
    $iter2 = &traverse.clone($tree2);

    for zip(<$iter1>,<$iter2>) -> $node1, $node2 {
        next if $node1 ~~ $node2;
        print "differ\n";
        exit;
    }
    print "same\n";


Now, of course, you *could* get the same effect without a coroutine or
recursion, by creating a regular subroutine that can iteratively
inorder-traverse multiple trees in parallel. But that subroutine is
*significantly* more complicated to write and maintain than:

    coro traverse (Hash $tree) {
        $tree // return;

        if $tree{left}  { »yield« <&_.clone($tree{left})>  }
        yield $tree{node};
        if $tree{right} { »yield« <&_.clone($tree{right})> }
    }
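
The same shape in modern Python, for comparison: two recursive generators
consumed in lockstep, so the walk stops at the first mismatch (the dict-based
tree layout is only an assumption of the sketch):

    from itertools import zip_longest

    def inorder(tree):
        if tree is None:
            return
        yield from inorder(tree["left"])
        yield tree["node"]
        yield from inorder(tree["right"])

    def same_sequence(tree1, tree2):
        missing = object()                  # marks the shorter traversal running out
        for a, b in zip_longest(inorder(tree1), inorder(tree2), fillvalue=missing):
            if a != b:
                return False                # bail out without finishing either walk
        return True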


> For example, if you have count(100), you're recursing through ~100
> yields. Which means, unless I am sorely mistaken, that you have 99
> saved coroutine states lying around, never to be used again, but sucking
> up resources? How's it know to release them?

This is an example where iteration is obviously the correct approach.
I'm sure Luke would never actually implement his counter coroutine
recursively, when he could just write:

coro count (Num $n) { »yield« 1..$n }


> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something most
> people have seriously dealt with, and the name is bloody impenetrable.
> Rather than use C<coro> or even C<coroutine>, I'd much rather use a more
> descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}

Semantic note: "re-entrant" means "every call (including overlapping and
recursive ones) is independent". Coroutines are an example of something that
is specifically *not* re-entrant.

"Coroutine" is the accepted jargon for these things. And it's no worse than
(say) "closure", which the Perl community has become used to.

That being said, I'm sure Larry would carefully consider any better (i.e. less
technical) suggestions.


> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.

Curiously, I'm specifically trying to move coroutine syntax *away* from thread
syntax, which has always struck me as far too low-level. I think of coroutines
as a syntactic (and semantic) convenience layer around threading, much as loop
constructs are a convenience layer around C<goto>.

Damian

Piers Cawley

unread,
May 23, 2003, 8:23:59 PM5/23/03
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

I've tended to think of coro as distinct from the rest simply because
I can see how to implement them without having to use a new
declarator, but I can't see how to do the same thing with the other
types.

BTW, speaking of declarators and types of Code, I still want to be
able to write:

    class String {

        method open($path:) {
            File.open($path) or fail $!
        }

        method open($path: &block) {
            my $fh = $path.open;
            for <$fh> -> { &block($_) }
            close $fh;
        }
    }

...

"/etc/passwd".open( -> $pwent { ... } );

Is there any chance of getting this? Or will I have to go the whole
multimethod hog? (Not that it shouldn't be possible to set up
appropriate macros of course)

--
Piers

Piers Cawley

unread,
May 23, 2003, 8:26:15 PM5/23/03
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

Presumably all bets about macro precedence are off in the presence of
C<is parsed>?


--
Piers

John Macdonald

unread,
May 23, 2003, 7:43:28 PM5/23/03
to Piers Cawley, Damian Conway, perl6-l...@perl.org
On Sat, May 24, 2003 at 01:23:59AM +0100, Piers Cawley wrote:
> Damian Conway <dam...@conway.org> writes:
> > While I suspect Piers is right -- that we *could* avoid the extra
> > keyword -- I think a C<coro> declarator is more in keeping with that
> > tradition.
>
> I've tended to think of coro as distinct from the rest simply because
> I can see how to implement them without having to use a new
> declarator, but I can't see how to do the same thing with the other
> types.

Well, method doesn't need a declarator in Perl 5 but
Larry added it anyway, so being able to implement a
functional form without needing a special declarator
is not sufficient reason to not provide one.

Michael Lazzaro

unread,
May 23, 2003, 9:19:21 PM5/23/03
to Damian Conway, perl6-l...@perl.org

On Friday, May 23, 2003, at 04:32 PM, Damian Conway wrote:
> Semantic note: "re-entrant" means "every call (including overlapping
> and recursive ones) is independent". Coroutines are an example of
> something that is specifically *not* re-entrant.

$#@%ing $@#^. Yes, sorry. Word overlap. Education long time gone.

> "Coroutine" is the accepted jargon for these things. And it's no worse
> than (say) "closure", which the Perl community has become used to.

Well, yes, but... umm...

My concern is not particularly the jargon of "coroutine", or "closure",
or anything else by itself, but the possible aggregation of
state/closure/coroutine/continuation/thread/whatzit into the language
-- that's what's worrying me. Any one of those concepts is "hard" in
both jargon and practice, and I wonder what will be the straw that
breaks the Camel's back, so to speak.

What bugs me about it is (potentially) the sheer volume of secret words
you may need to know to play along.[*] All these things -- from
closures to coroutines to continuations to threads -- are variations of
the same central theme, which boils down to maintaining the state in
one location while something else happens in another location. I
submit that Perl needs a single, unified way to accomplish that, for
some negotiable definition of "single" and "unified."

Don't get me wrong, I like the idea of including coroutine
_capabilities_ -- I'm just pondering whether we are maintaining an
artificial separation from similar, possibly overlapping concepts.

> Curiously, I'm specifically trying to move coroutine syntax *away*
> from thread syntax, which has always struck me as far too low-level. I
> think of coroutines as a syntactic (and semantic) convenience layer
> around threading, much as loop constructs are a convenience layer
> around C<goto>.

That's interesting, because I sortof agree with that (threads being too
low-level). Perhaps I should turn my question around; is there a way
to move thread syntax closer to coroutine syntax, for common cases? :-)

Or to move coroutine syntax to interrelated but still-higher-level
iterator, source/sink, and thread/parallelism syntaxes, since in
practice coroutines are almost always(?) used in those rather narrow,
defined circumstances?

MikeL


[*]An explanatory aside, FWIW:

I've loathed CS proper ever since college, in which -- even at my
undisputedly top-tier, pretty damn good school -- the entire CS
curriculum could be largely boiled down to:
1) Enroll in a CS class
2) Buy professor's $70 textbook, sold only at the campus
bookstore, describing the new language they just invented.
3) Forget everything you thought you learned last semester.
4) Spend a few weeks learning the words your professor uses
to describe the same damn concepts you learned last semester,
by different names.
5) Spend the rest of the semester coding problems you coded before,
using an experimental language whose only working compiler
consists of a flannel-clad grad student TA.
6) Repeat until graduated or insane.

That isn't meant to be a slam against professors -- just the Babelesque
state of CS education in general, at least as I experienced it. YMMV.
;-)

Luke Palmer

unread,
May 24, 2003, 1:12:24 AM5/24/03
to pdca...@bofh.org.uk, dam...@conway.org, perl6-l...@perl.org
Piers comduced:

> BTW, speaking of declarators and types of Code, I still want to be
> able to write:
>
> class String {
>
> method open($path:) {
> File.open($path) or fail $!
> }
>
> method open($path: &block) {
> my $fh = $path.open;
> for <$fh> -> { &block($_) }
> close $fh;
> }
> }
>
> ...
>
> "/etc/passwd".open( -> $pwent { ... } );
>
> Is there any chance of getting this? Or will I have to go the whole
> multimethod hog? (Not that it shouldn't be possible to set up
> appropriate macros of course)

Get what?

This is valid Perl 6, and it does what you expect:

    method Str::open() {
        open self err fail;
    }
    method Str::open(&block) {
        for <open self> { block($_) };
    }

That is, just tacking methods on to the Str class without Str's
express written consent.

Luke

Luke Palmer

unread,
May 24, 2003, 1:16:42 AM5/24/03