
Coroutines


John Macdonald

May 21, 2003, 11:01:51 PM
to perl6-l...@perl.org
The last weekly perl6 summary listed a discussion
on coroutines, which led me to scan the archive and
to subscribe.

One item that I gather from the discussion is that the
current plan is to have a coroutine be resumed simply
by calling it again. This, I think, is a mistake.
Coroutines should be identified by an object that
represents the execution state of the coroutine.

One of the most useful uses of coroutines I've had
in the past is for chaining together a data flow.
This is similar to a Unix pipeline, except of
course that it is being done inside a single program
instead of with separate programs. Each coroutine
would resume its predecessor coroutine whenever it
needed additional data to process, and would resume
its successor coroutine whenever it had prepared
data ready to be passed on.
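[Editorial illustration: this resume-for-more-data chaining can be sketched with Python generators, where each stage pulls from its predecessor on demand. The stage names (read_lines, decomment, upcase) are invented here for illustration, not from the original post.]

```python
def read_lines(lines):
    """Source stage: yields one line at a time."""
    for line in lines:
        yield line

def decomment(predecessor):
    """Filter stage: drops comment lines, passes the rest downstream."""
    for line in predecessor:
        if not line.lstrip().startswith("#"):
            yield line

def upcase(predecessor):
    """Another filter stage, to show that stages compose freely."""
    for line in predecessor:
        yield line.upper()

# Build the chain back-to-front, like a shell pipeline.
pipeline = upcase(decomment(read_lines(["# comment", "code", "more code"])))
print(list(pipeline))  # ['CODE', 'MORE CODE']
```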

In either a shell pipeline or a coroutine chain,
you might use the same subprocess more than once.
A pipeline might use the same program more than once
(tr or sed or awk or perl are all likely candidates);
a coroutine chain might use a single subroutine more
than once (perhaps a symmetric encryption routine,
a grep routine, or a file insertion routine).

If you resume by simply calling the subroutine again,
which one are you calling? For that matter, why
should one routine necessarily know which routine
will be the next one? If a coroutine is passing
lines of text to its successor, it might wish to
insert a coroutine to read the contents of a file
between itself and its successor (maybe it just found
a #include directive) - why should the successor have
to know to call/resume a different function for a
while until the included file is fully read, and then
go back to call/resuming this original predecessor?

So, using a token (object) to represent the coroutine
provides not only the ability to have the same subroutine
active in two coroutines at the same time, but also
means that one coroutine need not know the actual
subroutine that it is exchanging data with.
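[Editorial illustration: Python generators give a loose model of the token idea. Calling the generator function returns a fresh state object, so the same routine can be active twice without the instances interfering. The name counter is invented here.]

```python
def counter(start):
    """One routine; each call produces an independent execution state."""
    n = start
    while True:
        yield n
        n += 1

a = counter(0)    # one execution state
b = counter(100)  # an independent state of the same routine
print(next(a), next(a), next(b), next(a))  # 0 1 100 2
```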

(This idea of a pipeline of data massaging routines
also reminds me of the perl5 Filter package.)

Restricting yourself to an iterator viewpoint of
coroutines tends to emphasize routines that generate
a sequence of values while deemphasizing the equally
useful set of routines that consume a sequence
of values.
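[Editorial illustration: a consumer coroutine, the underemphasized half. It receives a sequence of values pushed into it rather than producing one; Python's generator.send() makes the pattern concrete. The averager example is invented here.]

```python
def averager():
    """Consumer coroutine: accepts values, yields the running average."""
    total, count = 0.0, 0
    while True:
        value = yield (total / count if count else None)
        total += value
        count += 1

sink = averager()
next(sink)            # prime: run to the first yield
sink.send(10)
sink.send(20)
print(sink.send(30))  # 20.0
```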

The coroutine package I used was a subroutine library
for the B language (the precursor to C). It used
an explicit routine to create a new coroutine - it
allocated a private stack, remembered the coroutine
that did the creation, and then invoked the top level
subroutine with its initial arguments.

There were two types of relationships between
coroutines managed by the library. A parent
process created and managed a coroutine or group of
coroutines. A group of coroutines acted as siblings.

Between siblings, the resume call:

ret = resume( coroutine, value );

would resume a selected coroutine. The value
parameter passed in by this routine would be the
ret value returned to the coroutine that was being
resumed. (In the B subroutine library it was a single
scalar value, but in Perl it could be either a scalar
or a list.)

A coroutine would often use the function caller()

called_from = caller();

to find which coroutine invoked it, so that it would
know which coroutine to resume when it had finished a
unit of processing.

A parent routine used create() and reattach() to manage
its children, while a child used detach() to revert
to its parent.

ret = create( size, function, args... );
child = caller();

ret = reattach( achild, value );

ret = detach( value );
parent = caller(); # can do this right away
...
parent = invoker(); # or can do this later
# possibly in a resume'd sibling

The routines detach() and reattach() were essentially
the same as resume, but had slightly different
side effects. detach() didn't take an argument to
specify the coroutine to be resumed but used the
remembered parent. The reattach() routine would
resume the specified child, setting its remembered
parent to the routine (possibly different from the
original creator) that reattached it. (The resume()
function, as well as remembering the caller's identity
in case the callee uses the caller() function to find
it, also sets the parent of the callee to be the same
as the parent of the caller.) The create() function
would allocate a new stack of the specified size (perl
doesn't believe in such fixed limits, of course), set
up the stack frame to be consistent with a normal call
to the specified function with the given arguments,
and then do the same action as reattach() to that
stack frame.

The function caller() would return the value of
the caller (the last coroutine of any sort to resume
this coroutine in any way - that includes create(),
detach(), or reattach, as well as resume()).
The function invoker() would return the value of
the parent process that directly or indirectly did
a create() or reattach() of this process.

Having separate syntax for the creation and resumption
of a coroutine provides much the same advantage
as the C library's separation of fork and exec.
Keeping them separate allows a much more flexible
set of ways of using them.
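[Editorial illustration: the create()/resume()/caller() trio might be sketched with Python generators. This is a toy model with invented names; real B coroutines ran on private stacks and could be resumed from any call depth, which plain generators cannot do.]

```python
class Coro:
    """Toy token-object coroutine, loosely modeling create()/resume()/caller()."""
    _current = None                     # whoever is running now (None = main program)

    def __init__(self, func, *args):
        # create(): build the state object and remember the creator.
        self.caller = Coro._current
        self._gen = func(self, *args)   # pass the token in so the body can see it
        self._started = False

    def resume(self, value=None):
        # resume(): record the resumer (what caller() would report), then
        # transfer control; value becomes the result of the pending yield.
        # The value passed to the very first resume is dropped in this sketch.
        self.caller = Coro._current
        prev, Coro._current = Coro._current, self
        try:
            if not self._started:
                self._started = True
                return next(self._gen)
            return self._gen.send(value)
        except StopIteration:
            return None                 # a finished coroutine keeps yielding None
        finally:
            Coro._current = prev

def worker(self, label):
    got = yield label + " ready"        # hand a value back to the resumer
    yield label + " got " + str(got)

w = Coro(worker, "w1")
print(w.resume())        # 'w1 ready'
print(w.resume("data"))  # 'w1 got data'
```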

Luke Palmer

May 22, 2003, 7:15:49 AM
to j...@perlwolf.com, perl6-l...@perl.org
> The last weekly perl6 summary listed a discussion
> on coroutines, which led me to scan the archive and
> to subscribe.
>
> One item that I gather from the discussion is that the
> current plan is to have a coroutine be resumed simply
> by calling it again. This, I think, is a mistake.
> Coroutines should be identified by an object that
> represents the execution state of the coroutine.

Hooray! :)

> [snippity snip]


>
> So, using a token (object) to represent the coroutine
> provides not only the ability to have the same subroutine
> active in two coroutines at the same time, but also
> means that one coroutine need not know the actual
> subroutine that it is exchanging data with.
>
> (This idea of a pipeline of data massaging routines
> also reminds me of the perl5 Filter package.)
>
> Restricting yourself to an iterator viewpoint of
> coroutines tends to emphasize routines that generate
> a sequence of values while deemphasizing the equally
> useful set of routines that consume a sequence
> of values.

I'm listening...

> [more snipping]


>
> The function caller() would return the value of
> the caller (the last coroutine of any sort to resume
> this coroutine in any way - that includes create(),
> detach(), or reattach, as well as resume()).
> The function invoker() would return the value of
> the parent process that directly or indirectly did
> a create() or reattach() of this process.
>
> Having separate syntax for the creation and resumption
> of a coroutine provides much the same advantage
> as the C library's separation of fork and exec.
> Keeping them separate allows a much more flexible
> set of ways of using them.

So, basically your coroutines are continuations that have been spiffed
up to allow easier data flow (and store parents automatically, etc.).
I think *most* of us think of coroutines as less than that, but I like
most of the functionality that you've described. The immediate
problem I see is that it might facilitate "spaghetti code" a little
too easily. But then, I haven't actually seen it in practice.

I absolutely agree that a coroutine should be encapsulated in some
kind of object rather than being magically implicit based on your
scope and whether you've called that name before. Although most of
these functions that take a coroutine argument would become methods on
the object... meh, not important.

One thing that's very important for Perl's coroutine method is that it
makes the Easy Things Easy. For instance, it needs to be easy to
return a series of values generated from a recursive function call.
The method you describe easily facilitates that, simply because
coroutine creation is explicit, something that none of the other
proposed methods have thought of yet.

Alright, on to the bad news. There's a big issue here with
encapsulation. Most of the arguments for the other coroutine methods
lie in encapsulation: either "it's encapsulated in the function call
interface" or "it's encapsulated in the iterator interface". This
approach makes an entirely new interface, something that will lose big
time if a battle starts up.

What I'm seeing is two interfaces, actually. There's one that
communicates between the parent and the child, and another that
communicates between the siblings.

So, here's my little preliminary adaptation of your suggestion:

The parent's role in this interaction is to receive data from its
children --- sequentially. But it actually can't be an iterator this
time, because that doesn't provide a way to pass data from the parent
to the children. So I'm going to put it in a function interface, just
a different one from the way people have been thinking. You'd make a
coroutine with one of the following (or some variant thereof):

my &coro := new Coroutine: &foo.assuming(arg => $list, goes => $here);
my &coro := &foo.coroutine($arg, $list);

Either way, it's not a real sub, it's just pretending to be one for
interface's sake. C<reattach> would be spelled just like the function
call operator:

$ret = coro($value);

Siblings are much simpler, actually. One can use raw continuations to
do the C<resume> described above, provided that continuations have a
way to transfer data. If not, sibling communication could be done
through a wrapper around continuations that does transfer data.

And as far as the distinction between C<detach> and C<resume>ing the
parent, C<yield> could have two forms: one as a regular sub which
detaches, and another as a method on a coroutine object (which the
child could likely obtain through some incantation of the C<caller>
function).

Thanks for the suggestion --- I like it better than any others I've
seen yet. :)

Luke

Michael Lazzaro

May 22, 2003, 2:39:26 PM
to Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org

On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> I absolutely agree that a coroutine should be encapsulated in some
> kind of object rather than being magically implicit based on your
> scope and whether you've called that name before.

You know, I think this is a very important observation. The suggested
syntaxes so far have all been things that attempt to make coroutines
look almost magically identical to normal functions, but that worries
me, because they're _not_.

I don't mind having a prominent reminder when I'm using a coroutine, vs
a normal sub. But at the same time, I wonder if we're getting far too
complicated, here.

A coroutine is basically a routine that can be yielded out of and
resumed, similar to a thread. Therefore, I would expect the syntax to
be very similar to whatever the thread syntax turns out to be.

Adapting the simple coroutine example from Dan's log
(http://www.sidhe.org/~dan/blog/archives/000178.html), just so people
can refer back to that if they don't know what the hell we're talking
about:

coroutine foo (int a, int b) {
    yield a;
    yield a + 1;
    yield a + 2;
}

print foo(1), "\n"; # (ignore b for now)
print foo(2), "\n"; # uh-oh, different arg -- what happens?
print foo(1), "\n";
print foo(1), "\n";


I keep wondering what's wrong with something simple like:

sub foo (int $a, int $b) is coroutine {
    yield $a;
    yield $a + 1;
    yield $a + 2;
}

my &foo1 := attach &foo(a => 1);
my &foo2 := attach &foo(a => 2);

print foo1(b => $n), "\n";
print foo2(b => $n), "\n"; # different attach, so no problem
print foo1(b => $n), "\n";
print foo1(b => $n), "\n";


C<attach> could just do the ".assuming" part, and would return a
wrapped version of &foo. Or, hell, call it C<tie>, since we probably
don't need the C<tie> keyword anymore. :-)

Calling the wrapped coroutine, e.g. C<foo1()>, would resume it. (Note
that since you're attaching it in a separate step, the first resume
actually begins the function.)

Because you've got the explicit C<attach>, I think that's a pretty good
visual cue right there that you're doing a coroutine. (I would go so
far as to say that it's illegal to call foo() without C<attach>ing it,
because it would be too easy for it to be confused with a normal sub.)

So an iterator would be something like:

sub piped_input(...) is coroutine {...}

...etc...

my &i := attach &piped_input(...);
for &i(...) {
    ...
}

That seems simple enough; what am I missing?


OK, there's a few things missing, namely some clarifications:

- We don't need C<.assuming>, because C<attach> implies it.

- It's automatically detached when the var &i goes out of scope or gets
re-bound, so we don't need a C<.detach> or similar on the caller end.

- We'd probably need the ability to differentiate between yield,
yield-and-detach and yield-and-reattach. How about:

yield # yields
detach # yield-and-detach
return # yield-and-reattach

(Calling the last one C<return> would make a yield-less coroutine
behave similarly to a normal sub, in that it'd start over every time.)

- Calling a coroutine after it's been detached performs a noop, and
when in a looping construct, ends the loop.

???

MikeL

Dulcimer

May 22, 2003, 3:04:12 PM
to Michael Lazzaro, Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> > I absolutely agree that a coroutine should be encapsulated in some
> > kind of object rather than being magically implicit based on your
> > scope and whether you've called that name before.
>
> You know, I think this is a very important observation.
> The suggested syntaxes so far have all been things that
> attempt to make coroutines look almost magically identical
> to normal functions, but that worries me, because they're
> _not_.
>
> I don't mind having a prominent reminder when I'm using
> a coroutine, vs a normal sub. But at the same time, I
> wonder if we're getting far too complicated, here.

Simple is better, so long as the power and flexibility aren't squashed.

That looks pretty straightforward to me.
It's visually distinct enough to be clear what's going on without being
unreadably obtuse.

As an aside, is this a "core" language feature we're talking about, or
something for a module?



> C<attach> could just do the ".assuming" part, and would return a
> wrapped version of &foo. Or, hell, call it C<tie>, since we probably
> don't need the C<tie> keyword anymore. :-)

lol -- while I personally like the idea, I'd still vote against using
"tie" because of previous expectations.



> Calling the wrapped coroutine, e.g. C<foo1()>, would resume it.
> (Note that since you're attaching it in a separate step, the first
> resume actually begins the function.)

So the first call is the first call. That's a Good Thing.

> Because you've got the explicit C<attach>, I think that's
> a pretty good visual cue right there that you're doing a
> coroutine. (I would go so far as to say that it's illegal
> to call foo() without C<attach>ing it, because it would be
> too easy for it to be confused with a normal sub.)

Agreed. If you've gone to the trouble to write "is coroutine" it
shouldn't be a big deal.

> So an iterator would be something like:
>
> sub piped_input(...) is coroutine {...}
>
> ...etc...
>
> my &i := attach &piped_input(...);
> for &i(...) {
> ...
> }
>
> That seems simple enough; what am I missing?
>
> OK, there's a few things missing, namely some clarifications:
>
> - We don't need C<.assuming>, because C<attach> implies it.

and it's a module thing anyway, isn't it?



> - It's automatically detached when the var &i goes out of scope
> or gets re-bound, so we don't need a C<.detach> or similar on the
> caller end.

another Good Thing, but I'd still want the ability to manually detach
it.



> - We'd probably need the ability to differentiate between yield,
> yield-and-detach and yield-and-reattach. How about:
>
> yield # yields
> detach # yield-and-detach
> return # yield-and-reattach
>
> (Calling the last one C<return> would make a yield-less coroutine
> behave similarly to a normal sub, in that it'd start over every
> time.)

Hm. I'd say no on return. Just don't use a coro for that.
I'm a little fuzzy on reattach anyway, though.

Why not just do it in two steps? Call it for a yield, then reassign for
a new attach? Am I missing something?



> - Calling a coroutine after it's been detached performs a noop, and
> when in a looping construct, ends the loop.

For that my gut says calling after it's been detached should be fatal,
but for the ideal case I'd like to see it no-op as the default, gripe
under warnings, and die screaming horribly under stricture. Since I
live under C<use strict> that'd do me, and I'd still be able to
explicitly add C<no strict coro> if I wanted to perform some
naughtiness.



John Macdonald

May 22, 2003, 1:14:22 PM
to Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org
On Thu, May 22, 2003 at 05:15:49AM -0600, Luke Palmer wrote:
> So, basically your coroutines are continuations that have been spiffed
> up to allow easier data flow (and store parents automatically, etc.)
> I think *most* of us think of coroutines as less than that, but I like
> most of the functionality that you've described. The immediate
> problem I see is that it might facilitate "spaghetti code" a little
> too easily. But then, I haven't actually seen it in practice.

Coroutines implicitly add spaghettiness to the flow
of control - instead of the pure stack subroutine call
flow (which can already jump around a lot), you are
able to get back to one context without terminating
the "subroutine" context that has been in operation,
using multiple independent stacks.

I'll show below an example of the sort of code I
used in the original B, but translated into a sort
of Perl 6.

Reading through my message a day later, I realize that
I spent a lot of time on history but never really said
how I envision this getting translated into Perl 6.

The parent-sibling distinction is useful, but I
don't think it is necessary to require separate
syntax for it in Perl 6 the way B did.
Having simply create, resume, and caller capability
is sufficient. The create mechanism would set up the
separate coroutine mechanics (its own coroutine object
encapsulating stack frame and state), and call the
subroutine with the initial arguments. At this point
(at entry to the subroutine) the caller method returns
the coroutine object id of the creating process.
The child coroutine can then resume the parent
coroutine (although it need not actually do that).
As long as the child actually does resume the parent,
the parent can find the object id of the new child
using the caller mechanism.

I don't recall if I said so before, but a return from
the subroutine essentially terminated the coroutine.
It would resume the parent passing the subroutine
return value. If the coroutine was again resumed,
it would immediately resume the parent with a value
of zero. (This is much like an input handle - after
end of file, it returns undef constantly.) This lets
you wrap a standard subroutine up as a coroutine (kind
of a scalar coroutine that only returns one value
before terminating instead of the more usual array
coroutine that returns a series of values). It is
easy to code the top level subroutine to not return
if you don't want this termination activity to happen.

In the B code, I used a sequence of coroutines in
a pipeline. The parent would build the pipeline
from one end (e.g. front to back). The already
built sibling object would be passed as an argument
to its successor. After the chain was built, the
parent would start things up by reattach()ing the
final coroutine, which would resume its predecessor
(and each predecessor discovered its more recently
created successor using the caller() function).

Here's an example of connecting together a number
of filter coroutines with source and sink coroutines
into a pipeline. It uses the same subroutine as the
top level routine of two different coroutines, and
does a resume from within a subroutine call (there
should be no limit on this sort of thing - imagine
a data structure traversal routine that passes out
information on each piece in passing).

A filter coroutine (one in the middle of the chain)
might look like this (in Perl 5 syntax - I'm not fluent
enough in Perl 6 yet for it not to be distracting):

sub decomment {
    my( $predecessor, $pattern ) = @_;
    my $parent = caller;
    my $successor = $parent->resume;
    while(1) {
        # pred returns undef at EOF
        my $buffer = $predecessor->resume;
        # skip comment lines, but not EOF
        next if( defined($buffer) && $buffer =~ m/$pattern/ );
        $successor->resume( $buffer );
    }
}

sub cat1 {
    my( $pred, $succ ) = @_;
    my $line;
    $succ->resume( $line ) while defined( $line = $pred->resume );
}

sub cat {
    my( @pred ) = @_;
    my $parent = caller;
    my $successor = $parent->resume;
    # concatenate next pipeline in the list
    while( my $pred = shift @pred ) {
        cat1( $pred, $successor );
    }
    $successor->resume( undef );
}

The routines to go at the ends of the pipeline chain
have to know their place (there is no coroutine to resume
in one direction).

sub getfile {
    my( $filename ) = @_;
    my $parent = caller;
    my $status = open my $file, "<", $filename;
    my $successor = $parent->resume($status);
    while( my $line = <$file> ) {
        $successor->resume( $line );
    }
    close $file;
    $successor->resume( undef );
}

sub putfile {
    my( $predecessor, $filename ) = @_;
    my $parent = caller;
    my $status = open my $file, ">", $filename;
    $parent->resume( $status );
    while( defined( my $line = $predecessor->resume ) ) {
        print $file $line;
    }
    close $file;
    $parent->resume;
}

And a parent program to splice them all together:

my @inlist;
my $pipe;

coro::create( &getfile, "file1" )
    or die "failed to open file1";
push @inlist, caller;

coro::create( &getfile, "file2" )
    or die "failed to open file2";
push @inlist, caller;

# read both file1 and file2 in order
coro::create( &cat, @inlist );
$pipe = caller;

# strip perl comment lines
coro::create( &decomment, $pipe, qr/^\s*#/ );
$pipe = caller;

# strip C++ comment lines
coro::create( &decomment, $pipe, qr#^\s*//# );
$pipe = caller;

# write it all to filestripped
coro::create( &putfile, $pipe, "filestripped" )
    or die "failed to open filestripped";
$pipe = caller;

# and start the whole mess going
my $status = $pipe->resume;

Austin Hastings

May 22, 2003, 3:41:39 PM
to Michael Lazzaro, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
> > I absolutely agree that a coroutine should be encapsulated in some
> > kind of object rather than being magically implicit based on your
> > scope and whether you've called that name before.
>
> You know, I think this is a very important observation. The
> suggested
> syntaxes so far have all been things that attempt to make coroutines
> look almost magically identical to normal functions, but that worries
> me, because they're _not_.

I used to agree with this observation, but now I'm pretty sure it's not
right.

The problem lies in confusing e.g. "iterator" with "coroutine."
There's a level-of-implementation difference.

An "iterator" is a design element.

A "for loop" is an implementation detail.

I think "coroutine" is an implementation detail. Specifically, I think
it's what you do when you can't figure out how to convert your code
into a simple state machine. As a result, I can conceive of coders *not
wanting to know* when they call a coroutine.

sub foo() {
    state $i = 0;
    return $i++;
}

sub foo() {
    my $i = 0;
    loop {
        yield $i++;
    }
}


What's the difference? None, from the caller's viewpoint. Which means
that when refactoring or extending a stateful subroutine, I can convert
it to a coroutine without telling anyone -- statefulness is an
implementation detail.
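[Editorial illustration: the caller-side equivalence can be mimicked in Python, a closure holding state versus a generator wrapped so it is called the same way. Names are invented; this only illustrates the argument.]

```python
def make_foo_state():
    """A 'stateful sub': a closure carrying its counter."""
    i = 0
    def foo():
        nonlocal i
        i += 1
        return i - 1
    return foo

def make_foo_coro():
    """The 'coroutine' version: a generator behind the same call shape."""
    def body():
        i = 0
        while True:
            yield i
            i += 1
    gen = body()
    return lambda: next(gen)

foo_a = make_foo_state()
foo_b = make_foo_coro()
print([foo_a() for _ in range(3)])  # [0, 1, 2]
print([foo_b() for _ in range(3)])  # [0, 1, 2]  -- indistinguishable to the caller
```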

I liked the point about Unix pipes, though -- if you're doing a
translate, or processing a #include, then you may want more than one of
them. But that leads across the design/implementation barrier.

If we're going to iterate, or pipeline, or anything else with
coroutines involved, we need to separate "what's implementation" from
"what's design". Much the same way that we separate "scalar/array/hash"
from "object" -- one is an attribute (implementation detail), the other
is a design item.

So if there is going to be an Iterator class, or a Coroutine class, to
support objectification of the coroutine concept, let them be objects.
And let them be a part of the standard P6 object library. They don't
have to be exactly the same as the implementation details.

Depending on how the continuation interface works, and its interaction
with threads, I think we may just dispose of explicit coroutine support
completely, and rely on continuations plus Coroutines -- the object
interface that will probably use continuations underneath.

=Austin


Piers Cawley

May 22, 2003, 3:49:13 PM
to Michael Lazzaro, Luke Palmer, j...@perlwolf.com, perl6-l...@perl.org
Michael Lazzaro <mlaz...@cognitivity.com> writes:

> On Thursday, May 22, 2003, at 04:15 AM, Luke Palmer wrote:
>> I absolutely agree that a coroutine should be encapsulated in some
>> kind of object rather than being magically implicit based on your
>> scope and whether you've called that name before.
>
> You know, I think this is a very important observation. The suggested
> syntaxes so far have all been things that attempt to make coroutines
> look almost magically identical to normal functions, but that worries
> me, because they're _not_.

One of the cute things that Common Lisp has is multiple return values
(and I don't just mean returning a list). In most cases you just use
the 'main' return value, but in others you would do

(multiple-value-list (a-coroutine arg1 arg2))

And get a list back along the lines of

((usual results) #'a-coroutine-handle)

Point is that, unless you *know* you're going to need the secondary
values returned by a function, they're completely invisible to the
caller. It seems to me that this might be a good approach with Perl:

my(@results, &coro_handle) = multi_values(coroutine(@args));

Okay, so that's an ugly name for the function, but we have the
advantage that the Easy case remains easy and the hard case is solved
by a useful general mechanism.

--
Piers

Michael Lazzaro

May 22, 2003, 4:41:38 PM
to Austin_...@yahoo.com, perl6-l...@perl.org

On Thursday, May 22, 2003, at 12:41 PM, Austin Hastings wrote:
> I think "coroutine" is an implementation detail. Specifically, I think
> it's what you do when you can't figure out how to convert your code
> into a simple state machine. As a result, I can conceive of coders *not
> wanting to know* when they call a coroutine.
>
> sub foo() {
>     state $i = 0;
>     return $i++;
> }
>
> sub foo() {
>     my $i = 0;
>     loop {
>         yield $i++;
>     }
> }
>
> What's the difference? None, from the caller's viewpoint. Which means
> that when refactoring or extending a stateful subroutine, I can convert
> it to a coroutine without telling anyone -- statefulness is an
> implementation detail.

There's the rub, and that's what's bothering me. In the ideal world,
after you somehow attach a coroutine to make a particular
iterator/whatever, you really want the coroutine to be
indistinguishable from a sub. But in practice, coroutines have a
couple of features that normal subs don't have, which require some form
of invocation be built into the language, which gums everything up.

The two examples above are different in that the first only allows you
to maintain one state, for all callers, whereas the second presumably
allows you to have multiple different states, controlled by the caller
-- *if* you have some language support for invoking those different
states.

The basic problem with coroutines is what happens when you have more
than one invocation of them -- i.e. simultaneous states. Why I'm
musing over the explicit C<attach> step is because it solves that
problem -- completely.

my &foo1 := attach &piped_input(...);
my &foo2 := attach &piped_input(...);

That creates two separate subs, with two separate internal states. But
you can only have these separate states if you have some way of
specifying them -- if you just assume that a coro is called like a sub:

sub foo (...) is coroutine {...}

foo();
foo();

...you then have to answer the question of whether the second foo() is
using the state information of the first foo(), or is starting over.
Which is nasty, because you may want it to do either of those things,
depending on individual circumstance.

Typically there may be some method of detaching or restarting the coro,
but I think historically speaking that may be the wrong direction --
perhaps it's the attachment that needs to be explicit, and not the
resuming/restarting. I don't think you _need_ detach, or reattach,
etc., if you assume it all just happens as a natural consequence of var
scope. If you want to restart the coro, just rebind your var to a new
invocation.

There are other solutions. For example, make a call to a coroutine
automatically "attach" it to a state, if it doesn't already have one,
and have explicit de/reattach on the caller side:

foo(); # (1) no prev state, so "attaches" one
foo(); # (2) same state as (1)
restart foo(); # (3) calls foo() with a newly attached state
detach foo(); # (4) kills the state associated with foo()

That doesn't quite work, because while it allows you to restart the
coroutine, it doesn't allow you to have more than one state associated
at one time! (It also doesn't take advantage of var scope to
automatically detach your state when you're done with it.) So you need
some explicit method of creating a "stateful" coroutine invocation,
which we could do via another keyword:

my &foo1 = attach &foo(...);

... getting us back to what I was proposing, sort of, but saying that
calling the coroutine directly is OK, if you want to use whatever the
last implicitly attached state is. Dunno, I have concerns about
allowing that, because that's where most people get bitten when using
them. Hmm...

MikeL

Austin Hastings

May 22, 2003, 6:20:18 PM
to Michael Lazzaro, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>

Not necessarily. Why shouldn't I want my routine to give a unique
number to all callers? That's what Oracle does when I create an
autoincrement data field.

Anyway, this is focusing at too low a level -- nobody has answered the
bigger questions, except for JMM's attempt: What *SHOULD* the semantics
of coroutines be? I am not sure they should allow separate states --
provided that there is another mechanism (closure or continuation or
object) to handle separate states apart from the coroutine mechanics
itself.

Is there a way to encapsulate the invocations? Does every separate call
produce a separate instance, or do all calls yield up the same
instance? (In which case can you capture separate instances in a
closure?)

A lot of us do our thinking (me included) by scribbling something down
and then optimizing the look and feel of what we're talking about. But
a certain amount of thought has to be given to the "well, what do we
want?" side of the equation.

The coroutine syntax is appealing because it allows reverting a state
based system to a procedural description - the opposite of our normal
behavior. It can be implemented with continuations, of course, but
that's like saying that we can implement a switch statement with
chained if's, or with (cond) && do {...}; - it's equivalent but clunky.

So what do we really want from the coro syntax? Luke Palmer says
"iterators", and I'm sure I disagree with that: iterators are not as
small as that. John Macdonald says they should have separate state, and
you (Michael) have proposed a syntax that provides that. But I don't
like the overhead. Why *shouldn't* coroutines be global by default?

I like the idea of being able to capture a separate coroutine state, so
that I could have two active contexts simultaneously. But to me that
screams either "object" or "closure". Whereas I'd also like to be
trivially able to create a coroutine *without* separable instances, and
without much more syntax than just C<yield>.

So how about a coroutine/Coroutine distinction: using C<yield> does a
global coro thing. Calling C<new Coroutine &Code> creates an object
that can be called, etc as a separate function (like your C<attach>)
and maintains its own state (continuation/closure). Of course, if the
&Code doesn't call C<yield>, then it never acts very interesting
externally.

> The basic problem with coroutines is what happens when you have more
> than one invocation of them -- i.e. simultaneous states. Why I'm
> musing over the explicit C<attach> step is because it solves that
> problem -- completely.
>
> my &foo1 := attach &piped_input(...);
> my &foo2 := attach &piped_input(...);
>
> That creates two separate subs, with two separate internal states.

Remember threads. If all references to the same name produce the same
coro, then two threads can't call the same coroutine-invoking sub at
the same time. And how do you know what uses coroutines internally?

=Austin


Simon Cozens

May 22, 2003, 7:27:45 PM
to perl6-l...@perl.org
austin_...@yahoo.com (Austin Hastings) writes:
> Anyway, this is focusing at too low a level -- nobody has answered the
> bigger questions, except for JMM's attempt: What *SHOULD* the semantics
> of coroutines be?
> ...

> So what do we really want from the coro syntax? Luke Palmer says
> "iterators", and I'm sure I disagree with that: iterators are not as
> small as that.

We can play with syntax and semantics until the cows come home. Can I suggest
a way for mere humans to evaluate the lofty design issues you guys are toying
with? (I have to admit that I haven't been able to understand this thread at
all, or understand why I should care about its ramifications in my
programming.)

Here's a good way. I suggest discourse takes one of two forms:

1) The semantics of Perl 6's co-routines will be just like those of
$language_x, in that...
2) Perl 6's co-routines will be different from every other language, and this
is OK, because we're obviously doing something brand new in the areas of
...

Warning against form 2: people don't want or need groundbreaking ideas to get
their jobs done. Experimental languages are cool. In their place.

I'm having a hard enough time working through precisely how coroutines will
make my day to day programming more productive, without having to read through
pages of screeds about what colour and shape they ought to be.

--
Perl 6 will be different from Perl 5, but never gratuitously so. When syntax
or semantics change, it will always be a change for the better: for greater
consistency, for more intuitability, for extra Do-What-I-Meanness.
- Damian Conway

my ($stdout, $stderr) = »<<<« open 'grep foo * |';
- Damian Conway

Damian Conway

May 22, 2003, 8:43:06 PM
to perl6-l...@perl.org, la...@wall.org
Didn't we already have this discussion six months ago???
The conclusion of which seemed to be:

http://archive.develooper.com/perl6-l...@perl.org/msg12518.html

However, on revisiting it, I think I have an even simpler, best-of-both-worlds
solution that's also more in line with A6...

==============================================================================

A coroutine is declared with the C<coro> declarator:

    coro fibs (?$a = 0, ?$b = 1) {
        loop {
            yield $b;
            ($a, $b) = ($b, $a+$b);
        }
    }

and is allowed to have zero or more C<yield> statements inside it.

A coroutine is an object of type C<Coro> (just as a subroutine is an object of
type C<Sub>). C<Coro> objects have the following methods:

next() # resumes coroutine body until next C<yield>

next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
# rebinds params to the args passed to
# C<next_args>.
# PARAM_LIST is the same as the parameter list
# of the coroutine itself

each() # returns a lazy array, each element of which
# is computed on demand by the appropriate
# number of resumptions of the coroutine body

A C<Coro> object is therefore a type of iterator.
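
For readers comparing with other languages: this "monadic" use of a
coroutine as a resumable iterator maps closely onto generators as later
found in, e.g., Python. A rough, unofficial sketch of the fibs example
(the Python names are mine, not part of the proposal):

```python
def fibs(a=0, b=1):
    # Runs until the next yield each time it is resumed,
    # like the proposed .next() method on a Coro object.
    while True:
        yield b
        a, b = b, a + b

it = fibs()
print([next(it) for _ in range(5)])  # first five resumptions: [1, 1, 2, 3, 5]
```

Each call to next() here plays the role of calling the coroutine again in
the proposal; the body's state persists between resumptions.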

Calling a coroutine using the normal subroutine call syntax is the same as
calling the C<next> or C<next_args> method on it:

    $next = fibs();      # same as: $next = &fibs.next()

    $next = fibs(10,11); # same as: $next = &fibs.next_args(10,11)

    while ($next = fibs()) {
        if ($next < 0) {
            fibs() for 1..2; # skip the next two values
        }
    }


To create multiple independent coroutine iterators using a single coroutine
definition, one can simply use an *anonymous* coroutine:

    sub fibs_iter (?$a = 0, ?$b = 1) {
        return coro (?$x = $a, ?$y = $b) {
            loop {
                yield $y;
                ($x, $y) = ($y, $x+$y);
            }
        }
    }

    # and later...

    $iter = fibs_iter(7,11);

    while ($next = $iter.next()) {
        if ($next < 0) {
            $iter.next() for 1..2;   # skip the next two values...
            $iter = fibs_iter(11,7); # and change iterator
        }
    }

Additionally, class C<Coro> would have a C<clone> method:

$iter1 = &fibs.clone;
$iter2 = &fibs.clone;

which would return a reference to a new anonymous C<Coro> with the same
implementation as (but distinct state from) the original C<Coro>.

Either way, for iteration that implies:

<$fh> # Call $fh.next() # IO objects are iterators too
<$iter> # Call $iter.next()
coro() # Call &coro.next()
<&coro> # Call &coro.next()
<coro()> # Call .next() on iterator returned by call to coro()


Whilst, in a list context:

<$fh> # Call $fh.each()
<$iter> # Call $iter.each()
coro() # Call &coro.each()
<&coro> # Call &coro.each()
<coro()> # Call .each() on iterator returned by call to coro()


So then:

for <$fh> {...} # Build and then iterate a lazy array (the elements
# of which call back to the filehandle's input
# retrieval coroutine)

for <$iter> {...} # Build and then iterate a lazy array (the elements
# of which call back to the iterator's coroutine)

for coro() {...} # Build and then iterate a lazy array (the elements
# of which call back to the C<&coro> coroutine)

for <&coro> {...} # Build and then iterate a lazy array (the elements
# of which call back to the C<&coro> coroutine)

for <coro()> {...} # Call coro(), then build and iterate a lazy
# array (the elements of which call back to the
# iterator returned by the call to C<coro()>)


==============================================================================

This approach preserves the simple (and I suspect commonest) use of "monadic"
coroutines, but still allows a single coroutine to produce multiple iterators.
It also minimizes additional syntax and semantics.

Damian

Luke Palmer

May 22, 2003, 9:19:02 PM
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Damian wrotes:

Sweet.

> Either way, for iteration that implies:
>
> <$fh> # Call $fh.next() # IO objects are iterators too
> <$iter> # Call $iter.next()
> coro() # Call &coro.next()
> <&coro> # Call &coro.next()
> <coro()> # Call .next() on iterator returned by call to coro()
>
>
> Whilst, in a list context:
>
> <$fh> # Call $fh.each()
> <$iter> # Call $iter.each()
> coro() # Call &coro.each()

Er, what if it wanted to yield a list?

> <&coro> # Call &coro.each()
> <coro()> # Call .each() on iterator returned by call to coro()
>
>
> So then:
>
> for <$fh> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the filehandle's input
> # retrieval coroutine)
>
> for <$iter> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the iterator's coroutine)
>
> for coro() {...} # Build and then iterate a lazy array (the elements
> # of which call back to the C<&coro> coroutine)
>
> for <&coro> {...} # Build and then iterate a lazy array (the elements
> # of which call back to the C<&coro> coroutine)
>
> for <coro()> {...} # Call coro(), then build and iterate a lazy
> # array (the elements of which call back to the
> # iterator returned by the call to C<coro()>)
>
>
> ==============================================================================
>
> This approach preserves the simple (and I suspect commonest) use of "monadic"
> coroutines, but still allows a single coroutine to produce multiple iterators.
> It also minimizes additional syntax and semantics.

I like most of it. There's one problem: recursion.

    coro count($n) {
        count($n - 1) if $n > 0;
        yield $n;
    }

The call to count() inside count() would either start a new coroutine,
or continue the old one -- neither of which are the desired behavior.
What you want in this case is just to call the sub as usual. This can
be remedied like this:

    sub _count_impl($n) {
        _count_impl($n - 1) if $n > 0;
        yield $n;
    }

    coro count($n) {
        _count_impl($n);
    }

But that seems like an awful lot of work for something this common.
Hmm...

    coro count($n) {
        (sub { _($n - 1) if $n > 0; yield $n }).($n);
    }

Cool, but, eew.

I'm not sure how to solve the recursion problem while appeasing those
who want implicit iteration.
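
One data point from outside Perl: languages that kept the
call-the-generator-to-resume-it model solve this recursion problem with
explicit delegation, where the recursive call creates a fresh instance and
the caller re-yields its values. A sketch in Python (illustrative only,
not a proposal):

```python
def count(n):
    # The recursive call creates a *new* generator instance;
    # "yield from" re-yields everything it produces, so yields
    # never have to escape through the ordinary call stack.
    if n > 0:
        yield from count(n - 1)
    yield n

print(list(count(3)))  # [0, 1, 2, 3]
```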

Another detail: is implicit iteration scoped or global?

Luke

Luke Palmer

May 22, 2003, 9:36:15 PM
to j...@algate.perlwolf.com, j...@perlwolf.com, perl6-l...@perl.org

The discussion has gone far beyond this message already, but for fun,
I'll show you what this looks like in idiomatic Perl 6 I<without>
coroutines.

    sub decomment($pattern, *@lines) {
        grep { !/<$pattern>/ } @lines
    }

    <open "file1">, <open "file2">
        ==> decomment(/^ \s* \#/)
        ==> decomment(rx#^ \s* //#)
        ==> open("> filestripped").print;

Snazzy, no?

It's possible that that will all happen lazily (in other words, like a
real pipeline) because all operators involved are lazy operators.
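
The same shape of lazy pipeline can be built from plain generator
expressions; a hedged sketch in Python (the sample lines and patterns are
mine):

```python
import re

def decomment(pattern, lines):
    # Lazily pass through only the lines that do not match the pattern;
    # nothing is consumed until the end of the pipeline pulls on it.
    return (line for line in lines if not re.search(pattern, line))

source = ["keep me\n", "# shell-style comment\n", "// C++-style comment\n"]
stripped = decomment(r'^\s*//', decomment(r'^\s*#', source))
print(list(stripped))  # only "keep me\n" survives both filters
```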

Luke

Damian Conway

May 22, 2003, 9:41:30 PM
to perl6-l...@perl.org, Larry Wall
Luke Palmer observed:

>>Whilst, in a list context:
>>
>> <$fh> # Call $fh.each()
>> <$iter> # Call $iter.each()
>> coro() # Call &coro.each()
>
>
> Er, what if it wanted to yield a list?

A very good question. It would certainly make more sense for:

coro() # Call &coro.next() in list context

and when one wanted the lazy iterative behaviour, one would use:

>> <&coro> # Call &coro.each()

> There's one problem: recursion.
>
> coro count($n) {
> count($n - 1) if $n > 0;
> yield $n;
> }

I'm not sure what that's supposed to do. Are you missing a C<yield> on the
first statement? And maybe a loop?

What was the behaviour you were expecting from calling (say):

$x = count(7);


> I'm not sure how to solve the recursion problem while appeasing those
> who want implicit iteration.

I'm not sure I understand the problem to which you're alluding.
Could you perhaps explain it another way?


> Another detail: is implicit iteration scoped or global?

I must be particularly dense today. I don't understand that question either.
Could you give an example?

Damian

Luke Palmer

May 22, 2003, 9:55:37 PM
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> > There's one problem: recursion.
> >
> > coro count($n) {
> > count($n - 1) if $n > 0;
> > yield $n;
> > }
>
> I'm not sure what that's supposed to do. Are you missing a C<yield> on the
> first statement? And maybe a loop?
>
> What was the behaviour you were expecting from calling (say):
>
> $x = count(7);

0, then 1, then 2, up to 7. (count(7) calls count(6) calls count(5)
down to 0, then count(0) yields 0, returns to count(1) which yields 1,
etc.)

It's a simplified version of the traversal problem:

    # traverse: return each element arbitrarily deep in @lol,
    # depth first.
    coro traverse(@lol) {
        for @lol {
            when List { traverse($_) }
            default   { yield $_ }
        }
    }

(Now that I look at it, it can't get much simpler :-)

Or am I missing something? I thought that one could yield from
arbitrarily deep in the call stack.

>
> > I'm not sure how to solve the recursion problem while appeasing those
> > who want implicit iteration.
>
> I'm not sure I understand the problem to which you're alluding.
> Could you perhaps explain it another way?
>
>
> > Another detail: is implicit iteration scoped or global?
>
> I must be particularly dense today. I don't understand that question either.
> Could you give an example?

Nevermind. It's global (stupid question on my part, sorry).

Luke

John Macdonald

May 22, 2003, 10:42:40 PM
to Damian Conway, perl6-l...@perl.org, la...@wall.org
On Fri, May 23, 2003 at 10:43:06AM +1000, Damian Conway wrote:
> Didn't we already have this discussion six months ago???

I wasn't on the list then, but that looks like it was...

Larry's description of having the initial call to a
coro function return an object sounds very much like
the create() call I'm used to except that (as per
Luke's discussion) it doesn't allow a function to
be used as both a coroutine and as a function (e.g.
when it calls itself recursively). Luke's approach
of using two routines, one for the coro and another
for the function:

    coro foo     { ... real_foo( args ) }
    sub real_foo { ... can call real_foo recursively ... }

is manageable, but I prefer to have different syntax
for call and resume rather than have the distinction
intuited from the declaration of the routine.

However, when the function of a coroutine is
sufficiently complicated that it ought to be
decomposed (using additional subroutines that can
each yield some of the total sequence from the
single coroutine process), the argument rebinding
form next_args is inadequate. (Note that this also
strongly says that having a yield operator in its
body is not adequate reason to treat a subroutine as
a coroutine, and automatically turn an invocation of
that subroutine into a new coroutine process.)

> next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
> # rebinds params to the args passed to
> # C<next_args>.
> # PARAM_LIST is the same as the parameter list
> # of the coroutine itself

If the coroutine has called a stackful of functions,
rebinding the args of the top level function is either
invisible to the currently resumed subroutine, or it
affects all of the stacked routines. While you may
wish to pass a value to the code that is resumed,
that value may not have any relationship with the
parameters that were bound when the coroutine was
instantiated. Consider a coroutine that traverses
a parse tree, for example. The initial setup will
be specifying the root of the tree that is being
traversed. The traversal code might call different
subroutines for different types of nodes - some of
those subroutines might yield a number of values;
others might only yield values indirectly (from
subroutines that they call). When the code that is
processing the values coming out of the traversal is
ready to resume this coroutine to get a new value,
it might wish to indicate how the traversal is to
proceed (normal, prune special processing for the
current node, prune all processing for this node and
all of its children, etc.).

This could be handled by allowing the next method be
given a list of values, and those values are passed
to the coroutine as the return value from the yield
operator. That allows the coroutine to process the
value(s) in a way that is relevant to the current
state of the coroutine.

The next_args approach requires that the entire state
of the coroutine be encapsulated in the argument list.
That will almost always be more than you would want
to change at one time.

Damian Conway

May 23, 2003, 4:32:44 AM
to perl6-l...@perl.org, Larry Wall
Many thanks to John for his excellent analyses and especially for reminding us
of the potential use of C<yield>'s return value to pass new values into a
coroutine. We did explore that approach last year but it had completely
slipped my mind since then.

That input, and Luke's important observations about recursion and list
contexts have led to the following re-revised proposal...

==============================================================================

A coroutine is declared with the C<coro> declarator:

    coro fibs (?$a = 0, ?$b = 1) {
        loop {
            ($a, $b) = yield($b) || ($b, $a+$b);
        }
    }

and is allowed to have zero or more C<yield> statements inside it.

A coroutine is an object of type C<Coro> (just as a subroutine is an object of
type C<Sub>). C<Coro> objects have the following methods:

=over

=item C< method Coro::next(*@args) {...} >

which resumes execution of the coroutine object's body until the next C<yield>
or C<return> is executed. The arguments (if any) that are passed to C<next>
become the return value of the previous C<yield>, or are bound directly to the
coroutine's parameters if there was no previous C<yield>.


=item C< method Coro::each(*@args) {...} >

which returns a lazy array, each element of which is computed on demand by the
appropriate number of calls to C<next>. Each call to C<next> is passed the
argument list that was passed to C<each>.


=item C<method Coro::clone(*@args) {...}>

which creates and returns a copy of the implementation (but not the current
state) of the C<Coro> object, having first prebound the copy's parameters to
the arguments passed to C<clone>.

=back

A C<Coro> object is therefore a type of iterator.
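
The rule that arguments passed to C<next> become the return value of the
previous C<yield> is exactly the generator "send" pattern; a sketch of the
revised fibs in Python (my own unofficial mapping, not part of the
proposal):

```python
def fibs(a=0, b=1):
    while True:
        sent = yield b
        # A plain resume sends None and the sequence advances;
        # resuming with arguments resets the internal state,
        # like fibs(11,7) in the proposal.
        a, b = sent if sent else (b, a + b)

it = fibs()
print(next(it))          # 1: run to the first yield
print(it.send((10, 11))) # 11: the arguments become yield's value
print(next(it))          # 21: the sequence continues from the new state
```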

Calling a coroutine using the normal subroutine call syntax is the same as
calling the C<next> method on it:

    $next = fibs();      # same as: $next = &fibs.next()

    $next = fibs(10,11); # same as: $next = &fibs.next(10,11)

    while ($next = fibs()) {
        if (abs($next) > 10) {
            fibs(11,7) for 1..2; # skip the next two values,
                                 # whilst resetting the iterator state
        }
    }


To create multiple independent coroutine iterators, all of which use the same
coroutine definition, define a "factory" subroutine that builds *anonymous*
coroutines:

    sub fibs_iter (?$a = 0, ?$b = 1) {
        return coro (?$x = $a, ?$y = $b) {
            loop {
                ($x, $y) = yield($y) || ($y, $x+$y);
            }
        }
    }

    # and later...

    $iter = fibs_iter(7,11);

    # and later still...

    while ($next = $iter.next()) {
        if (abs($next) > 10) {
            $iter.next() for 1..2;   # skip the next two values...
            $iter = fibs_iter(11,7); # and change iterator state
        }
    }

Note that, since C<$iter> is a coroutine reference, the calls to
C<$iter.next()> can be written more compactly as:

    while ($next = $iter()) {
        if (abs($next) > 10) {
            $iter() for 1..2;        # skip the next two values...
            $iter = fibs_iter(11,7); # and change iterator state
        }
    }


If the coroutine itself already exists, another solution is simply to clone it:

    $iter = &fibs.clone(7,11);

    while ($next = $iter.next()) {
        if ($next < 0) {
            $iter.next() for 1..2;     # skip the next two values...
            $iter = &fibs.clone(11,7); # and change iterator state
        }
    }


And if only one version of the coroutine will ever be active at any one time,
an even simpler approach is just to use the original coroutine itself:

    while ($next = fibs()) {
        if ($next < 0) {
            fibs() for 1..2;            # skip the next two values...
            &fibs := &fibs.clone(11,7); # and change iterator state
        }
    }


For iteration in a scalar context that implies:

<$fh> # Call $fh.iter().next()
<$iter> # Call $iter.next()
coro() # Call &coro.next()
<&coro> # Call &coro.next()
<func()> # Call .next() on iterator returned by call to func()
<coro()> # Call .next() on iterator returned by call to &coro.next()


Whilst, in a list context:

<$fh> # Call $fh.iter().each()
<$iter> # Call $iter.each()
coro() # Call &coro.next() in a list context
<&coro> # Call &coro.each()
<func()> # Call .each() on iterator returned by call to func()
<coro()> # Call .each() on iterator returned by call to &coro.next()

In other words, a direct call to a coroutine (in any context) always calls its
C<next> method.

Whereas using a coroutine object as the operand to the C<< <...> >> operator
calls the coroutine's C<next> method in a scalar context, and its C<each>
method in a list context.


So then:

for <$fh> {...} # Build and then loop over a lazy array, the elements
# of which are computed on demand by calling
# $fh.iter().next() as required

for <$iter> {...} # Build and then loop over a lazy array, the elements
# of which are computed on demand by calling
# $iter.next() as required

for func() {...} # Call func() once, then loop over the list
# of values returned by that call

for coro() {...} # Call &coro.next() once, then loop over the list
# of values returned by that call

for <&coro> {...} # Build and then loop over a lazy array, the elements
# of which are computed on demand by calling
# &coro.next() as required

for <func()> {...} # Call func() once, then build and loop over a
# lazy array, the elements of which are computed on
# demand by calling the .next method of the iterator
# returned by the single call to func()

for <coro()> {...} # Call &coro.next() once, then build and loop over a
# lazy array, the elements of which are computed on
# demand by calling the .next method of the iterator
# returned by the single call to &coro.next()


==============================================================================

Meanwhile, Luke Palmer wrote:

> It's a simplified version of the traversal problem

Under the above assumptions, here's a traversal coroutine for a HoHoHo...oH
tree:

    coro traverse (Hash $tree) {
        $tree // return;
        if $tree{left} {
            yield $_ for <&_.clone($tree{left})>;
        }
        yield $tree{node};
        if $tree{right} {
            yield $_ for <&_.clone($tree{right})>;
        }
    }

And here's that recursive counter Luke wanted:

    coro count($n) {
        return 1 if $n<=1;
        yield $_ for <&_.clone($n-1)>;
        yield $n;
    }
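
The C<yield $_ for <&_.clone(...)>> idiom -- spawning a fresh instance of
the same coroutine and relaying its values -- needs no special delegation
syntax in generator terms. A sketch in Python (the dict-based tree shape
is my own illustration):

```python
def traverse(tree):
    # In-order traversal of a nested-dict tree: each recursive step
    # creates a new generator instance and relays its yields,
    # just as &_.clone(...) does in the proposal.
    if tree is None:
        return
    if tree.get('left'):
        for value in traverse(tree['left']):
            yield value
    yield tree['node']
    if tree.get('right'):
        for value in traverse(tree['right']):
            yield value

tree = {'node': 2, 'left': {'node': 1}, 'right': {'node': 3}}
print(list(traverse(tree)))  # [1, 2, 3]
```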

And here's John's pipelined comment ecdysiast:

    coro getfile(Str $name) {
        yield $_ for <open $name or die $!>
    }

    coro putfile(Str $name, Iter $iter) {
        my $fh = open '>', $name or die $!;
        $fh.print($_) && yield for <$iter>;
    }

    coro decomment(Rule $pattern, Iter $iter) {
        /$pattern/ || yield $_ for <$iter>;
    }

    coro cat(Iter $iter1, Iter $iter2) {
        yield $_ for <$iter1>, <$iter2>;
    }

    @sources = &getfile.clone("file1"),
               &getfile.clone("file2");

    $pipeline = &cat.clone(*@sources);
    $pipeline = &decomment.clone(rx/^ \s* \#/, $pipeline);
    $pipeline = &decomment.clone(rx#^ \s* //#, $pipeline);
    $pipeline = &putfile.clone("filestripped", $pipeline);

    while <$pipeline> {}


This last example suggests that C<.clone> might more readably be named
something like C<.with>:

    $pipeline = &cat.with(*@sources);
    $pipeline = &decomment.with(/^ \s* \#/, $pipeline);
    $pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
    $pipeline = &putfile.with("filestripped", $pipeline);

though that doesn't work quite as well when there are no arguments being set up:

$iter = &fibs.with(); # fibs with...nothing?


Note too that the (apparently common) construction:

yield $_ for <&_.clone(@newargs)>;

could probably be condensed to:

»yield« <&_.clone(@newargs)>;

> Or am I missing something? I thought that one could yield from
> arbitrarily deep in the call stack.

That *was* an early idea, but subsequently proved both untenable and unnecessary.

Damian

Luke Palmer

May 23, 2003, 4:32:42 AM
to j...@algate.perlwolf.com, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> On Fri, May 23, 2003 at 10:43:06AM +1000, Damian Conway wrote:
> > Didn't we already have this discussion six months ago???
>
> I wasn't on the list then, but that looks like it was...
>
> Larry's description of having the initial call to a

Hang on... Larry? He's actually said something about this? Where?

> coro function return an object sounds very much like
> the create() call I'm used to except that (as per
> Luke's discussion) it doesn't allow a function to
> be used as both a coroutine and as a function (e.g.
> when it calls itself recursively). Luke's approach
> of using two routines, one for the coro and another
> for the function:
>
> coro foo { ... real_foo( args ) }
> sub real_foo { ... can call real_foo recursively
>
> is manageable, but I prefer to have different syntax
> for call and resume rather than have the distinction
> intuited from the declaration of the routine.

I am certain now that there exists no approach that will allow both:

coro foo {...}
$x = foo(); $x = foo(); # The second yielded value.

And recursive calls from a coroutine to itself, simply because they're
mutually exclusive. If someone does come up with such a way (which
I'm convinced they can't :-), there would be no way to invoke a
separate coroutine instance within the coroutine itself.

> However, when the function of a coroutine is
> sufficiently complicated that it ought to be
> decomposed (using additional subroutines that can
> each yield some of the total sequence from the
> single coroutine process), the argument rebinding
> form next_args is inadequate. (Note that this also
> strongly says that having a yield operator in its
> body is not adequate reason to treat a subroutine as
> a coroutine, and automatically turn an invokation of
> that subroutine into a new coroutine process.)
>
> > next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
> > # rebinds params to the args passed to
> > # C<next_args>.
> > # PARAM_LIST is the same as the parameter list
> > # of the coroutine itself

Agreed. This is silly. Why not just return the args from C<yield>,
as Damian suggested before?

Luke

Luke Palmer

May 23, 2003, 4:50:34 AM
to dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Damian e-wrote:

> Meanwhile, Luke Palmer wrote:
>
> > It's a simplified version of the traversal problem
>
> Under the above assumptions, here's a traversal coroutine for a HoHoHo...oH
> tree:
>
> coro traverse (Hash $tree) {
> $tree // return;
> if $tree{left} {
> yield $_ for <&_.clone($tree{left})>;
> }
> yield $tree{node};
> if $tree{right} {
> yield $_ for <&_.clone($tree{right})>;
> }
> }

HoHoHoHoHoH, that works for me :-)

> And here's that recursive counter Luke wanted:
>
> coro count($n) {
> return 1 if $n<=1;
> yield $_ for <&_.clone($n-1)>;
> yield $n;
> }

You know, I actually think this is better than yielding from deep in
the call stack. It's clear what's happening when, and will likely be
a lot easier to debug.

> Note too that the (apparently common) construction:
>
> yield $_ for <&_.clone(@newargs)>;
>
> could probably be condensed to:
>
> »yield« <&_.clone(@newargs)>;

Together with something I'll write right off the bat:

    macro cocall(Str $sub, Str $args) {
        "yield \$_ for <($sub).clone($args)>"
    }

However that will be done cleanly.

I also had a thought. If C<sub> is short for subroutine, shouldn't
the coroutine declarator actually be... C<co>? :-P

Well done, Damian! (as if I expected less)

Luke

Piers Cawley

May 23, 2003, 5:25:55 AM
to Damian Conway, perl6-l...@perl.org, Larry Wall
Damian Conway <dam...@conway.org> writes:

> Many thanks to John for his excellent analyses and especially for
> reminding us of the potential use of C<yield>'s return value to pass
> new values into a coroutine. We did explore that approach last year
> but it had completely slipped my mind since then.
>
> That input, and Luke's important observations about recursion and list
> contexts have led to the following re-revised proposal...
>
> ==============================================================================
>
> A coroutine is declared with the C<coro> declarator:

I think you can arrange things so that you don't need the 'coro'
declarator by careful definition of your yield macro and suitable
reflective capabilities.

    macro yield($parse_state : *@args) {
        my $coro = $parse_state.get_enclosing_routine;
        $coro.is_coroutine(1);
        return {
            (sub {
                call-cc -> $cont {
                    $coro.resume_at($cont);
                }
                leave $coro <== *@_
            }).(*@args);
        }
    }


A Coroutine can then be thought of as a wrapped subroutine with a
wrapper that would be declared like:

    &coro.wrap({
        my $resume = $coro.resume_at;
        if ($resume) {
            $coro.resume_at(undef);
            $resume.invoke(*@_);
        }
        else {
            call
        }
    });

Assuming a simplistic coroutine invocation scheme.

--
Piers

Luke Palmer

May 23, 2003, 6:53:45 AM
to pdca...@bofh.org.uk, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
> Damian Conway <dam...@conway.org> writes:
>
> > Many thanks to John for his excellent analyses and especially for
> > reminding us of the potential use of C<yield>'s return value to pass
> > new values into a coroutine. We did explore that approach last year
> > but it had completely slipped my mind since then.
> >
> > That input, and Luke's important observations about recursion and list
> > contexts have led to the following re-revised proposal...
> >
> > ==============================================================================
> >
> > A coroutine is declared with the C<coro> declarator:
>
> I think you can arrange things so that you don't need the 'coro'
> declarator by careful definition of your yield macro and suitable
> reflective capabilities.

Uh huh... but what if we I<want> the C<coro> declarator? I do,
personally.

Plus, what about all the methods that coroutine objects have? If you
could implement I<that> without a declarator... it still wouldn't be
used. But I'd be impressed, at least.

Luke

John Macdonald

May 23, 2003, 6:39:36 AM
to Damian Conway, perl6-l...@perl.org, Larry Wall
On Fri, May 23, 2003 at 06:32:44PM +1000, Damian Conway wrote:
> > Or am I missing something? I thought that one could yield from
> > arbitrarily deep in the call stack.
>
> That *was* an early idea, but subsequently proved both untenable and
> unnecessary.

OK.

So, rather than allowing one coroutine to call
subroutines that can yield results on its behalf,
it starts other coroutines and relays their results.

That comes at a cost. When you are getting the next
node from a deeply recursive structure, you do a
next on the root, which does a next on its current
child node, which does a next on its current child
node, ..., which eventually gets down to the bottom
to the coroutine that is executing the code for the
previously return node, that code determines the next
value (possibly starting a new level) and yields the
value, its parent node yields the value, ..., the
root node yields the value. The act of getting one
next value from the tree traversal, then, involves
N next calls, and N yield returns. Keeping a call
stack for a single coroutine would make that one
next call (which goes directly to the code for the
current node) and one yield. This would turn the
cost of traversing a nicely balanced tree from O(N)
to O(N log(N)), and of course a wildly unbalanced
tree traversal could go from O(N) to O(N**2).
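
The relay cost is easy to demonstrate: in a chain of D delegating
instances, every value produced at the bottom is re-yielded once per level
on its way out. A small Python sketch (the counter is my own
instrumentation):

```python
yields = {'count': 0}

def relay(depth):
    # A degenerate "tree": a chain of nested coroutine instances.
    if depth == 0:
        yields['count'] += 1
        yield 0
        return
    for value in relay(depth - 1):
        yields['count'] += 1  # one extra re-yield at every level
        yield value

list(relay(10))
print(yields['count'])  # 11 yields to deliver one value: O(depth) per value
```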

Piers Cawley

May 23, 2003, 9:51:08 AM
to Luke Palmer, dam...@conway.org, perl6-l...@perl.org, la...@wall.org
Luke Palmer <fibo...@babylonia.flatirons.org> writes:

>> Damian Conway <dam...@conway.org> writes:
>>
>> > Many thanks to John for his excellent analyses and especially for
>> > reminding us of the potential use of C<yield>'s return value to pass
>> > new values into a coroutine. We did explore that approach last year
>> > but it had completely slipped my mind since then.
>> >
>> > That input, and Luke's important observations about recursion and list
>> > contexts have led to the following re-revised proposal...
>> >
>> > ==============================================================================
>> >
>> > A coroutine is declared with the C<coro> declarator:
>>
>> I think you can arrange things so that you don't need the 'coro'
>> declarator by careful definition of your yield macro and suitable
>> reflective capabilities.
>
> Uh huh... but what if we I<want> the C<coro> declarator? I do,
> personally.
>
> Plus, what about all the methods that coroutine objects have? If you
> could implement I<that> without a declarator... it still wouldn't be
> used. But I'd be impressed, at least.

Easy. You have a method, 'Routine.become_coroutine', which mutates the
Routine into a coroutine (where I've put 'is_coroutine(1)') and then
that has all the Coroutine methods.

--
Piers

Dulcimer

May 23, 2003, 10:21:08 AM
to Damian Conway, perl6-l...@perl.org, Larry Wall
> [...]

> This last example suggests that C<.clone> might more readably be
> named something like C<.with>:
>
> $pipeline = &cat.with(*@sources);
> $pipeline = &decomment.with(/^ \s* \#/, $pipeline);
> $pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
> $pipeline = &putfile.with("filestripped", $pipeline);
>
> though that doesn't work quite as well when there are no arguments
> being set up:
>
> $iter = &fibs.with(); # fibs with...nothing?

Spinning lies from thin air? :)
Ok, Fibonacci wasn't lying....
(it's a "fib" joke, for those whose sense of humor isn't lame enough to
follow that silliness.)

Seriously, though, we have other cases where there's more than one
keyword for the same thing, don't we? Can we afford another for this
case?

Austin Hastings

May 23, 2003, 12:44:40 PM
to Hod...@writeme.com, Damian Conway, perl6-l...@perl.org, Larry Wall

--- Dulcimer <ydb...@yahoo.com> wrote:
> > [...]
> > This last example suggests that C<.clone> might more readably be
> > named something like C<.with>:
> >
> > $pipeline = &cat.with(*@sources);
> > $pipeline = &decomment.with(/^ \s* \#/, $pipeline);
> > $pipeline = &decomment.with(rx#^ \s* //#, $pipeline);
> > $pipeline = &putfile.with("filestripped", $pipeline);
> >
> > though that doesn't work quite as well when there are no arguments
> > being set up:
> >
> > $iter = &fibs.with(); # fibs with...nothing?
>
> Spinning lies from thin air? :)
> Ok, Fibonacci wasn't lying....
> (it's a "fib" joke, for those whose sense of humor isn't lame enough
> to
> follow that silliness.)
>
> Seriously, though, we have other cases where there's more than one
> keyword for the same thing, don't we? Can we afford another for this
> case?

Sure, but the huffman value of C<with> is too great to spend it on
something like this.

=Austin

Austin Hastings

May 23, 2003, 12:53:36 PM
to John Macdonald, Damian Conway, perl6-l...@perl.org, Larry Wall

On the one hand, "that's why TMTOWTDI" -- you can lsearch an array, if
you want, too. Performance versus elegance is a trade you make every
time you choose recursion over iteration.

On the other hand, the decision between "support recursive coroutines"
and "support delegation of C<yield> to subs" is one that Larry may have
some interesting views on.

Or, consider that the coroutine is going to have to have an object of
some kind lying about so that it can store "I am here" data for any
resume calls.

There's nothing which says that yield couldn't update the toplevel
object to contain the bottomlevel continuation, making a resume
operation essentially costless, assuming no parameters are used by the
callee.

That is, if I say: C<yield $x> then I'm not using whatever the
putative return value of yield is. And for obvious cases like
yield-in-loop, the optimizer might note that this:

yield $_ for coro(...);

is basically yield(resume()) after the first pass. Accordingly,
optimizing the CPS flow might essentially treat this as a I<code> snap
and just store the continuation of resume().

=Austin

John Williams

May 23, 2003, 1:17:41 PM
to Damian Conway, perl6-l...@perl.org, Larry Wall
On Fri, 23 May 2003, Damian Conway wrote:
> =item C<method Coro::clone(*@args) {...}>
>
> which creates and returns a copy of the implementation (but not the current
> state) of the C<Coro> object, having first prebound the copy's parameters to
> the arguments passed to C<clone>.
>
> =back

Isn't that essentially what .assuming() does by currying normal
subroutines? And if so, why not use the same name for coroutines?

(Assuming, of course, that assuming nothing will return something which is
the same, only different.)
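For comparison, the currying John Williams describes is essentially what functools.partial does in Python; a loose sketch (the decomment function here is an illustrative stand-in of mine, not the one from the thread):

```python
# functools.partial as a loose analogue of .assuming / .clone with
# prebound arguments. `decomment` is an illustrative stand-in.
import re
from functools import partial

def decomment(pattern, lines):
    """Drop lines matching pattern, keep the rest."""
    return [line for line in lines if not re.match(pattern, line)]

# "Assuming" the pattern now, supplying the data later:
strip_hash = partial(decomment, r"\s*#")
assert strip_hash(["# a comment", "code();"]) == ["code();"]

# Assuming nothing is also legal: you just get an equivalent callable.
same = partial(decomment)
assert same(r"\s*#", ["# c", "x()"]) == ["x()"]
```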

~ John Williams


John Williams

May 23, 2003, 1:23:03 PM
to Piers Cawley, perl6-l...@perl.org
On Thu, 22 May 2003, Piers Cawley wrote:
> One of the cute things that Common Lisp has is multiple return values
> (and I don't just mean returning a list). In most cases you just use
> the 'main' return value, but in others you would do
>
> (multiple-value-list (a-coroutine arg1 arg2))
>
> And get a list back along the lines of
>
> ((usual results) #'a-coroutine-handle)
>
> Point is that, unless you *know* you're going to need the secondary
> values returned by a function then they're completely invisible to the
> caller. It seems to me that this might be a good approach with Perl:

How about using properties for that?

sub multi_value_sub
{
    return @results but also( coro { ... } );
}

~ John Williams


Luke Palmer

May 23, 2003, 1:58:57 PM
to mlaz...@cognitivity.com, perl6-l...@perl.org
> For example, if you have count(100), you're recursing through ~100
> yields. Which means, unless I am sorely mistaken, that you have 99
> saved coroutine states lying around, never to be used again, but
> sucking up resources? How's it know to release them?

Coroutine states are garbage collected, too. But there's still a lot
of space overhead for currently executing (or could-be-executed)
coroutines. Much more than there need be.
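Luke's point that abandoned states are reclaimed can be checked with a modern analogue; this uses Python generators and a weak reference as a stand-in (a 100-deep recursive generator, mirroring Michael's count(100) scenario, not the actual Parrot implementation):

```python
# Generators standing in for coroutine states: once nothing references
# a suspended generator, its saved frame (and the frames it holds on
# to) become garbage and are reclaimed.
import gc
import weakref

def count(n):
    """Recursive generator: suspending it keeps n nested frames alive."""
    if n > 1:
        for v in count(n - 1):   # each level suspends its own frame
            yield v
    yield n

g = count(100)
next(g)                # suspend a 100-deep chain of generator states
ref = weakref.ref(g)
del g                  # drop the only reference to the outermost state
gc.collect()
assert ref() is None   # the abandoned states were collected
```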

But then there's John's observation that yielding for the deeper calls
actually increases the time complexity of some coroutine algorithms.
That's not cool -- "Perl 6, it's faster; it just makes _your_ code
slower".

I don't think the proposal is acceptable, now. It's getting close,
though :)

Luke

>
> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something
> most people have seriously dealt with, and the name is bloody
> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
> rather use a more descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}
>
> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.
>
> MikeL
>
>
> [*] (I think Simon is right -- we need to better identify what problem
> we think we're solving here, and why 99% of the world should care.)

Michael Lazzaro

May 23, 2003, 1:49:15 PM
to perl6-l...@perl.org
On Friday, May 23, 2003, at 01:32 AM, Damian Conway wrote:
> That input, and Luke's important observations about recursion and list
> contexts have led to the following re-revised proposal...

That's pretty darn slick, I like it. (I'm personally quite satisfied
with the clone and/or factory approach to simultaneous
instances/states/iterators.)

In the spirit of questioning everything, however, I'm going to (mildly)
question some things. [*]

1) My impression is that yielding and resuming a coroutine is by
necessity a relatively expensive operation. As such, given any
arbitrary recursive coroutine:

> coro count($n) {
> return 1 if $n<=1;
> yield $_ for <&_.clone($n-1)>;
> yield $n;
> }

are there really routines in the world for which a recursive
implementation isn't _substantially_ more expensive than a decent
alternative solution?

For example, if you have count(100), you're recursing through ~100
yields. Which means, unless I am sorely mistaken, that you have 99
saved coroutine states lying around, never to be used again, but
sucking up resources? How's it know to release them?

Dulcimer

May 23, 2003, 1:59:48 PM
to perl6-l...@perl.org
> > Seriously, though, we have other cases where there's more than one
> > keyword for the same thing, don't we? Can we afford another for
> > this case?
>
> Sure, but the huffman value of C<with> is too great to spend it on
> something like this.

I did think of that, and I really do agree.
Which brings me to another point.

I understand that you can override even keywords in perl.
For example (and PLEASE correct me on this) I could theoretically
create an operator named "print" that compares operands numerically,
were I sufficiently deranged to believe this to be a good idea.

That being the case, what's the precedence of terms in P6?
I mean, macros are executed immediately, so that puts them up near if
not on the top, right? Then subs before methods? Operators according
to whatever you liken them to?

I could macro "with" to "clone", and right now that's not really an
issue since I'm not familiar with a "with" in P6.... If we add one
later it shouldn't matter in that code, because it predates the new
keyword, and the macro likely has higher precedence, but that sort of
thing could bite someone in other circumstances, no?

Or is this a non-issue?

Luke Palmer

May 23, 2003, 2:05:59 PM
to Hod...@writeme.com, perl6-l...@perl.org
> > > Seriously, though, we have other cases where there's more than one
> > > keyword for the same thing, don't we? Can we afford another for
> > > this case?
> >
> > Sure, but the huffman value of C<with> is too great to spend it on
> > something like this.
>
> I did think of that, and I really do agree.
> Which brings me to another point.
>
> I understand that you can override even keywords in perl.
> For example (and PLEASE correct me on this) I could theoretically
> create an operator named "print" that compares operands numerically,
> were I sufficiently deranged to believe this to be a good idea.

You can do it in P5. I hope you can in P6. (There's something
special about C<print> in particular that's keeping me from overriding
it, though :-( )

> That being the case, what's the precedence of terms in P6?
> I mean, macros are executed immediately, so that puts them up near if
> not on the top, right? Then subs before methods? Operators according
> to whatever you liken them to?

Uh, what? Macros are subs that execute during parsing. They're
executed as soon as it's determined that they can be.

Nothing else executes during parsing, and it's obvious the order in
which other things execute at run time. Or am I missing your
question?

If both a sub and a multimethod are defined with the same name, yes,
the sub gets executed.

> I could macro "with" to "clone", and right now that's not really an
> issue since I'm not familiar with a "with" in P6.... If we add one
> later it shouldn't matter in that code, because it predates the new
> keyword, and the macro likely has higher precedence, but that sort of
> thing could bite someone in other circumstances, no?
>
> Or is this a non-issue?

What's the issue?

Luke

Michael Lazzaro

May 23, 2003, 2:14:37 PM
to Luke Palmer, perl6-l...@perl.org

On Friday, May 23, 2003, at 10:58 AM, Luke Palmer wrote:

>> For example, if you have count(100), you're recursing through ~100
>> yields. Which means, unless I am sorely mistaken, that you have 99
>> saved coroutine states lying around, never to be used again, but
>> sucking up resources? How's it know to release them?
>
> Coroutine states are garbage collected, too. But there's still a lot
> of space overhead for currently executing (or could-be-executed)
> coroutines. Much more than there need be.

I guess what I'm asking is, how's it know when they can be garbage
collected? If you've just yielded out of a coro, what actually
"detaches" it? Here, for example:

>> coro count($n) {
>> return 1 if $n<=1;
>> yield $_ for <&_.clone($n-1)>;
>> yield $n;
>> }

You're yielding out of each of the cloned states, but not actually
detaching them. Are they detached by the <&_.clone($n-1)> going out of
scope? But since you're yielding from within the C<for>, does it
_ever_ go out of scope?

I suppose it would go out of scope when you finally completed the
loop... hmm...

> But then there's John's observation that yielding for the deeper calls
> actually increses the time complexity of some coroutine algorithms.

Yeah. That's nasty.

MikeL

John Macdonald

May 23, 2003, 1:11:36 PM
to perl6-l...@perl.org
On Fri, May 23, 2003 at 09:53:36AM -0700, Austin Hastings wrote:
> There's nothing which says that yield couldn't update the toplevel
> object to contain the bottomlevel continuation, making a resume
> operation essentially costless, assuming no parameters are used by the
> callee.
>
> That is, if I say: C<yield $x> then I'm not using whatever the
> putative return value of yield is. And for obvious cases like
> yield-in-loop, the optimizer might note that this:
>
> yield $_ for coro(...);
>
> is basically yield(resume()) after the first pass. Accordingly,
> optimizing the CPS flow might essentially treat this as a I<code> snap
> and just store the continuation of resume().

Actually, the way I read the current proposal,
there is something that says you can't.

You can only have yield in a coro, so one coro
can't call a subroutine that will do a
yield on its behalf, because that function call
will automatically start up a new coroutine. So,
the only nested info in the coroutine continuation
can be sub-blocks in the coroutine, but not subroutine
calls of any sort. That is why Damian uses:

coro traverse (Hash $tree) {
    $tree // return;
    if $tree{left} {
        yield $_ for <&_.clone($tree{left})>;
    }
    yield $tree{node};
    if $tree{right} {
        yield $_ for <&_.clone($tree{right})>;
    }
}

Adding a variant of "coro" which I'll call "cosub"
fixes things. "cosub" would generally be the same as
"coro", but it doesn't do the automatic spawning of
a new coroutine if it is called as a subroutine.
It would give a runtime error if it is not being
called from a coro context, otherwise it would act
as a normal function but it would be able to yield a
value to and be resumed as part of the operation of
the original coro.

That would let the traverse be rewritten as:

cosub travsub (Hash $tree) {
    $tree{left} && travsub( $tree{left} );
    yield $tree{node};
    $tree{right} && travsub( $tree{right} );
}

coro traverse (Hash $tree) {
    $tree // return;

    travsub( $tree );
}

(In this case, when you are 100 levels down in the
tree and next is called, there is only one coroutine
being resumed and it resumes right at the 100 deep
subroutine invocation; rather than there being 100
coroutines in action, 99 of whom are simply relaying
the next request down and the yield response back up.)

In the spirit of keeping simple things simple and hard
things possible, this retains the auto-coroutineify nature
of coro definitions so that simple iterations will
just work, while still keeping it possible to have
coroutines that deal with more complicated processes.
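For what it's worth, later languages grew exactly this distinction: Python's yield from (added long after this thread) lets a recursive helper yield on behalf of the one enclosing generator, which is the shape of John's cosub, though CPython's delegation cost is not the O(1) resume he is after. A sketch with illustrative names, using (left, value, right) tuples:

```python
# A rough Python analogue of the cosub idea: `yield from` lets a
# recursive helper yield values on behalf of the enclosing generator,
# rather than each call site spawning an independent iterator.
def travsub(tree):
    """In-order traversal helper; delegates recursion via yield from."""
    if tree is None:
        return
    yield from travsub(tree[0])   # left subtree
    yield tree[1]                 # this node's value
    yield from travsub(tree[2])   # right subtree

def traverse(tree):
    """The public entry point, mirroring the coro/cosub split."""
    if tree is None:
        return
    yield from travsub(tree)

tree = ((None, 1, None), 2, ((None, 3, None), 4, None))
assert list(traverse(tree)) == [1, 2, 3, 4]
```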

Just for interest, here is a traversal routine that
uses the bi-directional data transfer of the yield
and next operations. This traversal routine allows
the caller to do selective pruning of the traversal
in progress.

cosub travsub (Hash $tree) {
    my( $p_left, $p_in, $p_right, $p_post ) = yield( "PRE", $tree{node} );
    $tree{left} && travsub( $tree{left} ) unless $p_left;
    yield( "IN", $tree{node} ) unless $p_in;
    $tree{right} && travsub( $tree{right} ) unless $p_right;
    yield( "POST", $tree{node} ) unless $p_post;
}

coro traverse (Hash $tree) {
    $tree // return;

    travsub( $tree );
}

It can be used to build various other things. For example,
here is a coroutine that iterates over all of the values
of a tree within a specified range - it skips traversing
the parts of the tree that cannot contain an interesting value:

coro rangeiter (Hash $tree, int $min, int $max ) {
    my $trav = traverse( $tree );
    my( $type, $node ) = $trav.next;
    while( defined $type ) {
        if( $type eq 'PRE' ) {
            my $p_left  = $node{val} <= $min;   # left subtree is all below range
            my $p_right = $node{val} >= $max;   # right subtree is all above range
            my $p_in    = $node{val} < $min || $node{val} > $max;
            ( $type, $node ) =
                $trav.next( $p_left, $p_in, $p_right, 1 );
        } elsif( $type eq 'IN' ) {
            yield $node;
            ( $type, $node ) = $trav.next;
        }
    }
}

(Note that rangeiter resumes the traversal from 3
different places, and only one of them needs to pass
values through next to return from yield. This is
a lot of rope; use it to lasso steers rather than
hanging yourself.)

A similar sort of controlled traversal might be used
to manage something like Data::Dumper, so that it
could skip large chunks of a structure, but give
extremely fine detail on a specific area and coarse
detail on the parts of the data structure that are
more closely connected to it.
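This bidirectional yield/next protocol can be exercised in Python with generator.send; the sketch below adapts the travsub/rangeiter pair (the names are kept, but the details, tuple trees, and prune-flag conditions are my adaptation), where each flag means "skip this part":

```python
# Bidirectional protocol via generator.send: the traversal yields
# ("PRE"/"IN"/"POST", value) events, and the consumer answers each
# "PRE" with prune flags (p_left, p_in, p_right, p_post).
def travsub(tree):
    if tree is None:
        return
    # The consumer sends the flags while we are suspended at this yield;
    # default to "prune nothing" if it just calls next() instead.
    flags = (yield ("PRE", tree[1])) or (False, False, False, False)
    p_left, p_in, p_right, p_post = flags
    if not p_left:
        yield from travsub(tree[0])
    if not p_in:
        yield ("IN", tree[1])
    if not p_right:
        yield from travsub(tree[2])
    if not p_post:
        yield ("POST", tree[1])

def rangeiter(tree, lo, hi):
    """Yield in-order values in [lo, hi], skipping unreachable subtrees."""
    trav = travsub(tree)
    try:
        typ, val = next(trav)
        while True:
            if typ == "PRE":
                p_left  = val <= lo               # all left values < lo
                p_right = val >= hi               # all right values > hi
                p_in    = val < lo or val > hi
                typ, val = trav.send((p_left, p_in, p_right, True))
            else:                                 # only "IN" events remain
                yield val
                typ, val = next(trav)
    except StopIteration:
        pass

# Balanced BST over 1..7 as nested (left, value, right) tuples.
t = (((None, 1, None), 2, (None, 3, None)), 4,
     ((None, 5, None), 6, (None, 7, None)))
assert list(rangeiter(t, 3, 6)) == [3, 4, 5, 6]
```

Like rangeiter above, the consumer resumes the traversal from several places, and only the "PRE" resume passes values back through the suspended yield.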

Piers Cawley

May 23, 2003, 2:25:42 PM
to John Williams, perl6-l...@perl.org
John Williams <will...@tni.com> writes:

> On Thu, 22 May 2003, Piers Cawley wrote:
>> One of the cute things that Common Lisp has is multiple return values
>> (and I don't just mean returning a list). In most cases you just use
>> the 'main' return value, but in others you would do
>>
>> (multiple-value-list (a-coroutine arg1 arg2))
>>
>> And get a list back along the lines of
>>
>> ((usual results) #'a-coroutine-handle)
>>
>> Point is that, unless you *know* you're going to need the secondary
>> values returned by a function then they're completely invisible to the
>> caller. It seems to me that this might be a good approach with Perl:
>
> How about using properties for that?

Gah! Of course.

--
Piers

Mark J. Reed

May 23, 2003, 3:00:38 PM
to perl6-l...@perl.org

On 2003-05-22, Piers Cawley wrote:
> One of the cute things that Common Lisp has is multiple return values
> (and I don't just mean returning a list).

s/cute/annoying/. The first time I used an application with a
CL scripting API, I found I needed a value that was part of such
a multiple-value-list. Only I'd never heard of such a thing.
I had no idea what it was, and the API doc didn't mention the term
"multiple-value-list" or "multiple value" or anything that might
have pointed me in the right direction. All I knew was that when
I printed the value out interactively, I got each element on a
separate line, but when I used the value in a function I only
got the first one. Much poring over of documentation ensued, and
I eventually found the m-v-l function, but why the function
didn't just return a plain old garden-variety Lisp list is beyond me.

> Point is that, unless you *know* you're going to need the secondary
> values returned by a function then they're completely invisible to the
> caller.

And even if you do know you need them they're invisible until you stumble
across the appropriate "reveal invisible" spell. :)

--
Mark REED | CNN Internet Technology
1 CNN Center Rm SW0831G | mark...@cnn.com
Atlanta, GA 30348 USA | +1 404 827 4754

Austin Hastings

May 23, 2003, 3:17:03 PM
to Michael Lazzaro, Luke Palmer, perl6-l...@perl.org

--- Michael Lazzaro <mlaz...@cognitivity.com> wrote:
>
> On Friday, May 23, 2003, at 10:58 AM, Luke Palmer wrote:
>
> >> For example, if you have count(100), you're recursing through ~100
> >> yields. Which means, unless I am sorely mistaken, that you have
> 99
> >> saved coroutine states lying around, never to be used again, but
> >> sucking up resources? How's it know to release them?
> >
> > Coroutine states are garbage collected, too. But there's still a
> lot
> > of space overhead for currently executing (or could-be-executed)
> > coroutines. Much more than there need be.
>
> I guess what I'm asking is, how's it know when they can be garbage
> collected? If you've just yielded out of a coro, what actually
> "detaches" it? Here, for example:

If you're going to allow Dan Sugalski's example (which started this
thread, I think) then you can't GC the context until the thread ends.
After all, the next statement at global scope may reinvoke this
coro-context.

This is a good argument for context-as-object, if we care about that
sort of thing. Frankly, I don't think we do.


> >> coro count($n) {
> >> return 1 if $n<=1;
> >> yield $_ for <&_.clone($n-1)>;
> >> yield $n;
> >> }
>
> You're yielding out of each of the cloned states, but not actually
> detaching them. Are they detached by the <&_.clone($n-1)> going out
> of scope? But since you're yielding from within the C<for>, does it
> _ever_ go out of scope?
>
> I suppose it would go out of scope when you finally completed the
> loop... hmm...

See above.

Also, if you've got a big recursive coro, you don't want the context
for level N to expire until N-1 expires. So we've got some kind of coro
continuations, or maybe closures: exiting a function via C<yield>
cannot permit any of its cocontexts to expire.

Might as well turn it into a feature...

=Austin

Adam D. Lopresto

May 23, 2003, 1:37:17 PM
to perl6-l...@perl.org
I'd like to mention that if yield breaks out of all nested coroutines, then
some very interesting pipeline stuff becomes either impossible or much harder.
Specifically, it means that you can't filter the returned values at all.

coro odds_only (Iter &i) {
    for i() {
        yield $_ if $_ % 2;
    }
}

coro randoms () {
    loop {
        yield int(rand 10);
    }
}

for odds_only(&randoms) {
    # $_ is a random odd number
}

Basically, coros should compose in a way that doesn't damage either of them.
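This composition works exactly this way with Python generators; a deterministic sketch (using a counter instead of rand so the output is checkable, with names adapted from Adam's example):

```python
# Pipeline composition with generators: the filter re-yields values
# from an inner iterator, and neither side knows about the other.
def odds_only(source):
    """Pass through only the odd values of any iterable."""
    for x in source:
        if x % 2:
            yield x

def numbers(n):
    """A deterministic source standing in for randoms()."""
    yield from range(n)

assert list(odds_only(numbers(10))) == [1, 3, 5, 7, 9]
```

Because odds_only takes any iterable, it composes with further filters the same way: odds_only(some_other_filter(numbers(10))).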

--
Adam Lopresto
http://cec.wustl.edu/~adam/

.-""-.
.--./ `. .-""-.
.' `.__,"\ \ ___ .' _ \
: _ _ : \ `" "' ,' \ /
.-| _ _ |-. Y Y `-'
((_| (O)(O) |_)) | _ _ |
`-| .--. |-' | (o)(o) |
.-' ( ) `-. / __ \
/ .-._`--'_.-. \ | /# \ |
( (n uuuu n) )| \__/ |
`.`"=nnnnnn="'.' \ / _
`-.______.-' _ `.____.' _ / )-,
_/\| |/\__ .' `-" "-' (_/ / )
.w'/\ \__/ /\w/ |_/ / )
.-\w(( \/ \/ ))| | `-(_/
/ |ww\\ \ / //w| | | \
/ |www\\/`'\//ww| | |\ \
/ |wwww\\ //www| | | \ \

John Macdonald

May 23, 2003, 2:29:40 PM
to Michael Lazzaro, perl6-l...@perl.org
On Fri, May 23, 2003 at 10:49:15AM -0700, Michael Lazzaro wrote:
> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something
> most people have seriously dealt with, and the name is bloody
> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
> rather use a more descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}
>
> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.

I don't fully agree with this.

Compare coroutine and subroutine.

A subroutine is subordinate - it gets a job, does it,
returns the result and it is done. The "mainline"
caller of a subroutine has no such constraint on its
actions, it can be in the middle of a computation,
possibly in the middle of a number of subroutine
calls when it calls the subroutine. It can call the
subroutine again from different locations during its
operation, while each time the subroutine starts from
scratch and runs to completion.

Coroutines are cooperative - each takes turns doing
a bit of the process and each can treat the other as
a subroutine that can be called from multiple places
within the context of its entire operation. They
have equal status, and each can be written as if it
were the mainline function, invoking the other as
a subroutine. (So they are equal by virtue of each
one believing it is superior. :-)

I do agree with Michael's viewpoint somewhat however
in terms of how coroutines are being designed for
Perl 6.

The design before this discussion started treated
coroutines as being useful primarily for iterators
that could be used from an arbitrary calling context,
and the design was heavily skewed in that direction.
The coroutine was capable of acting as an input
handle that would execute code every time you did a
"read". But coroutines can also be used as data
sinks instead of sources (like an output handle),
and for bi-directional activity.

The use of next and yield keywords as different
operators hides the equal status cooperative potential
of coroutines and emphasises the unidirectional
expectation that was inherent in that design. To a
certain extent, that is all right. Most simple cases,
like iterators, don't need the full cooperative
potential of being mutually superior.

Using a single operator (such as the "resume" that I
am used to) for both sides of the coroutine transfer
of control allows the programmer to more easily view
a collection of coroutines as equal partners, each
of which treats the other as a subroutine that can
be called to handle a function.

This is especially important for larger scale use
of coroutines that really are equal cooperating
partners in the computations. Think of a parser,
an optimizer, a code generator, an interpreter,
and a stack frame manager, for example.

The parser generates a chunk of parse tree and calls
a "subroutine" to deal with it. The code generator
calls a "subroutine" to get a parse tree, generates
code for it, and then calls another "subroutine"
to execute that code. The interpreter calls a
"subroutine" to get a block of code, files it and
links it, and executes it. Both the code generator
and the interpreter call a "subroutine" to create
and manage stack and context information for a block
of code. Neither the parser nor the code generator
knows or cares whether the "subroutine" they are
calling is each other or is maybe it is the optimizer
sitting between them.

Now, coroutines are not the only way to do these sorts
of things. The stack frame might be an object with a
variety of methods instead of a coroutine for example.
(We didn't have objects when I was your age, we had
to use coroutines and execute them uphill both ways
in the snow. :-) There are also threads or disjoint
processes, but these have their own overheads;
something in between those chain saws and the iterator
nail file might be useful for certain purposes.

Austin Hastings

May 23, 2003, 3:32:46 PM
to John Macdonald, perl6-l...@perl.org

--- John Macdonald <j...@algate.perlwolf.com> wrote:

The point here is that he is doing a recursive coroutine, which is
creating multiple coroutine contexts - one for each level of descent
into the tree. So you've got:

traverse.1.yield(
traverse.2.yield(
...
traverse.N.yield()

N levels of nested coroutine depth, one for each clone.

The object here is to avoid having to pay the price of resuming N
coroutine contexts (essentially N times the cost of a full VM context
switch, plus a surcharge I'm sure).

So if the optimizer can figure this out, and figure out where the other
contexts are, then it could short-circuit the resume process and move
much of the overhead to the yield code.

IOW, the post-cocall processing would look at the context that just got
stored (it has to be accessible to both caller and callee) and set the
resume address of the yield it is preparing to execute to link directly
to the next level down.

If every level does this, then you've effectively snapped the yield
pointers so that you resume directly to the working node of the yield
stack.

Drool, slobber. That's a pretty interesting idea.

=Austin

Simon Cozens

May 23, 2003, 3:33:19 PM
to perl6-l...@perl.org
austin_...@yahoo.com (Austin Hastings) writes:
> If you're going to allow Dan Sugalski's example (which started this
> thread, I think) then you can't GC the context until the thread ends.
...

> Might as well turn it into a feature...

Pardon me for being grouchy again without offering better suggestions (but as
I've mentioned I don't understand the thread well enough to do so) but wasn't
one of the purposes of splitting the language and implementation mailing lists
to discourage people from using implementation arguments to justify design
issues?

Why not let's first work out what the best model is for the language, then
work out how to implement it; not the other way around. I'm sure p6i contains
enough smart folk to be able to support any sane design decision that p6l
comes up with.

--
teco < /dev/audio
- Ignatios Souvatzis

Austin Hastings

May 23, 2003, 3:38:34 PM
to Simon Cozens, perl6-l...@perl.org

Granted, but this is really more of a language issue: "What is the
scope of a coroutine context?" It's just cloaked in keano low-level
terms because we can't help ourselves.

=Austin

Michael Lazzaro

May 23, 2003, 5:49:52 PM
to John Macdonald, perl6-l...@perl.org

On Friday, May 23, 2003, at 11:29 AM, John Macdonald wrote:
> On Fri, May 23, 2003 at 10:49:15AM -0700, Michael Lazzaro wrote:
>> 2) I am going to be a heretic, and state publicly that I have always
>> thought "coroutine" to be a damn unhelpful name for the behavior of
>> "yieldable, resumable subroutines". "Coroutines" are not something
>> most people have seriously dealt with, and the name is bloody
>> impenetrable. Rather than use C<coro> or even C<coroutine>, I'd much
>> rather use a more descriptive phrase, perhaps something like:
>>
>> sub fibs (...) is reentrant {...}
[snip]

> Using a single operator (such as the "resume" that I
> am used to) for both sides of the coroutine transfer
> of control allows the programmer to more easily view
> a collection of coroutines as equal partners, each
> of which treats the other as a subroutine that can
> be called to handle a function.
>
> Coroutines are cooperative - each takes turns doing
> a bit of the process and each can treat the other as
> a subroutine that can be called from multiple places
> within the context of its entire operation. They
> have equal status, and each can be written as if it
> were the mainline function, invoking the other as
> a subroutine. (So they are equal by virtue of each
> one believing it is superior. :-)

See, this is about where I think most people's brains start to melt.

Thinking aloud: Whereas the name "coroutine" is supposed to evoke
the sense that neither routine is subordinate to the other, you can just
as easily say that _both_ are subordinate to the other, and explain it
equally well that way. A sub is always subordinate to its caller,
even if the caller was called by the subordinate!

And shortening it to "coro" doesn't do a lot to help matters, when it
comes to explaining it to people. :-)

That's why I'm intrigued by having much of the _functionality_ of
coroutines, but not terribly fond of _naming_ that functionality
"coroutine".[*] In actual usage, it's so similar to
continuations/threads that having a separate C<coro> feature seems
grafted on -- what we really would want, in an ideal world, would be to
magically unify *all* of those philosophies, as much as possible.

sub fibs (...) is reentrant {
    my $v;
    ...
    yield $v;
}

I mean, when we say "coroutine", what do we really want, here?
Reentrant subroutines? Lower-impact threading?
Iterators/Sources/Sinks? If the word "coroutine" had never been
invented, how would we be describing our end goal?

MikeL


[*] As an example, I've never sat down and said "I wonder how I can
design this feature using a coroutine". I have often, however, said
"gee, what I really want is for this stupid sub to be reentrant, so
that I can use it as a iterator/source/sink." And even in complex
cases -- where the routines really are cooperative amongst themselves
-- coroutines are simply another form of threading, yes?

Damian Conway

May 23, 2003, 6:34:09 PM
to perl6-l...@perl.org
Luke Palmer wrote:

> You know, I actually think this is better than yielding from deep in
> the call stack. It's clear what's happening when, and will likely be
> a lot easier to debug.

I agree. Despite John's cogent arguments in favour of action-at-a-distance
yielding, and two forms of coroutine.


> Together with something I'll write right off the bat:
>
> macro cocall(Str $sub, Str $args) {
> "yield \$_ for <($sub).clone($args)>"
> }

Nice. But macros can't magically parse their arguments for you like that:
You'd need to write either something like:

macro yieldall(Str $sub, Str $args)
    is parsed( rx:w/ (<ident>) (<Perl.arglist>)/ )
{
    "yield \$_ for <($sub).clone($args)>"
}

or (better still):

    macro yieldall(Coro $sub, *@args)
    {
        { yield $_ for <$sub.clone(*@args)> }
    }
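For comparison, the delegation that C<yieldall> performs -- yield every value produced by a freshly instantiated coroutine -- is what Python's generators spell `yield from`. A rough sketch (the function names here are illustrative, not from the thread):

```python
def count_up(n):
    """A simple generator, playing the role of a Perl 6 coro."""
    for i in range(1, n + 1):
        yield i

def yieldall_demo(n):
    # "yield $_ for <($sub).clone($args)>" collapses to a single
    # delegation statement:
    yield from count_up(n)

assert list(yieldall_demo(3)) == [1, 2, 3]
```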

Damian

Damian Conway

unread,
May 23, 2003, 6:38:14 PM5/23/03
to perl6-l...@perl.org
Piers wrote:

>>>A coroutine is declared with the C<coro> declarator:
>>
>>I think you can arrange things so that you don't need the 'coro'
>>declarator by careful definition of your yield macro and suitable
>>reflective capabilities.

To which Luke replied:

> Uh huh... but what if we I<want> the C<coro> declarator? I do,
> personally.

Larry has consistently used explicit declarators for distinct subclasses of
<Code> object with distinct behaviours:

method foo {...}
multi foo {...}
macro foo {...}
rule foo {...}
sub foo {...}

While I suspect Piers is right -- that we *could* avoid the extra keyword -- I
think a C<coro> declarator is more in keeping with that tradition.

Damian

Damian Conway

unread,
May 23, 2003, 6:46:47 PM5/23/03
to perl6-l...@perl.org
Dulcimer wrote:

>>>Seriously, though, we have other cases where there's more than one
>>>keyword for the same thing, don't we? Can we afford another for
>>>this case?
>>
>>Sure, but the huffman value of C<with> is too great to spend it on
>>something like this.
>
> I did think of that, and I really do agree.

So do I. Which is why I left it as C<clone>, despite my musings.


> Which brings me to another point.
>
> I understand that you can override even keywords in perl.
> For example (and PLEASE correct me on this) I could theoretically
> create an operator named "print" that compares operands numerically,
> were I sufficiently deranged to believe this to be a good idea.
>
> That being the case, what's the precedence of terms in P6?
> I mean, macros are executed immediately, so that puts them up near if
> not on the top, right? Then subs before methods? Operators by to
> whatever you liken them?

The precedence is similar to that of Perl 5. Macros probably parse at the same
level of precedence as the named unary operators. The precedence of your
putative C<print> operator would be determined by its C<is equiv>, C<is
tighter>, or C<is looser> property.

Damian

Damian Conway

unread,
May 23, 2003, 6:54:21 PM5/23/03
to perl6-l...@perl.org
John Williams wrote:

> On Fri, 23 May 2003, Damian Conway wrote:
>
>>=item C<method Coro::clone(*@args) {...}>
>>
>>which creates and returns a copy of the implementation (but not the current
>>state) of the C<Coro> object, having first prebound the copy's parameters to
>>the arguments passed to C<clone>.
>>
>>=back
>
> Isn't that essentially what .assuming() does by currying normal
> subroutines? And if so, why not use the same name for coroutines?

I had the same thought myself, but my understanding of C<.assuming> is that
it returns an argument-inserting wrapper around the original C<Code> object,
not a separate copy of it.

The other problem is that I would expect C<clone> to pitch an exception if the
argument list supplied to it didn't provide all the necessary parameters of
the coroutine. We're not currying a coroutine during a C<clone>; we're
"initializing" it.
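The distinction shows up concretely in Python, whose generators behave much like the proposed C<clone>: calling the generator function binds all parameters and returns a fresh instance, failing immediately if arguments are missing, whereas `functools.partial` is the C<.assuming>-style wrapper that merely pre-binds arguments without instantiating anything. (The `counter` example is illustrative, not from the thread.)

```python
from functools import partial

def counter(start, step):
    # A generator: each call "clones" a fresh instance with all
    # parameters bound up front.
    n = start
    while True:
        yield n
        n += step

c1 = counter(10, 2)          # clone-like: a new, independent instance
try:
    counter(10)              # missing 'step': fails at clone time...
    raise AssertionError("should have failed")
except TypeError:
    pass                     # ...as Damian expects of clone()

from_ten = partial(counter, 10)   # assuming-like: a mere wrapper
c2 = from_ten(5)                  # instantiated only when finally called

assert next(c1) == 10 and next(c1) == 12
assert next(c2) == 10 and next(c2) == 15
```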

Damian


John Macdonald

unread,
May 23, 2003, 5:53:09 PM5/23/03
to Damian Conway, perl6-l...@perl.org
On Sat, May 24, 2003 at 08:34:09AM +1000, Damian Conway wrote:
> Luke Palmer wrote:
>
> >You know, I actually think this is better than yielding from deep in
> >the call stack. It's clear what's happening when, and will likely be
> >a lot easier to debug.
>
> I agree. Despite John's cogent arguments in favour of action-at-a-distance
> yielding, and two forms of coroutine.

I tend to think of most coroutines as a data source
or a data sink (with the possibility of occasional
control activities using the reverse data channel)
or as a filter (which is simultaneously a data sink to
one coroutine and a data source to another).

The connection between two coroutines is essentially a
read in the data sink being matched up with a write in
the data source. The action-at-a-distance of having
a resume (next/yield) occur in a nested subroutine
is the same as sharing a file handle with a nested
subroutine. I don't see a lot of value in making a
read handle (next) available to an entire nested
subroutine hierarchy while a write handle (yield) can
only be used in the top-level routine that "open"s it.
That's adequate for simple cases, but it makes harder
things unworkable.

I don't know about having two forms of coroutine -
I think that having the bi-directional form of next
and yield is sufficient, just confusing. (What does
it mean if one coroutine does a next on another, and
in the course of the second's operation it does a next
on the first instead of a yield? Using two different
operators prevents some things by requiring that there
be a master/slave relationship between each pair -
philosophically if not in fact [that depends on the
answer to the next/next question I just posed].)
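Python's generators are an instructive data point here: they hard-wire exactly the master/slave relationship John describes. `send` passes a value in and `yield` passes one out on the same channel, but a generator can never resume its caller in turn, so the next-on-a-next question cannot arise. A sketch (names illustrative):

```python
def accumulator():
    total = 0
    while True:
        # 'yield' sends the running total out; the caller's 'send'
        # delivers the next addend back in on the same channel.
        addend = yield total
        total += addend

acc = accumulator()
assert next(acc) == 0       # prime the coroutine to its first yield
assert acc.send(5) == 5     # the caller is always the master
assert acc.send(3) == 8
```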

Damian Conway

unread,
May 23, 2003, 7:32:29 PM5/23/03
to perl6-l...@perl.org
Michael Lazzaro wrote:

> In the spirit of questioning everything, however, I'm going to (mildly)
> question some things. [*]

> [*] (I think Simon is right -- we need to better identify what problem
> we think we're solving here, and why 99% of the world should care.)

I'm trying to solve problems of iteration, pipelined processing, and
especially of parallel traversal of complex data structures. For example,
John's pipelined decommenter, or the tree-equivalence problem (see below).


> 1) My impression is that yielding and resuming a coroutine is by
> necessity a relatively expensive operation. As such, given any
> arbitrary recursive coroutine:
>
>>     coro count($n) {
>>         return 1 if $n<=1;
>>         yield $_ for <&_.clone($n-1)>;
>>         yield $n;
>>     }
>
>
> are there really routines in the world for which a recursive
> implementation isn't _substantially_ more expensive than a decent
> alternative solution?

That really depends on how good your implementation of recursion is. You can
convert any recursive problem to an iterative one (and vice versa). The
performance of each depends on whether keeping a context stack manually is
more efficient than having the interpreter keep a call stack automatically.

The real issue as far as I'm concerned is this: for a particular problem, is
recursion or iteration the more "natural" solution?

And the problem with that is that sometimes a *combined* approach is what you
really need, and that's much harder to achieve without iterators of some kind.

For example, consider the problem of determining whether two BST trees store
the same sequence of values. You need recursion to extract the sequences and
then iteration to compare them:

    @seq1 = inorder($tree1);
    @seq2 = inorder($tree2);

    for zip(@seq1,@seq2) -> $node1, $node2 {
        next if $node1 ~~ $node2;
        print "differ\n";
        exit;
    }
    print "same\n";

The problem is that, with ordinary recursive implementations of C<inorder>,
you have to traverse each tree completely before you can start to compare
them. That's horrendously inefficient if each tree has a million nodes, but
the very first nodes differ.

So instead, you use a recursive coroutine, like the C<traverse> coro that John
and I have been batting back and forth. Then your comparison can pull one node
at a time from each tree _whilst_still_using_recursive_traversal_. And your
comparison becomes:

    $iter1 = &traverse.clone($tree1);
    $iter2 = &traverse.clone($tree2);

    for zip(<$iter1>,<$iter2>) -> $node1, $node2 {
        next if $node1 ~~ $node2;
        print "differ\n";
        exit;
    }
    print "same\n";


Now, of course, you *could* get the same effect without a coroutine or
recursion, by creating a regular subroutine that can iteratively
inorder-traverse multiple trees in parallel. But that subroutine is
*significantly* more complicated to write and maintain than:

    coro traverse (Hash $tree) {
        $tree // return;

        if $tree{left}  { »yield« <&_.clone($tree{left})> }
        yield $tree{node};
        if $tree{right} { »yield« <&_.clone($tree{right})> }
    }


> For example, if you have count(100), you're recursing through ~100
> yields. Which means, unless I am sorely mistaken, that you have 99
> saved coroutine states lying around, never to be used again, but sucking
> up resources? How's it know to release them?

This is an example where iteration is obviously the correct approach.
I'm sure Luke would never actually implement his counter coroutine
recursively, when he could just write:

    coro count (Num $n) { »yield« 1..$n }


> 2) I am going to be a heretic, and state publicly that I have always
> thought "coroutine" to be a damn unhelpful name for the behavior of
> "yieldable, resumable subroutines". "Coroutines" are not something most
> people have seriously dealt with, and the name is bloody impenetrable.
> Rather than use C<coro> or even C<coroutine>, I'd much rather use a more
> descriptive phrase, perhaps something like:
>
> sub fibs (...) is reentrant {...}

Semantic note: "re-entrant" means "every call (including overlapping and
recursive ones) is independent". Coroutines are an example of something that
is specifically *not* re-entrant.
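Python generators make the distinction concrete: two consumers resuming the same generator object share one suspended state (the coroutine behaviour), while each fresh call produces an independent instance (the closest thing to re-entrancy). A sketch:

```python
def count(n):
    yield from range(1, n + 1)

shared = count(4)
# Two "callers" resuming the same instance share its state:
# neither one sees the whole sequence.
assert [next(shared), next(shared)] == [1, 2]
assert list(shared) == [3, 4]

# Fresh instances are independent, so repeated use is safe:
assert list(count(3)) == [1, 2, 3]
assert list(count(3)) == [1, 2, 3]
```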

"Coroutine" is the accepted jargon for these things. And it's no worse than
(say) "closure", which the Perl community has become used to.

That being said, I'm sure Larry would carefully consider any better (i.e. less
technical) suggestions.


> E.G. I'm wondering how close we can get coroutine syntax to mirror
> (as-of-yet-unknown) thread syntax, such that the user perhaps doesn't
> need to be aware of "how" the routine is reentrant -- it just magically
> does the right thing.

Curiously, I'm specifically trying to move coroutine syntax *away* from thread
syntax, which has always struck me as far too low-level. I think of coroutines
as a syntactic (and semantic) convenience layer around threading, much as loop
constructs are a convenience layer around C<goto>.

Damian

Piers Cawley

unread,
May 23, 2003, 8:23:59 PM5/23/03
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

I've tended to think of coro as distinct from the rest simply because
I can see how to implement them without having to use a new
declarator, but I can't see how to do the same thing with the other
types.

BTW, speaking of declarators and types of Code, I still want to be
able to write:

    class String {

        method open($path:) {
            File.open($path) or fail $!
        }

        method open($path: &block) {
            my $fh = $path.open;
            for <$fh> -> { &block($_) }
            close $fh;
        }
    }

...

    "/etc/passwd".open( -> $pwent { ... } );

Is there any chance of getting this? Or will I have to go the whole
multimethod hog? (Not that it shouldn't be possible to set up
appropriate macros of course)

--
Piers

Piers Cawley

unread,
May 23, 2003, 8:26:15 PM5/23/03
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

Presumably all bets about macro precedence are off in the presence of
C<is parsed>?


--
Piers

John Macdonald

unread,
May 23, 2003, 7:43:28 PM5/23/03
to Piers Cawley, Damian Conway, perl6-l...@perl.org
On Sat, May 24, 2003 at 01:23:59AM +0100, Piers Cawley wrote:
> Damian Conway <dam...@conway.org> writes:
> > While I suspect Piers is right -- that we *could* avoid the extra
> > keyword -- I think a C<coro> declarator is more in keeping with that
> > tradition.
>
> I've tended to think of coro as distinct from the rest simply because
> I can see how to implement them without having to use a new
> declarator, but I can't see how to do the same thing with the other
> types.

Well, method doesn't need a declarator in Perl 5 but
Larry added it anyway, so being able to implement a
functional form without needing a special declarator
is not sufficient reason to not provide one.

Michael Lazzaro

unread,
May 23, 2003, 9:19:21 PM5/23/03
to Damian Conway, perl6-l...@perl.org

On Friday, May 23, 2003, at 04:32 PM, Damian Conway wrote:
> Semantic note: "re-entrant" means "every call (including overlapping
> and recursive ones) is independent". Coroutines are an example of
> something that is specifically *not* re-entrant.

$#@%ing $@#^. Yes, sorry. Word overlap. Education long time gone.

> "Coroutine" is the accepted jargon for these things. And it's no worse
> than (say) "closure", which the Perl community has become used to.

Well, yes, but... umm...

My concern is not particularly the jargon of "coroutine", or "closure",
or anything else by itself, but the possible aggregation of
state/closure/coroutine/continuation/thread/whatzit into the language
-- that's what's worrying me. Any one of those concepts is "hard" in
both jargon and practice, and I wonder what will be the straw that
breaks the Camel's back, so to speak.

What bugs me about it is (potentially) the sheer volume of secret words
you may need to know to play along.[*] All these things -- from
closures to coroutines to continuations to threads -- are variations of
the same central theme, which boils down to maintaining the state in
one location while something else happens in another location. I
submit that Perl needs a single, unified way to accomplish that, for
some negotiable definition of "single" and "unified."

Don't get me wrong, I like the idea of including coroutine
_capabilities_ -- I'm just pondering whether we are maintaining an
artificial separation from similar, possibly overlapping concepts.

> Curiously, I'm specifically trying to move coroutine syntax *away*
> from thread syntax, which has always struck me as far too low-level. I
> think of coroutines as a syntactic (and semantic) convenience layer
> around threading, much as loop constructs are a convenience layer
> around C<goto>.

That's interesting, because I sortof agree with that (threads being too
low-level). Perhaps I should turn my question around; is there a way
to move thread syntax closer to coroutine syntax, for common cases? :-)

Or to move coroutine syntax to interrelated but still-higher-level
iterator, source/sink, and thread/parallelism syntaxes, since in
practice coroutines are almost always(?) used in those rather narrow,
defined circumstances?

MikeL


[*]An explanatory aside, FWIW:

I've loathed CS proper ever since college, in which -- even at my
undisputedly top-tier, pretty damn good school -- the entire CS
curriculum could be largely boiled down to:
1) Enroll in a CS class
2) Buy professor's $70 textbook, sold only at the campus
bookstore, describing the new language they just invented.
3) Forget everything you thought you learned last semester.
4) Spend a few weeks learning the words your professor uses
to describe the same damn concepts you learned last semester,
by different names.
5) Spend the rest of the semester coding problems you coded before,
using an experimental language whose only working compiler
consists of a flannel-clad grad student TA.
6) Repeat until graduated or insane.

That isn't meant to be a slam against professors -- just the Babelesque
state of CS education in general, at least as I experienced it. YMMV.
;-)

Luke Palmer

unread,
May 24, 2003, 1:12:24 AM5/24/03
to pdca...@bofh.org.uk, dam...@conway.org, perl6-l...@perl.org
Piers comduced:

> BTW, speaking of declarators and types of Code, I still want to be
> able to write:
>
> class String {
>
>     method open($path:) {
>         File.open($path) or fail $!
>     }
>
>     method open($path: &block) {
>         my $fh = $path.open;
>         for <$fh> -> { &block($_) }
>         close $fh;
>     }
> }
>
> ...
>
> "/etc/passwd".open( -> $pwent { ... } );
>
> Is there any chance of getting this? Or will I have to go the whole
> multimethod hog? (Not that it shouldn't be possible to set up
> appropriate macros of course)

Get what?

This is valid Perl 6, and it does what you expect:

    method Str::open() {
        open self err fail;
    }
    method Str::open(&block) {
        for <open self> { block($_) };
    }

That is, just tacking methods on to the Str class without Str's
express written consent.

Luke

Luke Palmer

unread,
May 24, 2003, 1:16:42 AM5/24/03
to dam...@conway.org, perl6-l...@perl.org

However, would it not be valid to have a "coroutine method"? I don't
see any fundamental paradox prohibiting it (as with a "method sub",
(not to be confused with "submethod" :-) ). In fact, I would consider
it quite useful; e.g. Tree::traverse.

Luke

Piers Cawley

unread,
May 24, 2003, 1:56:09 AM5/24/03
to John Macdonald, Damian Conway, perl6-l...@perl.org
John Macdonald <j...@algate.perlwolf.com> writes:

Disagree about Perl 5 methods not needing their own declarator, that
is, after all why Larry added them in Perl 6. The trouble with Perl
5's "subs are methods" rule of thumb is that there is no way to know
at runtime whether a given sub is a method or a sub, which can lead to
crawling horrors. Meanwhile in Perl 6, methods get their own
declarator because they have recognizably different semantics from
subs (implicit passing of an invocant) and the distinction cannot be
recognized in any other way -- there's no distinctive signature to the
contents of a method that will be there in every possible method.

Meanwhile, in a coroutine, there is just such a distinctive signature;
if a subroutine body contains a yield, then that sub must necessarily
be a coroutine. (And now I'm trying to work out if having a
coroutinish method/submethod/rule/whatever makes sense).

If you look at the other Code types that get their own declarator
you'll see cases where there is no way of knowing from the body of the
Code what sort of Code object to instantiate. You'll also see that, as
of at least Synopsis 6, Coroutines don't require their own declarator...


--
Piers

Piers Cawley

unread,
May 24, 2003, 1:58:37 AM5/24/03
to Luke Palmer, dam...@conway.org, perl6-l...@perl.org
Luke Palmer <fibo...@babylonia.flatirons.org> writes:

Umm... Unless I've missed something, you'll get a compile error at the
second method. The issue isn't whether I can extend an arbitrary
class, but whether a method is distinguished solely by its name, or by
a combination of its name, signature (and possibly its return type).

--
Piers

Luke Palmer

unread,
May 24, 2003, 2:04:25 AM5/24/03
to pdca...@bofh.org.uk, dam...@conway.org, perl6-l...@perl.org

Oh, duh. Sorry.

As it stands, yes, that is illegal (or at least a warning with
semantics that you didn't intend). I'm fine with such "overloading",
as it were, but then, my opinion doesn't really matter.

I don't actually know why this is disallowed...?

Luke

Mark A. Biggar

unread,
May 23, 2003, 7:48:08 PM5/23/03
to perl6-l...@perl.org

Just an observation. There is really no difference between a coroutine
and a thread running under a non-preemptive run-to-completion-or-yield
threading policy. I could see writing something like:

    coro foo(...) is threaded {
        while (1) {
            # ... some long complicated process
            yield $return_value;
        }
    }

Where if C<foo> gets to the C<yield> it waits until someone calls
C<foo.next()> and if the call comes first it waits until C<foo>
gets to the C<yield>. In any case after the "rendezvous" (to use
an Ada term) both the caller and the coroutine go on their ways as
separate threads.
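That rendezvous can be approximated today with an ordinary thread and a bounded queue (Python here; the sentinel protocol is an illustrative choice, and a one-slot queue lets the producer run at most one yield ahead rather than enforcing a strict Ada-style rendezvous):

```python
import threading
import queue

def foo(channel):
    for value in range(3):
        # ... some long complicated process ...
        channel.put(value)   # blocks while the one-slot buffer is full
    channel.put(None)        # sentinel: the "coroutine" has finished

chan = queue.Queue(maxsize=1)
threading.Thread(target=foo, args=(chan,), daemon=True).start()

received = []
while (item := chan.get()) is not None:   # plays the role of foo.next()
    received.append(item)
assert received == [0, 1, 2]
```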

--
ma...@biggar.org
mark.a...@attbi.com

Simon Cozens

unread,
May 24, 2003, 4:57:16 AM5/24/03
to perl6-l...@perl.org
pdca...@bofh.org.uk (Piers Cawley) writes:
> Disagree about Perl 5 methods not needing their own declarator

Disagree with your disagreement. I write object-oriented Perl 5 without
using the :method attribute, and it seems to work.

--
Resist the urge to start typing; thinking is a worthwhile alternative.
-- Kernighan and Pike

Simon Cozens

unread,
May 24, 2003, 5:06:46 AM5/24/03
to perl6-l...@perl.org
mlaz...@cognitivity.com (Michael Lazzaro) writes:
> My concern is not particularly the jargon of "coroutine", or
> "closure", or anything else by itself, but the possible aggregation of
> state/closure/coroutine/continuation/thread/whatzit into the language
> -- that's what's worrying me. Any one of those concepts is "hard" in
> both jargon and practice, and I wonder what will be the straw that
> breaks the Camel's back, so to speak.

I don't think any of them will be so long as people like you continue to
be aware that these are Hard Concepts. The trouble only comes when people
decide to throw coroutines into the core because "everybody knows" what
a coroutine is and uses them on a daily basis.

> 3) Forget everything you thought you learned last semester.
> 4) Spend a few weeks learning the words your professor uses
> to describe the same damn concepts you learned last semester,
> by different names.

Wouldn't it have been a lot easier if each new professor had taken the time to
relate the new concept to a similar concept in another language you were
familiar with? If someone said "Perl 6 co-routines will be just like
co-routines in language X", then not only would you have an instant point of
reference, you'd have another working model against which you could test your
preconceptions of how these concepts ought to behave, and, perhaps more
importantly, you'd find it easier to believe that the professor actually knew
or cared about what works and what doesn't work in other languages and would
not come away with the impression that he was just interested in NIH
navel-gazing.

May not apply to, say, MIT.

--
<fimmtiu> Sucks really Forth. Ugh.

John Macdonald

unread,
May 24, 2003, 6:31:21 AM5/24/03
to Piers Cawley, John Macdonald, Damian Conway, perl6-l...@perl.org
On Sat, May 24, 2003 at 06:56:09AM +0100, Piers Cawley wrote:
> John Macdonald <j...@algate.perlwolf.com> writes:
> > Well, method doesn't need a declarator in Perl 5 but
> > Larry added it anyway, so being able to implement a
> > functional form without needing a special declarator
> > is not sufficient reason to not provide one.
>
> Disagree about Perl 5 methods not needing their own declarator, that
> is, after all why Larry added them in Perl 6. The trouble with Perl
> 5's "subs are methods" rule of thumb is that there is no way to know
> at runtime whether a given sub is a method or a sub, which can lead to
> crawling horrors. Meanwhile in Perl 6, methods get their own
> declarator because they have recognizably different semantics from
> subs (implicit passing of an invocant) and the distinction cannot be
> recognized in any other way -- there's no distinctive signature to the
> contents of a method that will be there in every possible method.
>
> Meanwhile, in a coroutine, there is just such a distinctive signature;
> if a subroutine body contains a yield, then that sub must necessarily
> be a coroutine. (And now I'm trying to work out if having a
> coroutinish method/submethod/rule/whatever makes sense).

They were designed to require a yield in the immediate
body of the coroutine, but I can easily imagine
a coroutine that has no yields visible -- all of
the yields are done from nested subroutine calls.

Imagine a tree structure that contains a variety of
types of node. The top level routine might handle the
(likely recursive) process of traversing a structure
in order, but the decision of what to yield for
each node might be done in a method of the node.
The traverse coroutine would be a method of the
generic node class, but the value yielding block
would be a subclass method unique to each subclass
of the node class.

    coro traverse (Node $node) {
        $node // return;
        traverse $node.left;
        $node.valstream;
        traverse $node.right;
    }

and the valstream method for each subclass of Node
can contain the yield operators appropriate for
that subclass.
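John's design translates directly into a language with generators: the base class owns the traversal, and each subclass's valstream decides what to yield per node. A Python sketch (class names illustrative):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

    def traverse(self):
        # Generic in-order traversal; what gets yielded at each node
        # is delegated to the subclass's valstream().
        if self.left:
            yield from self.left.traverse()
        yield from self.valstream()
        if self.right:
            yield from self.right.traverse()

    def valstream(self):
        yield self.value

class DoubledNode(Node):
    # A subclass chooses its own yields, invisible to traverse().
    def valstream(self):
        yield self.value
        yield self.value

tree = Node(2, left=DoubledNode(1), right=Node(3))
assert list(tree.traverse()) == [1, 1, 2, 3]
```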

Piers Cawley

unread,
May 24, 2003, 2:13:31 PM5/24/03
to Simon Cozens, perl6-l...@perl.org
Simon Cozens <si...@simon-cozens.org> writes:

> pdca...@bofh.org.uk (Piers Cawley) writes:
>> Disagree about Perl 5 methods not needing their own declarator
>
> Disagree with your disagreement. I write object-oriented Perl 5 without
> using the :method attribute, and it seems to work.

Smartarse. Yes it works, but there's many a time I've wished that it
was possible to know if a random coderef were expected to be a method
or a subroutine.

--
Piers

Mlazzaro

unread,
May 24, 2003, 3:55:20 PM5/24/03
to perl6-l...@perl.org

Simon Cozens wrote:
> > 3) Forget everything you thought you learned last semester.
> > 4) Spend a few weeks learning the words your professor uses
> > to describe the same damn concepts you learned last semester,
> > by different names.
>
> Wouldn't it have been a lot easier if each new professor had taken the time to
> relate the new concept to a similar concept in another language you were
> familiar with? If someone said "Perl 6 co-routines will be just like
> co-routines in language X", then not only would you have an instant point of
> reference, you'd have another working model against which you could test your
> preconceptions of how these concepts ought to behave, and, perhaps more
> importantly, you'd find it easier to believe that the professor actually knew
> or cared about what works and what doesn't work in other languages and would
> not come away with the impression that he was just interested in NIH
> navel-gazing.

It's funny, because I, at least, _never_ really got that "comparative"
education; each language was discussed in a near-complete vacuum,
because the advocate(s) of that particular language were interested in
framing the discussion strictly in terms of how _their own_ concepts
worked. Moving quickly from Pascal, C, Lisp, Smalltalk, C++, etc., and
into an entire realm of specialty languages did have the desired effect,
if the desired effect was to make us understand that 99% of computer
science consisted of reinventing ever-more-curiously-shaped wheels,
simply to see how wacky you could make them and still have them roll.

My biggest frustration from those days is that my bookshelf, still, does
not have a truly authoritative book on the compared and contrasted
strategies of all the major species of languages. While I can tell you
how coroutines, continuations, and threads are related in an informal,
hand-waving sort of way, for example, I've yet to see a book that
satisfied me that it was answering those questions definitively in an
objective, language-independent, _non-biased_ sort of way. (Suggestions
would certainly be welcome.)

MikeL

Damian Conway

unread,
May 24, 2003, 6:32:51 PM5/24/03
to perl6-l...@perl.org

> BTW, speaking of declarators and types of Code, I still want to be
> able to write:
>
> class String {
>
>     method open($path:) {
>         File.open($path) or fail $!
>     }
>
>     method open($path: &block) {
>         my $fh = $path.open;
>         for <$fh> -> { &block($_) }
>         close $fh;
>     }
> }
>
> ...
>
> "/etc/passwd".open( -> $pwent { ... } );
>
> Is there any chance of getting this? Or will I have to go the whole
> multimethod hog?

You'd need multimethods. But that's just:

    class String {

        multi open(String $path) {
            File.open($path) or fail $!
        }

        multi open(String $path, &block) {
            my $fh = $path.open;
            for <$fh> -> { &block($_) }
            close $fh;
        }
    }

and they can still be called exactly like methods:

    my $str = String.new();

    $str.open($path, { print $_ });


BTW, you could also write:

    multi open(String $path, &block) {
        my $fh = $path.open;
        for <$fh>, &block;
        close $fh;
    }

(that is, directly use the closure block as the C<for> block).

Damian

Damian Conway

unread,
May 24, 2003, 6:50:31 PM5/24/03
to perl6-l...@perl.org
Piers Cawley wrote:

> Presumably all bets about macro precedence are off in the presence of
> C<is parsed>?

No.

C<is parsed> merely tells the parser what production (within the C<&macro>
rule) to use when matching the macro's arguments. The precedence of the
grammar rule for parsing a macro is unaffected.

In other words, Perl starts out with:

    rule Perl6::macro { (<ident>) (<arglist>) }

which is called from somewhere else in the C<Perl6> grammar. And it's the
point in the grammar from which C<&macro> is called that determines its
precedence.

Thereafter, specifying an C<is parsed> on a particular macro:

    macro debug ($message) is parsed( / \h+ (\N+?) \h* <before \n>/ )
    {
        return "warn qq{$message}" if $*DEBUGGING;
        return "";
    }

changes the C<&macro> rule from:

    rule Perl6::macro { (<ident>) (<arglist>) }

to:

    rule Perl6::macro { debug \h+ (\N+?) \h* <before \n>
                      | (<ident>) (<arglist>)
                      }


But the rule that *calls* C<&macro> is totally unaffected, which in turn means
that the precedence of C<debug> is the same as that of any other macro.

Damian

Damian Conway

unread,
May 24, 2003, 7:24:11 PM5/24/03
to perl6-l...@perl.org
Michael Lazzaro wrote:

> My concern is not particularly the jargon of "coroutine", or "closure",
> or anything else by itself, but the possible aggregation of
> state/closure/coroutine/continuation/thread/whatzit into the language --
> that's what's worrying me. Any one of those concepts is "hard" in both
> jargon and practice,

I would dispute the second part of that assertion. I remember closures being
considered a "hard" technique until Perl had them and people saw that they
were actually a very simple idea. Likewise references. Likewise ties. I'm even
old enough to remember when OO was considered "hard" by the general
programming language community. Nowadays Perl manages to reduce most of that
hardness to three trivial rules.

It's instructive to look through the first edition of "Advanced Perl
Programming" and see what was considered "hard Perl" last millenium:
references, anonymous arrays and hashes, symbolic references, closures, string
eval, exceptions, modules, OO, ties.

Yet I currently teach every one of those topics in one or other of my
intermediate courses.


> and I wonder what will be the straw that breaks the Camel's back, so to speak.

But that's the beauty of Perl. It's layered. Just as you can be a perfectly
useful Perl 5 programmer without understanding references, anonymous arrays
and hashes, symbolic references, closures, exceptions, modules, OO, ties, or
threads, so too you'll be able to be a useful Perl 6 programmer without
understanding multimethods, junctions, coroutines, or currying.

It's like a natural language. You can get by with just a 5000 word English
vocabulary (and just the 5000 concepts that vocabulary encodes), without being
unduly troubled by the other 395,000 words (and concepts) in the language.

When you need a word for a new concept, you learn its syntax, and learn how
it's used. When you come across a word you don't know, you look it up or ask
someone to explain it.


> What bugs me about it is (potentially) the sheer volume of secret words
> you may need to know to play along.[*] All these things -- from
> closures to coroutines to continuations to threads -- are variations of
> the same central theme, which boils down to maintaining the state in one
> location while something else happens in another location. I submit
> that Perl needs a single, unified way to accomplish that, for some
> negotiable definition of "single" and "unified."
>
> Don't get me wrong, I like the idea of including coroutine
> _capabilities_ -- I'm just pondering whether we are maintaining an
> artificial separation from similar, possibly overlapping concepts.

I would argue that similar, possibly overlapping, but non-identical concepts
are precisely the ones that *most* need to be separated. Because, if you don't
clearly separate them, then people can't learn them incrementally.

That has been Perl's way (perhaps as a product of its linguistic heritage).
For example, hashes and arrays are distinguished, even though their
functionality has considerable overlap. Perl 6 continues this process:
subroutines and methods are to be distinguished. C<eval> (compile and
interpret a string) is to be distinguished from C<eval> (catch exceptions).


> That's interesting, because I sortof agree with that (threads being too
> low-level). Perhaps I should turn my question around; is there a way to
> move thread syntax closer to coroutine syntax, for common cases? :-)

Probably. But I still question whether we *really* ought to blur the
distinctions between the two concepts?

Damian

Damian Conway

May 24, 2003, 7:27:43 PM
to perl6-l...@perl.org
Luke Palmer wrote:

> However, would it not be valid to have a "coroutine method"? I don't
> see any fundamental paradox prohibiting it (as with a "method sub",
> (not to be confused with "submethod" :-) ). In fact, I would consider
> it quite useful; e.g. Tree::traverse.

Yes. I too can see many uses for resumable methods. However, I can envisage
that they would have slightly different semantics from regular coroutines. For
example, their state should probably be per-object, rather than global.

Hence we might need a separate subclass (and declarator) for them. Perhaps
C<comethod>.
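To make the distinction concrete, a Python generator method already behaves this way (a hedged analogy only, not the proposed Perl 6 semantics; the Tree class here is invented for illustration):

```python
class Tree:
    def __init__(self, value, children=()):
        self.value = value
        self.children = children

    def traverse(self):
        # A resumable method: the suspended state belongs to the
        # iterator obtained from this object, not to the class globally.
        yield self.value
        for child in self.children:
            yield from child.traverse()

t = Tree(1, [Tree(2), Tree(3)])
values = list(t.traverse())   # [1, 2, 3]
```

Each object (in fact, each call) gets its own resumable state, which is roughly the per-object flavour under discussion.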

Damian

Simon Cozens

May 24, 2003, 7:29:51 PM
to perl6-l...@perl.org
dam...@conway.org (Damian Conway) writes:
> Michael Lazzaro wrote:
>> My concern is not particularly the jargon of "coroutine", or "closure", or anything else by itself, but the possible aggregation of state/closure/coroutine/continuation/thread/whatzit into the language --
>> that's what's worrying me. Any one of those concepts is "hard" in
>> both jargon and practice,
>
> I would dispute the second part of that assertion. I remember closures
> being considered a "hard" technique until Perl had them and people saw
> that they were actually a very simple idea.

Michael, I'm very sorry. I remember saying something like "it'll all be
OK so long as we remember that these are actually hard concepts". It
seems now like the driving forces have decided that these aren't actually
hard concepts at all. And now I can no longer guarantee that it will all
be OK. Consider my previous post on this subject annulled.

--
"He was a modest, good-humored boy. It was Oxford that made him insufferable."

Dave Whipp

May 24, 2003, 7:58:35 PM
to perl6-l...@perl.org
Damian Conway wrote:
> Perhaps C<comethod>.

I think this is my greatest concern about the proposed C<coro>
declarator: we'll end up with cosub, comethod, cosubmethod and comulti
(one of which will be named coro). Perhaps even coblock. Then, when we
later find another concept that is routine-like, we multiply up the
declarators. I hope that all these things are just aliases for traits on
a generic C<routine> declarator. Then we are simply providing names for
the common cases, but the C<is> syntax will allow us still to declare the
strange combinations. I find myself thinking that we don't yet have
sufficient experience with the abstractions to know which cases warrant
their own keywords. It feels like we have multiple intersecting
hierarchies of concepts, which need to be teased apart a bit more.

BTW, here's a paper I came across from Microsoft, which seems slightly
related: http://research.microsoft.com/~adya/pubs/usenix2002-fibers.pdf


Dave.

Damian Conway

May 24, 2003, 8:35:15 PM
to perl6-l...@perl.org
Dave Whipp wrote:

>> Perhaps C<comethod>.
>
> I think this is my greatest concern about the proposed C<coro>
> declarator: we'll end up with cosub, comethod, cosubmethod and comulti
> (one of which will be named coro). Perhaps even coblock. Then, when we
> later find another concept that is routine-like, we multiply up the
> declarators.

This is a valid concern, and one that I too have been mulling over for the
last day or so. It may well be that Larry will decide that resumability is
better made orthogonal to the type of C<Code> object involved.

And, whilst that might make life a little harder for Dan's team, it would
certainly make life easier for everyone else. ;-)

On the other hand, multiple dispatch falls into the same category of
"potentially orthogonal", and Larry went with a C<multi> declarator, rather
than an C<is multi> trait. I guess (as usual) we'll just have to wait and see
what he decides.

And whatever that is, one thing I am sure of is that he will have valued this
entire discussion of the many issues involved.


> I hope that all these things are just aliases for traits on
> a generic C<routine> declarator. Then we are simply providing names for
> the common cases, but the C<is> syntax will allow us still to declare the
> strange combinations. I find myself thinking that we don't yet have
> sufficient experience with the abstractions to know which cases warrant
> their own keywords.

Yes, this certainly sounds like the kind of solution Larry might favour.


> BTW, here's a paper I came across from Microsoft, which seems slightly
> related: http://research.microsoft.com/~adya/pubs/usenix2002-fibers.pdf

Thanks for that.

Damian


Damian Conway

May 24, 2003, 9:47:58 PM
to perl6-l...@perl.org
Simon Cozens wrote:

> The trouble only comes when people
> decide to throw coroutines into the core because "everybody knows" what
> a coroutine is and uses them on a daily basis.

I'm pretty confident that, if Larry does decide to throw coroutines into the
core, that won't be the reason.


> Wouldn't it have been a lot easier if each new professor had taken the time to
> relate the new concept to a similar concept in another language you were
> familiar with?

Simon is perfectly correct here. And that's exactly what many of us professors
try to do when teaching. If anyone here *is* actually interested in
connectivity-based teaching approaches, I discussed them very informally and
at some length in:

D M Conway: "Only connect: Teaching as communication",
in "Reflecting on University Teaching: Academic Stories",
Australian Committee for University Teaching and Staff Development,
pp. 337-346, 1997.

which is available on-line at:

www.csse.monash.edu.au/~ajh/adt/resources/DamianAnecdotes.html

Of course, we professors also assume that, as intelligent adult participants
in higher education, the students will also shoulder some of the burden for
making the kinds of interlanguage comparisons necessary.

So, rather than always spoon-feeding them "Feature A of language X is the same
as feature B of language Y", we'd much prefer to encourage them to draw on
their existing knowledge of programming languages and make the appropriate
connections themselves. That's because the vast body of educational research
indicates that forming DIY cognitive associations (provided, of course, that
they're appropriately verified) results in a far better learning outcome.


> If someone said "Perl 6 co-routines will be just like
> co-routines in language X", then not only would you have an instant point of
> reference, you'd have another working model against which you could test your
> preconceptions of how these concepts ought to behave,

We're aiming for Perl 6 to follow the Perl tradition of taking the best ideas
from whatever language they may be found in, and then melding them into a
useful, practical, and seamless synthesis. So it's generally not possible to
say that a Perl 6 *anything* will be just like something in any other
language. That's the case here.

For example, Perl 6 coroutines can't be exactly like Icon's corresponding
construct (generators) because Perl 6 isn't going to have the universal
implicit backtracking architecture that Icon provides. And they're not the
same as Sather's corresponding construct (iters) either, because we're not
looking at restricting their resumability to the scope of the loop construct
that originally invoked them.

Curiously, this ties in with the mystery of why an idea as conceptually simple
as coroutines ("subroutines whose execution you can suspend and later resume")
should be considered "hard". I suspect that this is simply a reflection of the
fact that -- so far -- no language designer has found a way to take the
simplicity of the idea and express its full power in a syntax and semantics
that are equally easy to comprehend.

Larry has an extraordinarily good track record of doing exactly that, and I
see no reason why he should not succeed again in this case.

But, of course, the on-going perception that coroutines are "hard" does imply
that Perl 6 coroutines will probably not be just like coroutines in any
existing language X. If we could just borrow coroutine semantics from some
existing language X, that would imply that language X's coroutines would
already be practical, usable, and easy to understand.

Damian

John Macdonald

May 24, 2003, 11:40:06 PM
to Damian Conway, perl6-l...@perl.org
On Sun, May 25, 2003 at 10:35:15AM +1000, Damian Conway wrote:
> Dave Whipp wrote:
>
> >>Perhaps C<comethod>.
> >
> >I think this is my greatest concern about the proposed C<coro>
> >declarator: we'll end up with cosub, comthod, cosubmethod and comulti
> >(one of which wil be named coro). Perhaps even coblock. Then, when we
> >later find another concept that is routine-like, we multiply up the
> >declarators.
>
> This is a valid concern, and one that I too have been mulling over for the
> last day or so. It may well be that Larry will decide that resumability is
> better made orthogonal to the type of C<Code> object involved.

I've been starting to think that too. A traverse
routine can know how to go through a data structure
and "do something" at each node without having to know
or care whether "do something" is self-contained or
is yielding a sequence of values to a coroutine that
is operating in parallel with the coroutine that
called traverse. It *does* have to know whether
"do something" is a method that each object node
has defined, or whether it is a non-object-oriented
function to be called (either a fixed name function
or more usefully a function passed in a reference).
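A sketch of that separation in Python (hedged; the tree layout and the summer consumer are invented for illustration) — the same traverse drives either kind of "do something":

```python
def traverse(node, do_something):
    # traverse neither knows nor cares whether do_something is
    # self-contained or feeds values to a waiting coroutine
    do_something(node["value"])
    for child in node.get("children", ()):
        traverse(child, do_something)

tree = {"value": 1, "children": [{"value": 2}, {"value": 3}]}

# 1. a self-contained "do something"
seen = []
traverse(tree, seen.append)       # seen == [1, 2, 3]

# 2. the same traverse yielding each value to a consumer coroutine
def summer(box):
    while True:
        box["total"] = box.get("total", 0) + (yield)

box = {}
consumer = summer(box)
next(consumer)                    # prime: run to the first yield
traverse(tree, consumer.send)     # box["total"] == 6
```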

For implementation, you have to know whether a
specific invocation is calling a method (because
the object must be passed as an extra argument,
and the code for the method must be found by a
search based on the object), a multi-method (like a
method, but the search for the method is based on the
calling and argument context as well as the object),
is forking a thread, or is forking a coroutine.
Within the execution of a coroutine it should
be possible to call functions, methods, threads,
start new coroutines, or resume existing coroutines -
just as it would be during the execution of a thread.
(A coroutine can be used for many of the same things
that a thread can; it is distinct in that a coroutine
is transparent to the operating system. So, a thread
can do a blocking system call without stopping other
threads in the same process, while a coroutine cannot.
That is a distinction as useful as the one between
a thread and a forked process.)

> And, whilst that might make life a little harder for Dan's team, it would
> certainly make life easier for everyone else. ;-)
>
> On the other hand, multiple dispatch falls into the same category of
> "potentially orthogonal", and Larry went with a C<multi> declarator, rather
> than an C<is multi> trait. I guess (as usual) we'll just have to wait and
> see what he decides.

Unless I am wrong about what multiple dispatch is,
the difference is that multiple dispatch affects how
a particular "function" call operates. The ability
to yield a value ("forking" a coroutine) means that
nested things (functions, methods, multi-methods,
etc.) should be able to resume other coroutines.
Meanwhile, something like the generic traverse routine
might be in the middle and not care whether it was
called from a "forked" coroutine and its nested
invokations are resume (yielding values), or whether
it was called from a mainline and its nested node
handlers are doing mainline operations (like print).

So, that suggests to me that both a declarator type
of code object and an attribute for all types of code
object are of interest in relation to coroutines.

A code thingy that always forks a new coroutine
when it is called (a "coro") needs a declarator.
When called, it not only creates a new coroutine, but
the caller must keep track of this new coroutine and
arrange to resume it whenever appropriate - either
explicitly in the code or secretly by the language
implementation. Since the caller must handle the
process of calling this code thingy specially, it
should be a declaratory form of code object.

Meanwhile, a code thingy that yields values requires
that it be called in a coroutine context, but that
need not be known to or affect the immediate caller.

A regular subroutine might have been called in a
coroutine context or not (and it need not care) but
it passes that context on to anything that it calls.

So, there should be a "coroutine context" attribute
that is set when a "coro" is called, and passed on
unchanged when any other type of code object is
called.

Piers Cawley

May 25, 2003, 1:12:45 AM
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

Okay, now what happens when I want to overload an existing method of a
class that I didn't write. Suppose I want to set things up so I can do

File.open($path, -> $line {...});

(and assume for a minute that File::open isn't already a
multimethod). If I do

multi open(Class $class is 'File', $path, Block &block) {
my $handle = $class.open or fail $!;
for <$handle>, &block;
close $handle;
}

Then, because multidispatch is only attempted after single dispatch
has failed to find a method, Perl will find the original File::open
(how do we declare class methods by the way?) and try to evaluate
that, fail and throw an error.

I realise of course that 'open' isn't the best example in the world as
it's almost certainly going to be a multimethod in the first place...

Personally, I'd like to see multimethods that are declared in Class
scope (ie, inside a Class block) which have the same class as their
first argument automagically coerce any existing monomethod with the
same name into an equivalent multimethod. Assuming a suitably set up
Method class I'm not sure that this is all that impossible...


> BTW, you could also write:
>
> multi open(String $path, &block) {
> my $fh = $path.open;
> for <$fh>, &block;
> close $fh;
> }
>
> (that is, directly use the closure block as the C<for> block).

I thought I could, but I wasn't entirely sure. I did think of writing
this as:

multi open(String $path, &block) {
my $fh = $path.open or fail $!; POST { close $fh }
for <$fh>, &block;
}

But what happens when someone does:

$path.open(-> $each {
my $resume_point;
call_cc -> { $resume_point = $_ };
# We will resume here
...
do_something or throw ResumableException: resume_at => $resume_point;
...
}

If we leave a routine via an exception, does the POST block get
triggered? Or do we only trigger the POST block if we leave the
routine and there's no 'live' copy of any continuation that has the
routine in its chain?
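(For comparison only — this decides nothing about Perl 6 continuations — the nearest Python analogue of a POST block, try/finally, does fire when the routine is left via an exception:)

```python
post_ran = []

def routine(block):
    try:
        block()
    finally:
        post_ran.append("POST")   # the cleanup action, like closing $fh

def boom():
    raise ValueError("leaving via an exception")

try:
    routine(boom)
except ValueError:
    pass

# post_ran == ["POST"]: the cleanup fired despite the exception exit
```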

--
Piers

Piers Cawley

May 25, 2003, 1:15:12 AM
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> writes:

Or even:

method whatever(...) is Resumable {
...
}

--
Piers

Simon Cozens

May 25, 2003, 3:38:34 AM
to perl6-l...@perl.org
dam...@conway.org (Damian Conway) writes:
> For example, Perl 6 coroutines can't be exactly like Icon's
> corresponding construct (generators) because Perl 6 isn't going to
> have the universal implicit backtracking architecture that Icon
> provides. And they're not the same as Sather's corresponding construct
> (iters) either, because we're not looking at restricting their
> resumability to the scope of the loop construct that originally
> invoked them.

Thanks. That's precisely what I was looking for.

--
Building translators is good clean fun.
-- T. Cheatham

John Macdonald

May 25, 2003, 10:28:34 AM
to Damian Conway, perl6-l...@perl.org
On Sun, May 25, 2003 at 09:27:43AM +1000, Damian Conway wrote:
> Luke Palmer wrote:
>
> >However, would it not be valid to have a "coroutine method"? I don't
> >see any fundamental paradox prohibiting it (as with a "method sub",
> >(not to be confused with "submethod" :-) ). In fact, I would consider
> >it quite useful; e.g. Tree::traverse.
>
> Yes. I too can see many uses for resumable methods. However, I can envisage
> that they would have slightly different semantics to regular coroutines. For
>
> example, their state should probably be per-object, rather than global.

I don't know that there is any need for separate
syntax. Imagine a Data::Dumper sort of routine that
traverses a data structure. It might return tokens
identifying value type followed by a sequence of
value elements and attribute elements and maybe
a terminator. If the item being dumped is an
object, it might invoke a dump_state method in the
process of dumping the attributes. So, it could be
calling type-specific subroutines and object methods
interchangably - and the coroutine that receives
the resulting tokens wouldn't know or care whether a
particular token came from a subroutine or a method.
Using that saved token stream to rebuild the data
structure would run a similar coroutine pair (but
calling creation subroutines and methods in place of
the dumping ones that were used to dump the structure).
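A hedged Python sketch of that token stream (the token names and the Point class are invented): the consumer cannot tell subroutine-produced tokens from method-produced ones:

```python
def dump_tokens(item):
    # Emit a token stream for a data structure, dispatching to a
    # type-specific branch or to the object's own method as needed.
    if isinstance(item, list):
        yield ("LIST", len(item))
        for elem in item:
            yield from dump_tokens(elem)
        yield ("END",)
    elif hasattr(item, "dump_state"):
        yield from item.dump_state()   # the object supplies its own tokens
    else:
        yield ("VALUE", item)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def dump_state(self):
        yield ("OBJECT", "Point")
        yield ("VALUE", self.x)
        yield ("VALUE", self.y)
        yield ("END",)

tokens = list(dump_tokens([1, Point(2, 3)]))
# [("LIST", 2), ("VALUE", 1), ("OBJECT", "Point"), ("VALUE", 2),
#  ("VALUE", 3), ("END",), ("END",)]
```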

Michael Lazzaro

May 26, 2003, 2:54:46 PM
to Damian Conway, perl6-l...@perl.org

On Saturday, May 24, 2003, at 04:24 PM, Damian Conway wrote:

> Michael Lazzaro wrote:
>> That's interesting, because I sortof agree with that (threads being
>> too low-level). Perhaps I should turn my question around; is there a
>> way to move thread syntax closer to coroutine syntax, for common
>> cases? :-)
>
> Probably. But I still question whether we *really* ought to blur the
> distinctions between the two concepts?

Well, let me try my hand at convincing that it can be done in a
uniform, intuitive way...

(What follows is largely stream of consciousness -- anyone who doesn't
care about closures vs. coroutines vs. threads, I suggest you skip
this...)

[1.1] Introduction

The abstract concepts of:

- hypotheticals
- closures
- coroutines
- continuations
- threads

("HCCCT") are related; the goal behind each is to save and restore some
aspect of program "state". It is theoretically possible for a single
entity to be invented that allows the functionality of each of these
concepts, using largely the same underlying syntax, semantics, and
implementation for each.

[1.2] Problem Space

HCCCT implementations are generally designed to allow simple solutions
for a rather narrow set of problems:

- Iterators, Sources and Sinks, including "pipes" and "lazily"
generated lists. (coroutines)

- The ability to save the state of a specific element of code, and to
revert to that saved state either unconditionally (temp) or
conditionally (try/catch; hypotheticals).

- The related ability to suspend an element of code, and to resume
processing from that saved state at a later time. (closures, coroutines)

- Parallelism, including shared data and resynchronization
capabilities. (threads)

(mjl-- am I missing any?)

Nearly all real-world applications of HCCCT can be pigeonholed into one
of these specific categories.


[1.3] What is State?

Each of the problem spaces described above can more directly be
described as simply the ability to save, suspend, and resume a program
"state". The primary difference between them is the scope of the
collected state.

For "temporary" or "hypothetical" variables, "state" represents the
value of that single variable.

For closures, "state" represents the specific parameters bound to that
closure instance.

For coroutines, "state" represents the collected state of the routine,
including local var contents, local stack contents, and the current
"position" within the routine.

For threads, "state" represents essentially everything except those
elements explicitly marked as "shared" between threads.

It is worth noting that closures, temporary variables, and hypothetical
variables can all be considered simplifications of coroutine
functionality: a closure can be implemented as a coroutine that has
been suspended immediately after the binding of routine parameters, but
before execution of the routine contents; any call to the closure
resumes from that point, executing the routine and returning the final
results. (Multiple calls to the same closure will each use the same
pre-bound parameters, implying the coroutine is perhaps cloned before
each execution, or that the variables are re-bound upon each
invocation.)
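That reading of closures can be modeled directly with a Python generator (a sketch; the adder example is invented): bind the parameters, suspend immediately, and treat each resume as one call:

```python
def adder(n):
    # Bind the parameter n, then suspend; each send() below plays
    # the role of one call to the "closure".
    result = None
    while True:
        x = yield result
        result = x + n

closure = adder(10)
next(closure)        # advance to the suspension point after binding
closure.send(5)      # 15
closure.send(7)      # 17
```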

Likewise, a sufficiently top-level coroutine is similar to a (blocking)
thread, in that the "state" saved by the coroutine will be nearly as
encompassing as the "state" saved by an actual thread.

It would seem, then, that of HCCCT implementations, coroutines are the
closest thing to a "base" implementation by which the others could be
derived.


[1.4] Cothreads

Consider a single programmatic entity that could handle the entire
problem space described -- a single unit that represented all the most
common usages of HCCCT. We will choose the name "cothreads"; in
implementation it will likely be most similar to a coroutine, but with
additional capabilities for parallelization.

A cothread acts as a container for any arbitrary Code block
(Block|Sub), and has the following (minimal) capabilities:

- create - creates a cothread given an arbitrary Code block

- clone - duplicates the cothread, including any current
saved "state".

- suspend - saves the "state" of the associated Code,
suspends execution of the cothread

- resume - restores the "state" of the associated Code
to that last saved; resumes execution of the thread

- restart - removes any saved "state" of the associated Code,
restarts the Code from the original entry point

- set_priority - set the priority of this cothread, in relation
to any simultaneously executing cothreads.

- active - whether the given cothread is active,
or is currently suspended.

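A minimal Python sketch of that interface (the names follow the list above, but the semantics are guesses; suspension is implicit in yield, set_priority/active and real parallelism are omitted, and clone cheats by restarting from the entry point, since a live generator's state cannot be copied portably):

```python
class Cothread:
    def __init__(self, code, *args):
        self.code, self.args = code, args
        self.restart()

    def restart(self):
        # discard any saved state; re-enter at the original entry point
        self._gen = self.code(*self.args)

    def resume(self):
        # run until the code next suspends itself with yield
        return next(self._gen)

    def clone(self):
        # simplification: a fresh copy from the entry point
        return Cothread(self.code, *self.args)

def counter(start):
    n = start
    while True:
        yield n          # suspend, handing back the current value
        n += 1

c = Cothread(counter, 5)
c.resume()               # 5
c.resume()               # 6
c.restart()
c.resume()               # 5 again
```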

Hypotheticals and temporary variables are near-trivialities; the
"state" to be saved consists of a single variable. Creating a C<temp>
or C<let> variable clones the current state of the variable, which is
restored or overriden according to the results of the Code. Closures
are similarly simple; "state" consists solely of the bound parameters.

Coroutines behave as expected; by definition, the "default" behavior of
a cothread will be identical to a coroutine.

Threads are essentially cothreads executing in parallel; they require
significant additional implementation cruft, but, conceptually, are not
particularly different from the user's point of view.


[1.5] Syntax and Semantics

C<temp> and C<let> variables need no additional semantics.

Associating an arbitrary block of code with a cothread could possibly
be accomplished using constructs such as:

cothread {...block...} # unary C<cothread>

sub is cothread { # as a trait
...
yield; # or C<&_.yield>; suspends the block
return; # or C<&_.return>; exits the block,
# (deleting any previously saved state)
}


Implementing thread-like behavior requires parallelization (likely
preemptive); this, in turn, requires some form of invocation. We
already have adequate semantics for a junction of code:

my Code $junct = (&block1 | &block2 | &block3);

It is dubious that we would want code junctions to "automatically" be
parallelized, unless we could guarantee the absence of side-effects
within each of the blocks through analysis, which is not likely. We
therefore probably need an explicit construct for parallel block
execution, which begins preemptive-parallel execution of each element
of the junction, returning an identically sized junction for the
results of each block:

my $results = parallel (&block1 | &block2 | &block3 );

We might want shorthand "parallel" capabilities:

my $results = (&block1 ||| &block2 ||| &block3 );

Or a construct which allows "launching" of the parallel cothread,
blocking only upon first usage of the return value, so you can prep
your vars in a parallel fashion:

# assume foo() and bar() are expensive, I/O intensive, whatever

my $r1 |||= foo; # launches parallel cothread
my $r2 |||= bar; # launches parallel cothread

... do more stuff ...

print $r1; # blocks unless foo() has completed execution.
print $r2; # blocks unless bar() has completed execution.
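Those last two constructs map fairly directly onto futures (a hedged Python sketch; parallel and launch are invented stand-ins for the proposed syntax):

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor()

def parallel(*blocks):
    # run each block in its own thread, then collect every result
    futures = [_pool.submit(block) for block in blocks]
    return [f.result() for f in futures]

def launch(block):
    # the "|||=" idea: start now, block only when the value is used
    return _pool.submit(block)

results = parallel(lambda: 1 + 1, lambda: 2 * 3)   # [2, 6]

r1 = launch(lambda: sum(range(100)))
# ... do more stuff ...
r1.result()   # blocks only if the work hasn't finished; 4950
```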


------------

(OK, that was about 45 minutes worth of pondering, so it has some
gaping holes, misspellings, etc.)

What I'm getting at is that all these concepts are much more related
than they might at first seem, and that the differences between them
are largely of "scope". If we have some form of coroutines, _and_ can
run them in parallel, all the other constructs fall out of them fairly
naturally. Even threads, for some pretty decent definition of
"threads".

After all, the problem space being addressed by each concept is *quite*
overlapping -- it seems a shame to have to introduce people to
multiple, similar-but-not-quite concepts if we could possibly combine
it into unified yield/resume/parallel semantics that Do The Right Thing.

MikeL

Dave Whipp

May 26, 2003, 9:10:00 PM
to perl6-l...@perl.org
Michael Lazzaro wrote:

(note that I'll use the term "cothread" as Mike did: I'm not sure I like
the name, but it's a label for a concept that allows discussion to continue)

> ("HCCCT") are related; the goal behind each is to save and restore some
> aspect of program "state". It is theoretically possible for a single
> entity to be invented that allows the functionality of each of these
> concepts, using largely the same underlying syntax, semantics, and
> implementation for each.

> What I'm getting at is that all these concepts are much more related

> than they might at first seem, and that the differences between them are
> largely of "scope". If we have some form of coroutines, _and_ can run
> them in parallel, all the other constructs fall out of them fairly
> naturally. Even threads, for some pretty decent definition of "threads".

First, may I suggest that everyone (re)reads Austin Hastings' draft for
A17 (threads):
http://archive.develooper.com/perl6-l...@perl.org/msg14771.html.

OK, Mike says "a single entity", and then "these concepts are much more
related than...". Perhaps I'm being pedantic, but relationships
generally imply multiple entities. But perhaps he meant that things are
so related that they become attributes of a single entity. I'd like to
disagree: they may belong to a single domain, but to stick them in a
single class would result in a bloated blob with a fat interface.

I'm reminded of Bertrand Meyer's book on OO software construction (1st
ed). In it, he explained that we should think of containers as machines
that iterate over their contents. He completely missed the concept of an
independent iterator. MikeL uses the metaphor of cothread as
"container", but then attempts to stuff everything into it.

Most obvious is that the "Priority" attribute is a property of the
relation between the execution context and its CPU resource manager (i.e
its scheduler). In fact, I'd like to suggest the radical proposal that
every shared lock should be able to have an associated priority for each
blocked thread: and to enable this we'd want each lock to have an
associated arbiter (the simplest of which would not have priorities). To
really understand the power of this, imagine a debugger that overrides
these arbiters using a file input to reproduce exact sequences of
interactions.

The concepts of C<clone> and C<reset> ("restart") seem to belong to the
cothread object (whatever it's called). But the activate/suspend/resume
stuff seems to me to be related, rather than instrinsic. Calling a
subroutine could be described (continuation-style passing) as swapping
in a new cothread into the currently executing thread; and launching a
thread could be described as the two-step process of asking *a*
scheduler for a new execution context (thread) and then initializing it
with a cothread. Of course, this leads to the possibility that the same
cothread may be in more than one real-thread -- if that wasn't what you
wanted then you'd have C<clone>ed the cothread before assigning it.

So, in summary, it's good to have a clean abstraction for all the HCCCT
things. But I think it is a mistake to push them too close. Each of the
HCCCT things might be implemented as facades over the underlying
orthogonal concepts of data management and execution management (hmm, we
mustn't forget IO and other resource managers). Concepts like mutexes
around shared data probably belong in that under-the-covers domain: in
fact, a requirement for a multi-scheduler system is that each scheduler
may have different implementations of synchronization primitives.


Dave.

Jonathan Scott Duff

May 23, 2003, 11:52:03 AM
to Damian Conway, perl6-l...@perl.org, la...@wall.org
On Fri, May 23, 2003 at 10:43:06AM +1000, Damian Conway wrote:
> ==============================================================================
>
> A coroutine is declared with the C<coro> declarator:
>
> coro fibs (?$a = 0, ?$b = 1) {
> loop {
> yield $b;
> ($a, $b) = ($b, $a+$b);
> }
> }
>
> and is allowed to have zero or more C<yield> statements inside it.
>
> A coroutine is an object of type C<Coro> (just as a subroutine is an object of
> type C<Sub>). C<Coro> objects have the following methods:
>
> next() # resumes coroutine body until next C<yield>
>
> next_args(PARAM_LIST) # resumes coroutine body until next C<yield>,
> # rebinds params to the args passed to
> # C<next_args>.
> # PARAM_LIST is the same as the parameter list
> # of the coroutine itself

So, immediately after the C<yield> that we are resuming, the saved
state of the args is thrown away and replaced with whatever is in
PARAM_LIST? For example, in C<fibs> above

$x = fibs(2,3); # $x is 3
$x = fibs(); # $x is 5
$x = fibs(); # $x is 8
$x = &fibs.next_args(18); # $x is 26
$x = fibs(); # $x is 34
$x = fibs(); # $x is 60
$x = &fibs.next_args(b => 5); # $x is 39
$x = fibs(); # $x is 44
$x = fibs(); # $x is 83

Right?
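The arithmetic above does check out. Modeled with a Python generator, where send() stands in for next_args (a hedged analogy; the (a, b) tuple convention for rebinding only some parameters is invented):

```python
def fibs(a=0, b=1):
    while True:
        sent = yield b
        if sent is not None:        # next_args: rebind the parameters
            na, nb = sent
            if na is not None: a = na
            if nb is not None: b = nb
        a, b = b, a + b

g = fibs(2, 3)
next(g)              # 3
next(g)              # 5
next(g)              # 8
g.send((18, None))   # 26 -- like &fibs.next_args(18)
next(g)              # 34
next(g)              # 60
g.send((None, 5))    # 39 -- like &fibs.next_args(b => 5)
next(g)              # 44
next(g)              # 83
```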

>
> each() # returns a lazy array, each element of which
> # is computed on demand by the appropriate
> # number of resumptions of the coroutine body
>
> A C<Coro> object is therefore a type of iterator.
>
> Calling a coroutine using the normal subroutine call syntax is the same as
> calling the C<next> or C<next_args> method on it:
>
> $next = fibs(); # same as: $next = &fibs.next()
>
> $next = fibs(10,11); # same as: $next = &fibs.next_args(10,11)
>
> while ($next = fibs()) {
> if ($next < 0) {
> fibs() for 1..2; # skip the next two values
> }
> }
>
>
> To create multiple independent coroutine iterators using a single coroutine
> definition, one can simply use an *anonymous* coroutine:
>
> sub fibs_iter (?$a = 0, ?$b = 1){
> return coro (?$x = $a, ?$y = $b) {
> loop {
> yield $y;
> ($x, $y) = ($y, $x+$y);
> }
> }
> }
>
> # and later...
>
> $iter = fibs_iter(7,11);
>
> while ($next = $iter.next()) {
> if ($next < 0) {
> $iter.next() for 1..2; # skip the next two values...
> $iter = fibs_iter(11,7); # and change iterator
> }
> }
>
> Additionally, class C<Coro> would have a C<clone> method:
>
> $iter1 = &fibs.clone;
> $iter2 = &fibs.clone;
>
> which would return a reference to a new anonymous C<Coro> with the same
> implementation as (but distinct state from) the original C<Coro>.

So, C<clone> is the moral equivalent of C<fork> on a coroutine?

And could we dispense with the explicitness of C<fibs_iter> by doing
something like this?

$iter = &fibs.clone(7,11);
while ($next = $iter.next()) {
if ($next < 0) {
$iter.next() for 1..2; # skip the next two values...
$iter = &fibs.clone(11,7); # and change iterator
}
}

i.e. C<clone> would be like C<next_args>, but instead of getting the next
value out, you'd get a new coroutine (iterator) with the parameter list
that was passed to clone.
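
A loose analogue in Python (illustrative only, since the Perl 6 API above is speculative): a generator function already behaves like this clone-as-constructor idea — each call builds an independent iterator with its own state and argument list.

```python
def fibs(a=0, b=1):
    # Each call creates an independent iterator with its own state,
    # much like &fibs.clone(a, b) in the proposal above.
    while True:
        yield b
        a, b = b, a + b

iter1 = fibs(7, 11)
iter2 = fibs(11, 7)

print(next(iter1))  # 11
print(next(iter1))  # 18
print(next(iter2))  # 7
```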

-Scott
--
Jonathan Scott Duff
du...@cbi.tamucc.edu

John Macdonald

unread,
May 26, 2003, 9:51:41 PM5/26/03
to Michael Lazzaro, Damian Conway, perl6-l...@perl.org
On Mon, May 26, 2003 at 11:54:46AM -0700, Michael Lazzaro wrote:
>
> On Saturday, May 24, 2003, at 04:24 PM, Damian Conway wrote:
> >Michael Lazzaro wrote:
> >>That's interesting, because I sortof agree with that (threads being
> >>too low-level). Perhaps I should turn my question around; is there a
> >>way to move thread syntax closer to coroutine syntax, for common
> >>cases? :-)
> >
> >Probably. But I still question whether we *really* ought to blur the
> >distinctions between the two concepts?
>
> Well, let me try my hand at convincing that it can be done in a
> uniform, intuitive way...

This is an interesting idea. I'd add forked processes
to the list (which includes the magic opens that fork
a child process on the end of a pipeline instead of
opening a file).

There is a danger in hiding the distinction between
them too much. They have quite different performance
overheads.

A forked process uses a completely separate memory
and process space and there is significant overhead
both in starting the process and in passing data to it
(compared to a simple subroutine call, of which these
are all extensions in some way).

A thread has a separate process space but shared
memory space (from the point of view of the operating
system) so a thread fork is a bit cheaper. However,
it uses a mostly disjoint copy of the memory
space (from the point of view of Perl itself), so
that is an additional cost to creating a thread.
Exchanging data between threads is mostly normal
access (but with some magic required either in the
data access or thread execution switch or both to
ensure the data is not mishandled). I'd also add that
a significant characteristic of threads is that they
block independently on system calls (a consequence of
being separate processes from the OS point of view).

A coroutine is totally transparent to the operating
system; there are no system calls involved in either
creating a coroutine or switching control flow or data
from one to another. It does require creation of a
private stack and data context at the time that it is
created, and switching between stack contexts to change
from one to another.
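
For contrast, a sketch in Python (illustrative only): switching between two generator-based coroutines is plain user-level control transfer — no system call is made on either side of the switch.

```python
def producer():
    for n in (1, 2, 3):
        yield n          # transfer control (and a value) to the consumer

def consume(gen):
    total = 0
    for n in gen:        # each iteration resumes the producer exactly once
        total += n
    return total

print(consume(producer()))  # 6
```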

A continuation requires setting up a private data context
(but not a code stack) when it is created. It gets
called using the callers control stack.

Conditional values require adding checks to operations that
are returning or exiting from structures, which is a different
sort of thing than the specialized "function calls" implied
by the other items discussed.

Because of the different performance costs involved
and the different other side-effects on how the
program is written when using these different
concepts, there is justification for having them
look different, so that the programmer is more aware
of the consequences of the choice he is making
when he selects between them. (Hidden performance
cost is one thing I don't really like about ties.
You can write code that looks like it is incrementing
a scalar variable, but instead of an inline arithmetic
operation it could end up actually being an incredibly
complicated action, perhaps involving writing values
out to a database for example.)

If you blur the distinctions and someone gets a
thread when he expects a coroutine, the variables
that he thinks are shared between the routines are
suddenly separate copies, with originally the same
value but not otherwise related. (I long ago tripped
over a similar problem when I first used fork without
understanding that the two forked processes were not
sharing their memory. The initial starting values
make it much harder to realize the nature of the
problem.)

Michael Lazzaro

unread,
May 27, 2003, 1:23:28 PM5/27/03
to Dave Whipp, perl6-l...@perl.org

On Monday, May 26, 2003, at 06:10 PM, Dave Whipp wrote:

> Michael Lazzaro wrote:
>> What I'm getting at is that all these concepts are much more related
>> than they might at first seem, and that the differences between them
>> are largely of "scope". If we have some form of coroutines, _and_
>> can run them in parallel, all the other constructs fall out of them
>> fairly naturally. Even threads, for some pretty decent definition of
>> "threads".
[snip]

> OK, Mike says "a single entity", and then "these concepts are much
> more related than...". Perhaps I'm being pedantic, but relationships
> generally imply multiple entities. But perhaps he meant that things
> are so related that they become attributes of a single entity. I'd
> like to disagree: they may belong to a single domain, but to stick
> them in a single class would result in a bloated blob with a fat
> interface.

Sorry; I meant that the _current_ concepts are strongly related, but
they _could_ be a single entity. Please forgive the somewhat spastic
parts of the proposal -- like I said, it was less than an hour of
thought, I certainly don't mean it to be a be-all-end-all. (In
particular I'm aware I'm badly conflating the thread vs. scheduling
issues.)

> Most obvious is that the "Priority" attribute is a property of the
> relation between the execution context and its CPU resource manager
> (i.e. its scheduler). In fact, I'd like to suggest the radical proposal
> that every shared lock should be able to have an associated priority
> for each blocked thread: and to enable this we'd want each lock to
> have an associated arbiter (the simplest of which would not have
> priorities).

Certainly, a separate and overridable thread-scheduler seems a
necessity. I'd be annoyed if you couldn't swap out the scheduler.

What I really want, most of all, is a syntax for coroutines and a
syntax for threads that is Damn Similar; similar enough so that one can
be taught as merely a more encompassing extension of the other.
Whether they should be absolutely identical is _very_ debatable, but I
think it's worth some thought.

More in a sec...

MikeL

Michael Lazzaro

unread,
May 27, 2003, 2:09:29 PM5/27/03
to Dave Whipp, perl6-l...@perl.org

On Monday, May 26, 2003, at 06:10 PM, Dave Whipp wrote:
> So, in summary, its good to have a clean abstraction for all the HCCCT
> things. But I think it is a mistake to push them too close. Each of
> the HCCCT things might be implemented as facades over the underlying
> orthogonal concepts of data management and execution management (hmm,
> we mustn't forget IO and other resource managers).

Yeah, that.

What I STRONGLY SUGGEST WE AVOID is a situation where _some_ of those
interfaces are object-based, using <new Thread:> or similar, and others
of those are trait or keyword based, such as <coro foo> or <sub foo is
coroutine>, and others are implicit and invisible (closures).[*]

I think that's where we _might_ be headed, and we should turn that
burro 'round.

MikeL

[*] I would even be ~somewhat~ willing to debate whether Perl5's
ability to implicitly create closures, in absence of any marker to
specify that you _meant_ to do that, might be a slight misfeature. In
that I've seen people accidentally create closures -- a damn hard bug
to find, sometimes -- when they would have been better served by Perl
throwing var-not-found errors, and requiring a specific keyword to
create an actual closure. Somewhat willing. Maybe. Sortof.

Michael Lazzaro

unread,
May 27, 2003, 2:51:53 PM5/27/03
to John Macdonald, Damian Conway, perl6-l...@perl.org

On Monday, May 26, 2003, at 06:51 PM, John Macdonald wrote:
> This is an interesting idea. I'd add forked processes
> to the list (which includes the magic opens that fork
> a child process on the end of a pipeline instead of
> opening a file.

I thought about that quite a bit, but fork() is a process-level thing,
whereas even threads are more internally controllable/implementable, so
I thought that would be too controversial. People already *know* how
to fork processes in Perl, whereas thread syntax is newer and more
malleable.

> There is a danger in hiding the distinction between
> them too much. They have quite difference performance
> overheads.

Some thinking out loud: a thread is significantly more expensive than a
coroutine. Why? Because threads must be executable in parallel, so
you're duping quite a bit of data. (You don't dup nearly as much stuff
for a coroutine, _unless_ of course you're cloning an already-active
coroutine.) So a conventional thread is like cloning a
"very-top-level" coroutine.

Now, if we were to say:

sub foo(...) is coroutine { ... yield() ... return() }

We would expect

foo(...args...);

to give us back that coroutine, from the last yield point. In the
above, C<yield> yields out of the coroutine, and C<return>
yields-without-saving-state such that the next foo() invocation will
start from the beginning of the routine.
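
The yield-vs-return semantics Michael describes can be mimicked in Python (an illustrative sketch, not the proposed Perl 6 semantics themselves; the `coroutine` helper is invented): a wrapper resumes a generator on each call, and restarts it from the top once the body has returned.

```python
def coroutine(genfunc):
    # Hypothetical helper: make genfunc callable like the proposed
    # "sub foo(...) is coroutine" -- each call resumes from the last
    # yield; finishing the body restarts it on the next call.
    gen = None
    def call(*args):
        nonlocal gen
        if gen is None:
            gen = genfunc(*args)
        try:
            return next(gen)
        except StopIteration:
            gen = None           # next invocation starts from the top
            return None
    return call

@coroutine
def foo():
    yield "first"
    yield "second"               # falling off the end acts as return

print(foo())  # first
print(foo())  # second
print(foo())  # None (body finished; next call restarts)
print(foo())  # first
```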

Similarly, then, I would expect:

sub foo(...) is threaded { ... yield() ... return() }

foo(...args...)

to start &foo as a new thread. C<yield()> would temporarily suspend
the thread, and C<return()> would end the thread. (Note that you could
use &_.yield to yield the current Code object, so you can have nested
yields w/out confusion -- see C<leave>, from A6.)

These are using nearly identical syntax, but there is still a clear
semantic difference between them -- the trait C<threaded> is sufficient
for that.

We could also have things like:

sub { ... }
closure { ... }
coroutine { ... }
thread { ... }

If that's preferable syntax. As long as they're similar, and share
similar suspend/resume capabilities.
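
One way to see why the two syntaxes can converge: a thread plus a rendezvous point behaves much like a coroutine. A Python sketch (the name ThreadCoro is invented for illustration; a bounded queue stands in for the yield point):

```python
import queue
import threading

class ThreadCoro:
    """Run a function in a thread; its yield_ callback feeds values to
    the caller coroutine-style through a bounded queue."""
    def __init__(self, fn):
        self._out = queue.Queue(maxsize=1)
        self._thread = threading.Thread(
            target=fn, args=(self._out.put,), daemon=True)
        self._thread.start()
    def next(self):
        return self._out.get()   # caller blocks until the body yields

def body(yield_):
    for n in range(3):
        yield_(n * n)

coro = ThreadCoro(body)
print(coro.next())  # 0
print(coro.next())  # 1
print(coro.next())  # 4
```

(The queue lets the body run slightly ahead of the caller, so this is parallel where a true coroutine would ping-pong — which is exactly the semantic difference under discussion.)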

OTOH, the difference between a thread and a coroutine is mostly
internal, not external. If we define a thread as a coroutine that runs
in parallel, the syntax might converge:

sub foo() is cothread { ... yield() ... return() }

# start foo() as a coroutine, (blocks until explicitly yields):

my $results = foo(...);

# start foo() as a parallel thread (nonblocking):

my $results_junction = parallel( &foo(...), &bar(...), &baz(...) )

In this situation, C<parallel> would indicate that you couldn't
continue on until all three threads had suspended or exited -- so in
order to have truly "top-level" parallel threads, you'd have to set
them up so that your entire program was encapsulated within them.
(Otherwise you'd just get temporary parallelization, which is just as
desirable.) (You could declare a scheduler as an optional named
parameter of C<parallel>.)
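
The C<parallel> idea maps fairly directly onto a fork/join helper. A minimal Python sketch (`parallel` here is an invented helper, not a real API; the tuple of results stands in for the results junction):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel(*fns):
    # Run the blocks in parallel and block until all have finished,
    # returning their results together (the "junction" of results).
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn) for fn in fns]
        return tuple(f.result() for f in futures)

results = parallel(lambda: 1 + 1, lambda: 2 * 3, lambda: 10 - 1)
print(results)  # (2, 6, 9)
```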

So to what extent is it OK to "hide" the complexity of a coroutine, or
a thread, in order to have the caller side interface as clean and brief
as possible? (A question over which I remember having a vigorous but
unanswerable debate with someone -- specifically over C++ operator
overloading, which suffers from the problem in spades.)

I'm of mixed opinion. I like the notion of merging them, because I
think they are largely the same concept, implemented in different ways.
I _VERY MUCH_ like the idea of any arbitrary Code block being
parallelizable with the addition of a single word. Few languages do a
decent job of parallelization, and current thread syntax is often
overly burdensome.

Importantly, I hope that it _might_ be possible to, in the
multiprocessor-laden future, automagically parallelize some coroutines
without going full-fledged into multiple threads, which are far too
expensive to produce any net benefit for anything but the largest
tasks. (The trick is in the dataflow analysis -- making sure there's
no side effects on either path that will conflict, which is bloody
difficult, if not impossible.) Such that given:

my $result = foo(...);

... more stuff ...

print $result;

foo() recognizes that it can be run in parallel with the main flow,
which is only blocked at the C<print> statement if C<$result> isn't
completed yet. Until such time as dataflow analysis can accomplish
that, however, it may take a keyword:

my $result = parallel foo(...);
...
print $result;

or

my $result |||= foo(...);
...
print $result;
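
Until such dataflow analysis exists, the C<parallel foo(...)> form amounts to a future: start the work now, block only when the value is demanded. A Python sketch of that behavior (slow_square is a made-up stand-in for an expensive routine):

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def slow_square(n):
    time.sleep(0.1)                      # stand-in for expensive work
    return n * n

result = pool.submit(slow_square, 7)     # runs in parallel with what follows
# ... more stuff happens here ...
print(result.result())                   # blocks here only if not yet done: 49
```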


MikeL

Luke Palmer

unread,
May 27, 2003, 3:26:46 PM5/27/03
to mlaz...@cognitivity.com, j...@perlwolf.com, dam...@conway.org, perl6-l...@perl.org

The problem with unifying Coroutines with Threads is that Coroutines
are largely a data transfer/communication method, while Threads make
use of the very opposite -- that some computations are independent and
can be done in parallel.

> These are using nearly identical syntax, but there is still a clear
> semantic difference between them -- the trait C<threaded> is sufficient
> for that.
>
> We could also have things like:
>
> sub { ... }
> closure { ... }

I think you've finally gone crazy. :-)

All four of these things are closures.

> coroutine { ... }
> thread { ... }
>
> If that's preferable syntax. As long as they're similar, and share
> similar suspend/resume capabilities.
>
> OTOH, the difference between a thread and a coroutine is mostly
> internal, not external.

Again, I beg to differ. But, these are the kinds of misunderstandings
that are keeping a good coroutine proposal from getting in....

Here's how I think of things:

coroutines - Used for easy iteration over complex data structures,
pipelines, and communication of the results of certain
algorithms.

threads - Used for user interfaces, speed (on SMP systems), very
occasionally pipelines, and headaches.

I may have missed things in the threads section, because I haven't
done very much threaded programming. To be honest, my favorite use so
far has been using them to (painstakingly) emulate coroutines :-)

But I'm pretty sure these two concepts are things that we don't want
to unify, even though they both have to do with "state". I like your
musings on "state", however, and making them more explicit might
enable us to come up with some very cool ideas.

> If we define a thread as a coroutine that runs
> in parallel, the syntax might converge:
>
> sub foo() is cothread { ... yield() ... return() }
>
> # start foo() as a coroutine, (blocks until explicitly yields):
>
> my $results = foo(...);
>
> # start foo() as a parallel thread (nonblocking):
>
> my $results_junction = parallel( &foo(...), &bar(...), &baz(...) )
>
> In this situation, C<parallel> would indicate that you couldn't
> continue on until all three threads had suspended or exited -- so in
> order to have truly "top-level" parallel threads, you'd have to set
> them up so that your entire program was encapsulated within them.
> (Otherwise you'd just get temporary parallelization, which is just as
> desirable.) (You could declare a scheduler as an optional named
> parameter of C<parallel>.)
>
> So to what extent is it OK to "hide" the complexity of a coroutine, or
> a thread, in order to have the caller side interface as clean and brief
> as possible? (A question over which I remember having a vigorous but
> unanswerable debate with someone -- specifically over C++ operator
> overloading, which suffers from the problem in spades.)

There is very little complexity to a coroutine. It's a difficult
*concept*, in the ways it has traditionally been explained.

Somewhat unrelatedly, I have a mini-rant about encapsulating
coroutines inside the sub call interface. Why do so many people
consider this a good thing? I don't go around coding all of my
functions with ten C<state> variables, and I consider it a good
practice. My subs tend to do the same thing each time they're
called... which is precisely how the concept works. There are things
that are meant to keep state, and these things are called objects!
Why don't we use their interface to manage our state?

I say this because Damian's coroutine proposal could be greatly
simplified (IMHO making it clearer and easier) if calling the sub
didn't imply starting an implicit coroutine the first time. I might
write something that exemplifies this.

> I'm of mixed opinion. I like the notion of merging them, because I
> think they are largely the same concept, implemented in different ways.
> I _VERY MUCH_ like the idea of any arbitrary Code block being
> parallelizable with the addition of a single word. Few languages do a
> decent job of parallelization, and current thread syntax is often
> overly burdensome.

Hm. I've never liked programming with threads, which is a good
argument for your point. Austin's said a few things about threads
that I mostly like.

> Importantly, I hope that it _might_ be possible to, in the
> multiprocessor-laden future, automagically parallelize some coroutines
> without going full-fledged into multiple threads, which are far too
> expensive to produce any net benefit for anything but the largest
> tasks. (The trick is in the dataflow analysis -- making sure there's
> no side effects on either path that will conflict, which is bloody
> difficult, if not impossible.) Such that given:
>
> my $result = foo(...);
>
> ... more stuff ...
>
> print $result;

An important concept for optimization is "pure" routines. These are
routines that have no side-effects. It should be possible to declare
these things, for sure. If all the source is readily available, it's
also possible to deduce this from data flow analysis (we know a
routine is pure if it calls only pure routines). Even if we have a
good analyzer, declaring this kind of thing can help catch some nasty
bugs.

> foo() recognizes that it can be run in parallel with the main flow,
> which is only blocked at the C<print> statement if C<$result> isn't
> completed yet. Until such time as dataflow analysis can accomplish
> that, however, it may take a keyword:
>
> my $result = parallel foo(...);
> ...
> print $result;
>
> or
>
> my $result |||= foo(...);
> ...
> print $result;
>
>
> MikeL

Luke

Dave Whipp

unread,
May 27, 2003, 4:00:27 PM5/27/03
to perl6-l...@perl.org

"Michael Lazzaro" <mlaz...@cognitivity.com> wrote > Similarly, then, I

would expect:
>
> sub foo(...) is threaded { ... yield() ... return() }
>
> foo(...args...)
>
> to start &foo as a new thread.

Perhaps we should fall back on the old message passing abstraction, where
messages can be either synchronous or asynchronous. When you send a synch
message, the sender waits for a reply. Messages can be sent to either
threads that already exist, or might require a new thread to be created.
This is analogous to the difference between calling a CTOR and calling a
method on an existing object.

Under this abstraction, using a co-routine would be sending synchronous
messages to an existing thead (a yield is a synchronous message); and a
normal sub call would be sending a synchronous message to a new thread (perl
might optimize to not actually create the new thread). Any server may choose
to send its reply before it has actually completed its work: i.e. sending a
return value is not necessarily the same thing as leaving a sub. I've found
this particular concept very useful when modeling pipelines in hardware
(admittedly that's a somewhat niche application).

So the concepts would be:

* create a new execution context?
* send synch message (and therefore wait for a reply)
* send asynch message (possibly a reply to an earlier synch message)
* wait for a message (possibly a reply; possibly a new request) -- the reply
might itself be a synch message that you're expected to reply to.
* halt (aka wait for finalize) -- the thread has nothing more to do.
* destroy execution context -- not needed if GCed.
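
Dave's synchronous-message picture can be sketched with a worker thread and a per-request reply queue (illustrative; a reply queue per message is just one of several possible designs):

```python
import queue
import threading

requests = queue.Queue()

def server():
    # An existing thread servicing synchronous messages: each request
    # carries its own reply queue, so the sender can wait on it.
    while True:
        msg, reply_to = requests.get()
        if msg is None:
            return                # shutdown message
        reply_to.put(msg * 2)     # the reply can be sent before all work is done

threading.Thread(target=server, daemon=True).start()

def send_sync(msg):
    reply_to = queue.Queue()
    requests.put((msg, reply_to))
    return reply_to.get()         # synchronous: block until the reply arrives

print(send_sync(21))  # 42
```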


Dave.


Austin Hastings

unread,
May 27, 2003, 4:16:35 PM5/27/03
to Michael Lazzaro, perl6-language
Michael,

I like and agree with some of what you've been saying. I too think that
there's a case of "an x is just a y with ..." underlying the whole
coro/thread/parallel thing. That's why I'm in favor of deconstructing
the threading thing -- a lower thread overhead means more people can
spawn more threads for lower cost.

The coroutine Damian describes is basically, as someone mentioned, a
"thread" running in "non-preemptive, non-prioritized, run-until-yield
mode on a single cpu" -- in other words, the existing concept of
coroutines has been based on the assumption that there is a single
thread of control. The notion of synchronization, of running in a
parallel or multiprocessing environment, is totally not addressed.

So with that in mind, see my enormous proposal from April 15th. I think
that coroutine behavior could be coded with the stuff I proposed, maybe
with a few helper items added in.

Specifically, if we consider that the coroutines are using some kind of
semaphore, then C<yield> has to be a macro (so it can wrap/replace the
C<will do> block of the caller) but the C<do_yield> sub can use the
.active and .result items of the thread. What's more, the .result item,
or some other thread item, could be used to communicate in both
directions, allowing some form of data pass-back. (Probably not
needful, since the wrapper could handle that, too.)

All of which boils down to this: the implementation of "basic"
coroutines is within the scope of the Threads proposal I posted, I<so
long as we understand the semantics>.

So what should the semantics be? Some ideas:

Proposal I:

This has no-yield-from-subs behavior, since the "yield" is only one
level deep.

Further, C<yield> will be a macro which wraps the sub with a shell that
sets up the coro data:

sub do_coro {
my %hash := Thread::CurrentThread.yield_info;
if exists %hash{&CALLER.callee} {
%hash{&CALLER.callee}.(@_)
} else {
call;
}
}

sub do_yield {
my %hash := Thread::CurrentThread.yield_info;
call-cc -> { %hash{&CALLER} = shift; }
leave &CALLER, result => @_;
}

macro yield {
# Get name of caller from parser
# Add BEGIN { $caller.wrap &do_coro; } somewhere
"do_yield";
}

===============================

Proposal II:

This has arbitrary yield depth, allowing nested subs to yield, since
the declaration of the coro stores the return vector:

sub yield {
call-cc -> { throw Yield, results => @_, resume => shift; }
}

class coro is Subroutine will BEGIN {
&.do.wrap sub {
CATCH {
when Yield {
&resume = &.resume;
return @.results;
}
}

state &resume;
my &go = &resume // &call;
&resume = undef;
&go @_;
};
};

=Austin

Michael Lazzaro

unread,
May 27, 2003, 4:26:05 PM5/27/03
to Luke Palmer, j...@perlwolf.com, dam...@conway.org, perl6-l...@perl.org

On Tuesday, May 27, 2003, at 12:26 PM, Luke Palmer wrote:
>> We could also have things like:
>>
>> sub { ... }
>> closure { ... }
>
> I think you've finally gone crazy. :-)
> All four of these things are closures.
>
>> coroutine { ... }
>> thread { ... }

Well, yes, I've been crazy for a while now. But seriously, all four of
those are closures -- but I'm theorizing that the last two constructs
create something "more" -- a closure bound to an encapsulating
something-else.


>> OTOH, the difference between a thread and a coroutine is mostly
>> internal, not external.
>
> Again, I beg to differ. But, these are the kinds of misunderstandings
> that are keeping a good coroutine proposal from getting in....
>
> Here's how I think of things:
>
> coroutines - Used for easy iteration over complex data structures,
> pipelines, and communication of the results of certain
> algorithms.
>
> threads - Used for user interfaces, speed (on SMP systems), very
> occasionally pipelines, and headaches.
>
> I may have missed things in the threads section, because I haven't
> done very much threaded programming. To be honest, my favorite use so
> far has been using them to (painstakingly) emulate coroutines :-)

AHA! I think what I am proposing as a "thread" is perhaps not the
all-encompassing "thread" as implemented by many other languages, but
merely "an encapsulated method of parallelization".

We are perhaps used to thinking of threads as intra-process versions of
fork(), which I would argue is a damn annoying way to do it -- far too
low-level. All a thread really has to be is a block of code that:

(a) executes in parallel with "sibling" blocks of code, and
(b) is independent of exceptions thrown by "sibling" blocks of code

The conventional thread interface is sortof lame. What I'm very
fuzzily envisioning, while hoping that my dumb-guy analysis inspires a
"eureka!" moment in somebody more experienced in these implementations
than I am, is "nestable" threads and subthreads, the way coroutines can
be "nestable".

So it's not like doing a fork(). It's like calling a subroutine and
getting a result. Now, in some cases (like a top-level event loop)
that subroutine will never return, which is just as true of normal
subroutines.

If you call one routine, piece o' cake, it's not a thread, and it
doesn't have to do anything fancy. If you call a _junction_ of
routines, however, _then_ it knows it has to do the extra fluff to make
them parallel, which it then automatically does. So don't execute a
junction of Code blocks in parallel unless you intend to do that!

So rather than having fork()y threads, perhaps we can use Code
junctions to represent parallelization, and call threads _as if they
were simply coroutines_.

(?)

I'd be quite interested in this -- please do. I *like* Damian's latest
coroutine proposal quite a bit, enough so that it really got me
thinking about how perplexingly lame typical thread syntax is.

MikeL

Jonathan Scott Duff

unread,
May 27, 2003, 4:49:32 PM5/27/03
to Michael Lazzaro, Luke Palmer, j...@perlwolf.com, dam...@conway.org, perl6-l...@perl.org
On Tue, May 27, 2003 at 01:26:05PM -0700, Michael Lazzaro wrote:
> If you call one routine, piece o' cake, it's not a thread, and it
> doesn't have to do anything fancy. If you call a _junction_ of
> routines, however, _then_ it knows it has to do the extra fluff to make
> them parallel, which it then automatically does. So don't execute a
> junction of Code blocks in parallel unless you intend to do that!
>
> So rather than having fork()y threads, perhaps we can use Code
> junctions to represent parallelization, and call threads _as if they
> were simply coroutines_.

I think there's some timing missing (or maybe it's just me). Executing a
Code junction implies that I have all of the routines I wish to execute
in parallel available at the same time. This is often not the case.

Or if adding a Code block to a junction is how you parallelize them at
differing times, then I think the syntax would be horrid. Besides *I*
don't want to have to keep track of the junction, I just want my threads
to execute.

Michael Lazzaro

unread,
May 27, 2003, 5:05:57 PM5/27/03
to Austin_...@yahoo.com, perl6-language

On Tuesday, May 27, 2003, at 01:16 PM, Austin Hastings wrote:
> I like and agree with some of what you've been saying. I too think that
> there's a case of "an x is just a y with ..." underlying the whole
> coro/thread/parallel thing. That's why I'm in favor of deconstructing
> the threading thing -- a lower thread overhead means more people can
> spawn more threads for lower cost.
(snip)

> So with that in mind, see my enormous proposal from April 15th. I think
> that coroutine behavior could be coded with the stuff I proposed, maybe
> with a few helper items added in.

Yes, I just re-read it. Of what you wrote, the one thing I'd like to
think extra hard about is whether we really _need_ the fork()-like
behavior of threads, at all. No, seriously. Stop laughing!

If we could think about "threads" not in terms of forkyness, but simply
in terms of coroutines that can be called in parallel, it should be
possible to create an implementation of "threading" that had to do a
whole heck-of-a-lot less duplication of state, etc. Things "outside"
the scope of the thread group would automatically be shared(?), things
"inside" the thread group would not be shared unless explicitly marked
as such.

Which, if I read it right, is what you proposed too, but with a
slightly different syntax.

That _might_ make threads a heck of a lot faster upon creation/startup,
and a lot less bulky in general.

MikeL

Michael Lazzaro

unread,
May 27, 2003, 5:17:53 PM5/27/03
to du...@pobox.com, Luke Palmer, j...@perlwolf.com, dam...@conway.org, perl6-l...@perl.org

On Tuesday, May 27, 2003, at 01:49 PM, Jonathan Scott Duff wrote:
> I think there's some timing missing (or maybe it's just me).
> Executing a
> Code junction implies that I have all of the routines I wish to execute
> in parallel available at the same time. This is often not the case.
>
> Or if adding a Code block to a junction is how you parallelize them at
> differing times, then I think the syntax would be horrid. Besides *I*
> don't want to have to keep track of the junction, I just want my
> threads
> to execute.

I suppose you could make C<parallel> or whatever return a "thread
group" object, which, if necessary, you could use to add additional
blocks:

my $tgroup = parallel( &foo | &bar | &baz );
...
$tgroup.add( &fuz, &fup, &waz ); # three more

though I metaphysically like the idea of executing a junction of Code
in parallel, and returning a junction of results. But that still keeps
the idea of routine-like, as opposed to fork-like, threads.

And no matter what, we'd need to have a "global" thread group, so if
your intent was to make a globally-parallel thread, you'd do it on a
builtin var called $*THREADS or something.

MikeL

Austin Hastings

unread,
May 27, 2003, 5:50:46 PM5/27/03
to Michael Lazzaro, Austin_...@yahoo.com, perl6-language

Right. My call was that we should share everything except the innermost
context -- and that was just to allow the coder to keep some status
variables differentiating parent from child.

>
> That _might_ make threads a heck of a lot faster upon
> creation/startup, and a lot less bulky in general.

That's the Objective: I want both "implicit threading" -- cases where
the interpreter could automatically spawn off parallel processing
without being invited to -- and "explicit threading" to coexist.

=Austin

Dave Whipp

unread,
May 27, 2003, 6:31:22 PM5/27/03
to perl6-l...@perl.org
A few more random thoughts:

Austin proposed

{
thread { print "hello\n" };
print "world";
}

would spawn the first print into a separate thread, and then join before
executing the second print. I would like to see if this can be extended a
bit, using Luke's "object Sigil" proposal:

my $result = thread { slow_calc() };
# do some stuff;
&$result.join;
print $result

And then use one of my proposals ("lazy_thread") so that $result would be a
"lazy" variable that did an implicit join when it is used. Thus:

my $result = thread { slow_calc() };
# do some stuff;
print $result; # implicit join;

This enables us to separate the lexical scope of the "cothread" object from
the scope of its execution.
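
The "lazy variable that does an implicit join" could look like this in Python (LazyThread is an invented name for illustration):

```python
import threading

class LazyThread:
    """Start fn in a thread immediately; the join is implicit in get()."""
    def __init__(self, fn):
        self._value = None
        def run():
            self._value = fn()
        self._thread = threading.Thread(target=run)
        self._thread.start()
    def get(self):
        self._thread.join()       # implicit join on first use of the value
        return self._value

result = LazyThread(lambda: sum(range(1000)))
# ... do some stuff in parallel with the calculation ...
print(result.get())  # 499500
```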

But then I started thinking about iterators. Imagine you have a thread that
spits out values, periodically:

for thread {generate} -> { consume }

With coroutines, these two routines would ping-pong between themselves. But
if you want them to run in parallel, then we could create an implicit fifo
between the generator and the consumer. If we were really clever about it (I
think this is what MikeL wants), the code for C<generate> and C<consume>
would be broadly independent of the execution mode. Though there could
clearly be an expectation:

for thread { my $a=0; sleep 1 and yield ++$a while $a<10 } -> $t
{
print "time=$t\n"
}

then s/thread/coro/;
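
The point that the consumer code can stay the same whether the generator is a coroutine or a thread feeding a fifo can be shown directly in Python (the `threaded` wrapper is an invented helper):

```python
import queue
import threading

def generate():
    for n in range(5):
        yield n

def threaded(genfunc):
    # Wrap a generator in a thread feeding a fifo: the coroutine case
    # ping-pongs, this version runs the generator in parallel, but the
    # consumer cannot tell the difference.
    fifo = queue.Queue()
    done = object()
    def run():
        for item in genfunc():
            fifo.put(item)
        fifo.put(done)
    threading.Thread(target=run, daemon=True).start()
    while True:
        item = fifo.get()
        if item is done:
            return
        yield item

def consume(source):
    return [n * 10 for n in source]   # identical consumer for either mode

print(consume(generate()))            # [0, 10, 20, 30, 40]
print(consume(threaded(generate)))    # [0, 10, 20, 30, 40]
```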

Dave.


John Macdonald

unread,
May 27, 2003, 7:02:28 PM5/27/03
to perl6-l...@perl.org
Wow, what a flood.

The idea of keep the various degrees of code
parallelism similar in form yet distinct in detail
sounds good to me.

I would like to suggest a radically different
mechanism, that there be operators: fork, tfork, and
cfork to split off a process, thread, or coroutine
respectively. (The first might be called pfork
if we're willing to give up history for forward
consistency.)

Each would return a reference to the original code
flow unit, the child code flow unit, and an indication
of which one you are or if there was an error.
(This is much like the current fork, except that I'd
like the child to be provided with the ID of the
parent, as well as the parent being provided with
the ID of the child.)

Fork, of course, would result in totally separate
process encapsulation (or something transparently
equivalent as is done in Perl 5 on Windows, I
believe, where they use threads to simulate fork).
All variables are independent copies after the fork,
system calls from one process should not block the
other, etc. Switching execution from one process to
the other is at the whim of the operating system and
is not necessarily going to happen at perl operation
boundaries. Data transfer must be done external
(either an already open pipeline or through the file
system or such).
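The fork semantics described here, independent copies of all variables and data transfer only through an external channel, can be sketched with `os.fork` and a pipe in Python (Unix-only; variable names are ours):

```python
import os

counter = 0
r, w = os.pipe()              # explicit channel: data cannot be shared directly
pid = os.fork()               # after this, each side has independent copies

if pid == 0:                  # child process
    os.close(r)
    counter += 100            # modifies only the child's copy
    os.write(w, str(counter).encode())
    os._exit(0)               # leave before the parent-side code below

os.close(w)                   # parent process
received = int(os.read(r, 16).decode())
os.waitpid(pid, 0)
print(counter, received)      # 0 100 -- the parent's copy is untouched
```

The parent sees its own `counter` still at 0; the child's increment only reaches it over the pipe, just as the post says communication "must be done external".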

Tfork would result in separate threads. If the OS
provides them, they would likely be used, otherwise
the interpreter might have to simulate them. All "my"
variables are independent copies after the tfork, but
"our" variables are shared. Operations that access
"our" variables must be managed by the interpreter
so that no language level actions are seen as broken
up into separate portions by an ill-timed process
switch to another thread. Especially if OS threads
are used, the interpreter may have to take special
action to ensure that switches do not expose an "our"
update that is partially complete. While pipelines
might work (depends upon whether the OS is supporting
threads) and file system communication would work,
normally communication would be done through "our"
variables.
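The tfork behaviour, shared "our" variables plus interpreter-supplied protection against ill-timed switches, corresponds to ordinary shared-memory threading with a lock. A minimal Python sketch (our naming, not the proposal's syntax):

```python
import threading

shared = 0                    # plays the role of a shared "our" variable
lock = threading.Lock()       # the protection the interpreter would have to supply

def worker():
    global shared
    for _ in range(10_000):
        with lock:            # without it, 'shared += 1' can interleave mid-update
            shared += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # 40000
```

Without the lock, the read-modify-write in `shared += 1` can be split by a thread switch, which is exactly the "partially complete update" the post says the interpreter must never expose.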

Cfork would result in separate coroutines. They would
have separate call stacks, and a separate copy of the
current stack frame, but otherwise all "my" and "our"
variables would be fully shared. Process switching
occurs only under the control of the processes
themselves, so no operation-level protection is needed.
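That cooperative property, switching only at points the coroutine itself chooses, is what Python generators give directly; a small illustrative sketch (names are ours):

```python
def squares(n):
    """A cooperative 'coroutine': it runs only when resumed, and
    control switches exactly at the yield, never mid-operation."""
    for i in range(n):
        yield i * i           # suspend here; the resumer receives the value

co = squares(4)               # creating it runs no code yet
collected = [next(co) for _ in range(4)]  # each next() is a 'resume'
print(collected)  # [0, 1, 4, 9]
```

No locking is required anywhere, because between one `next()` and the next `yield` the coroutine cannot be interrupted.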

I'll assume the cfork is defined to return to the parent.

The auto forking coroutine that Damian described could
be a syntactic sugar based on this mechanism (while not
being the only way of getting coroutines) so that:

    cosub func {
        # func body ...
        yield
        # func body ...
    }

is treated as a macro that gets turned into:

    {
        my $parent = 0;
        my $child  = 0;

        sub func {
            unless $child {
                my ( $p, $c, $which ) = cfork;
                ( $parent, $child ) = ( $p, $c );
                if( $which eq IS_PARENT ) {
                    $child.next;
                    return $child;
                } elsif( $which eq IS_CHILD ) {
                    @_ -> sub {
                        yield;
                        # func body
                        yield;
                        # func body
                    };
                    yield undef while 1;
                }
            }
            $child.next( @_ );
        }
    }

which takes care of the magic "first time starts the
coroutine, subsequent times resume it from where it
left off" semantic for this special case, without
requiring that anything that is a coroutine has to
have that magic behaviour.
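The "first call starts, later calls resume" sugar can be sketched as a decorator over a Python generator (our naming throughout; a stand-in for the proposed macro, not its actual expansion):

```python
def cosub(fn):
    """Sketch of the proposed 'cosub' sugar: the first call starts
    the coroutine, subsequent calls resume it where it left off."""
    state = {}
    def wrapper(*args):
        if "gen" not in state:            # first call: start the coroutine
            state["gen"] = fn(*args)
            return next(state["gen"])
        return state["gen"].send(None)    # later calls: resume at the yield
    return wrapper

@cosub
def counter(start):
    n = start
    while True:
        yield n                           # hand a value back to the resumer
        n += 1

print(counter(10), counter(), counter())  # 10 11 12
```

As in the post, the magic lives entirely in the wrapper; the coroutine body itself is an ordinary generator with no special start/resume behaviour.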

The "next" method would be code of the form:

    next( $dest, @stuff ) {
        resume( $dest, @stuff );
    }

But resume needs a bit of magic. I'll assume
a special variable $CUR_COROUTINE which always
contains the object for the currently executing
coroutine (one has to be generated for the initial
mainline, either when perl starts or the first time
that a cfork occurs). Every coroutine object has an
attribute caller. When resume is called, it updates
the caller attribute of the coroutine that is being
resumed with a reference to $CUR_COROUTINE.

Within a coroutine, then, you can always determine
the ID of the coroutine that last resumed you with
$CUR_COROUTINE.caller.

This means that the yield operator could be a macro
that expands to:

    # yield( stuff )
    resume( $CUR_COROUTINE.caller, stuff )

Providing resume as the underlying mechanism for
next/yield allows for non-subordinate coroutine flow,
such as a round robin, if you use resume directly
(which gives up the advantage, and the restriction,
of yield: that the target is implicitly the coroutine
that called you last), while still providing the
simpler subordinate viewpoint for the more common
simple cases (like generators).
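The round-robin flow this enables can be sketched with Python generators plus an explicit scheduler loop standing in for resume (all names are ours):

```python
def worker(name, log):
    """Each coroutine does one slice of work, then yields the CPU."""
    for step in range(2):
        log.append(f"{name}{step}")
        yield                 # the explicit, voluntary switch point

log = []
ring = [worker(n, log) for n in "ABC"]
while ring:                   # round-robin scheduler standing in for resume()
    co = ring.pop(0)
    try:
        next(co)              # resume the coroutine until its next yield
        ring.append(co)       # still alive: back of the ring
    except StopIteration:
        pass                  # finished: drop it from the ring
print(log)  # ['A0', 'B0', 'C0', 'A1', 'B1', 'C1']
```

Each coroutine is resumed in turn rather than implicitly returning to whoever called it last, which is exactly the non-subordinate flow that plain yield cannot express.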

Dave Whipp

May 27, 2003, 9:28:47 PM
to perl6-l...@perl.org

"John Macdonald" <j...@algate.perlwolf.com> wrote in message
news:2003052719...@algate.perlwolf.com...

> I would like to suggest a radically different
> mechanism, that there be operators: fork, tfork, and
> cfork to split off a process, thread, or coroutine
> respectively. (The first might be called pfork
> if we're willing to give up history for forward
> consistency.)

Nice, but insufficiently general. I have written systems which use multiple
different thread schedulers (e.g. pthreads for preemptive; qThreads for
non-preemptive) in addition to process forking. Perhaps we just need a
single fork method, implemented polymorphically for different schedulers (or
scheduler interfaces). One scheduler could be written in terms of others: so
a thread-group could be a scheduler that keeps track of IDs, but delegates
the real work to another scheduler. (I think I could describe a coroutine as
an idiom whereby user-code becomes a thread-scheduler).

The problem I see with the general approach is that it might not make the
common cases sufficiently trivial.

Dave.


Luke Palmer

May 27, 2003, 9:36:23 PM
to mlaz...@cognitivity.com, j...@perlwolf.com, dam...@conway.org, perl6-l...@perl.org
MikeL writes:
> > I say this because Damian's coroutine proposal could be greatly
> > simplified (IMHO making it clearer and easier) if calling the sub
> > didn't imply starting an implicit coroutine the first time. I might
> > write something that exemplifies this.
>
> I'd be quite interested in this -- please do. I *like* Damian's latest
> coroutine proposal quite a bit, enough so that it really got me
> thinking about how perplexingly lame typical thread syntax is.

Here we go:

Damian's current coroutine proposal suggests that to recurse, you
start a coroutine of your own code and yield what it yields:

    coro pre_traverse(%data) {
        yield %data{value};
        yield $_ for <&_.clone(%data{left})>;
        yield $_ for <&_.clone(%data{right})>;
    }

Now, imagine yourself a beginner and take a long look at this line:

    yield $_ for <&_.clone(%data{left})>;

What the heck? That uses an awful lot of "advanced" Perl 6 features
all in one line. And that's the easiest way to recurse?

Worse yet, what if you yield I<and> you need to pay attention to
return values, which isn't an uncommon thing:

    coro fibo_tree($n) {
        return $n if $n <= 1;
        yield $n;

        my @ret = <&_.clone($n-1)>;
        my $n1 = pop @ret;
        @ret = <&_.clone($n-2)>;
        my $n2 = pop @ret;
        return $n1 + $n2;
    }

And all this pain comes from support for this idiom:

    my $fibo_1 = fibo();
    my $fibo_2 = fibo();

Which, as I've described, is "bad form" anyway, because subs ideally
keep no state. That's what objects were invented for.

C<let>'s remove that ability, hypothetically.

Now there's a distinction between calling a coroutine as a coroutine
or as a regular sub. When it looks like a sub call, it is a sub call;
when it's used as an iterator (perhaps cloned), it's a coroutine. We
now have:

    coro pre_traverse(%data) {
        yield %data{value};
        pre_traverse(%data{left});
        pre_traverse(%data{right});
    }

    coro fibo_tree($n) {
        return $n if $n <= 1;
        yield $n;

        return fibo_tree($n-1) + fibo_tree($n-2);
    }

Those are remarkably simpler, and use no "advanced" syntax. To use
them, as before:

    for <&pre_traverse.clone(%tree)> { ... }
    for <&fibo_tree.clone($n)> { ... }

Or better yet:

    for < coro { pre_traverse(%tree) } > { ... }
    for < coro { fibo_tree($n) } > { ... }

That's not as nice as it could be. I'm not here to rework that part
of the syntax, though. It also supports factories for cases like
this:

    sub pre_traverse(%data) {
        coro (?%arg = %data) {
            yield %arg{value};
            _(%arg{left});
            _(%arg{right});
        }
    }

    for <pre_traverse(%tree)> {...}

The overall simplification is that it now supports saving the whole
stack again, which results in much clearer and easier recursion. It
poses no burden on other kinds of implementations, and removes one bit
of the interface that secretly stores state when it shouldn't.
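For comparison from outside the proposal: Python generators get exactly this plain-recursive-call ergonomics from `yield from`, which forwards both the yielded values and the return value of the inner call (the `collect` driver is our own helper):

```python
def fibo_tree(n):
    """Recursive generator: yields at every node and also returns fib(n)."""
    if n <= 1:
        return n
    yield n
    n1 = yield from fibo_tree(n - 1)  # forwards yields, captures the return
    n2 = yield from fibo_tree(n - 2)
    return n1 + n2

def collect(gen):
    """Drain a generator, capturing both its yields and its return value."""
    yielded = []
    try:
        while True:
            yielded.append(next(gen))
    except StopIteration as stop:
        return yielded, stop.value

yields, result = collect(fibo_tree(4))
print(yields, result)  # [4, 3, 2, 2] 3
```

The recursion reads like an ordinary sub call, yet every nested yield reaches the consumer and the return values still add up, which is precisely the combination the fibo_tree example above struggles with.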

Luke

Jonathan Scott Duff

May 27, 2003, 10:22:46 PM
to Michael Lazzaro, Damian Conway, perl6-l...@perl.org
On Mon, May 26, 2003 at 11:54:46AM -0700, Michael Lazzaro wrote:
[ snip ]

> [1.4] Cothreads
>
> Consider a single programmatic entity that could handle the entire
> problem space described -- a single unit that represented all the most
> common usages of HCCCT.

I've read this post far more times than I care to admit and I still
see no merit to conflating "threads" and "coroutines".

To me, the salient feature of coroutines is the ability to start from
the beginning or from where we left off, while the salient feature of
threads is independent simultaneous execution with (possible) data
sharing. These are separate "primitive" concepts in my brain and
merging them into one cothread thing just seems wrong.

Jonathan Scott Duff

May 27, 2003, 10:32:05 PM
to Michael Lazzaro, Austin_...@yahoo.com, perl6-language
On Tue, May 27, 2003 at 02:05:57PM -0700, Michael Lazzaro wrote:
>
> On Tuesday, May 27, 2003, at 01:16 PM, Austin Hastings wrote:
> > I like and agree with some of what you've been saying. I too think that
> > there's a case of "an x is just a y with ..." underlying the whole
> > coro/thread/parallel thing. That's why I'm in favor of deconstructing
> > the threading thing -- a lower thread overhead means more people can
> > spawn more threads for lower cost.
> (snip)
> > So with that in mind, see my enormous proposal from April 15th. I think
> > that coroutine behavior could be coded with the stuff I proposed, maybe
> > with a few helper items added in.
>
> Yes, I just re-read it. Of what you wrote, the one thing I'd like to
> think extra hard about is whether we really _need_ the fork()-like
> behavior of threads, at all. No, seriously. Stop laughing!
>
> If we could think about "threads" not in terms of forkyness, but simply
> in terms of coroutines that can be called in parallel, it should be
> possible to create an implementation of "threading" that had to do a
> whole heck-of-a-lot less duplication of state, etc.

See, this is where I start to feel all Cozeny and wonder what the heck
we're doing even thinking about how it's implemented. What I want to
know is how it looks from the user perspective. If, in order to
understand threads, I have to first understand coroutines, I think
that's a loss because it throws away (or at least morphs into an
unrecognizable form) all of our collective CS knowledge of what "threading"
usually means. In other words, I think the idea of fork-like
behaviour is important to threads.

Dave Whipp

May 27, 2003, 11:00:03 PM
to perl6-l...@perl.org
Jonathan Scott Duff wrote:

> I've read this post far more times than I care to admit and I still
> see no merit to conflating "threads" and "coroutines".
>
> To me, the salient feature of coroutines is the ability to start from
> the beginning or from where we left off, while the salient feature of
> threads is independent simultaneous execution with (possible) data
> sharing. These are separate "primitive" concepts in my brain and
> merging them into one cothread thing just seems wrong.

I think you are right, and that your post (unintentionally) demonstrates
the merit of what Michael was saying.

The "salient" features that you mention are the ways in which they
differ -- and thus the reason not to merge the two into a single entity.
But there are many things that are common, such as the need to manage
multiple stacks, and the need to communicate values between peers. Are
the following snippets really more different than they are alike:

    for < thread { sleep 1 and yield $a++ while 1 } > { print }
    for < coro   { sleep 1 and yield $a++ while 1 } > { print }

You could argue that using C<yield> to send a message from a thread is
incorrect. Indeed, we could use a different keyword there. But to do so
is to miss the similarity in the intent.

    my $tid_child = thread -> $tid_parent { do_stuff }
    my $cid_child = coro   -> $cid_parent { do_stuff }

These cases are perhaps less similar than they appear. Each creates a
new context for executing a code block, and each provides handles for
communication between the parent and the child. We might use the tid/cid
handles to set up communication channels between the two contexts. In the
case of the thread, a communication channel would probably require a
fifo; whilst for the coroutines, communication is intimately tied to the
control flow.

With coroutines, locking is probably not necessary (though in complex
situations, locks can be used as assertions). With threads, locks are
probably necessary. But a lock manager is associated with the
schedulers, not the threads/coroutines themselves.

My summary is that yes, coroutines and threads are very different
things. But most of the language infrastructure needed to support one
is also needed for the other.


Dave.
