NRE committed: PLEASE TEST


miguel

Jul 13, 2008, 5:06:10 AM
I have committed to HEAD my NRE implementation [Patch #2017110].

* REQUEST * (All, but especially to extension maintainers)
Please test thoroughly! HEAD should be able to load and run without errors
all(?) extensions that were compiled against 8.6 headers.

* WHAT *
NRE is a modification in the deep guts of Tcl that is somewhat similar to
stackless Python. It massively reduces Tcl's C-stack footprint, and enables new
features: the included ::tcl::unsupported::tailcall for now, more exciting stuff
in the future (but in time for 8.6).

For more info please refer to http://msofer.com:8080/wiki?name=NRE
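
As a quick taste of the new command (an illustrative sketch, not part of the
patch; the countdown proc is made up): tailcall replaces the caller's frame
instead of stacking a new one, so deep recursion stops growing the stack or
tripping the interpreter's recursion limit.

namespace path ::tcl::unsupported

proc countdown {n} {
    if {$n == 0} { return done }
    tailcall countdown [expr {$n - 1}]  ;# reuses this frame: no stack growth
}

puts [countdown 100000]  ;# prints "done"; plain recursion would hit the limit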

* COMPAT *
NRE is completely transparent for extensions that use the public or internal Tcl
APIs, as long as they do not mess with calls to TEBC directly. It has been
adapted to deal properly with Itcl and TclOO, which do.

As a previous version was tested against many extensions (see
http://msofer.com:8080/wiki?name=NRE+Compatibility), and tclkits were
successfully built (thanks go to patthoyts), I am confident that this will work
fine and any problems that might arise will be dealt with as bugfixes.

* PERF *
The performance impact is generally very small, and often NRE turns out to be
faster: http://wiki.tcl.tk/nre

Enjoy

Miguel

miguel

Jul 13, 2008, 5:10:49 AM
miguel wrote:
> * WHAT *
> NRE is a modification in the deep guts of Tcl that is somewhat similar
> to stackless Python. It massively reduces Tcl's C-stack footprint, and
> enables new features: the included ::tcl::unsupported::tailcall for now,
> more exciting stuff in the future (but in time for 8.6).

That should read "but *not* in time for 8.6"

Colin Macleod

Jul 14, 2008, 8:26:16 AM

This sounds very promising indeed. In particular the concurrency
possibilities suggested at
http://msofer.com:8080/wiki?name=Concurrency+possibilities look like
they could allow us to do coroutines within a single interpreter.
Then we could combine this with the event system to do un-nested
vwaits, and I/O or delays that yield to another coroutine while
waiting, etc. etc. :-)
Colin.

ZB

Jul 14, 2008, 3:45:32 PM
On 14.07.2008, Colin Macleod <colin....@tesco.net> wrote:

> they could allow us to do coroutines within a single interpreter.

Something like in Lua?
--
ZB

Colin Macleod

Jul 15, 2008, 10:15:14 AM
On 14 Jul, 20:45, ZB <zbREMOVE_THIS@AND_THISispid.com.pl> wrote:
> On 14.07.2008, Colin Macleod <colin.macl...@tesco.net> wrote:

>
> > they could allow us to do coroutines within a single interpreter.
>
> Something like in Lua?
> --
> ZB

Well I don't know a lot about Lua, but I would imagine so.
Colin.

ZB

Jul 16, 2008, 8:31:40 PM
On 15.07.2008, Colin Macleod <colin....@tesco.net> wrote:

>> > they could allow us to do coroutines within a single interpreter.
>>
>> Something like in Lua?
>> --
>> ZB
>
> Well I don't know a lot about Lua, but I would imagine so.

http://lua-users.org/wiki/CoroutinesTutorial
--
ZB

miguel

Jul 17, 2008, 12:29:05 PM

Mmhhh ... stop giving me ideas, I'll never get to sleep.

Would http://msofer.com:8080/wiki?name=Coroutines be suitable?

ZB

Jul 17, 2008, 1:40:44 PM
On 17.07.2008, miguel <mso...@users.sf.net> wrote:

>> http://lua-users.org/wiki/CoroutinesTutorial
>
> Mmhhh ... stop giving me ideas, I'll never get to sleep.
>
> Would http://msofer.com:8080/wiki?name=Coroutines be suitable?

I think so... is it possible (or will it be soon) to test it?
--
ZB

miguel

Jul 17, 2008, 2:00:13 PM

Right now, all that's been done is that wiki page :)

I may have something to test soon (a week or two?); it will initially live in
the fossil repo at http://msofer.com:8080, then (maybe) in HEAD under ::unsupported.

I'll announce here whenever things are running; you and everyone else are
welcome to monitor progress, provide feedback and ideas, etc.

Cheers
Miguel

ZB

Jul 17, 2008, 3:09:14 PM

OK, waiting (im)patiently.

BTW: is it possible to have a "local" installation of Tcl? I mean,
especially for testing, made on the "production" machine - not
"system-wide" (so as not to interfere with already present libraries), but
just in the user's account area only?

From what I saw, it was very simple with Lua - but Tcl/Tk needs(?) to have
its libraries available system-wide (talking about Linux as the host OS)?
--
ZB

miguel

Jul 17, 2008, 6:58:38 PM

./configure --prefix=/whatever/you/want
make
make install

Alternatively

./configure --disable-shared
make

(and then run it from the build dir, without installing)

ZB

Jul 18, 2008, 6:44:00 AM
On 17.07.2008, miguel <mso...@users.sf.net> wrote:

>> From what I saw, it was very simple with Lua - but Tcl/Tk needs(?) to have
>> its libraries available system-wide (talking about Linux as the host OS)?
>
> ./configure --prefix=/whatever/you/want
> make
> make install

Yes, there's always "--prefix" - but will it limit availability to the
account area only? Never tried it that way - maybe I should have...

> Alternatively
>
> ./configure --disable-shared
> make
>
> (and then run it from the build dir, without installing)

Right, forgot about the "static build" possibility.
--
ZB

Donal K. Fellows

Jul 18, 2008, 8:15:29 AM
ZB wrote:
> Yes, there's always "--prefix" - but will it limit availability to the
> account area only? Never tried it that way - maybe I should have...

If you install (as yourself) somewhere below your home directory, Tcl
works just fine. Its availability will be limited to those people who
can read the place where you put it (as per normal Unix perms). FWIW, I
used Tcl this way for many years; there was a system install, but it was
massively out of date.

Donal.

Larry W. Virden

Jul 18, 2008, 8:36:09 AM
On Jul 17, 3:09 pm, ZB <zbREMOVE_THIS@AND_THISispid.com.pl> wrote:

>
> BTW: does there exist a possibility to have "local" installation of TCL?
> I mean: especially for testing, made on the "production" machine - not
> "system-wide" (to not interfere with already present libraries), but just
> in the user account area only?


An alternative that is pretty nifty is the tclkit version of NRE that
was mentioned - that's a one-file version of Tcl. That way, you can
test out the new Tcl without any impact whatsoever on directories, etc.

The --prefix configure and build also works great. I've used this for
years.

Finally, I have also used ActiveTcl, whose installer asks for a
location and only writes into that location (well, along with a
$HOME/.teapot directory).

So there are several alternatives to the default directories.

Note that, in my "production" environment, I install Tcl into a
"tclVersion.Level" directory these days. Originally I was using
/usr/tcl - the problem with that is that it prevented me from updating Tcl
without a lot of headaches, trying to get programs which needed to be
updated changed at the same time as I installed the new version of
Tcl. Installing a new version of Tcl into its own separate directory
meant that I never impacted the existing environment. It also means
that someone has to take explicit steps to use the new Tcl (which can
be a good or bad thing, depending on the circumstances...)

ZB

Jul 18, 2008, 9:06:27 AM
On 18.07.2008, Larry W. Virden <lvi...@gmail.com> wrote:

> So there are several alternatives to the default directories.

Maybe I'm a bit too careful - but on my machine I've got two Tcl installs
already: one made directly from source, and ActiveState's TEA archive.
So I prefer a bit of exaggeration rather than making a total mess... but
indeed, it seems that a local install or a static build/tclkit will be OK.
--
ZB

Colin Macleod

Jul 18, 2008, 10:48:12 AM
On 17 Jul, 17:29, miguel <mso...@users.sf.net> wrote:
>
> Would http://msofer.com:8080/wiki?name=Coroutines be suitable?

Sounds great!

Once we have these building blocks, I'm thinking we could create
coroutine-enabled ways of waiting for events (time/input/variable
change).
These might look something like:

co::after delay
- calling coroutine pauses for (at least) delay milliseconds and then
continues, while allowing other events to be processed in the
meantime.

co::vwait variable
- calling coroutine waits until the named variable is set and then
continues. Unlike standard vwait, these calls do not nest, ie.
multiple co::vwaits wait in parallel and can be activated in any
order.

co::read channelId
- calling coroutine reads data from specified channel. If no data is
available the coroutine waits for data, while allowing other events to
be processed.

At present, if you want to write a flow-of-control which includes
waiting for events, and you don't want to block processing of other
events, you have to structure your code as multiple event handlers.
Coroutine-enabled event handling like these co::* commands would let
this be written as straight-line code, which could be much more
natural.

Colin.
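
Along those lines, a minimal sketch of what co::after could look like (not a
committed design; it assumes the ::tcl::unsupported coroutine commands and
the infoCoroutine introspection that only shows up in HEAD later in this
thread):

namespace path ::tcl::unsupported
namespace eval co { namespace path ::tcl::unsupported }

proc co::after {delay} {
    # Ask the event loop to resume this coroutine after $delay ms, then
    # suspend; other events keep being processed while we wait.
    ::after $delay [infoCoroutine]
    yield
}

# Hypothetical usage: straight-line code that sleeps without blocking.
coroutine worker apply {{} {
    puts tick
    co::after 1000
    puts "tock, one second later"
}}
vwait forever  ;# enter the event loop (demo never exits)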

Scott

Jul 18, 2008, 12:35:26 PM
On Jul 18, 8:48 am, Colin Macleod <colin.macl...@tesco.net> wrote:
> On 17 Jul, 17:29, miguel <mso...@users.sf.net> wrote:
> > Would http://msofer.com:8080/wiki?name=Coroutines be suitable?
>
> Sounds great!

Ditto

> At present, if you want to write a flow-of-control which includes
> waiting for events, and you don't want to block processing of other
> events, you have to structure your code as multiple event handlers.
> Coroutine-enabled event handling like these co::* commands would let
> this be written as straight-line code, which could be much more
> natural.

Agreed. I've had a number of applications over the years that would
have been far more natural to write as a co-routine than as a chain of
event handlers.

But one thing I'm unclear on is how does NRE work in the presence of C
extensions. Can I have a co-routine invoked if there exists some
custom C code somewhere in the (Tcl) callstack
(Tcl_CreateObjCommand()) above the co-routine yield?

Colin Macleod

Jul 18, 2008, 12:48:41 PM
On 18 Jul, 17:35, Scott <sgarg...@comcast.net> wrote:
>
> But one thing I'm unclear on is how does NRE work in the presence of C
> extensions.  Can I have a co-routine invoked if there exists some
> custom C code somewhere in the (Tcl) callstack
> (Tcl_CreateObjCommand()) above the co-routine yield?

No, Miguel has a note on his page saying:

A yield is only allowed if the C stack is at the same level
as it was when the coroutine was invoked; otherwise an error
will be returned and the coroutine will be killed.
It is the last condition that makes things iffy ...

Don Porter

Jul 18, 2008, 1:47:04 PM
miguel wrote:
> Alternatively
>
> ./configure --disable-shared
> make
>
> (and then run it from the build dir, without installing)

This won't work flawlessly without additional caretaking.

DGP

miguel

Jul 18, 2008, 4:39:03 PM

Yes ... and no.

You will not be able to yield from within code created by Tcl_CreateObjCommand;
but you will if it was Tcl_NRCreateCommand.

IOW: NRE runs any standard extension as is, but it cannot take advantage of the
new features if it evals stuff in a non-NRE-aware way. To understand what it
means to be NRE-aware, the best I have to offer for now is
http://msofer.com:8080/wiki?name=Exploiting+NRE
http://www.tcl.tk/cgi-bin/tct/tip/322.html

HTH
Miguel

Neil Madden

Jul 20, 2008, 11:31:46 AM

I still do this on some Solaris machines at university. System tclsh
there is still 8.0p2! (At one point the tclkit I had in my home dir
became very popular with undergrads running amsn).

-- Neil

Donal K. Fellows

Jul 21, 2008, 4:36:06 AM
Scott wrote:
> But one thing I'm unclear on is how does NRE work in the presence of C
> extensions. Can I have a co-routine invoked if there exists some
> custom C code somewhere in the (Tcl) callstack
> (Tcl_CreateObjCommand()) above the co-routine yield?

It won't work. But my experience with NRE-enabling the implementation of
TclOO is that it's not difficult to convert as long as your command code
falls into one of two patterns.

1) Doesn't call back into Tcl (other than via traces, which aren't
NRE-enabled at all). In this case, you don't need to do anything.

2) Can be described as custom code wrapped around a call to Tcl_Eval
(or one of its family of friends). The trick here is to arrange for
the code after the invocation back into the interpreter to be
restructured as a registered callback, which isn't actually
difficult.

The other big case is when you're calling back into the interpreter
multiple times. This might be working now; I've not reviewed all the
past 24 hours' changes yet. :-)

Donal.

Alexandre Ferrieux

Jul 21, 2008, 5:30:05 PM

TCL_LIBRARY=../library export TCL_LIBRARY
./tclsh86

works for me.

-Alex

Don Porter

Jul 22, 2008, 1:02:09 AM

>>> Alternatively
>>> ./configure --disable-shared
>>> make
>>> (and then run it from the build dir, without installing)

>> This won't work flawlessly without additional caretaking.

> TCL_LIBRARY=../library export TCL_LIBRARY
> ./tclsh86
>
> works for me.

In my book, a requirement to set a special value for
TCL_LIBRARY counts as additional caretaking.

DGP

Alexandre Ferrieux

Jul 22, 2008, 3:10:09 AM

Right. I just wanted to say it was minimal caretaking, which was not
obvious from your wording :-)

-Alex

john roll

Aug 17, 2008, 6:17:08 PM

I just downloaded the CVS HEAD but the commands aren't included:

% info commands ::tcl::unsupported::*
::tcl::unsupported::disassemble ::tcl::unsupported::tailcall

???

miguel

Aug 17, 2008, 6:26:32 PM

HEAD (as of an hour ago) has in ::tcl::unsupported
    disassemble
    tailcall
    atProcExit
    coroutine
    yield

Maybe you are linking to the wrong library? A good way to test is to configure
with --disable-shared, which ensures that no such snafus happen. Another way is
to go all the way to 'make install', or for quick tests try out 'make shell'.

In order to use them without the ::tcl::unsupported prefix you can do one of
(a) namespace path ::tcl::unsupported
(b) namespace eval ::tcl::unsupported {namespace export *}
    namespace import ::tcl::unsupported::*
(c) interp alias {} tailcall {} ::tcl::unsupported::tailcall
(etc)

john roll

Aug 17, 2008, 8:51:48 PM

This is a very nice addition to Tcl but it's way too complex for me to
use. I need some sugar and I'm not sure how to write it. What I want
is a "generator" command which I can use to create different types of
generators, like iota:

generator iota { start end incr } {
    for { set i $start } { $i <= $end } { incr i $incr } { yield $i }
}

Then I might use it like this:

while { [set x [iota 2 20]] } { puts $x }

With the generator yielding 19 times and then returning "break". I
don't want to know about apply and other functional theory. I also
don't want to have to create a placeholder token to hold the
coroutine. I know how I want to use coroutines and I don't want to
know all this other stuff.

Is an API like this possible?

john roll

Aug 17, 2008, 9:50:10 PM

My fault. I didn't select --disable-shared, sorry.

John

miguel

Aug 18, 2008, 11:39:08 AM
Would this be close enough?

mig@cpq:/home/CVS/tcl_SF_clean/unix$ cat /tmp/iota.tcl
namespace path ::tcl::unsupported

proc iota { start end } {
    yield
    for { set i $start } { $i <= $end } { incr i } {
        yield $i
    }
    return -code break
}

proc UNIQUE_NAME {} {return cor[incr ::COUNTER]}

proc makeIota { start end } {
    set cmd [UNIQUE_NAME]
    coroutine $cmd iota $start $end
    return $cmd
}

set iota_1_5 [makeIota 1 5]

puts *[info command $iota_1_5]*
puts ---
while 1 {
    set x [$iota_1_5]
    puts $x
}
puts ---
puts *[info command $iota_1_5]*

mig@cpq:/home/CVS/tcl_SF_clean/unix$ ./tclsh /tmp/iota.tcl
*cor1*
---
1
2
3
4
5
---
**
mig@cpq:/home/CVS/tcl_SF_clean/unix$

john roll

Aug 18, 2008, 6:17:27 PM

miguel,

This is very close and I was trying to formulate something like it,
but, well, it was hard for me. Is there a reason that you need the
first yield? Why doesn't "coroutine" create the command by default -
why does it wait for the first yield call? I also understand the
reasons for the "iota_1_5" command - as I called it, the "token" to
hold the coroutine - but it feels in the way to me (awkward).

Thanks,

John

Donal K. Fellows

Aug 19, 2008, 4:52:53 AM

Everything's still experimental right now. This is very new after all.
But we can use other features to make this better...

namespace path tcl::unsupported
proc iota {start end} {
    for {set i $start} {$i<=$end} {incr i} {yield $i}
    return -code break
}
proc UNIQUE_NAME {} {return cor[incr ::COUNTER]}
proc makeIota {start end} {
    set cmd [UNIQUE_NAME]
    coroutine $cmd apply {args {yield;tailcall {*}$args}} \
        iota $start $end
    return $cmd
}
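
For what it's worth, a hypothetical use of the sketch above, assuming it
behaves as intended:

set it [makeIota 1 3]
while 1 { puts [$it] }  ;# prints 1, 2 and 3; the final -code break then
                         ;# ends this loop and the coroutine goes away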

I'm not yet 100% satisfied with what I've written, but it does show
how you can wrap the unpleasantness of the current API.

Donal.

john roll

Aug 19, 2008, 9:44:31 AM

Miguel, Donal,

I'm very appreciative of these new features going into the core, and
of your efforts. Tcl has such a clean look (to me) that I want to put
my input on the table early. I know that it is quite difficult to
hide some things due to Tcl's lack of syntax. Where other languages
have special constructs or know something special about a keyword, we
just have commands.

I see in the above example that you've moved the initial yield into
the body of the coroutine call to clear it out of the iota routine
proper, but I'm asking the more fundamental question, why have an
initial execution of the script passed to the coroutine command at
all? Why not just build the coroutine command immediately? Will this
feature be useful elsewhere?

When I initially suggested the possible usage:

while { [set x [iota 2 20]] } { puts $x }

I was going for the feeling of Icon, but I realized after I posted
that Tcl doesn't have the special value that Icon calls failure. The
"while { 1 } { set x [$iota] }" expression is almost as good but it is
less suggestive of what is controlling the loop termination.


Thanks,

John

Fredderic

Aug 19, 2008, 12:16:19 PM
On Tue, 19 Aug 2008 06:44:31 -0700 (PDT),
john roll <jo...@rkroll.com> wrote:

> I see in the above example that you've moved the initial yield into
> the body of the coroutine call to clear it out of the iota routine
> proper, but I'm asking the more fundamental question, why have an
> initial execution of the script passed to the coroutine command at
> all? Why not just build the coroutine command immediately? Will this
> feature be useful elsewhere?

I think it makes sense to think of the usage pattern like this:

proc my-coroutine {args} {
    initialisation
    yield
    # main functionality starts here
    while {some-condition} {
        calculate-value
        yield return-value
    }
}

In the simple iota example, then, you can read it simply as there being
no initialisation part.


Personally, I think it would make more sense if [yield] would function
regardless of whether the proc was evaluated via [coroutine] or not,
rather than throwing an error. You could still use [coroutine] to
create simultaneous invocations, and unlike a [coroutine] invocation,
the original proc would essentially "reset" when it completes.
Basically acting exactly as though it was storing internal state in a
private global variable between runs.

Failing that, I think [coroutine] should work exclusively on lambdas
and not [proc]s at all, to highlight the fact that you seemingly can't
really use them in interchanged contexts; a proc with [yield]s won't
work too well when invoked directly rather than via construction by
[coroutine], and a proc without [yield]s doesn't make much sense to
invoke through [coroutine]. So you'd take a lambda-style
function-in-a-variable, and use [coroutine] to create a command out of
it, which would then be used instead of invoking the lambda via apply.


That's my take on it, anyhow... Other than that little issue, I think
it's pretty damn awesome, and will help make an awful lot of
client-server type code MUCH neater.


--
Fredderic

Longhorn error#4711: TCPA / NGSCB VIOLATION: Microsoft optical mouse
detected penguin patterns on mousepad. Partition scan in progress to
remove offending incompatible products. Reactivate your MS software.

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 47 days, 10:49)

miguel

Aug 22, 2008, 8:52:18 AM
Fredderic wrote:
> Personally, I think it would make more sense if [yield] would function
> regardless of whether the proc was evaluated via [coroutine] or not,
> rather than throwing an error. You could still use [coroutine] to
> create simultaneous invocations, and unlike a [coroutine] invocation,
> the original proc it would essentially "reset" when it completes.
> Basically acting exactly as though it was storing internal state in a
> private global variable between runs.

Not sure about that ... when [yield] fails to accomplish what it is designed to
do, I think it is better for it to error out rather than do something else.

> Failing that, I think [coroutine] should work exclusively on lambdas
> and not [proc]s at all, to highlight the fact that you seemingly can't
> really use them in interchanged contexts; a proc with [yield]s won't
> work too well when invoked directly rather than via construction by
> [coroutine], and a proc without [yield]s doesn't make much sense to
> invoke through [coroutine]. So you'd take a lambda-style
> function-in-a-variable, and use [coroutine] to create a command out of
> it, which would then be used instead of invoking the lambda via apply.

Conceptually (and very much in code too), what
    coroutine foo cmd arg1 arg2 arg3
does is:
1. create a new execution environment
2. create a new command foo that resumes running this execEnv
3. swap in the new execEnv as the current execution environment, ensuring that
   the caller's execEnv is swapped back in when this one exits (return or yield)
4. run (in the new env)
       uplevel #0 [list cmd arg1 arg2 arg3]
5. if the execEnv returned, delete it together with the associated command foo
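
A script-level illustration of steps 2, 4 and 5 (an example of mine, not from
the implementation; 2008-era ::tcl::unsupported names assumed):

namespace path ::tcl::unsupported

coroutine foo apply {{} { yield a; yield b; return done }}
puts [foo]                ;# resumes after the first yield: prints b
puts [foo]                ;# the execEnv returns: prints done ...
puts [info commands foo]  ;# ... and step 5 deleted foo: prints an empty line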

Yield just checks that the current environment is not the main one, i.e., that we
are executing within an execEnv that can be suspended. It does not know about
procs, lambdas nor anything else. The other thing it checks is that the C stack
is currently at the same level it was when the execEnv was last resumed. This
means for instance that you can't [*] yield from within an [eval]ed or [source]d
script.

Miguel

[*] for now, until TclEvalEx is adapted to NRE or until we start compiling
everything and junk direct evaluation
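
To make the footnote concrete, the kind of script it rules out for now (a
hedged sketch; the exact error text is not reproduced here):

namespace path ::tcl::unsupported

set script { yield again }
coroutine c apply {{} {
    yield           ;# fine: the C stack is at the coroutine's base level
    eval $::script  ;# the [eval]ed script adds C-stack depth, so this
                     ;# yield errors and the coroutine is killed
}}
c                   ;# resuming triggers the failing yield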

Alexandre Ferrieux

Aug 22, 2008, 9:27:37 AM
On Aug 22, 2:52 pm, miguel <mso...@users.sf.net> wrote:
>
> [*] for now, until TclEvalEx is adapted to NRE or until we start compiling
> everything and junk direct evaluation

By the way, do you have an up-to-date evaluation of the performance
hit of junking direct evaluation ?

-Alex


miguel

Aug 23, 2008, 8:04:35 AM

Not that I know of. There are at least three different aspects to consider:

(a) time to compile/run vs interpret
(b) mem reqs, especially when sourcing large files
(c) code simplicity

Alexandre Ferrieux

Aug 23, 2008, 6:28:17 PM

I would assume the overhead in (a) to be acceptable for everything but
corner cases like people sending thousands of individual "set var
value" commands per second over a socket to be [eval]led... What's
unclear to me is how we could reach the certainty that this bold jump
is worth it, while I strongly feel that it would benefit the majority.

For (b) I'm not sure I understand. Sourcing a file doesn't necessarily
mean you compile it as a single unit and then execute: toplevel
statements can be compiled and run one by one... Or can they?

For (c) it is quite obvious that simplicity would increase
drastically. Countless bugs about X working differently when compiled
and not, would vanish instantly. Getting rid of code duplication is
also an extremely valuable move for people "diving in", which is my
case right now ;-) So in the long run, the benefit to the community is
unquestionable.

Comments?

-Alex

miguel

Aug 23, 2008, 6:55:37 PM
Alexandre Ferrieux wrote:
> On Aug 23, 2:04 pm, miguel <mso...@users.sf.net> wrote:
>> Alexandre Ferrieux wrote:
>>> On Aug 22, 2:52 pm, miguel <mso...@users.sf.net> wrote:
>>>> [*] for now, until TclEvalEx is adapted to NRE or until we start compiling
>>>> everything and junk direct evaluation
>>> By the way, do you have an up-to-date evaluation of the performance
>>> hit of junking direct evaluation ?
>> Not that I know of. There are at least three different aspects to consider:
>>
>> (a) time to compile/run vs interpret
>> (b) mem reqs, especially when sourcing large files
>> (c) code simplicity
>
> I would assume the overhead in (a) to be acceptable for everything but
> corner cases like people sending thousands of individual "set var
> value" commands per second over a socket to be [eval]led... What's
> unclear to me is how we could reach the certainty that this bold jump
> is worth it, while I strongly feel that it would benefit the majority.

Not sure about that ...

> For (b) I'm not sure I understand. Sourcing a file doesn't necessarily
> mean you compile it as a single unit and then execute: toplevel
> statements can be compiled and run one by one... Or can they?

If you compile them one by one, it will be *very* slow: there is some constant
overhead per compilation, and you'd pay it over and over. I am planning to
experiment with doing it statement by statement, only compiling when there are
nested evals (i.e., avoiding recursive evals as long as there are no traces[*]).
Stay tuned ...

OTOH, compiling the whole file may have big advantages in runtime ... if/when I
get around to fully spec'ing and coding "all bytecodes have a local var table,
not just the proc/lambda bodies". Not trivial but feasible.

[*] NRE-enabling traces: not yet considered, will eventually get done.

> For (c) it is quite obvious that simplicity would increase
> drastically. Countless bugs about X working differently when compiled
> and not, would vanish instantly. Getting rid of code duplication is
> also an extremely valuable move for people "diving in", which is my
> case right now ;-) So in the long run, the benefit to the community is
> unquestionable.

Wrong: this has nothing to do with that. Even if you compile all scripts,
something like
    set set set
    $set x foo
or even
    [join [list s e t] {}] x foo
cannot be compiled inline. Or do you mean that the non-compiled Tcl_SetObjCmd
would actually compile the script 'set x foo'? In any case, we do need some
non-compiled version of the command, to be called when an otherwise bytecompiled
command cannot be recognized at compile time.


Fredderic

Aug 24, 2008, 1:18:06 AM
On Fri, 22 Aug 2008 09:52:18 -0300,
miguel <mso...@users.sf.net> wrote:

> Fredderic wrote:
>> Personally, I think it would make more sense if [yield] would
>> function regardless of whether the proc was evaluated via
>> [coroutine] or not, rather than throwing an error. You could still
>> use [coroutine] to create simultaneous invocations, and unlike a
>> [coroutine] invocation, the original proc it would essentially
>> "reset" when it completes. Basically acting exactly as though it
>> was storing internal state in a private global variable between
>> runs.
> Not sure about that ... when [yield] fails to accomplish what it is
> designed to do, I think it is better for it to error out rather than
> do something else.

How is it not doing what it's designed to do? If it fails because of
an impure stack, then by all means error out. But [yield] would be a
handy way to allow a [proc] to do some initialisation on first run.


The only issue I'm aware of, is how they would mix. [yield] would end
up yielding the current [coroutine] instead of creating a new one. (I
presume in your implementation, you can [yield] from any TCL call
depth.) It would need a [yield -local] switch (or its converse) to
function the way I would like to see it, I guess. I'd like to know
which most people think is the sanest default behaviour if both were
available; yield back to the inner-most [coroutine] and error if there
isn't one, or yield the current proc only which is more what the term
co-routine implies to me. (In either case there would be an option to
perform the other.)

Something that would at least let me fudge what I would like to see, is
the ability for [coroutine] to push the current proc out of the way and
bring it back later. So you'd end up with something like this:

proc something {args} {
    coroutine something {{abc def} {
        { ... the real body goes here ... }
    }} {*}$args
}

I'll certainly be adding a -coroutine switch to my standard [proc]
wrapper, I think, though it's going to be ugly and flakey without that
kind of capability.

I presume there'll be an [info] sub-command to tell us whether we're in
a [coroutine] or not...?


>> Failing that, I think [coroutine] should work exclusively on
>> lambdas and not [proc]s at all, to highlight the fact that you
>> seemingly can't really use them in interchanged contexts; a proc
>> with [yield]s won't work too well when invoked directly rather than
>> via construction by [coroutine], and a proc without [yield]s
>> doesn't make much sense to invoke through [coroutine]. So you'd
>> take a lambda-style function-in-a-variable, and use [coroutine] to
>> create a command out of it, which would then be used instead of
>> invoking the lambda via apply.
> Conceptually (and very much in code too), what
> coroutine foo cmd arg1 arg2 arg3
> does is:
> 1. create a new execution environment

Could you elaborate on this part, please? What is different in this
new execution environment compared to the regular one, and how heavy is
this new environment? Would it be feasible to do this as part of the
regular proc invocation process?

By my understanding this basically entails starting a fresh TCL stack,
and pre-loading it with a small hook to regain control later on, within
which you evaluate the subject code.

I suppose my question is whether there's anything within the TCL stack
that will stop you from basically taking the current TCL call level
with you to this new environment. As I understood it (which may well
be utterly wrong here ;) ), the TCL stack portions were relocatable; it
was (and still is) the C chunks that got in the way, which is what the
NRE eval deals with by allowing the C stack to unwind before performing
the eval.


> 2. create a new command foo that resumes running this execEnv
> 3. swap in the new execEnv as the current execution environment,
> ensuring that the caller's execEnv is swapped back in when this
> one exits (return or yield)
> 4. run (in the new env) uplevel #0 [list cmd arg1 arg2 arg3]
> 5. if the execEnv returned, delete it together with the associated
> command foo

Steps 2-5 sound like standard library-based task-swapping mechanics.


> Yield just checks that the current environment is not the main one,
> ie, that we are executing within an execEnv that can be suspended. It
> does not know about procs, lambdas nor anything else.

It was the fact that [yield] will work from any level that had slipped
my attention. Still, having [coroutine] take a lambda instead of a
proc would make it nestable within a proc, as in the example up top,
and I do still think it makes a whole lot more sense, as I said, with
[yield] not working from the main environment, even though [yield] can
still occur within other procs.

I wonder whether it would be sane to add an [info lambda] command that
will compile and return a proc as a lambda expression...?


> The other thing it checks is that the C stack is currently at the
> same level it was when the execEnv was last resumed. This means for
> instance that you can't [*] yield from within an [eval]ed or
> [source]d script.

I presume these two can be fixed, though...? Which leaves just
old-style non-NRE-aware extensions.


--
Fredderic

Elephant = mouse built to government specs

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 51 days, 22:22)

Alexandre Ferrieux

Aug 24, 2008, 6:22:45 AM
On Aug 24, 12:55 am, miguel <mso...@users.sf.net> wrote:
> If you compile them one by one, it will be *very* slow: there is some constant
> overhead per compilation, and you'd pay it over and over.

OK; I had wrongly assumed that the overhead was rather linear in the
size of the script, implying that it would be the same whether the
glueing occurs before (whole-script compilation) or after (per-
statement) compilation, which to my superficial understanding is just
the serialization of the AST.

Of course I'll believe you: I'm learning :-)

> > For (c) it is quite obvious that simplicity would increase
> > drastically. Countless bugs about X working differently when compiled
> > and not, would vanish instantly. Getting rid of code duplication is
> > also an extremely valuable move for people "diving in", which is my
> > case right now ;-) So in the long run, the benefit to the community is
> > unquestionable.
>
> Wrong: this has nothing to do with that. Even if you compile all scripts,
> something like
>    set set set
>    $set x foo
> or even
>    [join [list s e t] {}] x foo
> cannot be compiled inline. Or do you mean that the non-compiled Tcl_SetObjCmd
> would actually compile the script 'set x foo'?

Yes I do!

> In any case, we do need some
> non-compiled version of the command, to be called when an otherwise bytecompiled
> command cannot be recognized at compile time.

Again my knowledge of all this may lack depth, but the duplication
between the INST_LIST_INDEX case in TEBC
and the non-compiled Tcl_LindexObjCmd makes me wince. So make one call
the other, acknowledging the overhead. Slowing down Tcl by 1% (random
figure) might be worth eliminating the risk of desync in the
duplicated parts... But maybe the devil is in the actual figure ;-)

-Alex


Donal K. Fellows

Aug 24, 2008, 6:26:08 AM
Fredderic wrote:
> It was the fact that [yield] will work from any level that had slipped
> my attention.  Still, having [coroutine] take a lambda instead of a
> proc would make it nestable within a proc, as in the example up top,
> and I do still think it makes a whole lot more sense, as I said, with
> [yield] not working from the main environment, even though [yield] can
> still occur within other procs.

Actually, [coroutine] takes a list of words after the name of the
command to create upon [yield]. The word list can designate a call of
a procedure or an [apply] call if you wish...

Donal.

miguel

Aug 24, 2008, 8:09:41 AM
Fredderic wrote:
> On Fri, 22 Aug 2008 09:52:18 -0300,
> miguel <mso...@users.sf.net> wrote:
>
>> Fredderic wrote:
>>> Personally, I think it would make more sense if [yield] would
>>> function regardless of whether the proc was evaluated via
>>> [coroutine] or not, rather than throwing an error. You could still
>>> use [coroutine] to create simultaneous invocations, and unlike a
>>> [coroutine] invocation, the original proc it would essentially
>>> "reset" when it completes. Basically acting exactly as though it
>>> was storing internal state in a private global variable between
>>> runs.
>> Not sure about that ... when [yield] fails to accomplish what it is
>> designed to do, I think it is better for it to error out rather than
>> do something else.
>
> How is it not doing what it's designed to do? If it fails because of
> an impure stack, then by all means error out. But [yield] would be a
> handy way to allow a [proc] to do some initialisation on first run.

Oh - now I understood what you mean. This would make procs a bit heavier than
coroutines (see below). As currently implemented, it would also prevent the
second and later calls to take more than one argument.

IOW, your proposal may look like "let us change [yield]", but it really is "let
us change [proc]".

>
>
> The only issue I'm aware of, is how they would mix. [yield] would end
> up yielding the current [coroutine] instead of creating a new one. (I
> presume in your implementation, you can [yield] from any TCL call
> depth.) It would need a [yield -local] switch (or its converse) to
> function the way I would like to see it, I guess. I'd like to know
> which most people think is the sanest default behaviour if both were
> available; yield back to the inner-most [coroutine] and error if there
> isn't one, or yield the current proc only which is more what the term
> co-routine implies to me. (In either case there would be an option to
> perform the other.)

I think you are talking about stackful versus non-stackful coroutines, in the
terminology of http://www.inf.puc-rio.br/~roberto/docs/MCC15-04.pdf. The
experimental coroutines in HEAD are stackful, what the term implies to you seems
to be non-stackful.

> Something that would at least let me fudge what I would like to see, is
> the ability for [coroutine] to push the current proc out of the way and
> bring it back later. So you'd end up with something like this:
>
> proc something {args} {
>     coroutine something {{abc def} {
>         { ... the real body goes here ... }
>     }} {*}$args
> }
>
> I'll certainly be adding a -coroutine switch to my standard [proc]
> wrapper, I think, though it's going to be ugly and flakey without that
> kind of capability.
>
> I presume there'll be an [info] sub-command to tell us whether we're in
> a [coroutine] or not...?

What is required for [info] is still an open question. Also a way to query if a
given command is a coroutine or not might be useful. Or if [yield] will succeed
(as opposed to catching it). I'm waiting to see what the users need.

In the current implementation of CallFrames the previous levels at creation time
cannot be preserved, and that is why the coroutine runs at level #0. Preserving
the complete CallFrame stack is not impossible, but it is not a minor
undertaking as a lot of infrastructure has to change. It may happen for 8.7 or
9.0 if interesting use cases warrant it, but that's not within a reasonable
horizon for 8.6.

The Tcl stack portions are not relocatable, no: there are pointers to them being
saved (e.g. by [upvar] used in child CallFrames, or stuff that is allocated using
TclStackAlloc). The change you propose would imply that the Tcl stack goes, that
CallFrames are allocated independently (and refcounted). Something like that may
yet happen during the 8.6 cycle, I have some ideas that would make the cost of
separate allocs disappear. But it would be purely an internal change, just an
enabler for other things that may lie in the future.

>> 2. create a new command foo that resumes running this execEnv
>> 3. swap in the new execEnv as the current execution environment,
>> ensuring that the caller's execEnv is swapped back in when this
>> one exits (return or yield)
>> 4. run (in the new env) uplevel #0 [list cmd arg1 arg2 arg3]
>> 5. if the execEnv returned, delete it together with the associated
>> command foo
>
> Steps 2-5 sound like standard library-based task-swapping mechanics.
>
>
>> Yield just checks that the current environment is not the main one,
>> ie, that we are executing within an execEnv that can be suspended. It
>> does not know about procs, lambdas nor anything else.
>
> It was the fact that [yield] will work from any level that had slipped
> my attention. Still, having [coroutine] take a lambda instead of a
> proc would make it nestable within a proc, as in the example up top,
> and I do still think it makes a whole lot more sense, as I said, with
> [yield] not working from the main environment, even though [yield] can
> still occur within other procs.
>
> I wonder whether it would be sane to add an [info lambda] command that
> will compile and return a proc as a lambda expression...?

I'm afraid I did not understand much of this. You say "having [coroutine] take a
lambda instead of a proc" [it takes neither, it takes a command to run in the
new env] would make it nestable within a proc [I do not understand what you mean
here].

>
>> The other thing it checks is that the C stack is currently at the
>> same level it was when the execEnv was last resumed. This means for
>> instance that you can't [*] yield from within an [eval]ed or
>> [source]d script.
>
> I presume these two can be fixed, though...? Which leaves just
> old-style non-NRE-aware extensions.

I hope to be fixing these two during the alpha/beta cycle. I am really worried
about [eval], not so much about [source].

Some other things do preempt yielding; traces come to mind and may yet be fixed
(haven't even explored what it would take).

miguel

Aug 24, 2008, 8:38:00 AM
miguel wrote:
> Oh - now I understood what you mean. This would make procs a
> bit heavier than coroutines (see below). As currently implemented, it would
                   ^^^^^^^^^^
> also prevent the second and later calls to take more than one argument.

Not "than coroutines", "than currently" ... sorry

Donal K. Fellows

Aug 28, 2008, 5:09:57 AM
Alexandre Ferrieux wrote:
> Again my knowledge of all this may lack depth, but the duplication
> between the INST_LIST_INDEX case in TEBC
> and the non-compiled Tcl_LindexObjCmd makes me wince. So make one call
> the other, acknowledging the overhead. Slowing down Tcl by 1% (random
> figure) might be worth eliminating the risk of desync in the
> duplicated parts... But maybe the devil is in the actual figure ;-)

IIRC, this sort of thing is justified by benchmarks (see the tclbench
module in tcllib). I believe that the issue was that calls out of the
bytecode engine itself were surprisingly expensive.

Donal.

Colin Macleod

Aug 28, 2008, 11:18:04 AM
On 18 Jul, 15:48, Colin Macleod <colin.macl...@tesco.net> wrote:
> On 17 Jul, 17:29, miguel <mso...@users.sf.net> wrote:
> > Would http://msofer.com:8080/wiki?name=Coroutines be suitable?
>
> Once we have these building blocks, I'm thinking we could create
> coroutine-enabled ways of waiting for events (time/input/variable
> change).
> [...]

I've posted very simple coroutine-enabled versions of after, vwait and
gets at http://wiki.tcl.tk/21555

Colin.
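
For the curious, the general shape such code can take (a sketch of my own,
not necessarily what the wiki page does; co::Wake is a made-up helper, only
global variables are handled, and the 2008-era ::tcl::unsupported names are
assumed):

namespace eval co { namespace path ::tcl::unsupported }

proc co::Wake {coro args} {
    # variable-write trace callback: resume the waiting coroutine from the
    # event loop rather than from inside the trace itself
    ::after 0 $coro
}

proc co::vwait {varName} {
    set cb [list ::co::Wake [infoCoroutine]]
    ::trace add variable ::$varName write $cb
    yield                ;# suspend until the trace schedules us
    ::trace remove variable ::$varName write $cb
}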

Fredderic

Sep 2, 2008, 12:31:40 PM
miguel <mso...@users.sf.net> wrote:

> Fredderic wrote:
>> [yield] would be a handy way to allow a [proc] to do some
>> initialisation on first run.
> Oh - now I understood what you mean. This would make procs a bit
> heavier than coroutines (see below).

I kind of figured that, and pretty much wiped that idea myself.

My wish here would be served by the [coroutine] command being able to
displace an existing proc or command until it completes. Having
[coroutine] displace the current proc with an inline [apply] invocation
(as DKF mentioned is possible), would allow exactly the semantics I was
rather hoping would be possible. A proc could then trivially be built
that can [yield] without having to be explicitly invoked through
[coroutine].

Though perhaps someone will come up with a neat [proc] wrapper that
uses [rename]s to achieve the same goal... Maybe it's to do with being
1:42 in the morning, but it made my head spin trying to nut it out just
now. ;(


> As currently implemented, it would also prevent the second and later
> calls to take more than one argument.

Dare I ask, why? ;)


>> I presume there'll be an [info] sub-command to tell us whether
>> we're in a [coroutine] or not...?
> What is required for [info] is still an open question. Also a way to
> query if a given command is a coroutine or not might be useful. Or if
> [yield] will succeed. (as opposed to catching it). I'm waiting to see
> what the users need.

Having thought about it a little more, I'm wondering how [coroutine]
interacts with things like [after]. Being able to distinguish between
being [after]d or [uplevel #0]d, and being started directly by
[coroutine], is about the only useful piece of [info] I can think of
at this point. A self-generator could avoid self-generating in the
specific case that it was invoked by [coroutine]; an issue that would
be avoided if [proc] was modified as I was getting at originally, to
allow a [yield -local] or something to effectively detatch the current
[proc] as though it had been started with [coroutine] under the same
name. I understand though that doing that would mess with things like
[upvar] in rather nasty ways...

My focus on this issue, btw, is mostly in the interest of allowing
coroutines to be used as an implementation detail in the construction
of a proc, rather than a project-wide design decision from the outset.
I think _if_ it can be done without too much effort, it would be a
fantastically handy way to do the first-run initialisation thing.


>>>> Failing that, I think [coroutine] should work exclusively on
>>>> lambdas and not [proc]s at all, to highlight the fact that you
>>>> seemingly can't really use them in interchanged contexts; a proc
>>>> with [yield]s won't work too well when invoked directly rather
>>>> than via construction by [coroutine], and a proc without [yield]s
>>>> doesn't make much sense to invoke through [coroutine]. So you'd
>>>> take a lambda-style function-in-a-variable, and use [coroutine] to
>>>> create a command out of it, which would then be used instead of
>>>> invoking the lambda via apply.

DKF answered this perfectly by pointing out that you can have [apply]
as the command being invoked by [coroutine]. Something like:

proc generator {blah fubar} {
    coroutine goober apply {{abc def} {
        do something useful
    }} $blah $fubar
}

or expanded into my self-generator idea:

proc generator {blah fubar} {
    rename generator temp-name
    coroutine generator apply {{abc def} {
        do something useful
        ...
        rename generator ""
        rename temp-name generator
    }} $blah $fubar
}

Simplifying that is where allowing [coroutine] to displace an
existing proc or command would come in handy, allowing the second
[generator] proc to be written as simply as the first one, without all
that [rename] guff and the problems that go with it (such as how to
handle being unexpectedly [rename]d again by something else in between).


> In the current implementation of CallFrames the previous levels at
> creation time cannot be preserved, and that is why the coroutine runs
> at level #0. The Tcl stack portions are not relocatable, no: there
> are pointers to them being saved (e.g. by [upvar] used in child
> CallFrames, or stuff that is allocated using TclStackAlloc).

Fair enough. Makes perfect sense.


> The change you propose would imply that the Tcl stack goes, that
> CallFrames are allocated independently (and refcounted). Something
> like that may yet happen during the 8.6 cycle, I have some ideas that
> would make the cost of separate allocs disappear. But it would be
> purely an internal change, just an enabler for other things that may
> lie in the future.

I think I've read about that type of structure a couple years back,
where each "stack frame" is independent, and can form a tree or graph
layout rather than a linear monolithic stack. I can quite imagine that
would be a fairly major change...


>> I wonder whether it would be sane to add an [info lambda] command
>> that will compile and return a proc as a lambda expression...?

I still think that would be handy, btw... Essentially this:

proc info::lambda {proc} {
    set args [list]
    foreach arg [info args $proc] {
        if { [info default $proc $arg def] } {
            lappend args [list $arg $def]
        } else {
            lappend args $arg
        }
    }
    return [list $args [info body $proc]]
}

(Probably needs some [namespace] magic too...)


> I'm afraid I did not understand much of this. You say "having
> [coroutine] take a lambda instead of a proc" [it takes neither, it
> takes a command to run in the new env] would make it nestable within
> a proc [I do not understand what you mean here]".

What I was getting at there was that stuff with the "generator" proc
up above. So all sorted.

--
Fredderic

Only a stupid person doesn't judge. But only an equally stupid
person forgets the limitations of their judgements.
--- Me! circa 2005

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 61 days, 10:05)

miguel

Sep 2, 2008, 6:41:54 PM
Fredderic wrote:
> miguel <mso...@users.sf.net> wrote:
>
>> Fredderic wrote:
>>> [yield] would be a handy way to allow a [proc] to do some
>>> initialisation on first run.
>> Oh - now I understood what you mean. This would make procs a bit
>> heavier than coroutines (see below).
>
> I kind of figured that, and pretty much wiped that idea myself.
>
> My wish here would be served by the [coroutine] command being able to
> displace an existing proc or command until it completes. Having
> [coroutine] displace the current proc with an inline [apply] invocation
> (as DKF mentioned is possible), would allow exactly the semantics I was
> rather hoping would be possible. A proc could then trivially be built
> that can [yield] without having to be explicitly invoked through
> [coroutine].
>
> Though perhaps someone will come up with a neat [proc] wrapper that
> uses [rename]s to achieve the same goal... Maybe it's to do with being
> 1:42 in the morning, but it made my head spin trying to nut it out just
> now. ;(

I did not understand this section, sorry. Now my head is spinning too :P

>> As currently implemented, it would also prevent the second and later
>> calls to take more than one argument.
>
> Dare I ask, why? ;)

Yup: the argument to the coroutine is the return value of the [yield] that
suspended the previous invocation. A design decision - may be good, may be bad.
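
A tiny demonstration of that decision (hedged; the names are illustrative):

namespace path ::tcl::unsupported

coroutine echo apply {{} {
    set msg [yield]   ;# suspend; resumed with the caller's argument
    while 1 { set msg [yield "got: $msg"] }
}}
puts [echo hello]     ;# prints: got: hello
puts [echo world]     ;# prints: got: world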

>>> I presume there'll be an [info] sub-command to tell us whether
>>> we're in a [coroutine] or not...?
>> What is required for [info] is still an open question. Also a way to
>> query if a given command is a coroutine or not might be useful. Or if
>> [yield] will succeed. (as opposed to catching it). I'm waiting to see
>> what the users need.

HEAD now has ::tcl::unsupported::infoCoroutine (will become [info coroutine]
eventually) that returns the FQN of the coroutine's command (and {} when not in
a coroutine; I know, no {}-named coroutines).
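
For instance (a small sketch):

namespace path ::tcl::unsupported

set who [coroutine c apply {{} { yield [infoCoroutine] }}]
puts $who             ;# prints ::c - the FQN, as seen from inside c
puts [infoCoroutine]  ;# prints an empty line - not in a coroutine here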

Uhh? I think it might make more sense to do things the other way round, actually:
interp alias {} $proc {} ::apply $lambda
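
Spelled out, that direction looks like this (the names here are made up):

set lambda {{x} {expr {$x * 2}}}
interp alias {} double {} ::apply $lambda
puts [double 21]  ;# prints 42 - the lambda now answers as the command double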

Donal K. Fellows

Sep 3, 2008, 4:34:52 AM
Miguel Sofer wrote:
> Uhh? I think it might make more sense to do things the other way round, actually:
>    interp alias {} $proc {} ::apply $lambda

That's better actually; it's sane when you [rename] it...

Donal.

Fredderic

Sep 3, 2008, 7:43:15 AM
On Tue, 02 Sep 2008 19:41:54 -0300,
miguel <mso...@users.sf.net> wrote:

> Fredderic wrote:
>> miguel <mso...@users.sf.net> wrote:
>>> As currently implemented, it would also prevent the second and
>>> later calls to take more than one argument.
>> Dare I ask, why? ;)
> Yup: the argument to the coroutine is the return value of the [yield]
> that suspended the previous invocation. A design decision - may be
> good, may be bad.

I used to think, wrongly, that [coroutine]'s command was always going to
be a proc, and hence you could pass in arguments and have them update
the corresponding variables with each resume. But I think I'm better
now. ;)

Okay, so you give an argument to the coroutine and it pops out as the
return value of [yield]. Sounds about perfect to me. Just one
question: is it just one word, or a list of the words given to the
coroutine...?


>>>> I presume there'll be an [info] sub-command to tell us whether
>>>> we're in a [coroutine] or not...?
>>> What is required for [info] is still an open question. Also a way
>>> to query if a given command is a coroutine or not might be useful.
>>> Or if [yield] will succeed. (as opposed to catching it). I'm
>>> waiting to see what the users need.
> HEAD now has ::tcl::unsupported::infoCoroutine (will become [info
> coroutine] eventually) that returns the FQN of the coroutine's
> command (and {} when not in a coroutine; I know, no {}-named
> coroutines).

Not necessarily... A list containing an empty string is different from
an empty list.


I'm not 100% sure this isn't being discussed in the core group... But I
think it was said that the coroutine runs at global level (#0)...? Is
that a design decision, or a practical limitation? I'm asking because
since you can pass in arguments to the coroutine (a Good Thing!), how is
[upvar] going to react? For example in this type of construct:

coroutine coincr apply {{} {
    while { 1 } {
        uplevel 1 incr [yield $count] [incr count]
    }
}}

will doing [coincr fuya] work? (Ignoring for the moment whether the
return value of [yield] is a single word or an args-style list of
arguments.)


>>>> I wonder whether it would be sane to add an [info lambda] command
>>>> that will compile and return a proc as a lambda expression...?
>> I still think that would be handy, btw... Essentially this:
>> proc info::lambda {proc} {
>>     set args [list]
>>     foreach arg [info args $proc] {
>>         if { [info default $proc $arg def] } {
>>             lappend args [list $arg $def]
>>         } else {
>>             lappend args $arg
>>         }
>>     }
>>     return [list $args [info body $proc]]
>> }
>> (Probably needs some [namespace] magic too...)
> Uhh? I think it might make more sense to do things the other way
> round, actually: interp alias {} $proc {} ::apply $lambda

That's pretty off topic anyhow, really, it was more a musing because
[info args] is half-way useful, but [info default] is just a pain in the
proverbial buttocks. So I was thinking it'd be nice to have an [info]
command to complement [info proc], [info body], [info args], and [info
default] by returning a fully-built lambda, retaining the bytecompiled
version of the proc if it exists (assuming doing so is even possible).

Back before I was enlightened (ie. when I wrote that last post) it
seemed a little more useful than it does now. Still, trying to
construct a lambda that matches an existing proc is still a pain in
those buttocks I mentioned just before. [info lambda] could potentially
share everything with the proc it originally came from making it
extremely light-weight and avoiding the need to re-compile it all over
again and again and again. Where it might still be useful, though, is
making it really easy to pass a proc to another thread...


--
Fredderic

Elephant = mouse built to government specs

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 62 days, 2:56)

miguel

Sep 3, 2008, 8:56:30 AM
Fredderic wrote:
> On Tue, 02 Sep 2008 19:41:54 -0300, miguel <mso...@users.sf.net> wrote:
>
>> Fredderic wrote:
>>> miguel <mso...@users.sf.net> wrote:
>>>> As currently implemented, it would also prevent the second and later
>>>> calls to take more than one argument.
>>> Dare I ask, why? ;)
>> Yup: the argument to the coroutine is the return value of the [yield] that
>> suspended the previous invocation. A design decision - may be good, may be
>> bad.
>
> I used to think wrongly that [coroutine]s command was always going to be a
> proc, and hence you could pass in arguments and have them update the
> corresponding variables with each resume. But I think I'm better now. ;)
>
> Okay, so you give an argument to the coroutine and it pops out as the return
> value of [yield]. Sounds about perfect to me. Just one question; is it just
> one word, or a list of the words given to the coroutine...?

Just one ... now, but it could change: I need to hear arguments pro and con.
What I came up with is:

(a) if 1, caller has to [list] to pass several things in there; if n, receiver
has to [lindex [yield] 0] when it expects just one. I somehow expect that the
most usual cases will be 0 (return value {} ignored), then 1, then n. The case 0
is indifferent between the alternatives, of course. Depends on the use cases
really ...

(b) 1 was much easier to code, and suitable for at least the experimental
version, and I had already so much s**t to get right ...
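
To make the trade-off concrete, here is a runnable sketch of the
1-value convention as it stands (invented names):

# the receiver unpacks the single resume value as a list ...
coroutine pair apply {{} {
    set in [yield]
    while 1 {
        lassign $in x y
        set in [yield [expr {$x + $y}]]
    }
}}

# ... and the caller has to [list] things together
pair [list 3 4]   ;# -> 7
pair [list 1 2]   ;# -> 3

Under the n-value alternative the [list]/[lassign] pair would simply
disappear from both ends.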


>>>>> I presume there'll be an [info] sub-command to tell us whether we're
>>>>> in a [coroutine] or not...?
>>>> What is required for [info] is still an open question. Also a way to
>>>> query if a given command is a coroutine or not might be useful. Or if
>>>> [yield] will succeed. (as opposed to catching it). I'm waiting to see
>>>> what the users need.
>> HEAD now has ::tcl::unsupported::infoCoroutine (will become [info
>> coroutine] eventually) that returns the FQN of the coroutine's command (and
>> {} when not in a coroutine; I know, no {}-named coroutines).
>
> Not necessarily... A list containing an empty string is different from an
> empty list.
>
> I'm not 100% sure this isn't being discussed in the core group... But I
> think it was said that the coroutine runs at global level (#0)...? Is that a
> design decision, or a practical limitation? I'm asking because you
> can pass in arguments to the coroutine (a Good Thing!), so how is [upvar] going
> to react? For example in this type of construct;
>
> coroutine coincr apply {{} { while { 1 } { uplevel 1 incr [yield $count]
> [incr count] } }}
>
> will doing [coincr fuya] work? (Ignoring for the moment whether the return
> value of [yield] is a single word or an args-style list of arguments.)

Try it out ...: you'll get an "undefined variable count" error on the first
yield. I do not quite understand what you're trying to do though.

As for the coro running in #0 and having no access to its caller's scope: it is
a limitation dictated

(a) by economy of implementation, doing anything else requires deep modification
of the whole CallFrame concept in Tcl's guts (from stack to tree). May be done
in the future, but it is a relatively largeish thing.

(b) by analogy: as one of the main uses of coroutines will be (I expect, may be
wrong) as event callbacks, this does precisely the same thing we've been doing
forever. Callback scripts run in #0.

(c) for semantic simplicity, the alternative may have some sharp edges that need
to be thought out carefully. This one I understand fully [famous last words]

Fredderic

unread,
Sep 3, 2008, 1:59:46 PM9/3/08
to
On Wed, 03 Sep 2008 09:56:30 -0300,
miguel <mso...@users.sf.net> wrote:

> Fredderic wrote:


>> On Tue, 02 Sep 2008 19:41:54 -0300, miguel wrote:
>>> Yup: the argument to the coroutine is the return value of the
>>> [yield] that suspended the previous invocation. A design decision
>>> - may be good, may be bad.

>> Okay, so you give an argument to the coroutine and it pops out as
>> the return value of [yield]. Sounds about perfect to me. Just one
>> question; is it just one word, or a list of the words given to the
>> coroutine...?
> Just one ... now, but it could change: I need to hear arguments pro
> and con. What I came up with is:
>
> (a) if 1, caller has to [list] to pass several things in there; if n,
> receiver has to [lindex [yield] 0] when it expects just one. I
> somehow expect that the most usual cases will be 0 (return value {}
> ignored), then 1, then n. The case 0 is indifferent between the
> alternatives, of course. Depends on the use cases really ...

I'm looking at it from the least WTFish perspective... Commands don't
generally expect you to wrap up all their arguments into a list for
them. It just doesn't seem natural. heh

Besides, if you care about the arguments, you're going to be doing a
whole lot more than just [lindex ... 0], and it'll give me an excuse to
start another round of arguments for {n}$list being equivalent to
[lindex $list n]... ;)

But seriously, constraining the coroutine to one argument means it
simply can't be used where a regular command taking more than one
argument would be, without wrapping it in yet another proc (even
[interp alias] can't fix that one!). Spelled out, the wrapping I mean
looks like this (a sketch, names invented):
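
# the coroutine only ever sees one value per resume ...
coroutine sum apply {{} {
    set in [yield]
    while 1 {
        set in [yield [tcl::mathop::+ {*}$in]]
    }
}}

# ... so a shim proc is needed anywhere a 3-argument command is expected
proc sum3 {a b c} {
    sum [list $a $b $c]
}

sum3 1 2 3   ;# -> 6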


> (b) 1 was much easier to code, and suitable for at least the
> experimental version, and I had already so much s**t to get right ...

heh Now that I can understand. Though I can't imagine it being more
than a few lines...

I can't get at the API docs right now, but there _must_ be a function to
TclList-ify an array (or portion thereof) of Tcl_Obj's...!


>> I think it was said that the coroutine runs at global level
>> (#0)...? Is that a design decision, or a practical limitation?
>> I'm asking because you can pass in arguments to the coroutine
>> (a Good Thing!), so how is [upvar] going to react? For example in
>> this type of construct;
>> coroutine coincr apply {{} { while { 1 } { uplevel 1 incr [yield
>> $count] [incr count] } }}
>> will doing [coincr fuya] work? (Ignoring for the moment whether
>> the return value of [yield] is a single word or an args-style list
>> of arguments.)
> Try it out ...: you'll get an "undefined variable count" error on
> the first yield. I do not quite understand what you're trying to do
> though.

I'm still using the standard 8.5.3 as provided by the Debian repos...
It's not quite here yet. On the up-side, this is how I see it as
someone relatively new to the implementation, who hasn't been working
on it for months or otherwise knows it backwards and already knows what
not to get confused about. ;)

It's a contrived example, but basically it _should_ increment the first
variable passed by 1, then the next one by 2, then the third by 3,
etc. It's inspired by a piece of code I've seen used that searches for
the next unused index in a series. Using [coroutine] it could be
written as a simple never-ending loop that runs until it hits an unused
index and then [yield]s the index it found. Not entirely sure that the
[coroutine] environment is lighter than a global variable, but that's a
permitted oversight for contrived examples. ;)
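
Roughly what I have in mind, as a contrived but runnable sketch (the
names and the global used() array are invented):

coroutine nextIndex apply {{} {
    yield   ;# creation returns here, before any searching happens
    set i 0
    while 1 {
        incr i
        if {![info exists ::used($i)]} {
            yield $i
        }
    }
}}

array set ::used {2 taken}
nextIndex   ;# -> 1
nextIndex   ;# -> 3 (skips the used index)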

One place where I'd seriously like to use [coroutine] in my own toys,
when it finally makes it through to Debian's repo, unfortunately makes
fairly heavy use of [upvar], essentially dragging a couple of variables
right through practically the entire proc call stack, and needs those
changes to propagate backwards. It's fixable, but ugly.


> As for the coro running in #0 and having no access to its caller's
> scope: it is a limitation dictated
>
> (a) by economy of implementation, doing anything else requires deep
> modification of the whole CallFrame concept in Tcl's guts (from stack
> to tree). May be done in the future, but it is a relatively largeish
> thing.

I was afraid you'd say that. :(

I wonder, any way of allowing [upvar] to reach between environments?
I'm thinking an [info] command that returns an identifier for the
current coroutine environment which can be handed to [upvar] as a level
identifier, and a means to find out the identifier of the previous
environment. (Real references would be better, but that's a
frightening place we've already spent a looooong time at... ;) )


> (b) by analogy: as one of the main uses of coroutines will be (I
> expect, may be wrong) as event callbacks, this does precisely the
> same thing we've been doing forever. Callback scripts run in #0.

Yes, but those things are generally separated by time, so it makes sense
that locals up in the call stack wouldn't be available anymore. There
may well not be a call stack to look up through.

Coroutines aren't. The call stack still exists; your [yield] returns
to it. Even so, I've hashed this problem out before too: how to
provide access to a variable an unknown number of levels down the call
stack, without the need for any special magic. (My example was a
command whose implementation had grown and was now several proc calls
deep, passing only a simple value to another function which was written
to expect a variable exactly two levels below it; both functions were
old code and their semantics were locked in by other code elsewhere.)

I wonder if that's the key, then. If [upvar] is being asked to reach
below #0, check whether it's a coroutine, and add a little machinery to
be able to reach backwards. At least until something better comes
along. I'm pretty sure [upvar] (and [uplevel] too) already do that
check for error generation.


> (c) for semantic simplicity, the alternative may have some sharp
> edges that need to be thought out carefuly. This one I understand
> fully [famous last words]

hehehe Well, that's why I'm dragging up issues I can see BEFORE it
hits the masses (including myself). DKF got all grumpy at me for not
speaking up about how crummy [dict]'s commands were while it was being
worked on. So this time I'm making a point of going through a new
feature I'm most certainly going to be using.

--
Fredderic

That's odd. That's very odd. Wouldn't you say that's very odd?

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 62 days, 12:30)

miguel

unread,
Sep 3, 2008, 2:49:02 PM9/3/08
to

No no, you fail to understand: there is "something" below #0 for the coroutine,
but it might be completely different at each call! If you create the coroutine
in some wrapper proc, for example, the previous caller does not even exist when
you resume. Consider [spawn] in http://wiki.tcl.tk/21532

So: either the coroutine has to rebind all upvared variables at each resume
(can't!), or has them linked to bog knows what (can't either).

Hence: communicate with coroutines via namespace/global vars.
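
A minimal sketch of that style (invented names):

namespace eval demo {variable shared 0}

coroutine demo::worker apply {{} {
    while 1 {
        yield
        # read whatever the caller left in the shared variable
        puts "worker sees shared = $::demo::shared"
    }
}}

set ::demo::shared 42
demo::worker   ;# prints: worker sees shared = 42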

Neil Madden

unread,
Sep 3, 2008, 3:19:33 PM9/3/08
to
Fredderic wrote:
> miguel <mso...@users.sf.net> wrote:
>
>> Fredderic wrote:
>>> [yield] would be a handy way to allow a [proc] to do some
>>> initialisation on first run.
>> Oh - now I understood what you mean. This would make procs a bit
>> heavier than coroutines (see below).
>
> I kind of figured that, and pretty much wiped that idea myself.
>
> My wish here would be served by the [coroutine] command being able to
> displace an existing proc or command until it completes. Having
> [coroutine] displace the current proc with an inline [apply] invocation
> (as DKF mentioned is possible), would allow exactly the semantics I was
> rather hoping would be possible. A proc could then trivially be built
> that can [yield] without having to be explicitly invoked through
> [coroutine].

A limitation of this approach is that you couldn't have 2 or more
simultaneous coroutine instances based on the same command. Another
difficulty is that the original proc may take different arguments than
the resume command. You should be able to write something like this if
you want, though:

proc coproc {name params body} {
    set f [list $params $body]
    interp alias {} $name {} coproc:spawn $name $f
}

proc coproc:spawn {name f args} {
    coroutine $name ::apply $f {*}$args
}

(restoring the original proc left as an exercise).
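
Hypothetical usage of the above (untested; it assumes [coroutine]
quietly replaces the alias of the same name when it creates the
coroutine command):

coproc counter {start} {
    while 1 {
        yield [incr start]
    }
}

counter 10   ;# first call spawns the coroutine, runs to the first [yield]: 11
counter      ;# later calls resume it: 12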

>> As currently implemented, it would also prevent the second and later
>> calls to take more than one argument.
>
> Dare I ask, why? ;)

If you think about how the mechanism works, a coroutine suspends from a
yield, and is later resumed from that place. Any arguments to the resume
become the result of the yield as far as the coroutine is concerned:

set next [yield $val]

Therefore, if the resume command could take multiple arguments then it
would be impossible to distinguish between multiple separate arguments
and a list. You could maybe extend yield, something like:

yield $val here are the resume args

but I think the original is sufficient.
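
For instance, the full round trip (an illustrative sketch):

coroutine accum apply {{} {
    set total 0
    while 1 {
        # the argument of the next [accum n] call pops out of [yield]
        set n [yield $total]
        incr total $n
    }
}}
# creation ran the body up to the first [yield], returning 0

accum 5   ;# -> 5
accum 3   ;# -> 8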

[...]


>
> My focus on this issue, btw, is mostly in the interest of allowing
> coroutines to be used as an implementation detail in the construction
> of a proc, rather than a project-wide design decision from the outset.
> I think _if_ it can be done without too much effort, it would be a
> fantastically handy way to do the first-run initialisation thing.

Coroutines should be possible to use as an implementation detail, as in
my worked example converting a simple synchronous program to use the
event loop: http://wiki.tcl.tk/21532

-- Neil

Fredderic

unread,
Sep 4, 2008, 3:19:33 AM9/4/08
to
On Wed, 03 Sep 2008 15:49:02 -0300,
miguel <mso...@users.sf.net> wrote:

> Fredderic wrote:
>> I wonder, any way of allowing [upvar] to reach between environments?
>> I'm thinking an [info] command that returns an identifier for the
>> current coroutine environment which can be handed to [upvar] as a
>> level identifier, and a means to find out the identifier of the
>> previous environment. (Real references would be better, but that's
>> a frightening place we've already spent a looooong time at... ;) )
>>> (b) by analogy: as one of the main uses of coroutines will be (I
>>> expect, may be wrong) as event callbacks, this does precisely the
>>> same thing we've been doing forever. Callback scripts run in #0.

>> The call stack still exists, your [yield] returns to it. I wonder


>> if that's the key, then. If [upvar] is being asked to reach below
>> #0, check whether it's a coroutine, and add a little machinery to be
>> able to reach backwards. At least until something better comes
>> along. I'm pretty sure [upvar] (and [uplevel] too) already do that
>> check for error generation.
> No no, you fail to understand: there is "something" below #0 for the
> coroutine, but it might be completely different at each call!

Actually, that's exactly what I did understand. If the argument(s) to
the coroutine are being fed through the [yield], then for [upvar] or
[uplevel] to work with it, it needs to link to whatever is below #0
right now, with the understanding that what's there now may be totally
different to what's there after the next [yield].


> So: either the coroutine has to rebind all upvared variables at each
> resume (can't!), or has them linked to bog knows what (can't either).

Or just let them go; letting the [upvar]d variables disappear on the
next [yield] would be better than nothing, and could be accomplished
with something resembling an unset trace? And I totally agree that
rebinding every variable every time we resume the coroutine is a rather
sucky idea.

Just trying to get a handle on the problem for my own understanding; is
there any difference between a local variable, and an alias constructed
by [upvar]...? I'm assuming from what you wrote that a variable is
essentially a named pointer to a pointer to the value. [upvar] would
then create a new named pointer to that same pointer to the value. So
the breakdown would happen if that stack frame went away while we
weren't looking. Or am I off with the fairies somewhere, again? ;)

Failing all that, [uplevel] shouldn't suffer from any of those issues,
and could even be used to fudge an inter-coroutine [upvar], especially
if the #level syntax can be extended to address a level within another
coroutine (most likely counting from its #0 upwards). It may even be an
acceptable interim hack to add such fudging into [upvar] to make it
look like it works for consistency's sake, albeit painfully slowly.

Just food for insanity... ;)


--
Fredderic

Don't panic.

Parents of young organic lifeforms are warned that towels can be
harmful... if swallowed in large quantities.
-- Vague reference to HHGTTG

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 63 days, 1:25)

Fredderic

unread,
Sep 4, 2008, 3:40:49 AM9/4/08
to
On Wed, 03 Sep 2008 20:19:33 +0100,
Neil Madden <n...@cs.nott.ac.uk> wrote:

> Fredderic wrote:
>> miguel <mso...@users.sf.net> wrote:
>>> Fredderic wrote:
>>>> [yield] would be a handy way to allow a [proc] to do some
>>>> initialisation on first run.
>>> Oh - now I understood what you mean. This would make procs a bit
>>> heavier than coroutines (see below).
>> I kind of figured that, and pretty much wiped that idea myself.
>> My wish here would be served by the [coroutine] command being able
>> to displace an existing proc or command until it completes. Having
>> [coroutine] displace the current proc with an inline [apply]
>> invocation (as DKF mentioned is possible), would allow exactly the
>> semantics I was rather hoping would be possible. A proc could then
>> trivially be built that can [yield] without having to be explicitly
>> invoked through [coroutine].
> A limitation of this approach is that you couldn't have 2 or more
> simultaneous coroutine instances based on the same command.

Not at all... The musings I muttered on coroutines ages ago over on
the wiki were along these lines (with the unenlightened assumption that
[yield] would only affect the proc it was invoked from). In those
musings a [coroutine] command would be used to explicitly create a
simultaneous instance under a new name, where the original command
would simply resume the default one. No reason the same couldn't be
applied here.


> Another difficulty is that the original proc may take different
> arguments than the resume command.

Actually that came out rather cleanly in the wash. The original proc
takes initialisation arguments, while the resumed coroutine gets any
arguments via the return value from [yield]. Unless it takes specific
action, those arguments will be ignored in the resuming case anyhow.


>>> As currently implemented, it would also prevent the second and
>>> later calls to take more than one argument.
>> Dare I ask, why? ;)
> If you think about how the mechanism works, a coroutine suspends from
> a yield, and is later resumed from that place. Any arguments to the
> resume become the result of the yield as far as the coroutine is
> concerned: set next [yield $val]

I believe I just said that as the answer to your prior question. ;)


> Therefore, if the resume command could take multiple arguments then
> it would be impossible to distinguish between multiple separate
> arguments and a list. You could maybe extend yield, something like:

No. You simply ALWAYS return a list, à la [proc]'s special "args"
argument. It might be slightly inconvenient, but it's an awful lot
more convenient than not being able to do it at all. There are other
possibilities also, such as allowing [yield] to take an arguments list
after the return value. A bit odd, but it's an idea if you're
desperate. ;)

Besides, I'd still love to see {n}$list be synonymous with [lindex
$list n], which would make that issue a whole lot less annoying. ;)


>> My focus on this issue, btw, is mostly in the interest of allowing
>> coroutines to be used as an implementation detail in the
>> construction of a proc, rather than a project-wide design decision
>> from the outset. I think _if_ it can be done without too much
>> effort, it would be a fantastically handy way to do the first-run
>> initialisation thing.
> Coroutines should be possible to use as an implementation detail, as
> in my worked example converting a simple synchronous program to use
> the event loop: http://wiki.tcl.tk/21532

It gets a little more complicated when you're not writing the entire
application from scratch. When you're trying to retro-fit new ideas
into old code (which I tend to do a lot of), things get a little more
interesting.


--
Fredderic

Longhorn error#4711: TCPA / NGSCB VIOLATION: Microsoft optical mouse
detected penguin patterns on mousepad. Partition scan in progress to
remove offending incompatible products. Reactivate your MS software.

Debian/unstable (LC#384816) on i686 2.6.23-z2 2007 (up 63 days, 2:39)

Neil Madden

unread,
Sep 4, 2008, 6:15:56 AM9/4/08
to
Fredderic wrote:
> On Wed, 03 Sep 2008 20:19:33 +0100,
> Neil Madden <n...@cs.nott.ac.uk> wrote:
>
>> Fredderic wrote:
>>> miguel <mso...@users.sf.net> wrote:
>>>> Fredderic wrote:
>>>>> [yield] would be a handy way to allow a [proc] to do some
>>>>> initialisation on first run.
>>>> Oh - now I understood what you mean. This would make procs a bit
>>>> heavier than coroutines (see below).
>>> I kind of figured that, and pretty much wiped that idea myself.
>>> My wish here would be served by the [coroutine] command being able
>>> to displace an existing proc or command until it completes. Having
>>> [coroutine] displace the current proc with an inline [apply]
>>> invocation (as DKF mentioned is possible), would allow exactly the
>>> semantics I was rather hoping would be possible. A proc could then
>>> trivially be built that can [yield] without having to be explicitly
>>> invoked through [coroutine].
>> A limitation of this approach is that you couldn't have 2 or more
>> simultaneous coroutine instances based on the same command.
>
> Not at all... The musings I muttered on coroutines ages ago over on
> the wiki were along these lines (with the unenlightened assumption that
> [yield] would only affect the proc it was invoked from).

Link?

> In those
> musings a [coroutine] command would be used to explicitly create a
> simultaneous instance under a new name, where the original command
> would simply resume the default one. No reason the same couldn't be
> applied here.

If the original command has been replaced by a default coroutine, how do
you construct other coroutines based on it?

>> Another difficulty is that the original proc may take different
>> arguments than the resume command.
>
> Actually that came out rather cleanly in the wash. The original proc
> takes initialisation arguments, while the resumed coroutine gets any
> arguments via the return value from [yield]. Unless it takes specific
> action, those arguments will be ignored in the resuming case anyhow.

Two cases:

1. The resume command takes only a single argument whereas the original
proc takes more than one, in which case anybody trying to call the
original will get an error.
2. The resume command takes multiple arguments (the coroutine expects
more than one) whereas the original proc takes only one argument, which
will again result in an error for callers of the original proc.

The point is that the original proc and the coroutine resume command can
have wildly different signatures and expectations. Dynamically replacing
one with the other is likely to create incompatibilities in most cases.
What does it achieve?

>
>
>>>> As currently implemented, it would also prevent the second and
>>>> later calls to take more than one argument.
>>> Dare I ask, why? ;)
>> If you think about how the mechanism works, a coroutine suspends from
>> a yield, and is later resumed from that place. Any arguments to the
>> resume become the result of the yield as far as the coroutine is
>> concerned: set next [yield $val]
>
> I believe I just said that as the answer to your prior question. ;)
>
>
>> Therefore, if the resume command could take multiple arguments then
>> it would be impossible to distinguish between multiple separate
>> arguments and a list. You could maybe extend yield, something like:
>
> No. You simply ALWAYS return a list, à la [proc]'s special "args"
> argument. It might be slightly inconvenient, but it's an awful lot
> more convenient than not being able to do it at all. There are other
> possibilities also, such as allowing [yield] to take an arguments list
> after the return value. A bit odd, but it's an idea if you're
> desperate. ;)

The alternative is to leave things as they are and pass a list when you
want to pass multiple values. That has the benefit of symmetry.

>
>>> My focus on this issue, btw, is mostly in the interest of allowing
>>> coroutines to be used as an implementation detail in the
>>> construction of a proc, rather than a project-wide design decision
>>> from the outset. I think _if_ it can be done without too much
>>> effort, it would be a fantastically handy way to do the first-run
>>> initialisation thing.
>> Coroutines should be possible to use as an implementation detail, as
>> in my worked example converting a simple synchronous program to use
>> the event loop: http://wiki.tcl.tk/21532
>
> It gets a little more complicated when you're not writing the entire
> application from scratch. When you're trying to retro-fit new ideas
> into old code (which I tend to do a lot of), things get a little more
> interesting.

Did you read the page? The entire point of the example is that you can
rewrite the implementation without touching the main logic. I.e., using
coroutines makes it *easier* to "retrofit new ideas into old code".

-- Neil

miguel

unread,
Sep 4, 2008, 8:08:09 AM9/4/08
to
Fredderic wrote:
>> So: either the coroutine has to rebind all upvared variables at each resume
>> (can't!), or has them linked to bog knows what (can't either).
>
> Or just let them go; letting the [upvar]d variables disappear on the next
> [yield] would be better than nothing, and could be accomplished with
> something resembling an unset trace? And I totally agree that rebinding
> every variable every time we resume the coroutine is a rather sucky idea.
>
> Just trying to get a handle on the problem for my own understanding; is there
> any difference between a local variable, and an alias constructed by
> [upvar]...? I'm assuming from what you wrote that a variable is essentially
> a named pointer to a pointer to the value. [upvar] would then create a new
> named pointer to that same pointer to the value. So the breakdown would
> happen if that stack frame went away while we weren't looking. Or am I off
> with the fairies somewhere, again? ;)

Something like that, yes. That is the reason why [upvar] as implemented will not
let you create links as follows:

% proc fail {} {upvar 0 foo ::foo}
% fail
bad variable name "::foo": upvar won't create namespace variable that refers to
procedure variable

As long as things are in the CallFrame stack the deletion order guarantees that
things disappear sanely, that we never get a "dangling pointer".

> Failing all that, [uplevel] shouldn't suffer from any of those issues, and
> could even be used to fudge an inter-coroutine [upvar], especially if the
> #level syntax can be extended to address a level within another coroutine
> (most likely counting from its #0 upwards). It may even be an acceptable
> interim hack to add such fudging into [upvar] to make it look like it works
> for consistency's sake, albeit painfully slowly.

Oh - this is not "fudging upvar/uplevel", this is redesigning the whole concept
of levels in Tcl.

BTW: currently yield does nothing but stop execution and shift some structures
out of the way. It does not traverse the variable list to decide what to do with
upvars or anything.