
Ann: Stackless Limbo Dancing Works Fine!


Christian Tismer

May 17, 2002, 9:30:41 PM
Announcement:
Stackless Python At Its Best, Ever.

18-May-02: Limbo Dancing works fine! Some more weeks of thinking, and
a few days of implementation. Now the first implementation of channels
is available. It isn't complete, but it works just fine. The stackless
module now supports creating channels via the channel() function.
A channel has the methods send() and receive(). Try it yourself! You
can either build your own Stackless by checking out the module
stackless from :pserver:anon...@tismer.com:/home/cvs, or, for Windows
users, just take the pre-built binary, extract it into some directory,
and start Python or PythonWin from there. I will certainly build an
installer again once I have received some feedback.

This implementation tries to follow the concepts of the OCCAM/ALEF/LIMBO
languages by supporting the channel concept as the central
communication/blocking/control feature. Tasks do not communicate
directly, but through channels.
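
A rough usage sketch (the names below follow the later tasklet/run
spelling; this alpha has no scheduler yet, so the details may differ):

import stackless

def pinger(ch):
    for i in range(3):
        ch.send(i)                         # blocks until a receiver is ready

def ponger(ch):
    for _ in range(3):
        print("received", ch.receive())    # blocks until a sender is ready

ch = stackless.channel()
stackless.tasklet(pinger)(ch)
stackless.tasklet(ponger)(ch)
stackless.run()                            # run tasklets until done or blocked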

You might have guessed it: Stackless will finally try to implement/support
Hoare's CSP. But first, I have to understand more of it. This
will happen over the next few weeks.

This release is an alpha release. There are most probably a lot of
omissions. Some are obvious:

- There isn't a scheduler yet, like the one the uthread module had.
This is trivial to implement and will be added in almost no time.

- There isn't yet an ALT/PRI ALT construct like in OCCAM/ALEF/LIMBO.
This is the most sophisticated part and very hard to implement correctly!
I will try to find the best way to do this during my stay at IronPort,
from May 18th to May 29th. If somebody from the US would like to contact
me by phone, please take the opportunity and call me via IronPort. I am
employed by this company.

- You shouldn't use channels from the interpreter prompt, but only in
scripts. The machinery doesn't check for changed interactive
sessions yet and will most likely crash soon. Despite that, it seems
to work nicely with PythonWin and extensions.
--
Christian Tismer :^) <mailto:tis...@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/


Fernando Pereira

May 18, 2002, 11:27:21 AM
On 5/17/02 9:30 PM, in article mailman.102168543...@python.org,
"Christian Tismer" <tis...@tismer.com> wrote:
> - There isn't yet an ALT/PRI ALT construct like in OCCAM/ALEF/LIMBO.
> This is most sophisticated and very hard to implement correctly! I will
> try to find out how to do this the best way, during my stay at IronPort,
> from May 18th to May 29.
I was once involved in designing a channel facility with an ALT-equivalent,
and I found the following paper very useful:

Luca Cardelli. An implementation model of rendezvous communication. In S. D.
Brookes, A. W. Roscoe, and G. Winskel (eds.), Seminar on Concurrency,
Carnegie-Mellon University, Pittsburgh, PA, July 1984. Lecture Notes in
Computer Science, Vol. 197, Springer-Verlag, 1985, ISBN 3-540-15670-4,
pp. 449-457.
<http://research.microsoft.com/Users/luca/Papers/Rendezvous.pdf>

-- F

Christian Tismer

May 19, 2002, 5:01:24 AM

Thank you very much for this link. I started to read it
(but it is hard -- I'm deadly jet-lagged) and will try
to get my brain around it tomorrow.

At first glance, I'm a bit stunned. Did you see my channel
implementation? It is quite straightforward and performs
fine. But the principles appear to be upside down from
those of the paper. My channels have a single queue of
waiting tasks, and this queue is either for reading or
for writing.
Currently I have a hard time understanding why there can
be both an output and an input pool. My channels always
fulfil whatever is possible, so only one pool would
be populated at any time.
I think I did something quite similar to their rendezvous
operation.

It is interesting that they don't even treat processes as
first-class objects. After introducing channels, I also
found my tasklets less useful than before, and maybe
they will survive only in a shadowy existence.

Anyway, what I haven't really grasped yet is how to
implement PAR as a multiple select on a group
of channels. Isn't this group of channels like
a special object that will try to figure out a
ready channel with almost no computation time,
and continue the waiting process ASAP?
And if so, what happens afterwards? Is the
channel group thrown away? This looks quite
expensive. I'm expecting to wait on, say, 10000
channels over and over. This implies creating
such objects quite seldom and re-using them as
long as possible, switching some off using guards.

Maybe I'm completely on the wrong track? I'd
really be interested to read about your
implementation. Anyway, I hope to have a better
understanding of the article tomorrow when the
jet-lag has worn off a little.

thanks so much for your support, I really need to
learn more, soon!

cheers - chris

Fernando Pereira

May 19, 2002, 12:06:51 PM

I've not looked at this stuff since the mid-80s, but my recollection is that
the two pools are needed because task scheduling in Cardelli's scheme is
non-preemptive, independent of readiness to perform a communication on a
channel. That is, the actual rendezvous may occur after both the reading and
writing tasks are ready to communicate.


>
> Interesting is that they don't even take processes as
> first-class objects. After introducin channels, I also
> found my tasklets less useful than before, and maybe
> they will survive only in a shadowy existance.

This was designed in the context of languages and formalisms like CSP and
Squeak (Cardelli and Pike's Squeak), in which tasks are not first-class
objects. In the introduction he says "processes are not denotable values".


>
> Anyway, what I didn't grasp really yet is how to
> implement PAR as a multiple select on a group
> of channels.

I'm not sure why you would need this, but maybe I've just been away from
this stuff too long. PAR just puts a bunch of tasks in the runnable pool.
When any of them wants to communicate, it suspends in the appropriate
channel queue, waiting for a matching communication request (the matching
request may already be available, in which case the scheduler is free to do
the rendezvous and move the satisfied tasks to the runnable pool). In other
words, I don't understand why PAR would need to create a collection from the
channels used by the PARed tasks.


> Maybe I'm completely on the wrong track? I'd
> really be interested to read about your
> implementation.

I wrote a design document (which I may not have any longer, I'll look), but
it was never implemented because the resources were not available.


> thanks so much for your support, I really need to
> learn more, soon!

Welcome, and I hope Stackless prospers as it deserves.

-- F

PS. For a different but intriguing approach to communication in an OO
framework, check out Cardelli's recent paper on concurrency abstractions for
C#.

http://research.microsoft.com/Users/luca/Papers/Polyphony%20ECOOP.A4.pdf

Andrew Henshaw

May 20, 2002, 8:48:06 AM
Christian Tismer wrote:
...snip...
>
>
> /**********************************************************
>
> The central functions of the channel concept.
> A tasklet can either send or receive on a channel.
> A channel has a queue of waiting tasklets.
> They are either all waiting to send or all
> waiting to receive.
> Initially, a channel is in a neutral state.
> The queue is empty, there is no way to
> send or receive without becoming blocked.
>
> Sending 1):
> A tasklet wants to send and there is
> a queued receiving tasklet. The sender puts
> its data into the receiver, unblocks it,
> and inserts it at the top of the runnables.
> The receiver is scheduled.
> Sending 2):
> A tasklet wants to send and there is
> no queued receiving tasklet.
> The sender will become blocked and inserted
> into the queue. The next receiver will
> handle the rest through "Receiving 1)".
> Receiving 1):
> A tasklet wants to receive and there is
> a queued sending tasklet. The receiver takes
> its data from the sender, unblocks it,
> and inserts it at the end of the runnables.
> The receiver continues with no switch.
> Receiving 2):
> A tasklet wants to receive and there is
> no queued sending tasklet.
> The receiver will become blocked and inserted
> into the queue. The next sender will
> handle the rest through "Sending 1)".
> */
>
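
To make the four cases above concrete, here is a toy pure-Python model of
that policy: generators stand in for tasklets and a plain deque serves as
the list of runnables. None of these names are the real Stackless
internals; it is only a sketch of the scheduling rules in the comment.

from collections import deque

SEND, RECV = "send", "recv"

class Channel:
    def __init__(self):
        self.queue = deque()   # blocked tasklets: all senders or all receivers
        self.balance = 0       # > 0: waiting senders, < 0: waiting receivers

def schedule(*tasklets):
    runnables = deque((t, None) for t in tasklets)
    while runnables:
        task, value = runnables.popleft()
        try:
            op = task.send(value)      # run until the tasklet yields a request
        except StopIteration:
            continue                   # tasklet finished
        if op[0] == SEND:
            _, ch, data = op
            if ch.balance < 0:         # Sending 1): a receiver is queued
                receiver = ch.queue.popleft()
                ch.balance += 1
                runnables.appendleft((task, None))      # sender resumes later
                runnables.appendleft((receiver, data))  # receiver runs next
            else:                      # Sending 2): block the sender
                ch.queue.append((task, data))
                ch.balance += 1
        else:                          # RECV
            _, ch = op
            if ch.balance > 0:         # Receiving 1): a sender is queued
                sender, data = ch.queue.popleft()
                ch.balance -= 1
                runnables.append((sender, None))    # unblocked sender at the end
                runnables.appendleft((task, data))  # receiver continues, no switch
            else:                      # Receiving 2): block the receiver
                ch.queue.append(task)
                ch.balance -= 1

def producer(ch):
    for i in range(3):
        yield (SEND, ch, i)

def consumer(ch):
    while True:
        print("got", (yield (RECV, ch)))

ch = Channel()
schedule(producer(ch), consumer(ch))   # prints: got 0, got 1, got 2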

Just so that I am clear -- in your concept of channels, can channels be
shared among multiple, parallel, sending processes? What about multiple,
parallel, receiving processes? In other words are your channels similar to
the Queue.Queue object?

The reason that I ask is that this is different from Occam's behavior (and
I believe CSP's). In those systems, a channel should be assigned to at
most one concurrent sender and receiver. If you need to merge the output
of multiple processes, then an ALT construct would be used to arbitrate
among the individual channels. Under Occam, sending to the first available
receiver is more complex than in CSP, as it doesn't have ALT output guards
(difficult to implement on distributed processors, I believe); so, I'd
follow the CSP model here.

I suspect you already know all of this, but I thought I would mention it in
case there is some advantage to you in (conceivably) simplifying your
channel design. You can still get the more complex behavior, but it would
be done explicitly.

Andy Henshaw

Christian Tismer

May 20, 2002, 3:14:51 PM
Andrew Henshaw wrote:
> Christian Tismer wrote:
> ...snip...
>
[snipped my description as well]

> Just so that I am clear -- in your concept of channels, can channels be
> shared among multiple, parallel, sending processes? What about multiple,
> parallel, receiving processes? In other words are your channels similar to
> the Queue.Queue object?

a) yes, b) too, c) ...reading up on Queue.Queue...
Did you mean Java's Queue class?
No, I implement almost none of its methods.
And I'm not queueing data, but proclets.
But yes, it is in fact a simple FIFO which
triggers reschedules as needed.

> The reason that I ask is that this is different from Occam's behavior (and
> I believe CSP's). In those systems, a channel should be assigned to at
> most one concurrent sender and receiver.

At any time, there will be only one sender or receiver.
After one is done, another one may show up through
the queue.
Are you saying a channel is only connected to at most
one sender and receiver in the sense of the whole
program? I believe so, since in Occam, channels
are denoted literally in the program text.

I didn't start from Occam, but from reading about
the Limbo language. There, a channel
is a first-class object, and it is explicitly
shared between processes which might want to communicate.
This is the reason the implementation looks
this way.

Anyway, I'm not yet convinced that this was a good decision.
I have no formal education in parallelism and just followed
the hints of a couple of people at Bell Labs.

> If you need to merge the output
> of multiple processes, then an ALT construct would be used to arbitrate
> among the individual channels. Under Occam, sending to the first available
> receiver is more complex than in CSP, as it doesn't have ALT output guards
> (difficult to implement on distributed processors, I believe); so, I'd
> follow the CSP model here.

I'm trying hard to understand the CSP model in the first place,
reading articles, books, ...

> I suspect you already know all of this, but I thought I would mention it in
> case there is some advantage to you in (conceivably) simplifying your
> channel design. You can still get the more complex behavior, but it would
> be done explicitly.

I just tried to do the simplest thing that would still be
effective. As I said, I don't have the background to foresee
whether the concept can support a higher level of communication.

Does my attempt contradict CSP, and how? If so, please
let me know. I also learned a few days ago that Occam
systems don't seem to use simple queues, but instead
pools of processes which are ready to send or receive
over certain channels. That appears to be quite a bit more
complicated, using explicit scheduling decisions and so
on. I believe this is also necessary, because Occam
has to cope with real parallelism, true channel I/O between
multiple processors or even machines, and such.
My simple idea was based on the fact that I have only one
CPU, and even easier, it is always shielded by
the Python interpreter, the GIL, so I have complete
control over this simplified "hardware".

Please let me know. I don't want to fight for my idea,
but for the best solution in the context of Python.
We're talking about two pages of code, and I'm happy
to try out different approaches.

Thanks & cheers - chris

Fernando Pereira

May 20, 2002, 5:14:05 PM
On 5/20/02 3:51 AM, "Christian Tismer" <tis...@tismer.com> wrote:

> This is where I stumbled. For me, it was only one
> task, waiting for multiple channels. But well, this
> was just a special case of PAR which I saw in LIMBO.
> Limbo has a special case where you wait on an array
> of channels, all for input.
I'm a bit confused here. There is no PAR in Limbo, just ALT:

alt { channel-expr1 => statement1; channel-expr2 => statement2; ... }

The special case you mention is a channel-expression in which the channel is
actually an array of channels of the same type, which is a special case of
the above: perform the channel-expression with the first element of the
array to be ready.
> The more general case has different code for every
> branch, and these code pieces appear to be again
> tiny tasks, right?
I agree that it's overkill to create tasks for all the branches of an ALT,
which would be required in a direct implementation of the Cardelli scheme I
referred to.
> Since I thought of the special case of one task
> waiting for multiple channels, all in the same
> manner. In the more general case, with a large
> ALT construct, syntactically like a case, I
> expect this is done with as many small tasks,
> together with guards, and then I can imagine
> to arrange these tasks in groups per channel.
In Cardelli's scheme (the first paper I mentioned, not the C# one) ALT and
PAR are implemented in almost the same way. The only difference is that all
the communication requests coming from the guards in the ALT are put in a
mutual exclusion pool. This scheme is very general in that it doesn't assume
single readers or writers on channels, or immediate matching of read and
write requests (thus each channel may have multiple pending reads and writes
queued up).

-- F

Christian Tismer

May 20, 2002, 8:26:45 PM
Fernando Pereira wrote:
> On 5/20/02 3:51 AM, "Christian Tismer" <tis...@tismer.com> wrote:
>
>
>>This is where I stumbled. For me, it was only one
>>task, waiting for multiple channels. But well, this
>>was just a special case of PAR which I saw in LIMBO.
>>Limbo has a special case where you wait on an array
>>of channels, all for input.
>
> I'm a bit confused here. There is no PAR in Limbo, just ALT

Sorry about this -- of course I meant ALT.

> alt { channel-expr1 => statement1; channel-expr2 => statement2; ... }
>
> The special case you mention is a channel-expression in which the channel is
> actually an array of channels of the same type, which is a special case of
> the above: perform the channel-expression with the first element of the
> array to be ready.

Yes, exactly.

>>The more general case has different code for every
>>branch, and these code pieces appear to be again
>>tiny tasks, right?
>
> I agree that it's overkill to create tasks for all the branches of an ALT,
> which would be required in a direct implementation of the Cardelli scheme I
> referred to.

Ok, but this way I now understand what he meant.
Things are clearing up for me.

...

> In Cardelli's scheme (the first paper I mentioned, not the C# one) ALT and
> PAR are implemented in almost the same way. The only difference is that all
> the communication requests coming from the guards in the ALT are put in a
> mutual exclusion pool. This scheme is very general in that it doesn't assume
> single readers or writers on channels, or immediate matching of read and
> write requests (thus each channel may have multiple pending reads and writes
> queued up).

Yes, I understand much more now. The pool approach seems
to be reasonable. Many many thanks! I have to reconsider
things again.

The C# article, btw, was *incredibly* interesting. Too bad
I see no chance yet to squeeze this into Python,
since the idea of chords is really fascinating.
But it appears to make sense only with syntactic
support?

Andrew Henshaw

May 21, 2002, 12:07:16 AM
Christian Tismer wrote:

> Andrew Henshaw wrote:
>> Christian Tismer wrote:
>> ...snip...
>>
> [snipped my description as well]
>
>> Just so that I am clear -- in your concept of channels, can channels be
>> shared among multiple, parallel, sending processes? What about multiple,
>> parallel, receiving processes? In other words are your channels similar
>> to the Queue.Queue object?
>
> a) yes, b) too, c) ...reading up Queue.Queue...
> you meant Java's Queue class?
> No, I don't implement almost any of the methods.

Actually, I meant the Python Queue class.

> And I'm not queueing data, but proclets.
> But yes, it is in fact a simple FIFO which
> triggers reschedules as needed.

Okay, that confirms my understanding.

>
>> The reason that I ask is that this is different from Occam's behavior
>> (and
>> I believe CSP's). In those systems, a channel should be assigned to at
>> most one concurrent sender and receiver.
>
> At any time, there will be only one sender or receiver.
> After one is done, another one may show up through
> the queue.
> Do you say a channel is only conected to at most
> one sender and receiver in the sense of the whole
> program? I believe so, since in Occam, channels
> are denoted literally in the program text.

Well, sort of. A channel can be directly reused by different parallel
senders/receivers, but not while the first pair is still active. For
instance, let's say you have defined 3 processes xmitA, xmitB, and recv, and
the only parameter to each is a channel. Then the following is fine:

CHAN c:
SEQ    -- do the following two PARs sequentially
  PAR  -- do the following two processes "in parallel"
    xmitA(c)
    recv(c)
  PAR
    xmitB(c)
    recv(c)

But, the following would be illegal:

CHAN c:
PAR    -- do all three processes "in parallel"
  xmitA(c)
  xmitB(c)
  recv(c)


In the first example, channel c is being reused, but it only connects
two processes at any one time. Furthermore, the direction of the
channel must not change within the scope of the parallel processes.

In the second example, two processes are both trying to transmit over the
same channel. They may not actually be doing it at the same time (for
example, if you had a semaphore to control access to it), but Occam does
not permit such a construct. That code would be written more correctly as:

CHAN c1, c2, c:
PAR
  xmitA(c1)
  xmitB(c2)
  multiplexer(c, c1, c2)
  recv(c)

and multiplexer would look something like:

PROC multiplexer(CHAN out, CHAN in1, CHAN in2):
  WHILE TRUE
    ALT              -- perform the communication that is first ready,
                     -- and then the indented process
      in1 ? data     -- receive data on channel in1
        out ! data   -- transmit data on channel out
      in2 ? data
        out ! data

Well, actually you'd use an array of input channels and a replicated ALT;
but, I thought this would be clearer.

Notice the use of colons as declaration terminators and indentation for
block definition. Very Pythonesque!


>
> I didn't look at Occam in the first place, but
> by reading about the Limbo language. There, a channel
> is a first-class object, and it is explicitly
> shared between processes which might want to communicate.
> This is the reason that bade the implementation look
> this way.

I'd say that is also the case with Occam.

Well, channels don't use queues at all (at least, not without the
programmer building them). You may be referring to the classic Occam
example of using multiple parallel processes to create a pipeline or queue.
This is often given as an example because it's easy to visualize and
demonstrates the simplicity of Occam's parallel model; but, it wouldn't be
used to create a queue in real code. A real queue would be done with an
array and index pointers.

An Occam channel is very simple and was implemented efficiently in the
Transputer's microcode. I suspect that having only a single transmitting
process and a single receiving process simplifies that task. As I said
before, I believe that this holds for CSP also. The main difference
between CSP and Occam channels is the ability of an ALT-type construct to
know whether a receiving process on the other end of a channel is ready to
receive (called an output guard, IIRC). In Occam, if you're a transmitting
process, you can't simply check whether a receiver is ready to go; you have
to commit to your send and then the communication either completes or you
block. As you say, with your complete control over the model, you could
implement the output guards, if you desired.

>
> Please let me know. I don't want to fight for my idea,
> but for the best solution in the context of Python.
> We're talking about two pages of code, and I'm happy
> to try out different approaches.
>
> Thanks & cheers - chris
>

You're welcome!

Andy

Fernando Pereira

May 21, 2002, 4:16:19 PM
On 5/21/02 12:07 AM, in article ueji35i...@corp.supernews.com, "Andrew
Henshaw" <andrew....@mail.com> wrote:
> An Occam channel is very simple and was implemented efficiently in the
> Transputer's microcode. I suspect that having only a single transmitting
> process and a single receiving process simplifies that task. As I said
> before, I believe that this holds for CSP also.
I can't find my CSP book, but from memory I don't think this is correct.
Through || (PAR), it is possible for several concurrent processes to attempt
to read or to write on the same channel. Then the channel implementation
needs a queue to hold all the blocked processes until a complementary event
occurs, at which point one of the pending requests is matched to the event
and the blocked process becomes runnable.

> The main difference
> between CSP and Occam channels is the ability of an ALT-type construct to
> know whether a receiving process on the other end of a channel is ready to
> receive (called an output guard, IIRC). In Occam, if you're a transmitting
> process, you can't simply check whether a receiver is ready to go; you have
> to commit to your send and then the communication either completes or you
> block.
The critical difference is not any ALT-type construct, but the fact that the
same channel can be shared among multiple concurrent readers and writers.

BTW, you might find useful some of the materials on JCSP

http://www.cs.ukc.ac.uk/projects/ofa/jcsp/

-- F

Andrew Henshaw

May 23, 2002, 11:31:25 PM
Fernando Pereira wrote:

> On 5/21/02 12:07 AM, in article ueji35i...@corp.supernews.com, "Andrew
> Henshaw" <andrew....@mail.com> wrote:
>> An Occam channel is very simple and was implemented efficiently in the
>> Transputer's microcode. I suspect that having only a single transmitting
>> process and a single receiving process simplifies that task. As I said
>> before, I believe that this holds for CSP also.
> I can't find my CSP book, but from memory I don't think this is correct.
> Through || (PAR), it is possible for several concurrent processes to
> attempt to read or to write on the same channel. Then the channel
> implementation needs a queue to hold all the blocked processes until a
> complementary event occurs, at which point one of the pending requests is
> matched to the event and the blocked process becomes runnable.

I believe that my statement was correct. From a 1985 version of
Communicating Sequential Processes (p. 134):

"We shall observe the convention that channels are used for communication
in only one direction and between only two processes. A channel which is
used only for output by a process will be called an output channel of that
process; and one used only for input will be called an input channel. In
both cases, we shall say loosely that the channel name is a member of the
alphabet of the process."


>> The main difference
>> between CSP and Occam channels is the ability of an ALT-type construct to
>> know whether a receiving process on the other end of a channel is ready
>> to
>> receive (called an output guard, IIRC). In Occam, if you're a
>> transmitting process, you can't simply check whether a receiver is ready
>> to go; you have to commit to your send and then the communication either
>> completes or you block.
> The critical difference is not any ALT-type construct, but the fact that
> the same channel can be shared among multiple concurrent readers and
> writers.
>

As above.

> BTW, you might find useful some of the materials on JCSP
>
> http://www.cs.ukc.ac.uk/projects/ofa/jcsp/
>
> -- F

Thanks for the reference!

Sincerely,

Andy Henshaw

Fernando Pereira

May 24, 2002, 10:55:58 AM
On 5/23/02 11:31 PM, in article uerd0l9...@corp.supernews.com, "Andrew
Henshaw" <andrew....@mail.com> wrote:

> Fernando Pereira wrote:
>> I can't find my CSP book, but from memory I don't think this is correct.
>> Through || (PAR), it is possible for several concurrent processes to
>> attempt to read or to write on the same channel. Then the channel
>> implementation needs a queue to hold all the blocked processes until a
>> complementary event occurs, at which point one of the pending requests is
>> matched to the event and the blocked process becomes runnable.
>
> I believe that my statement was correct. From a 1985 version of
> Communicating Sequential Processes (p. 134):
>
> "We shall observe the convention that channels are used for communication
> in only one direction and between only two processes. A channel which is
> used only for output by a process will be called an output channel of that
> process; and one used only for input will be called an input channel. In
> both cases, we shall say loosely that the channel name is a member of the
> alphabet of the process."

As I said, I can't find my copy of the book. But this restriction is a
*convention* only. The language syntax does not impose that restriction, nor
does the semantics AFAIK.

-- F

Niki Spahiev

May 24, 2002, 4:11:38 PM
Hello Fernando,

Tuesday, May 21, 2002, 11:16:19 PM, you wrote:

FP> On 5/21/02 12:07 AM, in article ueji35i...@corp.supernews.com, "Andrew
FP> Henshaw" <andrew....@mail.com> wrote:
>> An Occam channel is very simple and was implemented efficiently in the
>> Transputer's microcode. I suspect that having only a single transmitting
>> process and a single receiving process simplifies that task. As I said
>> before, I believe that this holds for CSP also.

FP> I can't find my CSP book, but from memory I don't think this is correct.
FP> Through || (PAR), it is possible for several concurrent processes to attempt
FP> to read or to write on the same channel. Then the channel implementation
FP> needs a queue to hold all the blocked processes until a complementary event
FP> occurs, at which point one of the pending requests is matched to the event
FP> and the blocked process becomes runnable.

IIRC, transputers had 4 ports and a special chip which could be programmed to
provide a channel to any port of any transputer. So channels can't be shared.

--
Best regards,
Niki


Fernando Pereira

May 24, 2002, 9:28:17 PM
On 5/24/02 4:11 PM, in article mailman.102227272...@python.org,
"Niki Spahiev" <spah...@vega.bg> wrote:
> IIRC, transputers had 4 ports and a special chip which could be programmed to
> provide a channel to any port of any transputer. So channels can't be shared.
Sure, but that's not a syntactic or semantic requirement in CSP, which was
what I was referring to.

-- F

Andrew Henshaw

May 25, 2002, 1:02:30 AM
Fernando Pereira wrote:

CSP is a systematic collection of algebraic laws. Thus, when the
formulator states "we **shall** observe the convention that channels are
used ... between only two processes" (emphasis mine), then that is how it
is defined. Another wording later in the book eliminates the use of the
word "convention" and refers to "...the restriction that a channel is
between two processes only...." Perhaps you could find a statement that
contradicts this. I cannot find anything that even hints at such a use as
you describe. Certainly, as I demonstrated with the Occam examples, you
can build higher-level constructs that achieve the effect; but, the channel
primitive itself does not have that characteristic.

Andy Henshaw

Andrew Henshaw

May 25, 2002, 1:29:13 AM
Niki Spahiev wrote:

> Hello Fernando,
>
> Tuesday, May 21, 2002, 11:16:19 PM, you wrote:
>
> FP> On 5/21/02 12:07 AM, in article ueji35i...@corp.supernews.com, "Andrew
> FP> Henshaw" <andrew....@mail.com> wrote:
> >>> An Occam channel is very simple and was implemented efficiently in the
> >>> Transputer's microcode. I suspect that having only a single transmitting
> >>> process and a single receiving process simplifies that task. As I said
> >>> before, I believe that this holds for CSP also.
> FP> I can't find my CSP book, but from memory I don't think this is correct.
> FP> Through || (PAR), it is possible for several concurrent processes to attempt
> FP> to read or to write on the same channel. Then the channel implementation
> FP> needs a queue to hold all the blocked processes until a complementary event
> FP> occurs, at which point one of the pending requests is matched to the event
> FP> and the blocked process becomes runnable.
>
> IIRC, transputers had 4 ports and a special chip which could be programmed to
> provide a channel to any port of any transputer. So channels can't be shared.
>

As Fernando points out, this hardware implementation doesn't actually
affect his argument (which I still dispute); however, it turns out
that channels can be physically shared between parallel* processes on the
Transputer. As you might expect, on-chip channel communications are
implemented via memory mapping, and off-chip channel communications are
implemented with links (the ports you spoke of), which are also memory
mapped. Since the Transputer doesn't have memory protection, promiscuous
processes can access the channels simultaneously*. The Occam compiler will
prevent that behavior; but when programming in C (with Transputer
libraries for parallel processing), this technique is occasionally used.

* "parallel" and "simultaneously" are loose terms. The Transputer uses
task switching with a microcoded scheduler; so, in general, computations on
a single Transputer aren't truly simultaneous.

Regards,

Andy Henshaw

Andy Rabagliati

May 25, 2002, 8:04:13 AM
According to Andrew Henshaw <andrew....@mail.com>:

> Fernando Pereira wrote:
>
> > On 5/23/02 11:31 PM, in article uerd0l9...@corp.supernews.com, "Andrew
> > Henshaw" <andrew....@mail.com> wrote:
> >
> >> Fernando Pereira wrote:
> >>> I can't find my CSP book, but from memory I don't think this is correct.
> >>> Through || (PAR), it is possible for several concurrent processes to
> >>> attempt to read or to write on the same channel. Then the channel
> >>> implementation needs a queue to hold all the blocked processes until a
> >>> complementary event occurs, at which point one of the pending requests
> >>> is matched to the event and the blocked process becomes runable.

My experience has been exclusively with occam, on transputers and
other hardware, so I can only comment on the practicality of these
conventions.

I would have a problem if the possibly arbitrary order in which the sending
processes arrived at their output statements then determined the
order in which the inputting process had to serve them. Smells of deadlock
to me -- I would very much prefer the occam ALT to an imposed order.


> >> "We shall observe the convention that channels are used for
> >> communication in only one direction and between only two processes.
> >> A channel which is used only for output by a process will be called
> >> an output channel of that process; and one used only for input will
> >> be called an input channel.

> Certainly, as I demonstrated with the Occam examples, you can build
> higher-level constructs that achieve the effect; but, the channel
> primitive itself does not have that characteristic.

Indeed, I had (have...) an arsenal of these -- FIFOs, multiplexers,
tiny little processes little more than an ALT statement wrapped in a
WHILE loop.

The occam compiler improved vastly in its usability when automated
checking of channel usage per the 'shall...' directive above was
included. Without that checking, complex programs became a nightmare. I
cannot say whether this was an occam 'feature' or a CSP feature, but I would
guess it applies to both.

Cheers, Andy!

Fernando Pereira

May 25, 2002, 1:31:25 PM
On 5/25/02 1:29 AM, in article ueu89e5...@corp.supernews.com, "Andrew
Henshaw" <andrew....@mail.com> wrote:
> The Occam compiler will
> prevent that behavior; but, when programming with C (with Transputer
> libraries for parallel processing), this technique is used, occasionally.
Occam != CSP

-- F

Christian Tismer

May 26, 2002, 6:15:09 AM

Fernando, this is not at you, but at the whole thread:

Please let me add -- with all due respect --

Stackless != Occam and Stackless != CSP.

I really appreciate any input on this, and I do my very
best to find Python's place between these worlds.
But please make these worlds liveable for me.

I'd really appreciate it if we could keep unrelated
stuff, be it real discussion or just hairsplitting,
out of this thread. Just change the subject to what it is.

I am looking for help on an issue that's not my home turf.
Discord on these issues might make me want to leave them alone.
Please, all, your task is to make me do the right thing(TM),
which is a real challenge. Internals certainly have their own
audience. Please continue with advice, which has been great so far.

kind regards - chris

Fernando Pereira

May 26, 2002, 7:17:34 AM
On 5/26/02 6:15 AM, "Christian Tismer" <tis...@tismer.com> wrote:

> Discord on these issues might make me want to leave them alone.
> Please, all, your task is to make me do the right thing(TM),
> which is a real challenge. Internals certainly have their own
> audience. Please continue with advice, which has been great so far.

Here is the design decision that underlies the most recent hair-splitting:
do channels have single readers and writers (1-to-1 channels), or not? If
they are 1-to-1, how is that enforced? I think JCSP deals with this the
right way. There are several classes of channel: 1-to-1, 1-to-many,
many-to-1, and many-to-many, but all of them share common interfaces for
reading and writing. The most complicated case, many-to-many, would need
some variant of the full machinery described in that old Cardelli paper;
the others would be easier sub-cases. I also like this design because you
can start with 1-to-1, which is the easiest and most common case, and add
the others if and when needed.
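
For what it's worth, the layering could be sketched in Python roughly like
this. The interface split is modelled loosely on JCSP; the names and the
thread-based 1-to-1 rendezvous are only illustrative, not actual Stackless
or JCSP code.

import threading
from abc import ABC, abstractmethod

class ChannelInput(ABC):
    @abstractmethod
    def receive(self): ...

class ChannelOutput(ABC):
    @abstractmethod
    def send(self, value): ...

class OneToOneChannel(ChannelInput, ChannelOutput):
    # Blocking rendezvous between exactly one reader and one writer at a time.
    def __init__(self):
        self._item = None
        self._sent = threading.Semaphore(0)
        self._taken = threading.Semaphore(0)

    def send(self, value):
        self._item = value
        self._sent.release()     # signal: data is available
        self._taken.acquire()    # block until the reader has taken it

    def receive(self):
        self._sent.acquire()     # block until a writer has sent
        value = self._item
        self._taken.release()    # let the writer continue
        return value

# The shared 1-to-many/many-to-1/many-to-many variants would keep the same
# two interfaces but add the queueing and arbitration machinery discussed
# above.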

-- F


Christian Tismer

May 26, 2002, 7:15:53 PM

Many many thanks!
Then I think I should get the JCSP source (btw. the PowerPoint
slides were a delight!) and study how they do it.
Something to lean on is better than re-inventing the wheel.

have a nice holiday - chris

Andrew Henshaw

May 27, 2002, 9:26:56 PM
Fernando Pereira wrote:

Of course. I was merely agreeing with your statement that the Transputer's
implementation of channels did not indicate anything about the CSP
definition of channels. Furthermore, despite the impression of the original
poster, Transputer channels are more like your previously stated impression
of CSP than my stated impression. This behavior could be accessed under C,
but not under Occam (the language in which most Transputer programs are
written). I suspect that this may have been the source of confusion for
the original poster, as Occam is so closely identified with the Transputer,
and vice versa.

-- Andy

Roy Smith

May 27, 2002, 9:33:54 PM
Andrew Henshaw <andrew....@mail.com> wrote:
> Occam is so closely identified with the Transputer, and vice versa.

Is either actually still around?

Andrew Henshaw

May 27, 2002, 10:00:23 PM
Christian Tismer wrote:

> Fernando Pereira wrote:
>> On 5/25/02 1:29 AM, in article ueu89e5...@corp.supernews.com, "Andrew
>> Henshaw" <andrew....@mail.com> wrote:
>>
>>>The Occam compiler will
>>>prevent that behavior; but, when programming with C (with Transputer
>>>libraries for parallel processing), this technique is used, occasionally.
>>
>> Occam != CSP
>
> Fernando, this is not at you, but at the whole thread:
>
> Please let me add -- with every respect --
>
> Stackless != Occam and Stackless != CSP.
>
> I really appreciate any input on this, and I do my very
> best to find Python's place between these worlds.
> But please make these liveable for me.
>
> I'd really appreciate if we can keep unrelated
> stuff, may it be real discussion or just hairsplitting,
> out of this thread. Just change the subject to what it is.
>
> I am looking for help on an issue that's not my home match.
> Discordance on these issues might make me want to leave them.
> Please, all, your task is to make me do the right thing(TM),
> which is a real challenge. Internals have for sure their own
> auditory. Please continue with advice, which was great so far.
>
> kind regards - chris
>

Christian,

Sorry about the hijacking of this thread. I only mentioned that Occam and
CSP implement channels as connections between single pairs of processes as
I thought that approach might let you simplify your design (by eliminating
your proclet queues, perhaps). I was also trying to show that with this
primitive approach, you can still construct the more complex behavior that
you were indicating. I certainly don't think that you need to emulate
Occam for some compatibility reason. If this concept doesn't help you with
your design, by all means, go with multi-source, multi-destination
channels. I'd still love to have them!

If you do go with the single-process-pair channels, I wouldn't worry about
trying to enforce that convention, as the Occam compiler does. Unless it
was easy to restrict, I'd let the primitive implementation be open to
abuse. If a more "correct" implementation is needed, subclass it and
impose the process constraints at a higher level. I have an "Occam"
library module (poorly done) that I've written to work with the Python
standard threads, and the primitive channels work fine for me, without the
checking.

Thanks for your efforts!

Andy

Christian Tismer

May 28, 2002, 4:51:49 AM
Andrew Henshaw wrote:
> Christian Tismer wrote:
[complaining about "off-topic" stuff, in a sense]

> Christian,
>
> Sorry about the hijacking of this thread. I only mentioned that Occam and
> CSP implement channels as connections between single pairs of processes as
> I thought that approach might let you simplify your design (by eliminating
> your proclet queues, perhaps). I was also trying to show that with this
> primitive approach, you can still construct the more complex behavior that
> you were indicating. I certainly don't think that you need to emulate
> Occam for some compatibility reason. If this concept doesn't help you with
> your design, by all means, go with multi-source, multi-destination
> channels. I'd still love to have them!

Sorry about my reaction, too. I was overly stressed since
a harder internal issue popped up -- being incompatible
with Tcl!!! This really made me nervous, and I didn't rest
for two days and nights until this killer issue was solved.
I couldn't stand stuff that looked like hair-splitting to me,
while I have to admit that it wasn't!

> If you do go with the single-process-pair channels, I wouldn't worry about
> trying to enforce that convention, as the Occam compiler does. Unless it
> was easy to restrict, I'd let the primitive implementation be open to
> abuse. If a more "correct" implementation is needed, subclass it and
> impose the process constraints at a higher level. I have an "Occam"
> library module (poorly done) that I've written to work with the Python
> standard threads, and the primitive channels work fine for me, without the
> checking.

Erhmm, are you saying that I should even dispose of the queues
in my channels? Replace them with what -- just switching threads
if I cannot fulfil their needs? Well, after implementing the
stuff, I probably won't step back. I just have to learn about
an intelligent way to produce a decent structure for ALT.
While I have a possible implementation in stock already,
I'd really like to peer into existing code.
May I look into yours perhaps, pleasydiplease?

> Thanks for your efforts!

Thanks for taking care so much!!

sincerely -- chris

Fernando Pereira

May 28, 2002, 9:36:28 AM
On 5/27/02 10:00 PM, in article uf5p9rt...@corp.supernews.com, "Andrew
Henshaw" <andrew....@mail.com> wrote:

> If you do go with the single-process-pair channels, I wouldn't worry about
> trying to enforce that convention, as the Occam compiler does. Unless it
> was easy to restrict, I'd let the primitive implementation be open to
> abuse. If a more "correct" implementation is needed, subclass it and
> impose the process constraints at a higher level. I have an "Occam"
> library module (poorly done) that I've written to work with the Python
> standard threads, and the primitive channels work fine for me, without the
> checking.

If you assume 1-to-1 channels but don't check, wouldn't the effect of
breaking the assumption be that errant tasks could hijack channels and
create zombie tasks? In particular, if t1 writes to c1, until something
reads from c1, t1 blocks, which would be represented by storing t1 in c1's
write-wait slot (a single wait slot for reading and writing is also
possible, since we are assuming that write-read matches immediately move
both writing and reading task to the run queue). Now if t2 accidentally
writes to c1, since you don't check for this situation, the wait slot in c1
would be overwritten by t2. But t1 is still suspended, and now it won't ever
wake up! (similar problem with hijacking the reading side). It would not
cost much to test that there was a writer suspended on c1, and throw an
appropriate exception.
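
Something like the following check -- on a hypothetical single-slot 1-to-1
channel, not the real Stackless data structures -- would be enough to fail
loudly:

class ChannelHijackError(Exception):
    pass

class SingleSlotChannel:
    # One wait slot per direction; nothing here is the real Stackless layout.
    def __init__(self):
        self.waiting_writer = None   # (task, value) or None
        self.waiting_reader = None   # task or None

    def block_writer(self, task, value):
        if self.waiting_writer is not None:
            # A second writer would silently overwrite the slot and leave the
            # first writer suspended forever; fail loudly instead.
            raise ChannelHijackError("channel already has a blocked writer")
        self.waiting_writer = (task, value)

    def block_reader(self, task):
        if self.waiting_reader is not None:
            raise ChannelHijackError("channel already has a blocked reader")
        self.waiting_reader = task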

-- F

Andrew Henshaw

May 28, 2002, 9:42:39 PM
Fernando Pereira wrote:

I believe that you are absolutely correct.

> It would
> not cost much to test that there was a writer suspended on c1, and throw
> an appropriate exception.
>
> -- F

If it wouldn't cost much to test, then I'd say that is the way to go.

Andy
