open thread: concurrency in Shen


Mark Tarver

Feb 6, 2011, 5:28:36 AM
to Qilang
Now we come to the question of concurrency for Shen.

* What model is suitable for concurrency?
* What should be the fundamental primitives of concurrency?

And here is something to think on.

From my study in concurrency (in progress)

QUOTE
The problem is in the idea of placing a process in the background.
What kind of object is a process? It looks like an application. But if
it is an application, how can it receive a message? In functional
programming, applications execute, they do not have ports to receive
messages. They evaluate but do not themselves receive arguments. On
the other hand if it is a closure, then fine - these do receive
arguments (call them messages if you must) but they do not in
themselves execute. The application (factorial 100) will evaluate to
something new; the abstraction (/. X (factorial X)) is frozen. In
functional programming; either we have executing applications which
are deaf to input or functions which listen to input but do not
execute.
UNQUOTE

Can you argue for your model and can you explain how it meets the
dilemma quoted above? There is a solution which I have in the same
essay, and I'm actively engaged in looking at concurrent languages,
but I do not wish to prejudice discussion. This is a listening space
for me.

Mark

Brian Spilsbury

Feb 6, 2011, 8:27:20 AM
to qil...@googlegroups.com
Qi is not purely functional, and I understand from that that Shen also
is not purely functional.

This means that with respect to concurrency you need to consider Shen
as a procedural system in terms of handling side-effects.
This means that you need to consider the propagation of side-effects
and ordering constraints upon operations.
For this reason I do not think that it is useful or correct to think
about concurrency in Shen in functional terms.

A process is a (virtual) machine.
Qi defines a virtual machine for Qi code to run in, so this is our
starting position.
Qi defines a basic input port via (input) and an output port via
interleaved (print) and (output) operations.

We could add multiprocessing trivially by adding an optional port
parameter to these three operations, a new operation to establish a
port to a Qi process address, and an operation -- e.g., select -- to
inform the process about the set of ports with pending inputs to be
read.

e.g., (set *port* (make-port "abc")) (print [+ 2 3] *port*) (print
(input *port*))

If there is no process existing at an address, then one can be generated.

The default Qi process could then effectively (set *default-port*
(first (select))) and everything could proceed normally.

Upon exit the process would be destroyed, leaving the address vacant,
and any undelivered message could generate a new process to handle it.

That would be the least intrusive approach to multiprocessing in Qi
that would scale that I can see.

Coroutines and futures would complement multiple processes nicely.

Coroutines would allow the interleaving of operations to support the
demands of multiple i/o ports.

Futures would allow multiprocessing within a process, but the
requirement to maintain the dependency ordering of side-effects makes
this more complicated.

Concurrent shared memory systems (e.g., conventional threads) don't
scale and make every memory operation implicitly contentious (unless
you can prove that it can't involve shared memory). The verdict of the
last decades seems to be that this is a bad idea, and will only get
worse as distributed systems mature.

Anyhow, that's my two cents.


Mark Tarver

Feb 6, 2011, 9:20:35 AM
to Qilang
> Qi is not purely functional, and I understand from that that Shen also
> is not purely functional.

Qi is basically an enriched lambda calculus backed with a sequent
calculus type system. The side-effect stuff is not central.

The reason I am asking the question 'What is a process?' is because
unless it can be formally defined then the proper treatment of a
process is unresolved. I think it should be more than just a question
of a few primitives; if processes are to be passed around we should
know what we are talking about. Not answering this question will
cause problems later.

> A process is a (virtual) machine.

OK. If we say that a process is a virtual machine - how do we define
a virtual machine? The AUM in FPQi is a virtual machine. It is
specified by a BNF and a mapping from Prolog into the AUM and from the
AUM to Lisp. It is an abstraction layer between Prolog and the
underlying platform. Is a process really like this?

Mark

Brian Spilsbury

Feb 6, 2011, 9:38:04 AM
to qil...@googlegroups.com
On Sun, Feb 6, 2011 at 11:20 PM, Mark Tarver <dr.mt...@gmail.com> wrote:
>> Qi is not purely functional, and I understand from that that Shen also
>> is not purely functional.
>
> Qi is basically an enriched lambda calculus backed with a sequent
> calculus type system.  The side-effect stuff is not central.

Well, it's fairly central to concurrency -- so you need to work out if
we're talking about Qi or a purely functional subset of it.

>
> The reason I am asking the question 'What is a process?' is because
> unless it can be formally defined then the proper treatment of a
> process is unresolved.  I think it should be more than just a question
> of a few primitives; if processes are to be passed around we should
> know what we are talking about.  Not answering this question will
> cause problems later.
>
>> A process is a (virtual) machine.
>
> OK. If we say that a process is a virtual machine - how do we define
> a virtual machine?  The AUM in FPQi is a virtual machine.  It is
> specified by a BNF and a mapping from Prolog into the AUM and from the
> AUM to Lisp. It is an abstraction layer between Prolog and the
> underlying platform. Is a process really like this?

Yes, I think so.

Consider a posix process.

It is a virtual machine that runs a program in a language that is
generally a subset of the instruction set of the underlying
architecture, which communicates with the outside world via a syscall
interface.

A process is fundamentally a machine that interprets a program.

The AUM specifies a machine which could be implemented as an interpreter.
I don't think this is fundamentally different to the posix process case.

Normally you pair the AUM with a specific program and translate that
into a specific program in another language (i.e., compilation), but
that's functionally equivalent to running that AUM program in a CL
interpreter.

Likewise you have the C Abstract Machine that specifies the machine
that strictly conforming C programs run in.

In all cases we have realizations of virtual machines -- some general
and some specialized upon particular programs.

pbacchi

Feb 6, 2011, 10:26:06 AM
to Qilang
The languages Oz and Clojure have interesting concurrency models

Links for Oz
Concepts, Techniques, and Models of Computer Programming: http://www.info.ucl.ac.be/~pvr/book.html
Classification of the principal programming paradigms: http://www.info.ucl.ac.be/~pvr/paradigms.html

Links for Clojure
Concurrent Programming: http://clojure.org/concurrent_programming
Stuart Halloway - Concurrent Programming with Clojure (Vimeo): http://vimeo.com/8672404

I think both Clojure and Oz have taken a crack at answering what good
concurrency primitives should be. I believe both come to the
conclusion that Shared State Concurrency should NOT be the default
model.

I hope this helps

Philip M. Bacchi

Harold Ancell

Feb 6, 2011, 11:01:43 AM
to Qilang
At 08:20 AM 2/6/2011, Mark Tarver wrote:
>> Qi is not purely functional, and I understand from that that Shen also
>> is not purely functional.
>
>Qi is basically an enriched lambda calculus backed with a sequent
>calculus type system. The side-effect stuff is not central.
>
>The reason I am asking the question 'What is a process?' is because
>unless it can be formally defined then the proper treatment of a
>process is unresolved.

My talents lie elsewhere so I'm afraid I can't help (much) with
this important question, however:

>I think it should be more than just a question
>of a few primitives; if processes are to be passed around we should
>know what we are talking about. Not answering this question will
>cause problems later.
>
>> A process is a (virtual) machine.

I really want abstractions that allow me to safely and
efficiently use multiple threads working in one address space
(e.g. what Clojure is focused on solving). The failure of
speed (raw or architectural) to scale with Moore's law means
multiple core SMP systems are for the foreseeable future
where the action is and for Shen to be competitive in this
environment it needs an approach to concurrency that
addresses this.

The classical Actor model, which is what I think Brian
Spilsbury is advocating, certainly solves the safety issue
but at a great cost for many problems that can fit into one
SMP system since it requires copying data into messages.

While it doesn't solve the question of what you are asking, I
suspect a purely functional approach should be considered;
certainly without one the utility of a Shen port to Clojure or
any other seriously functional language is questionable.

On the other hand there's the question of what to do about
ports to languages rife with side effects. As Brian correctly
points out, "The verdict of the last decades seems to be that
this is a bad idea, and will only get worse as distributed
systems mature." E.g. witness the general failure to graft
Software Transactional Memory (STM) onto imperative languages
vs. the relative ease of adding it to Haskell, or Clojure being
built on it.

- Harold

Mark Tarver

Feb 6, 2011, 2:42:38 PM
to Qilang


On Feb 6, 2:38 pm, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
> On Sun, Feb 6, 2011 at 11:20 PM, Mark Tarver <dr.mtar...@gmail.com> wrote:
> >> Qi is not purely functional, and I understand from that that Shen also
> >> is not purely functional.
>
> > Qi is basically an enriched lambda calculus backed with a sequent
> > calculus type system.  The side-effect stuff is not central.
>
> Well, it's fairly central to concurrency -- so you need to work out if
> we're talking about Qi or a purely functional subset of it.

Perhaps this is too fast. Part of my question was whether concurrent
processes could be accommodated within the functional model and whether
the dilemma in my opening question could be addressed. This question
was:

QUOTE
What kind of object is a process? It looks like an application. But if
it is an application, how can it receive a message? In functional
programming, applications execute, they do not have ports to receive
messages. They evaluate but do not themselves receive arguments. On
the other hand if it is a closure, then fine - these do receive
arguments (call them messages if you must) but they do not in
themselves execute. The application (factorial 100) will evaluate to
something new; the abstraction (/. X (factorial X)) is frozen. In
functional programming; either we have executing applications which
are deaf to input or functions which listen to input but do not
execute.

The question is: can we find a solution within the functional
framework which can do both?
UNQUOTE

I guess you are indicating a 'no' to the question. I think the answer
might actually be a qualified 'yes'. The potential solution lies in
the penultimate sentence.

> > The reason I am asking the question 'What is a process?' is because
> > unless it can be formally defined then the proper treatment of a
> > process is unresolved.  I think it should be more than just a question
> > of a few primitives; if processes are to be passed around we should
> > know what we are talking about.  Not answering this question will
> > cause problems later.
>
> >> A process is a (virtual) machine.
>
> > OK. If we say that a process is a virtual machine - how do we define
> > a virtual machine?  The AUM in FPQi is a virtual machine.  It is
> > specified by a BNF and a mapping from Prolog into the AUM and from the
> > AUM to Lisp. It is an abstraction layer between Prolog and the
> > underlying platform. Is a process really like this?
>
> Yes, I think so.
>
> Consider a posix process.
>
> It is a virtual machine that runs a program in a language that is
> generally a subset of the instruction set of the underlying
> architecture, which communicates with the outside world via a syscall
> interface.
>
Well is it? I mean suppose I have a background program that prints
"hello world" if a specific file is created. Is that program a
virtual machine? Or even simpler, it counts up to 10^10 while I'm
typing and then prints the same. I would say it is not; certainly not
in the usual sense of the phrase.

But if we were to stretch the phrase in this way, then we really would
be making the phrase "virtual machine" mean what "process" or
"external process" means and we would have made little advance in
saying a process is a virtual machine. What I'm after is a
characterisation that takes us deeper than swapping one name for
another and points the way for the correct treatment of processes.

Mark

Brian Spilsbury

Feb 6, 2011, 7:18:12 PM
to qil...@googlegroups.com
> I guess you are indicating a 'no' to the question.  I think the answer
> might actually be a qualified 'yes'.   The potential solution lies in
> the penultimate sentence.

As I see it, a function is an invariant mapping from domain to range.
Being invariant, it exists outside of time, which removes it from the
domain of concurrency which is concerned with operations that occur
over time.
This means that in this conversation we must be talking not about
functions but about machines used to compute functions, which I think
is a very important distinction.

>> It is a virtual machine that runs a program in a language that is
>> generally a subset of the instruction set of the underlying
>> architecture, which communicates with the outside world via a syscall
>> interface.
>>
> Well is it?  I mean suppose I have a background program that prints
> "hello world" if a specific file is created.  Is that program a
> virtual machine?  Or even simpler, it counts up to 10^10 while I'm
> typing and then prints the same.  I would say it is not; certainly not
> in the usual sense of the phrase.

The program is not a virtual machine, but the machine running the
program is a (virtual) machine.

One clue that you're not talking about a functional system is that
you're saying "while I'm typing".
Here you must be talking about the machine running the program, since
that's the only thing that can be doing something while you're
typing.

In a functional system, we could model the above by constructing a
list encoding the problem, and then using a lazy list that somehow
depends on that first list for the result -- i.e., as a functional i/o
mechanism.

>
> But if we were to stretch the phrase in this way, then we really would
> be making the phrase "virtual machine" mean what "process" or
> "external process" means and we would have made little advance in
> saying a process is a virtual machine.  What I'm after is a
> characterisation that takes us deeper than swapping one name for
> another and points the way for the correct treatment of processes.

The important point here, in my opinion, is distinguishing between the
program (which might be functional) and whatever machine is used to
compute that program (which will not be functional).

Since functions do not involve time we are limited in the approaches
that we can take here.

(a) We can have a translation process determine where concurrency can
be added without affecting the semantics of the program (which more or
less limits you to implicit futures).

(b) We can try to functionally model time in some way, and have the
translation process use this model to choose concurrency strategies --
which seems difficult.

(c) We can add operations with functional semantics that are used as
directives by a translation process without modeling time -- e.g.,
having a function equivalent to identity that tells the translator
that it should use a future here -- which seems inelegant.

(d) We can abstract other computational devices as data streams and
turn this into essentially an i/o problem which is equivalent to IPC.

Personally, I think that (a) and (d) make the most sense, and these
can be used in conjunction.

Brian Spilsbury

Feb 6, 2011, 7:24:06 PM
to qil...@googlegroups.com
> I really want abstractions that allow me to safely and
> efficiently use multiple threads working in one address space
> (e.g. what Clojure is focused on solving).  The failure of
> speed (raw or architectural) to scale with Moore's law means
> multiple core SMP systems are for the foreseeable future
> where the action is and for Shen to be competitive in this
> environment it needs an approach to concurrency that
> addresses this.

The problem is that multicore SMP systems don't scale, either.

And with a shift toward cloud computing I think their dominance will
be extremely short lived.

I regularly use hundreds of machines in order to do large scale
computations and data processing.

While that might be a little expensive for the average user, cloud
providers like Amazon are making it increasingly cheap to simply rent
hundreds of machines on a demand basis.

Mark Tarver

Feb 7, 2011, 11:33:01 AM
to Qilang


On Feb 7, 12:18 am, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
> > I guess you are indicating a 'no' to the question.  I think the answer
> > might actually be a qualified 'yes'.   The potential solution lies in
> > the penultimate sentence.
>
> As I see it, a function is an invariant mapping from domain to range.
> Being invariant, it exists outside of time, which removes it from the
> domain of concurrency which is concerned with operations that occur
> over time.
> Which means that in this conversation we must be talking about not
> functions but about machines used to compute functions, which I think
> is a very important distinction.

Yes; I see your reasoning. This was the dilemma that was posed; if we
have either
frozen functions that listen but do not execute and applications that
execute and
do not listen then certainly we have no adequate basis for processes
in a functional
language. This dilemma was the beginning of my study. I concluded
that the dilemma arose from traditional practice in FPLs. Here is a
quote.

QUOTE
At this point we need to look at the foundations of functional
programming and revisit the lambda calculus. When we do that we find
something interesting.

In the lambda calculus, the dilemma visited on us in functional
programming does not exist. In functional programming we are
confronted with either dynamic applications which are deaf or
listening closures which are paralysed. The lambda calculus does not
think like that; it is perfectly legitimate to perform a reduction
within the scope of an abstraction; e.g.

(lambda x ((lambda y y) a)) => (lambda x a)

** It appears then, that the dilemma we face in understanding how to
represent processes is a result of current functional programming
practice and does not arise from the lambda calculus. Functional
programming has created paralysed abstractions called closures which
do not sustain execution within themselves. But in lambda calculus we
find that abstractions are as fully 'unparalysed' as applications.**
UNQUOTE

In my study I call these evaluable abstractions 'lambda processes'.

* A process is a lambda process.
* A lambda process is an abstraction which can sustain evaluation
within its body.
* A lambda process behaves in most important respects like an
abstraction.

There is a strong correlation between lambda processes and processes
in languages like Termite Scheme and Erlang. Effectively if we equip
a language with lambda processes and include within these processes
some primitives for handling events, you have, IMO, a model for
concurrency.

Mark

Harold Ancell

Feb 7, 2011, 11:27:40 AM
to qil...@googlegroups.com
At 06:24 PM 2/6/2011, Brian Spilsbury wrote:
>> I really want abstractions that allow me to safely and
>> efficiently use multiple threads working in one address space
>> (e.g. what Clojure is focused on solving). The failure of
>> speed (raw or architectural) to scale with Moore's law means
>> multiple core SMP systems are for the foreseeable future
>> where the action is and for Shen to be competitive in this
>> environment it needs an approach to concurrency that
>> addresses this.
>
>The problem is that multicore SMP systems don't scale, either.

They do up to a point, which will continue to increase for
some time. I submit that a lot of problems fit within the
limit of what can be economically put on one motherboard.

>And with a shift toward cloud computing I think their dominance will
>be extremely short lived.

When do you think cloud computing vendors (or sites using their
approach) will stop building their clouds on top of SMP systems?

If we look at Amazon's AWS EC2 offerings, every one of them
other than the Micro or Small instance offers 2 or more cores.

>I regularly use hundreds of machines in order to do large scale
>computations and data processing.
>
>While that might be a little expensive for the average user, cloud
>providers like Amazon are making it increasingly cheap to simply rent
>hundreds of machines on a demand basis.

Indeed, and it would be great if Shen addresses this capability in
a native and powerful way. My point is that since it's functional
and some of its target platforms are also functional we should see if
we can also address the SMP problem/opportunity.

- Harold

Anthony Di Franco

Feb 7, 2011, 3:03:15 PM
to qil...@googlegroups.com
The question of data structures should be part of this discussion.

Having an immutable, structure-sharing data structure for a mapping and an ordering, upon which to base the construction of messages for actors or the sharing of state in general, makes actors work without unneeded copying and would make just about anything more tractable; Haskell, Scala, and Clojure have all gone this road. Erlang too, maybe?

Presumably the type system of Shen will also be able to enforce the immutability of messages, unlike in Scala for example.

Regarding this, “In functional programming, either we have executing applications which are deaf to input or functions which listen to input but do not execute.”  Can we not escape this conundrum by thinking in functional-reactive terms?

Anthony Di Franco

Feb 7, 2011, 3:06:52 PM
to qil...@googlegroups.com
This sounds a lot like the approach in the graph-reduction machines.


Brian Spilsbury

Feb 7, 2011, 10:54:39 PM
to qil...@googlegroups.com
> In the lambda calculus, the dilemma visited on us in functional
> programming does not exist. In functional programming we are
> confronted with either dynamic applications which are deaf or
> listening closures which are paralysed. The lambda calculus does not
> think like that; it is perfectly legitimate to perform a reduction
> within the scope of an abstraction; e.g.
>
> (lambda x ((lambda y y) a)) => (lambda x a)

Isn't this just equivalent to partial evaluation?

Actually, I think there is a terminology gap that will require
examples to bridge.

Can you provide examples of this paralysis and deafness, respectively?

> ** It appears then, that the dilemma we face in understanding how to
> represent processes is a result of current functional programming
> practice and does not arise from the lambda calculus. Functional
> programming has created paralysed abstractions called closures which
> do not sustain execution within themselves. But in lambda calculus we
> find that abstractions are as fully 'unparalysed' as applications.**
> UNQUOTE

Can you provide examples of this paralysis, since I evidently don't
understand what it is?

>
> In my study I call these evaluable abstractions 'lambda processes'.
>
> * A process is a lambda process.
> * A lambda process is an abstraction which can sustain evaluation
> within its body.
> * A lambda process behaves in most important respects like an
> abstraction.

What does it mean to sustain evaluation?

Are you referring to something along the lines of a coroutine?

>
> There is a strong correlation between lambda processes and processes
> in languages like Termite Scheme and Erlang.   Effectively if we equip
> a language with lambda processes and include within these processes
> some primitives for handling events, you have, IMO, a model for
> concurrency.

Can you provide an example of two lambda processes interacting?

Brian Spilsbury

Feb 7, 2011, 10:57:10 PM
to qil...@googlegroups.com
> Indeed, and it would be great if Shen addresses this capability in
> a native and powerful way.  My point is that since it's functional
> and some of its target platforms are also functional we should see if
> we can also address the SMP problem/opportunity.
>

It's always trivial to deal with an additional core as another machine
that is cheap to talk to.

Dealing with remote machines as additional cores is tricky.

I guess that's the point I want to make with respect to scaling.

Dominikus

Feb 8, 2011, 2:33:59 AM
to Qilang
Dear Mark,

I believe this to be an important and very interesting discussion. I
would like to rephrase your thoughts just in order to make sure that I
catched your point.

Your example is the following expression:

(lambda x ((lambda y y) a))

Your argument is that current functional programming practice has the
following evaluation model: the outer lambda abstraction stalls the
evaluation of the inner lambda application since the outer lambda
expression waits for being applied to an argument. (A smart compiler
would possibly recognize the application of the inner lambda and
resolve it; but I think that is not your point.) The lambda calculus
however lets you reduce the application of the inner lambda even
though the outer lambda abstraction is not yet applied to something.

Does that correctly summarize your line of argumentation?

If so, how does your argumentation continue regarding concurrency? In
which way does the lambda calculus help you solve the concurrency
issue?

Cheers,

Dominikus

Mark Tarver

Feb 8, 2011, 5:00:03 AM
to Qilang
**Generally in this discussion; I want to listen, rather than drive my
views home.**

But since people asked.

The point I am making is very simple. The lambda calculus already
contains abstractions which sustain input and nevertheless
execute - these I call lambda processes. Hence the dilemma posed in
my question is resolved because the dilemma applies to a limited
representation of lambda calculus in the computer. What is
conventionally called a closure is simply a lambda process which has
ceased evaluation.

There are 3 basic properties of these processes.

1. A lambda process is just like an abstraction (closure) in that we
should be able to apply it to an argument and get a result.
2. Just like a regular abstraction, we should be able to create
any lambda process and move on without the whole of our program
getting caught in any non-terminating computation inside the
process.
3. When the body of a lambda process has returned a normal form,
it then behaves just like a closure.

If you allow lambda processes into your language and equip them with
some additional primitives for handling events then you have a
representation for handling concurrency. My initial experiments
suggest the set of these primitives may be quite small.

But as I have said, at the beginning, I want to listen to other
people's ideas.

Mark

Mark Tarver

Feb 8, 2011, 5:46:27 AM
to Qilang
> > The program is not a virtual machine, but the machine running the
> > program is a (virtual) machine.

What I have got here is

process = virtual machine
virtual machine = what runs the program

ergo

process = what runs the program

Generally I'm not sure where that gets us; I don't think processes run
programs, rather that a process is the running of a program. A
virtual machine provides perhaps the pattern to which the process
conforms. I think this is going in circles. But to return to my
questions which were

a. What are processes?
b. What are the fundamental primitives for describing processes?

If a. runs into a marsh, is b. the better question to ask? In other
words we characterise processes by describing the primitive operations
necessary to handle them and the properties of those operations.
Whatever satisfies these operations we can describe as a process.
Just as for instance, in number theory, the question "What is a
number?" does not get us very far, but "What are the fundamental
properties of numbers we need to assume to recover number theory?"
gets us much further. We can then answer the question "What is a
number?" by stating that any domain that satisfies our axioms
concerning numbers is a domain of numbers.

Hence to invert the order of the questions; what are the fundamental
primitives for describing processes?

Mark

vasil

Feb 8, 2011, 7:02:23 AM
to qil...@googlegroups.com
We can think of processes like this:
process = vm + lambda + i/o

The vm computes the lambda and has the following states:
1. Running
2. Waiting for i/o
3. Stopped (because of an error, or because it finished in the ordinary way)

i/o may be done via message passing, using the following basic primitives:
1. recv - blocking
2. send - non-blocking
3. peek - like recv, but non-blocking

Or we can do i/o via stdin and stdout and use functions for streams.

Some primitives to work with a vm as a first-class object:
1. (run (lambda (stdin stdout) ... )) (stream --> stream --> A) --> [vm A]
2. (stop VM) [vm A] --> boolean
3. (vm? X) A --> boolean
4. (vm-state? VM) [vm A] --> vm-state, where vm-state is one of: running,
error, finished
5. (vm-result? VM) [vm A] --> A, throws an exception if the vm is in the error
state and blocks while the vm is running

If we introduce the vm as a first-class object then we get type-secure
concurrency.

The VM in this case is just an abstraction, and all the low-level VM machinery
may be implemented in many ways in the underlying platform.

For instance, a VM could be implemented as an ordinary OS process running
CL with a Shen image, with TCP/IP communication channels.

Or a VM could be implemented using threads and shared memory. Or a VM could
be built on top of green threads in Scheme.

Later a low-level protocol for communication between VMs may be defined
to allow interoperability between VMs running on different host systems.

Vasil.

Mark Tarver

Feb 8, 2011, 9:22:10 AM
to Qilang
> Isn't this just equivalent to partial evaluation?

Right now, the work is in progress and far from publication. I have not,
to use my analogy, recovered the theorems of number theory from my
model of lambda processes. Systematically I am adding primitives and
seeing what I can compute.

What I have determined so far is this:

That partial evaluation is probably not the right model for lambda
processes.

QUOTE
The basis here is that the evaluator refuses to evaluate in any way
any unevaluable expression; where an expression is unevaluable if

1. It is an unbound variable.
2. It is an expression containing an unbound variable.

This is a very strong model of evaluation; too strong in that it
prevents lambda processes that use bound variables from being computed
at all. A different model must be found.

One model would be a form of partial evaluation, call it
"opportunistic evaluation", that evaluates an expression if it is free
of unbound variables and leaves the rest alone. Unfortunately this
will not do; in the expression

(if (= X 0) (blow-up-the-powerstation) (go-for-a-beer))

the expression (= X 0) is not evaluable but (blow-up-the-powerstation)
and (go-for-a-beer) are; consequently the computer blows up the power
station and goes for a beer.
UNQUOTE

Based on an operator /! for lambda processes and two primitives while
and bound? I have found that:

1. Lambda processes can represent parallelism e.g. parallel or.
2. They can represent event loops that can be terminated on command.
3. They can be set up to have memory of past events.
4. There is a mapping of this model into message passing systems
like Erlang and Termite Scheme.

But it is still early days.

Mark

Mark Tarver

Feb 8, 2011, 1:58:29 PM
to Qilang
> Regarding this, “In functional programming, either we have executing
> applications which are deaf to input or functions which listen to input but
> do not execute.”  Can we not escape this conundrum by thinking in
> functional-reactive terms?

The online lit. on functional reactive programming is sparse; would
you care to elaborate? I see FRP takes objects called behaviours as
first class objects. Could you explain? In what way do you see FRP as
escaping the dilemma posed? (Not saying the dilemma is unsolvable of
course).

Mark

Mark Tarver

Feb 8, 2011, 2:25:09 PM
to Qilang
> Can you provide examples of this paralysis and deafness, respectively?

Yes; any closure is paralysed. For example:

(/. X (event-loop)) does not execute although it will receive inputs.
However the 0-place application (event-loop) executes but is deaf to
input.

The idea of a lambda process is that it combines features of both an
application and a closure and corresponds to an abstraction in the
lambda calculus. They can execute in their body, but receive inputs
through the bound variable.

The idea is that lambda processes should be creatable in just the same
way as closures; so the fact that a lambda process encapsulates a
possibly non-terminating or long-lasting computation
is irrelevant to the computation in which it is embedded - it still
moves forward just as if a closure had been created. When a lambda
process ceases, having found a normal form, or perhaps being halted,
it becomes a closure. This requirement is ideally suited to
concurrent execution and in fact requires it.

Let us suppose that we use (/! X Y) to denote a lambda process. This
simple notation, despite having only one primitive, enables us to
define
one version of parallel and.

(p-and P Q) = (let Proc1 (/! X P)
                   Proc2 (/! Y Q)
                   (and (Proc1 0) (Proc2 0)))

The lambda processes Proc1 and Proc2 run concurrently and terminate when
P or Q evaluates to a normal form PN or QN, returning a closure;
we then pull out PN and QN by applying the abstractions to zero (the
input here is arbitrary) and combining the results.  This is the
simplest form of a lambda process - where the bound variable is not
found in the body of the process. Effectively then, the creation of N
such processes in the machine is equivalent to an order to run N
processes concurrently.

Lambda processes can do more than this if you allow process variables
to occur in the body - thus in (/! X Y) X is the process variable.
The trick then is to enrich these processes by a notation that allows
the programmer to do the necessary conjurations, such as creating
interruptible event loops and processes with stateful memory.

Mark

Brian Spilsbury

Feb 8, 2011, 11:06:34 PM
to qil...@googlegroups.com
> (p-and P Q) = (let Proc1 (/! X P)
>                          Proc2 (/! Y Q)
>                          (and (Proc1 0) (Proc2 0)))
>
> The lambda processes Proc1 and Proc2 run concurrently and terminate on
> the evaluation of P or Q to a normal form PN or QN returning a closure
> and we pull out PN and QN by applying the abstractions to zero (the
> input here is arbitrary) and comparing the results.  This is the
> simplest form of a lambda process - where the bound variable is not
> found in the body of the process.  Effectively then, the creation of N
> such processes in the machine is equivalent to an order to run N
> processes concurrently.

This sounds like a future.

> Lambda processes can do more than this if you allow process variables
> to occur in the body - thus in (/! X Y) X is the process variable.
> The trick then is to enrich these processes by a notation that allows
> the programmer to do the necessary conjurations; such creating
> interruptable event loops and processes with stateful memory.

This sounds like a combination of a future and a coroutine.

Is this accurate?

Brian Spilsbury

Feb 10, 2011, 4:29:01 AM
to qil...@googlegroups.com
> One model would be a form of partial evaluation, call it
> "opportunistic evaluation", that evaluates an expression if it is free
> of unbound variables and leaves the rest alone. Unfortunately this
> will not do; in the expression
>
> (if (= X 0) (blow-up-the-powerstation) (go-for-a-beer))
>
> the expression (= X 0) is not evaluable but (blow-up-the-powerstation)
> and (go-for-a-beer) are; consequently the computer blows up the power
> station and goes for a beer.
> UNQUOTE

These are normally referred to as "futures".

You can consider a side-effect as being like an unbound variable, which
can be bound by attempting to use the result of the expression.

So (blow-up-the-powerstation) might well be able to proceed with all
of the functional ancillary work involved in that task, and then block
on the detonation until the future is realized by attempting to get
its value.

>
> Based on an operator /! for lambda processes and two primitives while
> and bound? I have found that:
>
> 1.  Lambda processes can represent parallelism e.g. parallel or.
> 2.  They can represent event loops that can be terminated on command.
> 3.  They can be set up to have memory of past events.
> 4.  There is a mapping of this model into message passing systems
> like Erlang and Termite Scheme.

This makes a lambda process sound very much like a Qi interpreter ...
which is what I was talking about earlier.

One thing that I think has been neglected is the locality of a process
-- not all resources are equally available to all processes in
concurrent systems, which is why I suggested using a system of
addresses.

Since it is clear that we are now talking about an essentially
procedural system, we need to consider limits on the propagation of
side-effects -- can side-effects be propagated between processes
implicitly or only as messages?

I think that both approaches are useful, but produce qualitatively
different results -- where I want optimistic evaluation (e.g., a
future), then implicit propagation of side-effects is useful and
necessary in a procedural system.

Where I want to deal with effectively a remote Qi interpreter, then I
think that the implicit propagation of side-effects would be a very
bad idea.

There is a third case, which corresponds to generators or coroutines
-- where I want to be able to do a partial computation, yield a
result, then later continue possibly with additional input.

Maybe the distinction here can be between processes with addresses and
processes without addresses -- the address implying a limit to the
propagation of side-effects, requiring messaging to escape?

pbacchi

Feb 10, 2011, 10:32:27 AM
to Qilang
I don't even pretend to understand the process calculi that are
talked about in the following paper, but you probably will.

Uniform Confluence in Concurrent Computation (Unabridged)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.6085

If I understood what I read, this is the process calculus that is used
as the basis for the abstract machine of the Oz programming
language.

Also these papers might be of interest.
http://www.mozart-oz.org/papers/

Philip M. Bacchi

Mark Tarver

Feb 10, 2011, 3:50:10 PM
to Qilang
Many of the questions revolve around 3 aspects:

a. the proper treatment of lambda process bodies; the nature of the
evaluation that takes place in them.
b. the primitives necessary to use in and on lambda processes.
c. the nature of applying a lambda process to an argument. What
happens?

Answering these questions at length requires some space. What follows
here is brief and directed to your post.

Regarding a., the question really concerns the proper treatment of
process variables. They have no value but can assume a future
value. Are they futures? In the model I am investigating, not
really; they cannot be an object of evaluation until they are bound.
We cannot add the promise of an integer to 3. This raises the problem
of how lambda processes are to execute in the presence of process
variables.

An answer is that a lambda process may test a process variable to see
if it is bound before executing. If we introduce two primitives:

bound?
while

bound? applied to a variable produces 'true' if it is bound and false
otherwise. 'while' can be defined thus

(while P Q R) = (if P (do Q (while P Q R)) R)

This is not definable in a strict applicative language and requires a
macro. Let unbound? be defined as

(unbound? X) = (not (bound? X))

then a simple event-loop that terminates on receiving any message can
be represented as:

(/! X (while (unbound? X)
             (look-for-event)
             halted))

or a more sophisticated one that needs a specific value (e.g. halt)

(/! X (while (or (unbound? X) (not (= halt X)))
             (look-for-event)
             halted))

In these event loops the process of evaluation is precisely the default
one used in Qi; no special evaluation is assumed.  Thus a lambda process model
with 3 primitives

/!
while
bound?

is capable of

1. Representing parallel processes like parallel and etc.
2. Simple event loops that terminate on command.
3. Creating processes with event memory - that can remember the past.
This is less obvious. Requires another post.

The second question: are lambda processes like co-routines?  Can one
enter them, exit, and return? In the examples given, no. With more
advanced cases and additional primitives I'm studying, I believe
yes.

Mark

vasil

Feb 10, 2011, 6:57:43 PM
to qil...@googlegroups.com
> a. the proper treatment of lambda process bodies; the nature of the
> evaluation that takes place in them.

I think that lambda process bodies should not be distinguished from
ordinary lambdas.

> c. the nature of applying a lambda process to an argument. What
> happens?

(let Proc1 (/! X (while (unbound? X) (thread-yield!) (+ 1 X)))
  (let Result1 (Proc1 123)
    (Result1 0)))

The first call of the lambda process passes an argument to the process (X
becomes bound) and returns a lambda which is called when we want to get
the result of the process.

> /!
> while
> bound?

That is insufficient. There is a need for a synchronization primitive like
wait-until-bound.

(let Proc1 (/! X (do (wait-until-bound X) (+ 1 X)))
  (let Result1 (Proc1 123)
    (Result1 0)))


There are other questions:
1. How do we deal with variables captured by the process body?
(let A 1
  (let P (/! X (+ X A))))
Do we treat captured variables like captured variables in ordinary lambdas?
Can a process modify captured variables?

2. Can a process access global variables using (value Var)?
Can a process write to global variables?
Vasil

Mark Tarver

Feb 12, 2011, 9:09:52 AM
to Qilang
I think we need to be clear on how lambda processes behave. There are
quite a few models of lambda processes; so deciding to represent
concurrency by lambda processes really is the beginning of a journey.

A lambda process is an active version of a closure.  That is, we expect
lambda processes to behave like closures in many ways because a lambda
process and a closure are both instances of an abstraction. We can
compare the relationship between a lambda process and a closure to the
relationship between a film and a still taken from the film. A
closure is the freezing of a lambda process.

We can represent the behaviour of a lambda process using sequent
calculus notation; let X ==> Y mean that X evaluates to Y. The first
rule should be

Rule 1:

Y ==> Z;
______________________
(/! X Y) ==> (/! X Z);

A lambda process (/! X Y) evaluates to another lambda process (/! X Z)
when the body of the lambda process Y evaluates to Z.

Second, if there is nothing left to evaluate within its body -
evaluation has completed, and the process has ended, then the lambda
process is to all effects a closure.  This is just a formalisation of
the idea that 'a closure is the freezing of a lambda process'.

Rule 2:

if Y is a normal form
______________________
(/! X Y) ==> (/. X Y);

I think that these rules are unexceptionable.

The other rules relate to the meaning of ==> itself, the primitives
and the proper treatment of lambda process applications. There are
quite a few surprises. I'll discuss some of those issues in next
post.

This is likely to be a long thread, so I'll open a page on Lambda
Associates to capture the main points of discussion so that the
progress of the discussion is clear.

Mark

Mark Tarver

Feb 13, 2011, 6:27:33 AM
to Qilang
I remarked in the above post that lambda processes can be set up to
have memory, but did not provide any examples.  Here is one from my
notes.

QUOTE
One characteristic of concurrent processes is we want them to be
stateful objects that possess a memory of what has happened to them.
Lambda processes don't seem to work at this level, because their
description (but not their execution) does not change. They seem to be
in the position of the Bourbon kings of France who, as Talleyrand put
it, 'had learned nothing and forgotten nothing'.

However this is misleading; lambda processes can remember if we set
them up to do so. The trick lies in understanding that the lambda
process A does not itself remember, but rather that A sets up another
lambda process B in which the remembered data is part of the
definition of B.

Let us see this by setting up a lambda process that drives an event
loop and responds to the third order to halt an event-loop by halting
it.

We begin by creating a closure inside which the lambda process will be
embedded. I'll simplify by assigning this to the function
event-loop-closure, effectively giving it a name.

(define event-loop-closure
  -> (/. C <put lambda process here>))

We will see that the variable C will be a counter for the number of
times that the lambda process is called upon to halt. Now we need to
fill in <put lambda process here>; here is what goes in

(/! X (while (unbound? X)
             (look-for-event)
             (if (= C 3) halted <create new process>)))

The new process is created and the old process returns the new process
as a value. Now finally <create new process> is defined. It applies
(event-loop-closure) with C incremented by 1.

((event-loop-closure) (+ C 1))

So the whole expression is

(define event-loop-closure
  -> (/. C (/! X (while (unbound? X)
                        (look-for-event)
                        (if (= C 3) halted ((event-loop-closure) (+ C 1)))))))

Seeing It Work

Let's see it work. We can set the process going by applying the value
of (event-loop-closure) to 1.

((event-loop-closure) 1)

which creates the lambda process

(/! X (while (unbound? X)
             (look-for-event)
             (if (= 1 3) halted ((event-loop-closure) (+ 1 1)))))

which executes until it is supplied with an input.  When it is, it evaluates
the body to

(if (= 1 3) halted ((event-loop-closure) (+ 1 1)))

Since (= 1 3) is false this evaluates to ((event-loop-closure) 2)
which creates a new lambda process with C incremented by 1; namely

(/! X (while (unbound? X)
             (look-for-event)
             (if (= 2 3) halted ((event-loop-closure) (+ 2 1)))))

If this is supplied an input, by parity of reasoning the following
process is created

(/! X (while (unbound? X)
             (look-for-event)
             (if (= 3 3) halted ((event-loop-closure) (+ 3 1)))))

which stops when sent a message returning halted. The whole thing
evaluates to

(/! X halted)

which since the body is a normal form is equivalent to the closure

(/. X halted)

Now we can design a top level which contains an event loop which works
until pinged 3 times with exit and which then shuts down.

(define repl
  -> (repl-help ((event-loop-closure) 1)))

(define repl-help
  L-process -> (let I (do (output "~%>") (input))
                 (if (= I exit)
                     (repl-help (if (= L-process halted) halted (L-process 0)))
                     (do (print (eval I)) (repl-help L-process)))))
UNQUOTE

I'm going to talk in the next post about models of application and
evaluation for lambda processes because there are some very important
issues which this account has glossed over and which must be dealt
with.

Mark

Brian Spilsbury

Feb 14, 2011, 3:31:51 AM
to qil...@googlegroups.com
Ok, I've been looking through the examples.

(/! X (while (unbound? X)
             ; optimistic evaluation -- wait
             (look-for-event)
             ; yield the final result, which is another lambda process,
             ; to which we transfer control
             (if (= 1 3) halted ((event-loop-closure) (+ 1 1)))))

(define repl
  -> (repl-help ((event-loop-closure) 1)))

(define repl-help
  L-process -> (let I (do (output "~%>") (input))
                 (if (= I exit)
                     (repl-help (if (= L-process halted) halted (L-process 0)))
                     (do (print (eval I)) (repl-help L-process)))))

So L-process has either already yielded its final result (which turns
it into halted, effectively), or we bind X in it to 0, causing it to
yield as its final result either halted or another lambda process.

In this scheme it appears that we cannot access the result of anything
in the 'optimistic evaluation' section, so there is never any point in
putting anything other than (look-for-event) there.

This looks similar to using futures that are forced by application,
and do nothing until forced -- is this fair?

I'd like to suggest a more practical example -- asynchronous i/o.

In my application I know that I will need the result of a database
lookup, but I have something to do before I can make use of it, so I'd
like the database lookup to be able to run in parallel while I'm doing
this.

I can see how I could use futures or a lazy list to achieve this, but
how could I use a lambda process to do it?

e.g., (using a made up lisp)
(let ((io (future (look-up-database-key "foo"))))
  (let ((value (do-something)))
    (+ value (force io))))

The next concern that I have is that since the lambda process does not
retain identity between steps, it is not possible to have a three-way
conversation with a process -- this is also true of lazy lists --
since the last reply becomes essentially a capability for continuing
the conversation.

I suspect that this makes this model almost useless for anything other
than what futures do, but I'm probably missing something.

So, as a second practical example, how could I implement the database,
as a lambda process, that I looked up in the first example, such that
it could service multiple clients, and such that they could use it to
hold an indirect conversation?

I suspect that you would need to do it as you would for lazy lists,
and have them conspire via side-effect with some central location.

But since this model does not represent processes as machines, there
is nowhere for those side-effects to be contained within ...

Mark Tarver

Feb 15, 2011, 6:16:21 AM
to Qilang
Generally I need to get back to coding up Shendoc 0.2 this month, so I
can't spend
too much more time online. I'll return to this thread from time to
time.

> In my application I know that I will need the result of a database
> lookup, but I have something to do before I can make use of it, so I'd
> like the database lookup to be able to run in parallel while I'm doing
> this.

OK; this brings me to the remarks I made earlier.

QUOTE
The other rules relate to the meaning of ==> [evaluation] itself, the
primitives and the proper treatment of lambda process applications.
There are quite a few surprises. I'll discuss some of those issues in
next post.
UNQUOTE

A lambda process application is the application of a lambda process;
and the computation of such a process can be either destructive or non-
destructive depending on the model. This has not so far been
discussed.

A destructive application is one where the description of the lambda
process is changed or the process is halted as a result of the
application. All the examples given in this thread by me have been
based on the destructive model with the exception
of the parallel and case. Thus

(/! X (while (unbound? X)
             (event-loop)
             halted))

is halted by the reception of any input because we assume that the
expression

(while (unbound? X)
       (event-loop)
       halted)

evaluates to halted if X is bound. Hence the result is (/! X halted)
which becomes
(/. X halted) since halted is a normal form.

Destructive applications have the same relationship to lambda
processes as RPLACA and RPLACD in Lisp have to lists.  They allow the
manipulation and examination of their objects subject to the
destruction of those objects in the course of so doing. Hence they
are not suited to programming in the large. However this is not a
weakness of lambda processes (or lists), but a weakness of one model
for handling them.

In my notes I observe.

QUOTE
According to one interpretation of Heisenberg's Uncertainty Principle,
we cannot exactly determine the position and momentum of an electron
by observation without interfering with at least one of those
variables. That is, our attempts to observe a process (the behaviour
of an electron) can actually interfere with that process itself. This
same problem arises with lambda processes. A lambda process cannot
return a value unless it returns a normal form; but the act of
returning a normal form causes it to cease to be a process and become
a closure.
UNQUOTE

Accordingly I looked for a non-destructive version of application ?>.

There are several models for handling this; two are based on

1. Copy
2. Pause

The idea of the first is that the lambda process is copied
before being applied, and of the second that it is paused before being
applied.

To illustrate the techniques, I used the following example sent in a
letter to Willi, reproduced by permission - it amused him.


The Parable of Washing Up and Drinking Wine
_______________________________________

My friend Willi is an oenophile; sometimes he does the washing up
after cooking for his partner Anke. Given a choice between drinking
wine and washing up, Willi, like most men, prefers to drink wine. He
reads about lambda processes and likes them. Behaving like a lambda
process Willi diligently washes up until his partner asks him how he
is doing; whereupon he says 'fine', ceases to do the washing up and
assumes a normal form where if asked to do anything he drinks wine.

Anke does not like this result and decides to apply our model of
lambda processes using copy. Now when she asks Willi how he is doing,
she in fact first replicates him, producing a Willi B from a Willi A
and asks Willi B how things are. The Willi B now says 'fine' and sits
down waiting to drink wine while the original Willi A continues to
wash the dishes.

However it is not so simple, because between creating Willi B and
asking Willi B how he is doing, there are in fact two Willis doing the
dishes. Assuming each Willi occupies his own memory and does not share
resources this is fine. But having two Willis doing the dishes
competing for the same global resource could cause problems e.g. one
Willi wants to dry a dish while the other wants to wash it. The result
is chaos in the kitchen.

Anke does not like all these Willis cluttering up the kitchen. What
she really wants is one Willi who answers questions and doesn't do
anything until he is told to. She realised that a paused Willi does
not go through these dynamic and unsettling alterations that an active
Willi does. Accordingly she creates the following
procedure based on the pause model.

1. The first Willi, Willi A is paused.
2. Willi A is asked how he is doing and she remembers the answer.
3. Willi A is unpaused.
4. The remembered answer is taken away.

(define ?>
  X L-process -> (let Pause (pause L-process)
                      Answer (Pause X)
                      Unpause (unpause Pause)
                      Answer))

Hence the copy primitive disappears; and pause and unpause take its
place.

A third possibility, which may be the best, is to combine the two
models and arrange that when a lambda process is applied it sheds a
closure and the closure is applied to the input. Hence the lambda
process continues to process without pause even when applied.

Mark

Mark Tarver

Feb 17, 2011, 11:19:58 AM
to Qilang
I should also say what I have already said.

1. This is a listening thread for me; feel free to suggest *your*
ideas. You don't
have to follow my ideas.
2. When putting your ideas forward, bear in mind the questions I
opened the thread with:

* What model is suitable for concurrency?
* What should be the fundamental primitives of concurrency?

The second is important.

Don't be put off because I do not respond. It does not mean I agree/
disagree/think it trivial. It simply means I am busy right now. I do
read everything posted.

Mark

Anthony Di Franco

unread,
Feb 18, 2011, 5:17:11 PM2/18/11
to qil...@googlegroups.com
First, I don't think I fully understand what you mean by “functions which listen to input but do not execute” – seems to me like you are talking about some variety of closure-based construct I'm not familiar with but I can't even speculate much. However, to answer your question, FRP makes the state of the program a function of a lazily-observed input stream from the outside in roughly the same way it might be a function of a lazily-computed not-a-priori-bounded list of another nature.

Also, to revisit my original mention of persistent data structures, it's not clear to me from the descriptions of your concept of lambda processes that have gradually unfolded on this list whether you intend the sorts of issues usually addressed by the use of persistent data structures to be handled implicitly by the lambda process idea, perhaps by representing all data in the form of a lambda process at some level. I hope that in your efforts to document the lambda process idea more coherently you can address the question of sharing of state with this in mind.

Mark Tarver

unread,
Feb 18, 2011, 6:52:34 PM2/18/11
to Qilang
> First, I don't think I fully understand what you mean by “functions which
> listen to input but do not execute” – seems to me like you are talking about
> some variety of closure-based construct I'm not familiar with but I can't
> even speculate much.

I see. When you said that you thought your solution escaped the
conundrum, I took it that you understood the problem posed. Now you
say you don't understand the conundrum at all. Hum. OK.

I don't think I could make it much clearer than I have here in this
thread. I've given several examples above. An application executes,
but does not receive input during its execution. A closure receives
an input but does not sustain evaluation within its body until it
does.

The conundrum can be escaped by stating that concurrency requires some
extra provision or that the classical lambda calculus does not help us
understand concurrency and that we need to look elsewhere. This is
one strategy. It could be right.

** In general I'm open to suggestions of all kinds but they at least
have to address clearly the questions posed in my top post. I'll
repeat the important one. **

* What are the fundamental primitives of concurrency?

Some examples of your favoured approach w.r.t. even the simple
examples I've posted here would be good (or-parallelism, simple event
loop, event memory).

Telling me that a process is e.g. a machine/virtual machine/gum drop/
based on FRP does not help unless this idea is fleshed out and
illustrated by compelling examples. We cannot program with a
metaphor. Why do you favour your solution over others? Give the
primitives of your model and the equations governing their behaviour.
Show some examples.

Perhaps this open thread is not the right medium. I suggest anybody
who wants to try the question mail me. This removes the pressure of
having to recite on stage.

Mark

Mark Tarver

unread,
Feb 19, 2011, 7:22:10 AM2/19/11
to Qilang
> As I see it, a function is an invariant mapping from domain to range.
> Being invariant, it exists outside of time, which removes it from the
> domain of concurrency which is concerned with operations that occur
> over time. Which means that in this conversation we must be talking about not
> functions but about machines used to compute functions, which I think
> is a very important distinction.

I think it is too strong to say that because functions are invariant
over time they cannot represent time or change. A similar charge was
laid on FOL, which was designed by Frege to represent a timeless
world. The solution adopted by Quine was to quantify over temporal
durations (equivalent to accepting time as a parameter in FPL terms)
while others like Prior voted for incorporating tense into logic and
treating change seriously. This is not to argue for either approach,
but simply to point out that saying a function is timeless does not,
in itself, settle this debate. However, on to machines, which are more
relevant.

> A process is a (virtual) machine.

There is something missing from this account. Machines do not do
anything unless instructed. A machine is not a process. A machine
enacts or carries out a process. Merely creating a machine within a
program without giving it anything to do does not create a process.

Perhaps what you want is:

process = machine + instruction

> Qi defines a virtual machine for Qi code to run in, so this is our starting position.
> Qi defines a basic input port via (input) and an output port via interleaved (print) and (output) operations.

Piecing your mail together I get this:

A machine is a quadruple <S, I, O, E>

1. An internal state S.
2. An input port (input stream?) I.
3. An output port (output stream?) O.
4. An evaluation method E for evaluating objects in I and S whose
   results can be placed in O or used to change S.

An instruction is therefore an object which can be evaluated by a
machine and which falls within the range of E.

Is that close to your idea?
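
Schematically, and only as a sketch: if the machine is held as a list
[S I O] and each instruction is assumed to be a one-place function of
the state returning [New-State Output] (both assumptions for
illustration, not part of the proposal), then E might look like:

(define step
   [S [Instruction | I] O] -> (update (Instruction S) I O))

(define update
   [New-State Output] I O -> [New-State I [Output | O]])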

Mark

Anthony Di Franco

unread,
Feb 19, 2011, 1:26:09 PM2/19/11
to qil...@googlegroups.com
I have no reputation to maintain nor any other reservations that would lead me to prefer working in private and springing things upon the world fully formed rather than working in public. As long as you don't resent the effects on the information density of this thread.

So then, to clarify the substance of the main discussion, I don't see a conundrum because as far as I can tell in FRP one has functions that are quite functional responding to input so long as the input is given in the form of a stream, which as far as the functions are concerned looks like any other lazy functional stream. Even though I don't understand the internal structure of one half of a supposed conundrum with symmetrical structure, (“functions which listen to input but do not execute”) I think I am able to settle the conundrum by providing an example of functions which both listen to input and execute, which is what happens in FRP, where functions get their input off an input stream which looks like any other stream in the functional world so far as they are concerned.

Not a metaphor (actually a fully-worked-out and implemented mechanism in current use in various places), not an approach I necessarily favor over any other, just something I offer as a supposed counterexample to a supposed conundrum mainly to clarify the nature of said conundrum, but maybe also as an escape or the start of one. If the words in their usual meanings are all there is to it, then here are functions that execute while listening to input. They do so by consuming elements lazily from an input stream.
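
To make that concrete in the thread's own notation (a sketch only: the
stream representation [Element | Frozen-Rest], built with freeze and
forced with thaw, and the function sum-stream are illustrative
assumptions rather than part of any FRP library):

(define sum-stream
   Total [] -> Total
   Total [X | Rest] -> (sum-stream (+ Total X) (thaw Rest)))

e.g. (sum-stream 0 [1 | (freeze [2 | (freeze [])])]) evaluates to 3,
and the same consumer keeps executing for as long as the (possibly
unbounded) input stream keeps yielding elements.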

I think I will learn a great deal about exactly what you mean if you say why (if indeed) consuming elements lazily from an input stream is not listening to input in the sense you care about.

Brian Spilsbury

unread,
Feb 19, 2011, 12:07:22 PM2/19/11
to qil...@googlegroups.com, Mark Tarver
> Piecing your mail together I get this:
>
> A machine is a quadruple <S, I, O, E>
>
> 1. An internal state S.
> 2. An input port (input stream?) I.
> 3. An output port (output stream?) O.
> 4. An evaluation method E for evaluating objects in I and S whose
>    results can be placed in O or used to change S.
>
> An instruction is therefore an object which can be evaluated by a
> machine and which falls within the range of E.
>
> Is that close to your idea?

Yes, but I think there is one part missing -- address -- a mechanism
by which we can locate a machine, and by which we can estimate the
cost for a given machine to perform some action (since not all
machines have equal access to all resources).

We can also use addresses to handle basic resource consensus -- as we
use dynamic variables in CL to do.

If sending a message to an address where there is no machine currently
can (optionally) establish a machine at that address to receive that
message, then we can use this to establish agents for coordination --
a trivial example would be a semaphore machine.
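
As a purely illustrative sketch (the representation is made up here,
and blocking or queueing of waiters is omitted), the evaluation rule
of such a semaphore machine could be as simple as:

(define semaphore-step
   Count acquire -> (- Count 1) where (> Count 0)
   Count release -> (+ Count 1))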

I think that a quintuple <A, S, I, O, E> is probably sufficient to
describe a qi-machine.

The critical point of thinking in terms of machines is that it acts
as a limit upon the ability to reason about side-effects, and requires
side-effects propagating beyond that limit to be explicitly expressed
in terms of messages -- and we cannot reason about messages in the
same way because messages are fundamentally unreliable.

> Mark
>

Mark Tarver

unread,
Feb 19, 2011, 5:29:06 PM2/19/11
to Qilang
> I have no reputation to maintain nor any other reservations that would lead
> me to prefer working in private and springing things upon the world fully
> formed rather than working in public.

This remark is unnecessarily snide in the context of a thread which in
fact is inviting open discussion in public on an important issue. It
spoils the rest of what you have to say which is useful and
interesting. I regret you chose to open your response this way.

The tenor of your reply (which actually shows you do grasp the issues
I was talking about) is that FRP transcends the dilemma by using
applications which carry lazily evaluated objects (viz. streams) as
arguments, which are capable of receiving time-dependent information,
and hence as applications are receptive to input.

A good answer; pity about the tone in which it was written.

Mark
Message has been deleted

Mark Tarver

unread,
Feb 21, 2011, 6:29:25 AM2/21/11
to Qilang
I retitled this to put some structure into a long thread. Sorry about
the double post.

> I hope you can accept the remark as such, and
> accept my apologies for the misunderstanding, and I would still very much
> appreciate your help in understanding what is going on here.

Let's move past that.

FRP as you've described it looks like the technique used in Qi/tk to
provide reaction to events within the framework of a CLisp where
threads etc. are not supported. The implementation carries around
both the standard input stream and the program stream to TCL/tk and
polls each one looking for data. The implementation runs its
application, is sensitive to time dependent input and provides
'concurrency' though the CLisp is completely standard and the machine
on which it ran was uniprocessor.

This brings me to distinguish between 'concurrency' with a small 'c' -
the ability to support this sort of application where, to the
observer, N (N > 1) processes are supported at the same time, and
*parallelism* which requires that a task be divided up between
different processors.

Both these aspects are part of what you could call 'big C concurrency'
or 'Concurrency'. I guess that what we started out discussing here on
this thread was big C Concurrency. A theory that would provide a
representation suitable both for parallelism and the small c
concurrency of running background processes.

Somebody might criticise FRP as you have described it by saying that
it is really giving a name to what we already know how to do, and that
it is a small c concurrency theory but not a big C concurrency
theory.

A lot turns here on how you consider parallelism should be implemented.
If you think that parallelism on multicore systems is something to be
handled *entirely* by the compiler (implicit parallelism) then you
might reasonably argue that a small c concurrency approach answers all
the questions and FRP as described is enough if a theory of implicit
parallelism is provided. I think that is a big 'if' though.

Mark

Brian Spilsbury

unread,
Feb 21, 2011, 8:19:27 AM2/21/11
to qil...@googlegroups.com, Mark Tarver
I think that you might more usefully look at systems that cross
boundaries of reliability and those that do not.

In the case of uniprocessor and multiprocessor systems we normally
consider these reliable, because they generally fail as a whole (e.g.,
our process or machine dies).

In this case the semantics of message passing can be equivalent to CPS
application or assignment -- we don't need to consider failure because
if it fails, so will everything else.

In the case of a message between machines, we need to consider the
possibility that the message was corrupted, not delivered, or that it
was delivered but the other system failed.

Within a reliable system we can use transparent mechanisms such as
futures, and we need to consider the propagation and ordering of
implicit side-effects (if any).

Across an unreliable system we need to make communication explicit in
order to support failure.

I'm not sure that trying to unify these is a sensible idea.

Mark Tarver

unread,
Feb 22, 2011, 10:50:23 AM2/22/11
to Qilang
On Feb 21, 1:19 pm, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
> I think that you might more usefully look at systems that cross
> boundaries of reliability and those that do not.

No; this is the wrong level of abstraction for this debate at this
time. This is arguing about the colour of the wallpaper before the
house is built.

I'd like to hear more about FRP. I'm interested. What Anthony said
was intriguing but left much unsaid. I'd like to hear more w.r.t.
what I was asking. He wrote:

"So then, to clarify the substance of the main discussion, I don't see
a
conundrum because as far as I can tell in FRP one has functions that
are
quite functional responding to input so long as the input is given in
the
form of a stream, which as far as the functions are concerned looks
like any
other lazy functional stream."

It's a good start; what is the difference (if any) between FRP and the
style of programming I was using in Qi/tk?

Mark

Brian Spilsbury

unread,
Feb 22, 2011, 9:08:50 PM2/22/11
to qil...@googlegroups.com, Mark Tarver
On Wed, Feb 23, 2011 at 12:50 AM, Mark Tarver <dr.mt...@gmail.com> wrote:
> On Feb 21, 1:19 pm, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
>> I think that you might more usefully look at systems that cross
>> boundaries of reliability and those that do not.
>
> No; this is the wrong level of abstraction for this debate at this
> time.  This is arguing about the colour of the wallpaper before the
> house is built.

This is fundamental, not superficial.

Mark Tarver

unread,
Feb 23, 2011, 10:45:44 AM2/23/11
to Qilang


On Feb 23, 2:08 am, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
> On Wed, Feb 23, 2011 at 12:50 AM, Mark Tarver <dr.mtar...@gmail.com> wrote:
> > On Feb 21, 1:19 pm, Brian Spilsbury <brian.spilsb...@gmail.com> wrote:
> >> I think that you might more usefully look at systems that cross
> >> boundaries of reliability and those that do not.
>
> > No; this is the wrong level of abstraction for this debate at this
> > time.  This is arguing about the colour of the wallpaper before the
> > house is built.
>
> This is fundamental, not superficial.

No; I'm going to overrule on this. The recent surge of interest in
concurrency has come about precisely because of the development of
multicore machines. Without parallelism (i.e. = utilising multicore),
there would be far less interest in concurrency and this thread would
not exist. It is very relevant to keep this in mind when looking at
notations for representing processes.

Re: FRP. The description Anthony gives is very open. I looked into
the background to get more information. It seems the essence of FRP
is that it enriches FPLs by two new kinds of data objects - behaviours
and events.

Behaviours are objects that fall under the type (behaviour A) where an
object B is of type (behaviour A) if it is an object of type A whose
value varies over time. There is one 2-place primitive 'at' where

at : (behaviour A) --> time --> A

The idea is that 'at' applied to a behaviour B and a real number T
representing a moment of time gives the value of B at time T.

Events are formed by applying predicates to boolean behaviours and
times. So say this description

'the value of the variable 'Counter' first changing to 0
after time T'

describes an event. Here I guess 'Counter' would be an object of type
(behaviour number). The event of 'Counter' first changing to 0 after
time Tm might be written in Qi as:

(denote-event (= Counter 0) Tm)

where 'denote-event' has the type (behaviour boolean) --> time -->
event. Here comparing a behaviour with something coerces the result
to be of type (behaviour boolean) rather than boolean.

There is one other primitive for events, 'occ' (occurs), which
returns the least time at or after Tm at which the event description
is true, i.e. the time the event happened.
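
Putting the primitives together (purely schematically, using the types
as given above):

(at Counter (occ (denote-event (= Counter 0) Tm)))

reads as 'the value of Counter at the first time at or after Tm when
Counter is 0', which should of course be 0; a more useful variant
would apply 'at' to some other behaviour at that same time.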

What they have done here is what I was alluding to in Tarver (29) of
this thread. They've abstracted away from change by introducing time
as a parameter; exactly the classical approach of Quine et al. in
first-order logic. Plus ça change, plus c'est la même chose.

Jolly interesting, Anthony. Many thanks for the contribution! I'm off
for a cup of tea and then a walk.

Mark

Anthony Di Franco

unread,
Feb 24, 2011, 3:32:50 PM2/24/11
to qil...@googlegroups.com
I've got a lot of remedial reading to do in and around this thread as usual, but since you're intrigued about the reification of time going on, let me mention something interesting I recently came across a propos of that, relative vs. absolute time, which gets into an, maybe the, interesting issue at the heart of reifying time.


Also more on the finer distinctions to be drawn in the primitives used in FRP:


Plus ça change, plus ça change. Or maybe Τα Πάντα ῥεῖ or 方丈記.

Mark Tarver

unread,
Feb 25, 2011, 8:10:44 AM2/25/11
to Qilang
Brian posted this, and at first it seemed a fruitless wrangle, but on
reflection I think I need to air it to make some important
distinctions.

>> This is fundamental, not superficial.
>
> No; I'm going to overrule on this.  The recent surge of interest in
> concurrency has come about precisely because of the development of
> multicore machines.  Without parallelism (i.e. = utilising multicore),
> there would be far less interest in concurrency and this thread would
> not exist.  It is very relevant to keep this in mind when looking at
> notations for representing processes.

QUOTE

Brian:

You have misunderstood, in that case.

What you are doing here is restricting concurrency to only within the
bounds of the local reliable system.

If that's the only kind of concurrency that you care about, and you do
not want to consider any form of distributed concurrency, then that's
fine.

Otherwise you're making a critical error in ignoring this.

I also think that you're making a critical error in thinking that
restricting parallelism to individual multicore systems is sufficient
for scalability in the near future, particularly with the increasing
shift toward cloud computing facilities.
UNQUOTE

**I think we have to be clear about what we are doing here; we are NOT
dealing with actual computers**, but with schematic notations for
representing concurrent processes and machines. At this stage we must
abstract away from questions about actual machines - including
resources, their physical location, remote connections, cloud
computing - in order to clarify the notation and the model.

For the moment, that is the level at which the debate is taking place
and at which contributions need to be made. People need to grasp this
to contribute. If we agree on some notation at a schematic level, we
can look further at the physical realisation and the issues raised by
cloud computing.

Mark

Mark Tarver

unread,
Feb 25, 2011, 8:11:23 AM2/25/11
to Qilang
Yes; it seems FRP is still evolving.

Mark

On Feb 24, 8:32 pm, Anthony Di Franco <di.fra...@gmail.com> wrote:
> I've got a lot of remedial reading to do in and around this thread as usual,
> but since you're intrigued about the reification of time going on, let me
> mention something interesting I recently came across a propos of that,
> relative vs. absolute time, which gets into an, maybe the, interesting issue
> at the heart of reifying time.
>
> http://conal.net/blog/posts/trimming-inputs-in-functional-reactive-pr...
> http://lukepalmer.wordpress.com/2008/11/28/relative-time-frp/

Daniel Jomphe

unread,
Feb 28, 2011, 1:32:47 PM2/28/11
to qil...@googlegroups.com
I've been hesitating to post in this thread since it started.

The trigger for my post is that I believe this might be a useful contribution to the problem:


Also, what I had in mind all this time is that in Shen's treatment of concurrency, I would hope the fundamental ideas presented by Hickey in Are we there yet? won't be overlooked. Since I'm not sure when would be the best time to introduce them, I'll give you a link and let you judge for yourself. I suggest reading/watching a few minutes of the presentation before making any judgment.


Although Clojure is not the ultimate gold standard, I believe Shen could learn quite a few important lessons from its design. For example, I feel making Clojure use an abstraction over lists/vectors/sets/maps, instead of treating lists as first-class citizens and the rest as second-class citizens of the language as Lisp does, has been quite a breakthrough. (Although this particular example is off topic here.)

Mark Tarver

unread,
Feb 28, 2011, 3:27:56 PM2/28/11
to Qilang

> Although Clojure is not the ultimate gold standard, I believe Shen could
> learn quite a few important lessons from its design. For example, I feel
> making Clojure use an abstraction over lists/vectors/sets/maps, instead of
> treating lists as first-class citizens and the rest as second-class
> citizens of the language as Lisp does, has been quite a breakthrough.
> (Although this particular example is off topic here.)

Dan,

It's an interesting idea. I should point out that CL already has
something like this in its idea of a sequence which applies to lists,
strings and vectors. The function LENGTH works on all three. This
is not as new as you might think.

Generally you can define a type 'sequence' in Qi and Shen which will
allow you to do exactly that sort of thing and you can define your own
'head' or 'CAR' which will work and type check over all sequences.
Then, if you do not want to coerce your sequence to a list, you can
program in the style of Clojure.

The general problem in making head, tail, cons etc. generic across all
sequences is that you lose a certain amount of performance in testing
for cases; because you will inevitably devolve control to specialised
lower level functions after you have determined the type of the
argument. Static type checking may improve this somewhat; but a
language built on sequences will be slower than one built on lists.

But if you follow the Qi/Shen approach it is easy to define the type
'sequence' and program in this style.
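
For example, as a sketch only - the string and vector accessors pos
and <-vector used in the specialised cases are assumptions about the
library, not a fixed part of the proposal:

(define gen-head
   S -> (pos S 0) where (string? S)
   V -> (<-vector V 1) where (vector? V)
   [X | _] -> X)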

Mark

Raoul Duke

unread,
Feb 28, 2011, 4:31:49 PM2/28/11
to qil...@googlegroups.com
On Mon, Feb 28, 2011 at 12:27 PM, Mark Tarver <dr.mt...@gmail.com> wrote:
> The general problem in making head, tail, cons etc. generic across all
> sequences is that you lose a certain amount of performance in testing
> for cases; because you will inevitably devolve control to specialised
> lower level functions after you have determined the type of the
> argument.  Static type checking may improve this somewhat; but a
> language built on sequences will be slower than one built on lists.

While it can indeed add up, I would hazard a guess that in the
Clojure world it is seen as more important to be flexible and clean at
a higher level than to be hampered semantically but faster?

Mark Tarver

unread,
Feb 28, 2011, 6:09:26 PM2/28/11
to Qilang
On Feb 28, 9:31 pm, Raoul Duke <rao...@gmail.com> wrote:
It's a value-driven decision. The thing is that in an untyped language
L which does discriminate between the head of a string, a list and a
vector, you can nevertheless very easily design a language which does
not. But in a language which does not, it is not inevitably true that
you must have access to these specialised lower-level functions which
give you the performance of L. So in terms of freedom of choice,
Qi/Shen gives you a choice between a generic function and a faster
type-specific one and leaves the decision to the programmer.

Mark Tarver

unread,
Mar 1, 2011, 2:09:31 AM3/1/11
to Qilang


On Feb 28, 6:32 pm, Daniel Jomphe <danieljom...@gmail.com> wrote:
> I've been hesitating to post in this thread since it started.
>
> The trigger for my post is that I believe this might be a useful
> contribution to the problem:
>
> http://lambda-the-ultimate.org/node/4211
>
> Also, what I had in mind all this time is that in Shen's treatment of
> concurrency, I would hope the fundamental ideas presented by Hickey in *Are
> we there yet?* won't be overlooked. Since I'm not sure when would be the
> best time to introduce them, I'll give you a link and let you judge for
> yourself. I suggest reading/watching a few minutes of the presentation
> before making any judgment.
>
> https://groups.google.com/forum/#!topic/clojure/_XPTb4dDtQA
>
Regarding what Rich has to say about processes etc. I read the
transcript at great speed.

I don't agree with the remark that because functions are immutable
they cannot represent time-dependent results (see my remarks to Brian
above on that). Rich likes Alfred North Whitehead; it is many years
since I studied Whitehead. Interesting ontology. Here is a view.

In orthodox logic a sentence like 'Mark runs' comes out as '(runs
Mark)' predicating a property of an object. Let's call that the
atomistic view. The basic ontology is one of physical objects that
have properties. Later logicians like Donald Davidson argued that
representing adverbial constructions requires quantification over
events. So 'Mark runs quickly' needs to be parsed as 'There is an
event of running which is fast and done by Mark'. This becomes in
logic

Ex (running x) & (quick x) & (by x Mark)

So you have a dual ontology - events like running and objects like
Mark.

In Whitehead's ontology there are only events at a physical level. A
physical object is simply an event with an invariant collection of
properties with respect to a time interval. A river is not a thing in
itself; it is a process which shows a set of time-invariant
properties. In 'The Concept of Nature', Whitehead, following Russell's
1911 Lowell lectures, showed that even moments of time and intervals
can be analysed into an ontology of events and sets.

You will also find elements of this thinking in Buddhism w.r.t. the
analysis of mind as a causally linked flow of ideas and also in David
Hume's analysis of personal identity.

Back to computers. I think what Rich is trying to get to is a language
run on Whitehead's principles. He is saying that all programming
languages have so far been atomistic and that time dependent event-
driven programming has been represented as this kind of awkward case.
On Whitehead's ideas, this is wrong. Processes are all that really
exist and the problems of concurrency are partly created by trying to
graft a notation for processes onto a notation based on an atomistic
ontology. The correct solution would incorporate Whitehead's thinking
from the start in the design of a computer language. AFAIK this
remains to be done.

Mark

Daniel Jomphe

unread,
Mar 1, 2011, 10:03:18 PM3/1/11
to qil...@googlegroups.com
Although I can't contribute anything else useful to this discussion, I hope it might have raised something useful sooner or later. And regarding your argument for Shen staying closer to the metal, I understand, and I respect that Shen could very well someday offer such abstractions as a (standard) library, although they won't make it into the core of the language. Well, I can't say I don't feel _inspired_ by Qi/Shen's clean and complete treatment of many beautiful aspects, like its complete support for the lambda calculus and its incredible logic-based static type checker, which I hope to be able to revisit.