As anticipated, I am notifying the people in the group about the Threading
Module Docs:
http://tinyurl.com/5prs69
(follow the "documentation" link).
The main pages talk about the overall threading concept, the idea of the
"by object" memory model, the limits and the reasons for this approach.
The API doc details the currently available synchronization structures.
I'd be pleased to have feedback, ideas, suggestions and comments.
I will post a paper about the "Thread.wait()" method, which implements
multiple-object waits through POSIX calls, if this is considered
interesting.
TIA,
Giancarlo Niccolai.
> as anticipated, I am notifying the ppl in the group about the Threading
> Module Docs:
>
> http://tinyurl.com/5prs69
>
> (follow the "documentation" link).
>
> The main pages talk about the overall threading concept, the idea of "by
> object memory model", the limits and the reasons for this approach.
I could not find any description of the so-called "by object memory
model". Can you give a closer hint as to where it can be found? And
anyway, why do you think you need any "memory model" in a scripting
language?
> The
> API doc details the currently available synchronization structure.
Do you mean the semaphore, barrier, event, and the queue as high level
synchronization structures?
> I'd be pleased to have feedbacks, ideas, suggestions and comments.
The main problem is that you provide hardly any information. There are
mainly claims, but no details.
I could find hardly any information there with respect to concurrent
programming features. Even where you mention something like the barrier,
the example has nothing to do with the synchronisation tool you intend
to describe.
You should provide a language description wrt multithreading aspects.
The "official" language description mentions nothing about your stuff.
You should provide some examples at least for some canonical
concurrent programming problems. You could also demonstrate how some
parallel programming patterns can be specified in your language.
> I will post some paper about the "Thread.wait()" method that implements
> multiple object waits through POSIX calls, if the thing is considered
> interesting.
Yes, it would be very interesting to know what that wait() method is
intended for. Can you implement Conditional Critical Regions with it
or is it just a mutex for alternative shared resources? For instance,
can you wait for a resource in such a way that some condition is
guaranteed to be fulfilled before it is granted to you?
Some use cases would be especially useful.
Furthermore, you write on that page:
"it provides as set of powerful high-level synchronization structures
enabling multi-agent coordination patterns"
http://www.falconpl.org/index.ftd?page_id=prjs&prj_id=threading&sect_id=main
What are these "powerful high-level synchronization structures"? Note
that I would not regard the semaphore as high-level synchronization
tool even if you rename it. I am really curious about the "powerful
high-level synchronization structures".
Best Regards,
Szabolcs
I don't. In fact, I don't provide any; I let each object provided by
applications and third-party modules manage its own model.
Some people think a memory model for scripts is needed, as the previous
discussion on this topic shows.
>
> > The
> > API doc details the currently available synchronization structure.
>
> Do you mean the semaphore, barrier, event, and the queue as high level
> synchronization structures?
Yes.
You need low(er) level primitives to implement those structures.
>
> > I'd be pleased to have feedbacks, ideas, suggestions and comments.
>
> The main problem is that you provide hardly any information. Mainly
> claims are there but no details.
The link to the page that is in this thread points directly to the
threading module project page. Besides the official docs, which
contain an overall description of the framework concepts, the rationale
behind the choices made and the complete reference of the API, the page
links to the download section, where you can find a set of 14 samples
(some of them non-trivial) and the complete source code of the module.
Please let us know what other details you need.
Giancarlo.
> Please let us know what other details you need.
I would be curious about the details of just what you skipped from my
previous post. What you reacted to were rather my suggestions about
what you were asking for (e.g. include examples, describe the language
wrt concurrency, the information cannot be found where you claim it is,
etc.). You were asking for help and I tried to help.
So I repeat the unanswered issues:
<quote>
...you write on that page:
"it provides as set of powerful high-level synchronization structures
enabling multi-agent coordination patterns"
http://www.falconpl.org/index.ftd?page_id=prjs&prj_id=threading&sect_...
What are these "powerful high-level synchronization structures"? Note
that I would not regard the semaphore as high-level synchronization
tool even if you rename it. I am really curious about the "powerful
high-level synchronization structures".
</quote>
Thanks for your efforts.
Best Regards,
Szabolcs
The lines were skipped because I had already answered them: yes, those
are the "powerful" "high level" structures.
"Powerful" doesn't just mean "full of power", nor "better than anything
else"; here it just means "having more features than the low level
ones". And if you take some time to read the docs instead of the
newsgroups, you'll find out that this is not presented as an absolute
advantage. For a sync structure, having more "features" than the lower
level primitives means being less flexible and less efficient. I won't
discuss here why this loss is needed, and what is gained, as it is
explained in the docs.
About the semaphore with a changed name: among those structures there
is also a "counter" which IS a semaphore with a changed name, as the
docs clearly state. The reason for the name change is in the docs; and
part of that reason, as the docs state, is that although it shares the
vast majority of its semantics with a common semaphore, it can be
waited on in a multiple-object wait, which is not strictly part of
those common semantics.
Please refrain from posting further mails before having read the
documentation, or mails aimed at discrediting others' work without
serious evaluation. Misrepresenting a work released with (well
commented) sources, samples, in-depth documentation and overall
rationales as "not providing any detail" and being "mainly claims" has
a clear denigratory intent, or is the fruit of an overly superficial
approach.
Present serious issues, and we will talk again.
Giancarlo.
> "Powerful" doesn't just mean "full of power", and neither "better than
> anything else"
I see.
> Please, refrain from posting further mails prior having read the
> documentation...
I am afraid I may be the only one who has read your precious
documentation. It is not much, after all, so it was not a big effort.
I was trying to find the highly powerful elements in it but ... yes,
I know, I know, they are there.
> Present serious issues, and we will talk again.
OK. I am afraid that, judging by your aggressive responses, I must
conclude that there are no serious issues there at all. There can't be
any, after all.
In fact, now, as you answered all my questions so thoroughly (thanks
for that great effort), I think your language is the greatest one so
far in computer science. It has very advanced and highly developed
concurrency elements. Really. I could not find them but you claim it
so aggressively that I must admit it.
From now on it should be broadcast that there is no reason to work
on concurrency issues any further, because the perfect solution has
been created.
There cannot be any comment on it.
Congratulations.
Best Regards,
Szabolcs
P.S. I just wanted to know what these highly powerful features are
that we must welcome, but never mind; it is really not important if a
work is as great as this one.
>
>> Present serious issues, and we will talk again.
>
> Ok. I am afraid, if I take your aggressive responses,
Oh. But __anyone__ can see that this is *not* an aggressive response by
any means.
Actually, it's an __opening__. I am sorry if you misunderstood my
wording.
I asked the respectful and respected people in this NG for help, advice
and contributions on this delicate topic, that is, developing a parallel
processing paradigm that can be effective and efficient in the context
of a scripting language, and I proposed a starting point.
The fact that the model was presented as superior or perfect is only a
mystification that you are trying to sell, I'm afraid.
I am also sorry that you feel such evident pain in seeing work that
is not your own being carried on and talked about. I wish your
distress to be eased; no one here is happy about others' misfortune.
So, present serious issues, that is, technically relevant issues, and we
will talk again.
Giancarlo.
> Please let us know what other details you need.
This would still be an interesting issue. It might be that you think
you have already answered this one too and that I could not get it
because of your arrogance. So this is what I am referring to from my
earlier post:
<quote>
Yes, it would be very interesting to know what that wait() method is
intended for. Can you implement Conditional Critical Regions with it
or is it just a mutex for alternative shared resources? For instance,
can you wait for a resource in such a way that some condition is
guaranteed to be fulfilled before it is granted to you?
</quote>
Thanks.
Best Regards,
Szabolcs
> I asked the respectful and respected people in this NG for help, advices
> and integration in this delicate topic, ...
You got it, but you were busy trying to attack instead of
listening to the help.
> The fact that the model was presented as superior or perfect is only a
> mystification that you are trying to sell, I'm afraid.
Was it me who claimed that the renamed semaphore is a high-level
synchronisation structure? You claimed that your language has high-level
synchronisation structures and I was curious what these were.
There is a wide consensus that the semaphore is not a high-level
synchronisation tool even if it is renamed. It is not only a matter of
consensus; there are good reasons for it too.
> I am also sorry that you feel so much evident pain in seeing works that
> is not your own being carried on and talked about.
You can be sorry when you make such a fool of yourself by making such
a stupid note.
> So, present serious issues, that is, technically relevant issues, and we
> will talk again.
First you should present your notation. Don't you think it is
evident that if you come up with something then you must explain it and
not just talk big?
If you want help, you should be modest and patient and try to answer
questions. When you answer questions and do not escape from them
with some arrogance, you can ask for serious issues.
Currently the serious issue is that you should be able to demonstrate
the high-level synchronisation constructs that you claimed.
I strongly advise you to calm down, re-read my posts, and refrain from
prejudices.
Best Regards,
Szabolcs
Yes, the model allows it.
===========================================
Synchronization structures
The Falcon agent-based model leverages the concept of non-primitive
structures used to synchronize and coordinate threads. Each structure
has a different working principle, which is explained in its detailed
description, but they all share the concepts of wait, acquisition and
release.
An agent can wait for one or more structures to be ready for
acquisition. When this happens, the agent is notified and it acquires
atomically one of the structures it was waiting for.
<note>
^^^^ This wording requires some refining. The thread actually
checks for acquisition, as it may race on the structures with other
threads; so it may actually succeed in acquiring, and then
proceed, or fail to acquire, and then wait again. Will fix ASAP.
</note>
Once the work for which the structure was needed is done, the agent
must release the structure; it is then free to wait again or terminate.
Acquisition and release are not necessarily bound to exclusive access.
There are structures that may be acquired by many threads, and others
that can only be acquired (their release is a void operation). The
concept of acquisition is rather a semantic equivalent of "allowance",
a "permission" to perform certain operations bound to the structure.
More details are explained in the description of each structure.
===========================================
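To fix the ideas, the typical life cycle of an agent using this model
looks more or less like this (just a sketch; "resA" stands for any
synchronization structure, the names are illustrative, and the
Thread.wait() reference further down describes the call in detail):

   class Agent( resA ) from Thread
      resA = resA

      function run()
         // wait until resA becomes acquirable; when wait() returns,
         // the structure has already been atomically acquired by this agent
         res = self.wait( self.resA )
         // ... perform the work for which the structure was needed ...
         res.release()   // the agent is now free to wait again or terminate
      end
   end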
It is possible to implement a conditional critical region, as you can
bind the acquirability of an exclusively acquirable structure to an
atomically evaluated predicate.
Let's say that
   p = <conditional predicate>
and
   CCR( p ) can be acquired if and only if:
      |- p is true
      |- CCR is not currently acquired
   (atomically checked)
THEN
   if( wait( CCR(p), 0 ) == CCR(p) )   // immediate check for acquisition
      <code conditionally executed>
      CCR(p).release()
   end
At the moment there isn't any structure working like that, but it may
easily be added, provided the wait/acquire/release model is powerful
(ehm... let's say adequate, since you don't seem to like that word)
enough to handle the semantics as I have drawn them here.
It may really be an interesting addition to the set of structures.
BTW, this is the description of Thread.wait:
===========================================
Thread.wait()
Wait for one or more synchronization structures to become available.
Thread.wait( ..., [waitTime] )
   ...: One or more synchronization structures to wait for.
   waitTime: Maximum timeout in seconds and fractions.
Returns:
   nil if the timeout expires, or the acquired item on success.
Raises:
   InterruptedError in case the thread receives a stop request.
This method waits for one of the structures in the given parameters to
become acquirable, and acquires it before returning.
The acquired structure must be released manually after the thread has
used the shared resource.
The typical usage pattern is to acquire the needed structures in the
thread main loop, work with the acquired structure and release it. It
is also useful to test for interruption and honor the interruption
request as soon as possible:
class MyThread from Thread
   ...
   function run()
      loop
         try
            // wait at max 1 second.
            res = self.wait( resA, resB, 1.0 )
         catch InterruptedError
            // honor the request
            return
         end

         // what are we up to?
         switch res
            case nil
               // timed out; perform some periodic operations
            case resA
               // do things with resA
               resA.release()
            case resB
               // do things with resB
               resB.release()
         end

         // do extra work with signaled data (*)
      end
   end
end
The method tries to acquire the resources in the same order as they are
passed as parameters. If the first resource is always available when the
thread enters the wait, this will actually prevent the thread from
acquiring the other resources. As some resources can be acquired
relatively often, it is necessary to be careful about this aspect:
repeated acquisition and release may starve other threads and other
resources in need of handling.
It is advisable to release the resources as soon as possible and to
perform work on the shared data after they have been released, in the
code section marked with (*). Also, if the structures waited on may
become available at the same time, it is advisable to use Thread.vwait
instead, to easily rotate the order in which the call tries to acquire
the resources.
=======================================
Giancarlo.
P.s. Yes, this is a relevant technical issue.
Szabolcs, I understand that we have had what is called a culture shock.
Your wording seemed offensive and boastful to me, and my wording seemed
offensive or boastful to you, while neither of us intended either. On my
side, I apologize.
Can we just concentrate on tech aspects now?
>
>> The fact that the model was presented as superior or perfect is only a
>> mystification that you are trying to sell, I'm afraid.
>
> Was it me who claimed that the re-named semaphore is high level
> synchronisation structure? You claimed that your language has high
> level synchronisation structures and I was curious what these were.
OK, with this wording I understand that the intent wasn't offensive.
Well, the "SyncCounter", which is semantically very similar to a
semaphore, and which I would have named simply "Semaphore" if that name
weren't already taken by another class, IS, in this version, implemented
as a high-level structure, because it internally uses a mutex, a
condition variable and a list of waiting threads.
> There is a wide consensus that the semaphore is not a high level
> synchronisation tool even if it renamed. It is not only an agreement
> but has good reasons too.
"High level structure", in the way I intend it, doesn't mean "very
complex and featureful structure", it just means that:
1) it is realized with/based on lower level primitives (that cannot be
accessed).
2) It has a complex operational internals that is made of more simpler
operations, and is provided "as is", functionally closed.
Someone here would argue that a semaphore IS a non-primitive
synchronization object as it can be built with a mutex, a condition
variable and an integer variable, but I can agree with you that the
semaphore in itself is "primitive enough", and I don't really care about
its "primitivity" status (because:)
The implementation in the Falcon Threading Module (FTM) is far from
primitive, as the SyncCounter has not just a "wait" method but can take
part in multiple-object waits. There is an MS-Windows "Semaphore" object
which can be used in WaitForMultipleObjects, but I suppose its internals
are very similar to the non-simple implementation I have.
So, when I was talking about "high level structures", I was just stating
that the things in there are complex, heavy, relatively much less
efficient than the "primitives" (semaphore included), and not trying to
sell myself as the inventor of a new age Semaphore. Is it clear now?
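For example (a sketch; "jobs" is a SyncQueue and "slots" a SyncCounter,
both purely illustrative names; the call is the documented multiple-object
wait, used from inside a thread's run()):

   // block until either a work item or a free slot becomes available
   got = self.wait( jobs, slots, 2.0 )

Being usable in this kind of mixed wait is exactly what a plain
semaphore's wait() does not offer.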
> First you should present your notation. Don't you think that it is
> evident that if you come with something then you must explain it and
> not just talking big.
About this point: I introduced my work quite some time ago, and said
that when I had a draft of the documentation I would notify the group
and ask for more help. The head message of this thread was not "big
talk", just a "ping".
>
> If you want help, you should be modest and patient and trz to answer
> questions. When you answer questions and do not escape from the answer
> with some arrogance, you can ask for serious issues.
This is true. But the suggestions should also be advanced with kindness,
and the questions should be posed in an unequivocally constructive manner.
Also, starting the reply with the claim that there were no samples nor
other information sources to be seen (while the information is present
and relatively abundant), in an accusatory tone, didn't help the
discussion.
It seemed a deliberate attempt to prevent other readers from visiting the
site and seeing for themselves. I am sorry if it wasn't, but any sensible
reader may legitimately be caught by this doubt.
>
> Currently the serious issue is that you should be able to demonstrate
> the high level synchronisation constructs that you claimed.
Each one of them is explained in detail, and each one of them is
presented in one or more samples at the indicated address.
I know that raw source code (even if well commented) is not the most
presentable way to talk about an implementation; in fact I promised
to post here an abstract description of the algorithms used to realize
the structures (and especially the multiple-object wait) at a later
time, as soon as possible, but I clearly stated that what I wanted to
present and discuss at this moment was just the threading model. In short:
1) Is the multiple wait/acquire/release model sufficient to provide
fairly complete MT support?
2) What about complete isolation of each agent except for exchanges
through well-defined synchronization structures? (This is half
necessitated by the scripty nature of things; the idea here is not to
see if it's "good", as full visibility is simply "better", but to see
whether it can stand, whether it can be a workable solution at all.)
3) Can you see any blind spot that needs attention?
4) Can you think of other high-level synchronization structures that may
enrich the model?
In this sense, I am grateful to you for bringing up the conditional
critical region; it is a really interesting note, and I am going to
implement it.
As said, at a second stage (i.e. as soon as I am able to describe and
comment on them), I will also post the algorithms used to realize the
model and the structures, but this is a preliminary step: if the model
is crap and just can't work, that would be a wasted effort.
Also, the source is there if you are curious, but as I didn't want to
ask too much of people doing this just for pleasure or willingness to
help, I didn't explicitly request a review of the code to see if it's
broken. First, let's discuss the model, and possibly make it better;
then we can concentrate on the means to realize it.
(That said, notice that the FTM with its tests is completely working,
but "working in practice" with a bunch of tests doesn't mean that the
code is OK in general; I/we can/will review it later.)
Thanks for your help,
Giancarlo.
What is what here? What is the resource? What are the atomic
operations? A more exact example would help here.
What is `CCR'? You say it atomically checks `p'. But what then? What
does it return? It must return a resource, since you use it as an
argument of `wait'. But what ensures that the resource will not
change between the time `CCR' checks the status and the time `wait'
may acquire it?
When I mentioned Conditional Critical Region I referred to the
original concept where it was a language construct.
It is a problematic statement in a concurrent environment.
I suggest you not specify it like this, since you will not be able
to implement it deterministically. What if the first resource becomes
available just a picosecond after you finished checking it and you
return the second one? One can complain that you should have returned
the first one, since it was available at the time you returned the
second one.
Therefore it is better to specify it so that any resource can be
acquired non-deterministically. Once you specify it like this, you are
free to implement it so that `the method tries to acquire the resource
in the same order they are passed as parameters.' The main difference
is that this implementation satisfies the broader specification but
you cannot enforce this implementation.
You can have a look at Dijkstra's Guarded Commands (GC) concept.
Or you can check how Guarded Commands are adapted to a concurrent
environment elsewhere, e.g. in Hoare's Communicating Sequential
Processes (CSP).
> P.s. Yes, this is a relevant technical issue.
Yes, you can find this relevant technical issue in my very first post.
Best Regards,
Szabolcs
> I'd be pleased to have feedbacks, ideas, suggestions and comments.
Let us see the SyncQueue:
http://falconpl.org/project_docs/threading/_class_SyncQueue.html
Forget about the level of the structure. How would you use it?
It is quite unusual from a data structure intended to be used by
concurrent threads of computation that the method gives an error if
the data structure is not in the expected state. The pop methods of
SyncQueue rise an error if the queue is empty. I would not recommend
that. In a concurrent environment the operation should be delayed
instead of rising an error.
I guess you mean that first you acquire the SyncQueue object and then
check the `empty()' status. What if it is still empty? Do you release
it and keep busy waiting?
Best Regards,
Szabolcs
> Someone here would argue that a semaphore IS a non-primitive
> synchronization object as it can be built with a mutex, a condition
> variable and an integer variable, ...
I can imagine that in this discussion forum they understand it like
this. However, anybody who has learnt about programming languages will
know that the term primitive means that which cannot be expressed in
the language itself and which is provided by the environment to build
from.
If the language or environment provides the mutex and the condition
variable as primitives, semaphores can be built from them as a complex
structure. And vice versa.
From the concurrency point of view, however, the semaphore as well as
the mutex and condition variable are low level synchronisation tools.
They are not appropriate for a higher level concurrent language. They
do not provide the expressive power and the safety needed for any
decent concurrent programming language.
Best Regards,
Szabolcs
The atomic operations are "acquire" (performed by wait(), and not
available at script level) and "release". Their semantics are explained
in my previous mail.
>
> What is `CCR'? You say it atomically checks `p'. But what then? What
> does it return?
"p is a predicate" is a well known lisp-ish dialect for "p is an
expression evaluable in "true" or "false" boolean values".
> It must return a resource since you use it as an
> argument of `wait'. But who does it ensure that the resource will not
> change between the time the `CCR' checks the status and the `wait' may
> acquire it?
Wait itself. When wait() returns, it returns either the already acquired
(and therefore safe) instance of the structure, or nil if the acquisition
failed, e.g. because of a timeout. In our case the timeout is 0 and the
wait() operation takes on the characteristics of a try-acquire: if the
CCR having p as its predicate can be acquired now (that is, if it is NOT
currently acquired AND p atomically evaluates to true), then the CCR on
p is acquired, and the waiting thread is notified of it by wait()
returning the acquired CCR on p.
>
> When I mentioned Conditional Critical Region I referred to the
> original concept where it was a language construct.
Then the answer is no. We won't provide language constructs for
synchronization items (neither primitives nor complex structures).
However, notice that a canonical CCR, expressed in pseudocode as:

   region N when <p> do
   begin
      <code>
   end

is 1:1 mapped to, semantically equivalent to and visually similar to:

   N = CCR()
   ...
   if wait( N.when( <p> ), 0 ) == N
      <code>
   end
or you may also do:

   N = CCR( <p> )
   ...
   if wait( N, 0 )
      <code>
   end

with the non-secondary advantage of carrying your predicate around.
(Notice: if wait() fails the acquisition it returns nil, which is
evaluated as false in Falcon, and if it succeeds it returns an object,
which is evaluated as non-false; so we can drop the "== N" part in this
case.)
===
However, notice that the following structure can currently be used to
achieve the same result:

   g = Grant()   // sort of waitable mutex
   if wait( g, 0 )
      if <p>
         <code>
      end
      g.release()
   end

This checks <p> with g acquired, and releases g after having
conditionally executed <code>. Nevertheless, the CCR as defined above
seems a nice "packaging" of the code and allows for possibly optimized
checks of <p> outside the script scope; so, despite the fact that there
is already a means to write the same algorithms that may be written
using a CCR, its addition could be interesting.
It's not a "feature", it's just how wait() does it. It tries to acquire
all the resources it has been asked to wait for in the order they are
provided; if one acquisition fails (even for a picosecond) it tries the
next.
>
> Therefore it is better to specify it so that any resource can be
> acquired non-deterministically.
I would rather prefer it, but that's not what happens now with the
way I implemented wait(). If you say

   ...
   acq = wait( a, b )
   switch acq
      case a
         <do things with a acquired>
         a.release()
      case b
         <do things with b acquired>
         b.release()
   end
   ...

and resource "a" is constantly acquirable (e.g. because this thread is
currently the only thread trying to acquire it), then the effect will be
that "b" will NEVER be acquired.
I would love to define wait() as picking one ready-to-be-acquired
resource at random, but the implementation is more complex and I don't
know if the benefits outweigh the extra cost, which you would also pay
in the general case where this isn't needed.
So, I wrote the other version (with the parameters passed in an array)
to let users "rotate" their resources in waits, if this is needed for
their algorithms.
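For instance, the main loop of a worker could do something along these
lines (just a sketch, inside a thread's run(); it assumes the array-based
variant, Thread.vwait, takes the waitables in an array plus an optional
timeout, and "resA"/"resB" are illustrative names):

   resources = [ resA, resB ]
   loop
      res = self.vwait( resources, 1.0 )   // assumed: vwait( array, timeout )
      if res
         // ... use the acquired structure ...
         res.release()
      end
      // swap the entries so that the other resource gets the first try
      // on the next wait
      tmp = resources[0]
      resources[0] = resources[1]
      resources[1] = tmp
   end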
If we can find a zero-cost way to pick non-deterministically any one of
the resources on which wait is required, I will prefer it.
> Once you specify it like this, you are
> free to implement it so that `the method tries to acquire the resource
> in the same order they are passed as parameters.' The main difference
> is that this implementation satisfies the broader specification but
> you cannot enforce this implementation.
Surely; in fact this is a limit of the current implementation. I just
don't know if the extra cost needed to implement the random picking is
worth it; in the general case, you shouldn't have resources you can
ALWAYS acquire. But the topic is interesting, and I would love to find
a costless way to do that.
>
> You can have a look at the Guarded Commands (GC) concept of Dijkstra.
> Or you can check it out how the Guarded Commands is adopted to
> concurrent environment elsewhere, e.g. in Communicating Sequential
> Processes (CSP) of Hoare.
A guarded command is an interesting principle, provided we need to
synchronize the guard but not the command. In that case, implementing
the guarded command as a sync structure could prove interesting.
>
>> P.s. Yes, this is a relevant technical issue.
>
> Yes, you can find this relevant technical issue in my very first post.
Didn't we agree to drop any non-technical issues as useless polemics?
Being open also means not starting the fire again.
Gian.
Oh, I use it quite a lot, and there are some nice samples too :-)
>
> It is quite unusual from a data structure intended to be used by
> concurrent threads of computation that the method gives an error if
> the data structure is not in the expected state.
True, it is unusual, but not unseen.
> The pop methods of
> SyncQueue rise an error if the queue is empty. I would not recommend
> that. In a concurrent environment the operation should be delayed
> instead of rising an error.
You can check whether the queue is empty before pop(), while holding it
critically acquired; but, as you say...
>
> I guess you mean that first you acquire the SyncQueue object and then
> check the `empty()' status. What if it is still empty? Do you release
> it and keep busy waiting?
No. The queue is atomically acquired if and *only* if it is not empty.
From the manual:
===================================
Detailed description
This class implements a synchronized FIFO or LIFO queue of Falcon items
that can be waited on for non-empty status.
A single waiting thread will acquire the queue when it is not empty; at
that point, it can dequeue one or more previously pushed items, release
the queue and process the dequeued items.
Holding the queue in acquired status prevents concurrent insertion of
new items, as well as concurrent removal, so it's advisable to release
the queue as soon as possible, that is, as soon as the items that must
be processed are retrieved.
======================================
In other words, the thread that wins a wait on a SyncQueue:
1) knows the queue has one or more elements available;
2) holds exclusive access to the queue.
Note that the push*() methods do not require acquisition. You can just
push at any time, and the queue will handle your push as soon as it is
not acquired (nor contended). The implementation needs a bit of refining;
for now I use a wide lock (which causes threads wanting to push to be
swapped out), but smarter solutions are possible, such as letting a
"reserve queue" receive the pushed data and committing the reserve queue
to the main one as soon as the queue is released and before allowing a
new acquisition. This would also make room for some non-blocking
algorithms (Chris, where are you? :-)
So, the one thread that has been granted the acquisition should just pop
one or more items and immediately release the queue.
Popped items are not shared themselves. They are local to the dequeuing
thread, so they can be safely processed after the queue has been
released. (They may also be shareable items, e.g. streams, but each
shareable item has its own internal model to deal with concurrency.)
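In a consumer, the whole pattern boils down to something like this (a
minimal sketch; the names are illustrative and termination handling is
omitted):

   // inside the run() of a consumer thread; self.queue is a SyncQueue
   waited = self.wait( self.queue )   // returns only when the queue is
                                      // non-empty and acquired by us
   item = waited.popFront()           // safe: we hold the queue exclusively
   waited.release()                   // release as soon as possible
   // process "item" here: it is local to this thread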
Also, there is no busy waiting in this model. Waits are not busy; they
just swap threads out of the execution environment until there is a
chance (often a very high chance, or even a certainty) that they will
succeed in an acquisition. When this happens, they are rescheduled for
execution ASAP, and they try the acquisition on all the resources they
are waiting for.
Gian.
> If the language or environment provides the mutex and the condition
> variable as primitives, semaphores can be built from them as a complex
> structure. But vice versa.
>
I don't know of anyone having demonstrated that a semaphore can replace
the condition variable/mutex pair in all situations. The mutex/condvar
pair has an intrinsic guard against some races that can slip through
semaphores.
Is there any paper we can look at?
Gian.
> Oh, I use it quite a lot, and there are some nice samples too :-)
Very good. Then please put here an example how you use a SyncQueue
instance for the producers and consumers problem.
Best Regards,
Szabolcs
You can take the advice and look up my postings, since I have
demonstrated it before on comp.programming.threads.
> mutex/condvar
> has an intrinsic guard against some races that can slip through semaphores.
What race? Somebody must have seriously misinformed you.
> Is there any paper we can look at?
Certainly. You can look up:
C. A. R. Hoare, Monitors: An Operating System Structuring Concept
(1974)
Best Regards,
Szabolcs
Alex Terekhov invented a nice condition variable algorithm using
semaphores; check pthread-win32. He uses a critical section; however,
that can be replaced with a binary semaphore. So, one can definitely use
semaphores to create workable condvars, and vice versa.
This sample is quite complex, so it may not be a good one to start
with, but it demonstrates the working principles even in a complex
situation. I will extract the simple case of one of these consumers
after the general description.
From the samples on the site (I'll recap the most interesting parts at
the end):
====================================================
/*
FALCON - Samples
FILE: th_queue.fal
Queue test.
In this test we pass data to worker threads, and they pass
data back to a main inspector thread.
-------------------------------------------------------------------
Author: Giancarlo Niccolai
Begin: Sun, 13 Apr 2008 23:26:44 +0200
-------------------------------------------------------------------
(C) Copyright 2004: the FALCON developers (see list in AUTHORS file)
See LICENSE file for licensing details.
*/
load threading
const threadCount = 4
const countTo = 10000
const INTERVAL = 0.2
const TEST_TIME = 10.0
const BURST_SIZE = 2000
class Inspector( evts, quit ) from Thread
   evts = evts
   quit = quit

   function run()
      // set up initial time
      started = seconds()
      // a storage to be displayed
      status = [=>]
      // publish every INTERVAL seconds
      nextPublish = started + INTERVAL
      // a useful formatter
      fmt = Format( ".3" )

      loop
         waited = self.wait( self.quit, self.evts, 0.2 )
         // have we been notified?
         if waited == self.evts
            id, count = waited.popFront()
            if id in status
               status[ id ] += count
            else
               status[ id ] = count
            end
            waited.release()
         // or should we quit
         elif waited == self.quit
            break
         end

         // In case of timeout, see if we should publish
         now = seconds()
         if now >= nextPublish
            nextPublish += INTERVAL
            >> "\rAt ", fmt.format( now - started ), ": "
            for id, count in status
               >> id, ": ", count
               formiddle: >> "; "
            end
         end
         // loop and wait again
      end
   end
end

class WorkerThread( id, dataQueue, notifQueue, quit ) from Thread
   id = id
   dataQueue = dataQueue
   notifQueue = notifQueue
   quit = quit

   function run()
      done = 0
      loop
         // wait for a termination request or for incoming data
         waited = self.wait( self.quit, self.dataQueue )
         // are we told to quit?
         if waited == self.quit: break

         // otherwise, process data.
         data = waited.popFront()
         // ...that we can release
         waited.release()

         // should be a string -- do something long (hopefully)
         count = 0
         for i in [0: len( data )]
            for j in [0: len( data )]
               for k in [0: len( data )]
                  count ^= data[*k]
               end
            end
         end

         // and notify the inspector
         self.notifQueue.push( [self.id, 1] )
      end
   end
end

// Main thread
notifQueue = SyncQueue()
dataQueue = SyncQueue()
quit = Barrier()

// create the inspector
inspth = Inspector( notifQueue, quit )
inspth.start()

// ... create some consumers
t = arrayBuffer( threadCount )
for i in [0: t.len()]
   // create the worker threads, each with its own id
   t[i] = WorkerThread( "ID" + i.toString(), dataQueue, notifQueue, quit )
   t[i].start()
end

// Ok, produce some data for all.
started = seconds()
while seconds() - started < TEST_TIME
   // how many data items to produce in this burst?
   burst = random() * BURST_SIZE
   letter = random() * 26 + ord('A')
   for i in [0: burst]
      size = random() * 10 + 10
      dataQueue.push( strReplicate( chr(letter), size ) )
   end
   notifQueue.push( [ "Main", int( burst ) ] )
   // sleep a bit
   sleep( random() * 0.5 )
end

quit.open()

// now that we started the threads, join them.
for thread in t
   Threading.wait( thread )
end
// and join the inspector too
Threading.wait( inspth )

>
> "Main thread complete."
======================================
This sample has a producer (the main thread) which creates some random
garbage and sends it to the dataQueue.
The dataQueue is then consumed by N (in this case 4) threads. I produce
a burst of data and then sleep on the main thread to simulate some
intense peak of operations.
The consumer threads wait for one of the following events (whichever
comes first):
1) they can acquire a non-empty queue;
2) they receive a termination request.
In case of a termination request, the threads terminate.
In case of a successful acquisition of the queue, they dequeue an
element, release the queue and work on the element they received.
For each operation they perform, a "result value" is sent to a fifth
monitor thread, which is a consumer (so we have a sample of both the
1-to-N and the N-to-1 producer/consumer models).
The monitor thread is the simpler part; let's see its main loop
(unimportant parts cut out):
loop
   waited = self.wait( self.quit, self.evts, 0.2 )
   // have we been notified?
   if waited == self.evts
      id, count = waited.popFront()
      //... some work, actually we may release before doing it...
      waited.release()
   // or should we quit
   elif waited == self.quit
      break
   end
   // In case of timeout, see if we should publish
   now = seconds()
   if now >= nextPublish
      // ... display ...
   end
   // loop and wait again
end
"Waited" may be nil if the wait times out (if no update is pushed by any
of the threads in 0.2 secs); in that case we just check if it's time for
publish some screen update.
Gian.
> 1) Is the multiple wait/acquire/release model sufficient to provide a
> fairly complete MT support?
I am afraid it is not sufficient in this form. It depends on what you
can wait for. As far as I understand, you can wait for the availability
of multiple resources. That is not enough to solve arbitrary problems.
It only means that you can non-deterministically select among resources
for the short-term scheduling. For a general solution you must provide
a means for the long-term scheduling too. In other words, you must be
able to wait for resources which you acquire only if they are in a
certain state. A typical example is a bounded buffer.
> 2) What about complete isolation of each agent except for exchanges
> through well-defined synchronization structures?(this is half needed
> because of the scripty nature of the things; the idea here is not seeing
> if it's "good", as full visibility is just "better", but to see if it
> can stand, can be anyhow a workable solution).
Isolation can mean that agents cannot access shared resources but
communicate via messages only. This is the CSP model (Communicating
Sequential Processes).
Isolation can also mean that you allow your agents to access shared
resources only via shared data structures known as monitors. That is
why the monitor concept was invented: to isolate the threads of
computation via well-defined shared resources. But for that, and this
is very important, you must provide monitors at the language level.
Hand-built monitors will not do if you are going to isolate agents.
Remember that a monitor is a Conditional Critical Region with a module
added and the delays optimised:
Monitor = CCR + module + optimised delays
Best Regards,
Szabolcs
Oh, yes; I have implemented Terekhov's algorithms (several versions) in
many contexts.
I was thinking of just ONE semaphore.
With several semaphores it is actually possible to build a condition
variable, so we cannot say that one is more "primitive" than the other
(just because it takes a very, very complex semaphore-based algorithm to
build a CV and a very, very simple CV-based algorithm to build a
semaphore...)
OTOH, it takes a CV AND a MUTEX to build a semaphore, while a semaphore
alone is enough to build a CV and a MUTEX, theoretically. So I also see
Szabolcs's point now.
Gian.
I don't think it's possible to use a single semaphore to build a CV. I
would be very interested if somebody can prove otherwise!
;^)
[...]
> You can take the advice and look up my postings since I have
> demonstrated it before on comp.programming.threads
Especially the parts where you clearly state that a CV cannot be used to
wait on events...
As I said to Chris, I was thinking about using ONE semaphore as a means
to signal a sync predicate being true. I don't know if you meant the
same thing.
But talking in general, you're right (because of Terekhov's algorithms).
>> Is there any paper we can look at?
>
> Certainly. You can look up:
>
> C. A. R. Hoare, Monitors: An Operating System Structuring Concept
> (1974)
>
I knew this paper, but it says:
"""
The single resource monitor simulates a Boolean semaphore [7] with
acquire and release used for P and V respectively. This is a simple
proof that the monitor/condition concepts are not in principle less
powerful than semaphores, and that they can be used for all the same
purposes.
"""
It actually takes a bit more to prove that: a single semaphore becomes
acquirable when the guarded predicate is true at post() time, while it
may no longer be true by the time wait() on the waiting thread performs
the acquisition. CV+mutex checks the predicate atomically during the
acquisition step. That was the race I was referring to.
But I didn't relate the Terekhov algorithms to semaphores, as they are
commonly implemented with MS-Windows sync objects other than semaphores
(for performance reasons).
As those objects (the mutex, under the name CRITICAL_SECTION, and the
hybrid Event) are implementable with semaphores, we have the proof that
CVs can be implemented with semaphores as well. Mutexes can be
implemented with semaphores, so CV+mutex and semaphores have the same
"primitivity" level :-)
Gian.
That is right. You cannot use any CV to wait on events.
You wait for a predicate (not an event), and the CV just optimises your
busy waiting loop. What you think is an event is just an indication
that the predicate can be re-evaluated. Besides, you cannot use a CV
to wait on events because of spurious wakeups alone. You may think the
signal has arrived, but it is just a spurious wakeup. Think about it.
I am afraid it is time for you to re-read that post.
http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab92bcd12
I claimed that CV cannot be used as a semaphore, i.e. it cannot be
used for event signaling (see the terminology used in concurrent
programming).
You can use the semaphore to signal an event from one thread of
computation to another thread. You do not need any additional flag for
it. Just the semaphore alone is sufficient. It is at the very
beginning of any concurrent programming curriculum.
You cannot do the same with a CV. Try to do it without any flag (i.e.
predicate), and if you can demonstrate that you can safely wait for an
event with a CV alone, you will be awarded a prize.
Remember, the CV does not play any role in the logic of the program.
Any correct program must work correctly if you short-circuit the CV.
Hence the well-known condition variable test:
http://groups.google.com/group/comp.programming.threads/browse_frm/thread/0ff7ac10adadee22
Best Regards,
Szabolcs
Good. Please read on, and in the next section you will find: "Having
proved that semaphores can be implemented by a monitor, the next task
is to prove that monitors can be implemented by semaphores. ..."
Best Regards,
Szabolcs
Yes. Each structure must present its acquirability logic. Doing:
   wait( a )
means
   "wait until a.acquire() succeeds",
and "a.acquire()" is semantically different in each structure you want
to provide. For example, acquire() for a bounded buffer will succeed
when the bounded buffer is in the desired state.
wait() is scheduled so that a test on a.acquire() is performed only when
the thread has a chance to succeed (or, for some structures, only if
a.acquire() WILL succeed).
So, wait() as it is now presents the features you request, AFAIK.
>
>> 2) What about complete isolation of each agent except for exchanges
>> through well-defined synchronization structures?(this is half needed
>> because of the scripty nature of the things; the idea here is not seeing
>> if it's "good", as full visibility is just "better", but to see if it
>> can stand, can be anyhow a workable solution).
>
> Isolation can mean that agents cannot access shared resources but
> communicate via messages only. This is the CSP model (Communicating
> Sequential Processes).
>
> Isolation can mean that you allow your agents to access shared
> resources only via shared data structures known as monitors. That is
> why the monitor concept was invented, to isolate the threads of
> computation via well defined shared resources. But for that, it is
> very important, you must provide monitors at the language level. Hand
> built monitors will not do for you if you are going to isolate agents.
> Remember that monitors are Conditional Critical Region plus module
> added and delay is optimised.
>
> Monitor = CCR + module + optimised delays
As for the monitors: data passed between agents is actually divided into:
1) copied data (not shared);
2) shared data.
Some shared data does NOT require any monitor; for example, a panel of
shared memory without any guard can be used to solve the Sieve of
Eratosthenes (the algorithm was in a paper you pointed out). It can also
be used, guarded and even signaled, to solve a lot of other problems
that do not require the full monitor semantics. For example, some data
may require RO/RW access, and using an exclusive monitor for all of it
may result in unneeded contention.
Then, there is shared data that does require monitors, but the monitor
can be built into the objects themselves; for example, you can share
files, and their usage will be synchronized by system-level monitors.
Finally, the structure I called "Grant" can be attached to an object
and to a "semantic usage" to provide a full monitor, as it provides a
critical region (conditional on a single state predicate, "object now
available"), an optimized delay (wait will wake the waiting thread only
when the grant can be acquired NOW), and the module is the object
itself, so that:
wait(grant) // which acquires grant ASAP
object.do()
grant.release() // which signals other threads waiting on grant.
You may also wrap it as:

   class Monitor( obj )
      g = Grant()   // a new grant
      myobj = obj

      // will be a bit different in practice, but you get the idea
      function do( x, ... )
         Thread.wait( self.g )   // WILL succeed
         self.myobj.x( ... )
         self.g.release()
      end
   end

so that

   m = Monitor( obj )
   m.do( something )   ---> monitored obj.something()
ALSO, notice that the above structure, or a similar one, can be created
and provided at language level (not "as statements", but anyhow as
script-level primitive functions/objects).
===
BUT with a monitor-only model you lose a lot of interesting possibilities
with such encapsulations, such as being able to wait on more waitables at
once, or to use the time in which you can't get the grant on self.g to do
something else.
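For example, nothing prevents a thread from waiting on the monitor's
grant together with other waitables and a timeout (a sketch; "m" is the
Monitor instance above, "workQueue" is a SyncQueue, and "update" is just
an illustrative method; this code sits inside a thread's run()):

   got = self.wait( m.g, workQueue, 0.5 )
   switch got
      case nil
         // timeout: do some housekeeping instead of blocking on the monitor
      case m.g
         m.myobj.update()
         m.g.release()
      case workQueue
         item = got.popFront()
         got.release()
         // process "item" locally
   end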
Gian.
> CV+mutex checks atomically the predicate during the
> acquisition step. That was the race I was referring to.
OK. Think about it and you will find that what you now see as a race
is just an illusion, since the semaphore differs from the CV in that the
semaphore has state while the CV is stateless. The latter is why you
need the release of the mutex and the suspension on the CV to be
atomic. In the case of a semaphore this is not required, since the
semaphore can store signals. Just think it over.
Besides, if there were a race, you could describe a valid scenario for
it, wouldn't you?
Best Regards,
Szabolcs
A mutation to a predicate which warrants a signal _is_ an event:
http://groups.google.com/group/comp.programming.threads/msg/fc1e13463a9ecb43?hl=en
> I am afraid it is time for you to re-read that post.
> http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab92bcd12
> I claimed that CV cannot be used as a semaphore, i.e. it cannot be
> used for event signaling (see the terminology used in concurrent
> programming).
Wrong thread. I meant this one:
http://groups.google.com/group/comp.programming.threads/browse_frm/thread/110e57bb3290832c
With the focus on your conversation with David Butenhof. Especially these
posts:
Here is David trying to help correct your assertion that CV cannot be used
to wait on events:
http://groups.google.com/group/comp.programming.threads/msg/a934807273c2d95b
Here is your response where you challenged him to come up with code that
proves a CV can be used to wait on events:
http://groups.google.com/group/comp.programming.threads/msg/6ffe39b2cf90cc46
Here is my response which has some extremely trivial code:
http://groups.google.com/group/comp.programming.threads/msg/3c017578d62e0524
Why do you insist on making the incorrect assertion that CV cannot be used
to wait on events? Please try and understand that a predicate mutation IS an
event in and of itself indeed!
:^/
> > CV+mutex checks atomically the predicate during the
> > acquisition step. That was the race I was referring to.
> OK. Think about it and you will find that what you see now as a race
> is just illusion since the semaphore differes from the CV in that the
> semaphore has a state but the CV is stateless.
A CV must be used in conjunction with a predicate. This is the state.
Mutations to said state are transformed into events which are explicitly
signaled about.
> This latter is why you
> need that the release of the mutex and the suspension on the CV must
> be atomic. In case of a semaphore it is not required, since the
> semaphore can store signals. Just think it over.
> Besides, if there were a race, you could describe a valid scenario for
> it, wouldn't you?
Semaphores do not have the thread wake semantics that a CV has. There
are algorithms which depend on this. That is one reason why it's
difficult to use semaphores to build a proper CV. Can't you think of a
scenario? I can. But I am curious to see if you can. It's rather
trivial. There are examples on this group. You can bust an algorithm if
you expose it to a CV with broken wakeup semantics.
So you can try to formulate the above claim intelligently, because as
it stands it just does not make any sense.
> > I am afraid it is time for you to re-read that post.
> >http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab...
> > I claimed that CV cannot be used as a semaphore, i.e. it cannot be
> > used for event signaling (see the terminology used in concurrent
> > programming).
>
> Wrong thread. I meant this one:
>
> http://groups.google.com/group/comp.programming.threads/browse_frm/th...
My pointer is for the same thread but to a specific post. Why don't
you think before you post?
> With the focus on your conversation with David Butenhof. Especially these
> posts:
He simply could not understand concurrent programming terminology.
Besides, he is biased and extremely arrogant. He has not been able to
prove his claim since then. That is not an accident, because his claim
was nonsense coming from his ignorance.
> [...]
> Why do you insist on making the incorrect assertion that CV cannot be used
> to wait on events?
Just because nobody here could come up with any demonstration where
they use a single CV and communicate an event from one thread of
computation to another thread.
On the other hand, a single semaphore can be used to safely
communicate an event from one thread of computation to another thread.
That is the difference that those of you who lack any education in
concurrent programming, including your beloved master, cannot follow.
> Please try and understand that a predicate mutation IS an
> event in and of itself indeed!
I cannot understand nonsense like this.
Best Regards,
Szabolcs
That is why you cannot use a CV for event signaling. You need at least
an additional flag for it.
On the other hand, you can use a semaphore for event signaling without
any additional flag.
Q.E.D.
Best Regards,
Szabolcs
What do you mean exactly?
Semaphores can delay a thread and threads can wake up just like in the
case of CV. There are at least two main differences:
1. semaphores have internal state
2. threads waiting on semaphores do not wake up spuriously
> There are
> algorihtms which depend on this.
You are welcome to demonstrate it.
> That is one reason why its difficult to use
> semaphores to build a proper CV.
You do not have to build CV from semaphores but you can since the
semaphore is a general purpose synchronisation tool in concurrent
programming.
If the CV is provided by your environment as a primitive, use it.
If the CV is do not provided by your environment, you do not need it.
> Can't you think of a scenario? I can. But,
> I am curious to see if you can. Its rather trivial. There are examples on
> this group. You can bust an algorithm if you expose it to a CV with broken
> wakeup semantics.
You are welcome to demonstrate.
Perhaps before that you want to take a course in concurrent
programming which could help you avoid talking nonsense.
Best Regards,
Szabolcs
Probably we're talking at two different levels. I say that signaling a
semaphore implies that the predicate is true at signal time, while a
wakeup from a semaphore wait implies that the predicate, whose result
was cached in the state of the semaphore, was true sometime in the past.
If you signal with a semaphore that the event "queue filled" has
happened, you must still check that the queue is not empty after you
wake up to do something with this event.
Of course, since any program written with semaphores can be rewritten
using only CVs+mutexes, and the other way around, you can just (re)write
programs so that this isn't an issue, or so that it is a requirement.
A race is not necessarily a failure (see "harmless race"); it is a
concurrency situation that must be handled separately; I never meant
that this race isn't manageable. It's just that in spots it can get a
bit nasty, and prevent some programming schemes (some of them efficient
and useful, even if not conforming to some theorizations of MT) from
being used effectively.
Is mutex+condvar+(predicate|external status) superior to the semaphore?
I don't know, nor do I care (here).
Gian.
An event means "something happened" such as mutating the predicate into a
state which warrants a signal; e.g.:
class event {
    bool m_predicate;
    condvar m_cond;
    mutex m_mutex;

public:
    event() : m_predicate(false) {}

    void set() {
        {
            mutex::guard lock(m_mutex);
            // THIS IS A MUTATION TO THE PREDICATE
            // WHICH WARRANTS A SIGNAL; AN EVENT HAS OCCURRED!
            m_predicate = true;
        }
        // WE NEED TO SIGNAL ABOUT THE EVENT!
        m_cond.signal();
    }

    void wait() {
        mutex::guard lock(m_mutex);
        while (! m_predicate) {
            m_cond.wait(lock);
        }
        // AN EVENT OCCURRED!
        m_predicate = false;
    }
};
Why can't you understand that!?
:^/
> > > I am afraid it is time for you to re-read that post.
> > >http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab...
> > > I claimed that CV cannot be used as a semaphore, i.e. it cannot be
> > > used for event signaling (see the terminology used in concurrent
> > > programming).
> >
> > Wrong thread. I meant this one:
> >
> > http://groups.google.com/group/comp.programming.threads/browse_frm/th...
> My pointer is for the same thread but to a specific post. Why don't
> you think before you post?
I meant to say "wrong post"; sorry about that. Anyway, the OP in that
thread was trying to use a CV WITHOUT a predicate. From that clear misuse
you draw the erroneous conclusion that a CV cannot be used for event
signaling. You are in error, big time!
> > With the focus on your conversation with David Butenhof. Especially
> > these
> > posts:
> He simply could not understand concurrent programming terminology.
> Besides, he is biased and extremely arrogant. He could not prove his
> claim since then. That is not an accident. Because his claim was
> nonsense coming from his ignorance.
> > [...]
> > Why do you insist on making the incorrect assertion that CV cannot be
> > used
> > to wait on events?
> Just because nobody could come up here with any demonstration where he
> uses a single CV and communicates an event from one thread of
> computation to another thread.
I just did. In fact, I have demonstrated it to you several times. Here is
one of those times:
http://groups.google.com/group/comp.programming.threads/msg/6ffe39b2cf90cc46
You just don't get it; oh well...
> On the other hand, a single semaphore can be used to safely
> communicate an event from one thread of computation to another thread.
> That is the difference that those of you who lack any education in
> concurrent programming, including your beloved master, cannot follow.
You're too funny:
http://groups.google.com/group/comp.programming.threads/msg/298e5004c06b0747
> > Please try and understand that a predicate mutation IS an
> > event in and of itself indeed!
> I cannot understand nonsense like this.
Of course you can't...
:^D
> What do you mean exactly?
Do your own homework.
> Semaphores can delay a thread and threads can wake up just like in the
> case of CV. There are at least two main differences:
> 1. semaphores have internal state
> 2. threads waiting on semaphores do not wake up spuriously
> > There are
> > algorithms which depend on this.
> You are welcome to demonstrate it.
There are numerous examples; do your own homework.
> > That is one reason why it's difficult to use
> > semaphores to build a proper CV.
> You do not have to build a CV from semaphores, but you can, since the
> semaphore is a general-purpose synchronisation tool in concurrent
> programming.
> If the CV is provided by your environment as a primitive, use it.
> If the CV is not provided by your environment, you do not need it.
> > Can't you think of a scenario? I can. But,
> > I am curious to see if you can. It's rather trivial. There are examples
> > on
> > this group. You can bust an algorithm if you expose it to a CV with
> > broken
> > wakeup semantics.
> You are welcome to demonstrate.
> Perhaps before that you want to take a course in concurrent
> programming which could help you avoid talking nonsense.
You have obviously never built a condition variable from scratch. Why should
I waste my time trying to educate you?
> That is why you cannot use a CV for event signaling. You need at least
> an additional flag for it.
You are so dense!
> On the other hand, you can use a semaphore for event signaling without
> any additional flag.
In certain scenarios, that additional state can be REDUNDANT and completely
unnecessary. Think about it for a moment. For instance, take a queue... A
condvar can use the actual queue state to build the predicate. Why would you
use a semaphore to guard access to queue conditions when the number of items
in the queue can be determined from the queue state itself? The counter in
the semaphore is REDUNDANT in this case. Example:
class cond_queue {
    queue m_queue;
    mutex m_mutex;
    cond m_cond;
public:
    void push(void* state) {
        {
            mutex::guard lock(m_mutex);
            m_queue.push(state);
        }
        m_cond.signal();
    }

    void* pop() {
        void* state;
        mutex::guard lock(m_mutex);
        while (! (state = m_queue.try_pop())) {
            m_cond.wait(lock);
        }
        return state;
    }
};
- vs -
class sema_queue {
    queue m_queue;
    mutex m_mutex;
    sema m_sema;
public:
    void push(void* state) {
        {
            mutex::guard lock(m_mutex);
            m_queue.push(state);
        }
        m_sema.inc();
    }

    void* pop() {
        m_sema.dec();
        mutex::guard lock(m_mutex);
        return m_queue.try_pop();
    }
};
The semaphore version is using redundant state to determine if the queue is
empty or not. The condvar version is using information provided by the queue
itself; got it?
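In standard C++ terms the comparison might read roughly as follows (a sketch only; C++20 is assumed for std::counting_semaphore, and the element type is fixed to int just to keep the example self-contained):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <semaphore>

// Condvar version: the predicate is the queue's own state ("not empty").
class cond_queue_cxx {
    std::queue<int> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
public:
    void push(int v) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(v); }
        m_cond.notify_one();
    }
    int pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        int v = m_queue.front();
        m_queue.pop();
        return v;
    }
};

// Semaphore version: the counter is a second, redundant copy of "how many
// items are in the queue".
class sema_queue_cxx {
    std::queue<int> m_queue;
    std::mutex m_mutex;
    std::counting_semaphore<> m_items{0};
public:
    void push(int v) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(v); }
        m_items.release();
    }
    int pop() {
        m_items.acquire();                 // guarantees at least one item
        std::lock_guard<std::mutex> lock(m_mutex);
        int v = m_queue.front();
        m_queue.pop();
        return v;
    }
};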
[...]
OOPS! Wrong post. Here is the correction:
http://groups.google.com/group/comp.programming.threads/msg/3c017578d62e0524
> You just don't get it; oh well...
[...]
> What do you mean exactly?
Here are some basic questions which tie into proper CV wakeup semantics:
How do you broadcast a semaphore? Once you've got that... How do you ensure
that a broadcast operates on currently waiting threads only? In other words,
how do you ensure that a broadcast to wait generation A does not interfere
with any subsequent wait generations? These questions are important when
you're forced to use semaphores to build a condvar...
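For contrast, here is roughly what generation-correct broadcast semantics look like when a condvar is available (a sketch in standard C++; the class is illustrative, not anyone's library): each pulse() releases exactly the threads that were already waiting, because each waiter's predicate is "the epoch has advanced past the value I saw when I started waiting".

#include <condition_variable>
#include <mutex>

// A broadcast ("pulse") event: every thread currently waiting is released by
// one pulse(); threads that start waiting afterwards wait for the next pulse.
class broadcast_event {
    std::mutex m_mutex;
    std::condition_variable m_cond;
    unsigned long m_epoch = 0;             // advancing this is the predicate
public:
    void pulse() {
        { std::lock_guard<std::mutex> lock(m_mutex); ++m_epoch; }
        m_cond.notify_all();
    }
    void wait() {
        std::unique_lock<std::mutex> lock(m_mutex);
        unsigned long seen = m_epoch;
        m_cond.wait(lock, [&] { return m_epoch != seen; });
    }
};

Reproducing exactly that guarantee with bare semaphores is where the wait-generation bookkeeping comes in.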
[...]
Well, I will cut you some slack. You are experienced when it comes to
misusing semaphores wrt building fairly trivial synchronization primitives:
http://groups.google.com/group/comp.lang.c++/browse_frm/thread/8f21ab45c5d72a03
Yes, wrong post or you have some mental problem. I said: `nobody could
come up here with any demonstration where he uses a single CV and
communicates an event from one thread of computation to another
thread.'
And you come with a demonstration where you use a MUTEX and a flag
VARIABLE in addition to the CV.
On the other hand you can use a single semaphore to communicate an
event.
You failed again young friend.
Try again.
Optionally you may calm down.
Best Regards,
Szabolcs
> Yes, wrong post or you have some mental problem.
lol.
> I said: `nobody could
> come up here with any demonstration where he uses a single CV and
> communicates an event from one thread of computation to another
> thread.'
> And you come with a demonstration where you use a MUTEX and a flag
> VARIABLE in addition to the CV.
I don't think you don't know how to use condvars. See, you MUST use a
condvar in conjunction with a predicate. Any use of a condvar without a
predicate is a clear error. Based on that error, you make an assertion that
CV cannot be used for event signaling. This is a ridiculous statement. Only
you would use a condvar without a predicate. So, when you say they cannot be
used for events, please clarify that you're actually saying that Szabolcs
Ferenczi cannot use a CV for events. I know how to use CV properly, you
apparently do not. Fine. Speak for yourself.
> On the other hand you can use a single semaphore to communicate an
> event.
> You failed again young friend.
You are delusional.
> Try again.
> Optionally you may calm down.
lol.
>
> OTOH, it takes a CV AND a MUTEX to build a semaphore, while a semaphore
> is enough to build a CV and a MUTEX, theoretically. So I see also
> Szabolcs point now.
>
Sorry, I meant "while semaphores are enough...". My mind was a bit
fuzzy at 3am and I tend to mess up English uncountable/plurals even
when I am fully awake :-).
Gian
In contrast to you and your clown master, I not only know how to use a
condition variable, I also know why it is available and how it was
invented.
So, I saw that the guy used it in a wrong way:
http://groups.google.com/group/comp.programming.threads/msg/fc06ff81b98a9823
I have shown him how to use it correctly:
http://groups.google.com/group/comp.programming.threads/msg/41fa64966fe090ca
I have also shown him how to solve the same problem with semaphores:
http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab92bcd12
That was the moment both you and your clown master misunderstood some
basic stuff, jumped in, and started talking nonsense. Both of you are
lucky because you cannot make a fool of yourself already. So you are
free to behave like that.
> See, you MUST use a
> condvar in conjunction with a predicate.
And you need an additional mutex as well. The condition variable is
nothing else but an optimisation of the busy-waiting loop, because you
must wait for the predicate and not for any event. The condition
variable only makes it possible for you to delay the polling until it
is indicated that you can re-check the predicate because some change
might have happened. However, you do not wait for the signal; rather, you
wait for the predicate.
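In code, the point might be sketched like this (illustrative only; shared_state and predicate_holds() are placeholders, not anyone's API):

#include <condition_variable>
#include <mutex>

// The waiter is logically polling a predicate; the condition variable only
// lets it sleep between re-checks instead of spinning.
struct shared_state {
    std::mutex mutex;
    std::condition_variable cond;
    int value = 0;
    bool predicate_holds() const { return value > 0; }
};

// Busy-waiting version: re-check the predicate in a spin loop (burns CPU).
void wait_busy(shared_state& s) {
    for (;;) {
        std::lock_guard<std::mutex> lock(s.mutex);
        if (s.predicate_holds()) return;
    }
}

// Condvar version: the same loop, but the thread sleeps until it is told
// that re-checking might be worthwhile; it still waits for the predicate,
// not for the signal itself.
void wait_blocking(shared_state& s) {
    std::unique_lock<std::mutex> lock(s.mutex);
    while (!s.predicate_holds()) {
        s.cond.wait(lock);
    }
}

// The other thread mutates the predicate and then signals.
void make_ready(shared_state& s) {
    { std::lock_guard<std::mutex> lock(s.mutex); s.value = 1; }
    s.cond.notify_all();
}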
> Any use of a condvar without a
> predicate is a clear error.
Yes, that is why you and your clown master will never be able to do your
homework. You will keep trolling like this.
> Based on that error, you make an assertion that
> CV cannot be used for event signaling.
Clear. The CV just on its own cannot be used for event signaling. You
have just realised that you need some additional stuff for that as
well.
On the other hand you can use a semaphore and just a semaphore alone
for event signaling.
Is it so hard for the clowns to follow this simple issue?
> This is a ridiculous statement. Only
> you would use a condvar without a predicate.
I would not. And I never did. In fact I came up with a test for just
the kind of hackers like you and for other beginners. If you check
your code with the condition variable test, it fails if you used the
condition variable without any predicate:
http://groups.google.com/group/comp.programming.threads/browse_frm/thread/0ff7ac10adadee22
You are strongly advised to use it until you can learn the basics of
concurrent programming.
> So, when you say they cannot be
> used for events, please clarify that you're actually saying that Szabolcs
> Ferenczi cannot use a CV for events.
Nobody can use the CV without an additional flag for event signaling, and
nobody could show an example of that. Neither you nor your clown
master. You both failed.
> I know how to use CV properly, you
> apparently do not. Fine. Speak for yourself.
Yes, you are an excellent hacker, you can use it in practice as any
hacker can, only you are not aware what you are doing. Apparently you
have some mental problem. But in hacking you are very good, I must
admit.
So, just calm down my young friend.
Best Regards,
Szabolcs
> Probably we're talking at two different levels; I say that signaling a
> semaphore implies that the predicate is true at signal time; a wakeup on
> semaphore wait implies that the predicate, for which the result was
> cached in the state of the semaphore, was true sometime in the past. If
> you signal with a semaphore that the event "queue filled" has happened,
> you must also check that the queue is not empty after you wake up to do
> something with this event.
Correct. You do the same with the condition variable as well, i.e. re-
check the predicate. Both the condition variable and a semaphore in
this position are just optimising the busy-waiting loop. That is what
the condition variable was invented for.
> Of course, since any program written with semaphores can be rewritten using
> only CVs and mutexes, and the other way around, [...]
Yes, I agree.
> A race is not necessarily a failure (see "harmless race"); it is a concurrency
> situation that must be handled separately; I never meant that this race
> isn't manageable.
I have never heard of any "harmless race". I know that in any concurrent
program there is an inherent non-determinism, which you might interpret
as a "harmless race".
> It's just that in spots it can get a bit nasty and prevent some programming
> schemes (some of them efficient and useful, even if not conforming to
> some theorizations of MT) from being used effectively.
However, the aim of the non-deterministic constructs in the program is
to cope with the "harmless race" or to transform the "harmless race"
into a deterministic result. See the Guarded Commands of Dijkstra and
their applications in concurrent settings.
> Is Mutex+Condvar+(predicate|external-status) superior to a semaphore? -- I
> neither know nor care (here).
I think there is no question of superiority. Both are low-level
library means, and as such they are unsafe and error-prone to use too.
In any modern programming notation, concurrency should be built in at the
language level instead of relying on low-level means like these.
Best Regards,
Szabolcs
> In contrast to you and your clown master, I not only know how to use a
> condition variable, I also know why it is available and how it was
> invented.
A condvar, mutex and predicate are married together. Basically, they can be
thought of as a single unit, IMO of course.
> So, I saw that the guy used it in a wrong way:
> http://groups.google.com/group/comp.programming.threads/msg/fc06ff81b98a9823
> I have shown him how to use it correctly:
> http://groups.google.com/group/comp.programming.threads/msg/41fa64966fe090ca
You forgot to call `pthread_attr_destroy()' in the `init()' function. Anyway,
here is a quote:
"Please note that the shared variables `action' and `done' are used for
communicating the events and they are used in a handshake manner. The
condition variable only signals that there is some change in the shared
state space."
The condvar signals that there is a state change (e.g., predicate mutation).
This state change IS an event. What does the condvar signal? It signals a
thread waiting on the condvar. So, you basically admit that a condvar is
used to signal events, which wakes a thread that is waiting for an
event. Therefore, condvars can be used to signal and wait for events. The
events are predicate mutations which warrant a signal.
;^)
> I have also shown him how to solve the same problem with semaphores:
> http://groups.google.com/group/comp.programming.threads/msg/64cfa20ab92bcd12
>
> That was the moment both you and your clown master misunderstood some
> basic stuff, jumped in, and started talking nonsense. Both of you are
> lucky because you cannot make a fool of yourself already. So you are
> free to behave like that.
> > See, you MUST use a
> > condvar in conjunction with a predicate.
> And you need an additional mutex as well. The condition variable is
> nothing else but an optimisation of the busy-waiting loop, because you
> must wait for the predicate and not for any event.
A mutation to the predicate which warrants a signal _is_ an event. The
condvar can be used to wait for these events to occur.
> The condition
> variable only makes it possible for you to delay the polling until it
> is indicated that you can re-check the predicate because some change
> might have happened. However, you do not wait for the signal; rather, you
> wait for the predicate.
> > Any use of a condvar without a
> > predicate is a clear error.
> Yes, that is why you and your clown master will never be able to do your
> homework. You will keep trolling like this.
:^D
> > Based on that error, you make an assertion that
> > CV cannot be used for event signaling.
> Clear. The CV just on its own cannot be used for event signaling. You
> have just realised that you need some additional stuff for that as
> well.
> On the other hand you can use a semaphore and just a semaphore alone
> for event signaling.
What about event broadcasting?
> Is it so hard for the clowns to follow this simple issue?
> > This is a ridiculous statement. Only
> > you would use a condvar without a predicate.
> I would not. And I never did. In fact I came up with a test for just
> the kind of hackers like you and for other beginners. If you check
> your code with the condition variable test, it fails if you used the
> condition variable without any predicate:
> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/0ff7ac10adadee22
> You are strongly advised to use it until you can learn the basics of
> concurrent programming.
I know how to use condvars. I guess that your test would be useful for
newbies who fail to understand that a mutex, CV and predicate must be used
together; as a single unit...
> > So, when you say they cannot be
> > used for events, please clarify that you're actually saying that Szabolcs
> > Ferenczi cannot use a CV for events.
> Nobody can use the CV without an additional flag for event signaling, and
> nobody could show an example of that. Neither you nor your clown
> master. You both failed.
Asking somebody to deliberately misuse a condvar to prove a point is
nonsense.
> > I know how to use CV properly, you
> > apparently do not. Fine. Speak for yourself.
> Yes, you are an excellent hacker, you can use it in practice as any
> hacker can, only you are not aware what you are doing. Apparently you
> have some mental problem.
lol.
> But in hacking you are very good, I must
> admit.
> So, just calm down my young friend.
Can I borrow some of your tranquilizers? Weeeeeeeeeeeeeee!
:^D
> > In contrast to you and your clown master, I not only know how to use a
> > condition variable, I also know why it is available and how it was
> > invented.
>
> A condvar, mutex and predicate are married together. Basically, they can be
> thought of as a single unit, IMO of course.
IYO, of course. But you are just a hacker and hence you cannot
understand the reasons.
I have explained to you and to your clown masters here several times
what was the reason the condition variable was invented. Well, it used
to be a single unit (see original monitor concept) until the unit was
blown up by the MESA folks and it is propagated to POSIX as well as to
Java and its derivatives.
Java at least does not allow you to use the condition variable outside
the scope of the mutex but POSIX is even more clumsy. In POSIX
everything is falling apart.
So it is not a unit any more and hackers and beginners are able to
misuse it as the examples show.
You are not only ignorant, but as it seems, you are unable to learn
too.
Relax.
Best Regards,
Szabolcs
> > > In contrast to you and your clown master, I not only know how to use a
> > > condition variable, I also know why it is available and how it was
> > > invented.
> >
> > A condvar, mutex and predicate are married together. Basically, they can
> > be
> > thought of as a single unit, IMO of course.
> IYO, of course. But you are just a hacker and hence you cannot
> understand the reasons.
lol.
> I have explained to you and to your clown masters here several times
> what was the reason the condition variable was invented. Well, it used
> to be a single unit (see original monitor concept) until the unit was
> blown up by the MESA folks and it is propagated to POSIX as well as to
> Java and its derivatives.
> Java at least does not allow you to use the condition variable outside
> the scope of the mutex but POSIX is even more clumsy. In POSIX
> everything is falling apart.
Apparently, you want a magical bulletproof language that does everything for
you. Fine. BTW, what's the name of that lang? Concurrent Pascal or
something? That's not going to work for me because I am a C programmer; oh
well.
;^(...
> So it is not a unit any more and hackers and beginners are able to
> misuse it as the examples show.
POSIX RULES! lol :^D
> You are not only ignorant, but as it seems, you are unable to learn
> too.
Were you looking into a mirror when you wrote that?
> Relax.
--
Ian Collins.
> The consumer threads wait for one of the following events (whichever
> comes first):
> 1) they can acquire a non-empty queue
> 2) they receive a termination request
So far so good. Now I can see more about your idea of a `concept of
acquisition'. I see you have a kind of built-in condition.
Now I wonder how you would solve the problem where you have two or
more consumers on the same queue and each of the consumers must either
receive two successive items from the SyncQueue or none. I am just
curious, you know.
Best Regards,
Szabolcs
Quite simple:

TH-consumer:
    wait( queue )
    v1, v2 = queue.popFront()
    queue.release()
    ... work with v1 and v2 ...

TH-producer:
    p1.push( [v1, v2] )
Gian.
Sorry, I meant

TH-producer:
    queue.push( [v1, v2] )
Gian.
Please let the producer send the items one by one.
What is the situation in this case?
Best Regards,
Szabolcs
1) Don't. You don't have to write threading programs just to demonstrate
things; you have to write threading programs to do things better. Show me
a wage-earning program in which the producer should push an item at a
time while the consumers must dequeue two items at a time, and I'll show
you an equivalent program in which the producer caches one item and
pushes them only in pairs.
However, even if there is no use for that constraint...
2) Like this:

TH-Producer:
    // provided push returns the atomically
    // read size of the queue at insertion...
    if queue.push( x ) % 2 == 0: sem.signal()

TH-Consumer:
    wait( sem )
    v1, v2 = queue.pop(), queue.pop()

That's if there is only one consumer. If there are more consumers:

TH-Consumer:
    wait( sem )
    wait( queue ) // this is a pure acquire; no wait.
    // but here, no
    v1, v2 = queue.pop(), queue.pop()
    if queue.size() > 2
        queue.release()
        sem.post() // == sem.release()
    else
        queue.release()
        // but don't post, so other threads won't be awakened.
    end
=================================
In case of need, we can add more structures at the language level to deal
with more complex cases; e.g. we may add a monitored queue with full
monitor semantics. Wait() will still acquire() it, and acquire() will
still check the acquisition condition, which may be provided from the
outside.
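For what it's worth, the multi-consumer scheme above can be rendered in standard C++ terms roughly like this (a sketch only; C++20 is assumed for std::counting_semaphore, the names are made up, and a single mutex plays the role of acquiring the queue):

#include <mutex>
#include <queue>
#include <semaphore>
#include <utility>

// The producer posts the "pairs" semaphore once for every second item, so a
// consumer that gets past acquire() knows at least one unclaimed pair is in
// the queue and removes both items under one lock.
class pair_queue {
    std::mutex m_mutex;
    std::queue<int> m_queue;
    std::counting_semaphore<> m_pairs{0};
public:
    void push(int v) {
        bool completes_pair;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(v);
            // Consumers always remove items two at a time under this lock,
            // so the size parity tracks the total number of pushes: every
            // second push completes a pair.
            completes_pair = (m_queue.size() % 2 == 0);
        }
        if (completes_pair) m_pairs.release();
    }
    std::pair<int, int> pop_pair() {
        m_pairs.acquire();                 // one unclaimed pair is guaranteed
        std::lock_guard<std::mutex> lock(m_mutex);
        int a = m_queue.front(); m_queue.pop();
        int b = m_queue.front(); m_queue.pop();
        return {a, b};
    }
};

Here the semaphore counts complete pairs, and the mutex plays the role of the wait( queue )/queue.release() acquisition in the pseudocode above.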
Gian.
> TH-Producer:
> [...]
> if queue.push( x ) % 2 == 0: sem.signal()
> [...]
> TH-Consumer:
> wait( sem )
> wait( queue ) // this is a pure acquire; no wait.
> [...]
> In case of need, we can add more structures at language level to deal
> with more complex cases.
OK, clear. I think I know enough now.
Best Regards,
Szabolcs
>> With the focus on your conversation with David Butenhof. Especially these
>> posts:
>
> He simply could not understand concurrent programming terminology.
> Besides, he is biased and extremely arrogant. He could not prove his
> claim since then. That is not an accident. Because his claim was
> nonsense coming from his ignorance.
Wow. Imagine the irony. Szabolcs calling someone else "biased and arrogant".
I think we have evidence of yet another concept you're incapable of
comprehending.
For the language you've used to describe anyone who disagrees with you,
you would have been banned outright in any forum with moderation. You
are not merely narrow-minded but outright cruel to anyone who makes a
dissenting statement, no matter how much patience anyone attempts to
show you.
This is why I avoid wasting time fulfilling your requests for pointless
"evidence", which would be entirely redundant, and more importantly
which your history has shown you would merely ignore anyway. You're only
grasping for vague excuses to avoid admitting your mistakes, not making
valid technical arguments. I don't care to feed trolls; I'm not a
zookeeper for exotic fantasy creatures.
You have worked extremely hard in this forum to destroy any possible
respect anyone might have had for your opinions or statements.
Clown? Sorry, but that describes your behavior perfectly. While it's
clear that you have some technical knowledge, you go to great lengths to
hide it behind your "darkly comedic" facade.
You are both irrelevant and boring, Szabolcs.
> In contrast to you and your clown master, I not only know how to use a
> condition variable, I also know why it is available and how it was
> invented.
You apparently missed the fact that my statement was a double negative...
[...]