
POSIX Threads and Ada


Jim Rogers

Sep 3, 2002, 1:44:26 PM

There is a newly available document describing the GNAT Ada compiler
(part of the GNU tool chain) and how it handles concurrency.

You may find the document in many forms at
http://www.iuma.ulpgc.es/users/jmiranda/

Jim Rogers

Toshitaka Kumano

Sep 3, 2002, 5:48:25 PM

Let me add one reference relevant to the subject.

"Real Time Systems and Programming Languages:
Ada 95, Real-Time Java and Real-Time C/POSIX (3rd Edition)"
http://www.amazon.com/exec/obidos/ASIN/0201729881/

Further material for this book, such as presentations for lecturers,
is available at
http://www.cs.york.ac.uk/rts/RTSBookThirdEdition.html

In particular, for "Shared variable-based synchronization and communications",
chapter 8
http://www.cs.york.ac.uk/rts/RTSbookThirdEdition/chap8.pdf
has criticisms of "monitors" (e.g. those using condition variables)
and a brief introduction to "protected objects" in Ada.

--
Toshitaka Kumano
Kamakura, Japan

Alexander Terekhov

Sep 3, 2002, 7:33:52 PM


Toshitaka Kumano wrote:
[...]

> Especially for "Shared variable-based synchronization and communications",
> the chapter 8
> http://www.cs.york.ac.uk/rts/RTSbookThirdEdition/chap8.pdf

protected Blocker is
   entry Proceed;
private
   Release : Boolean := False;
end Blocker;

protected body Blocker is
   entry Proceed when Proceed'Count = 5 or Release is
   begin
      if Proceed'Count = 0 then
         Release := False;
      else
         Release := True;
      end if;
   end Proceed;
end Blocker;

Uhmm. Could someone please elaborate on this (1,2,3,4{,5} is clear,
but I'm somewhat puzzled with respect to subsequent entry calls...)?

regards,
alexander.

Jim Rogers

Sep 3, 2002, 11:47:18 PM

Alexander Terekhov wrote:


The condition for the Proceed entry is true when the number of tasks
waiting for the condition to be true equals 5, or when Release equals
true.

Inside the entry, Release is set to False if no more tasks are waiting
in this entry queue, and to True if more tasks are waiting. Ada processes
all the tasks in an entry queue before adding additional tasks, as long
as the entry predicate is true.

This construct allows tasks to proceed in groups of 5. The attribute
Proceed'Count returns the number of tasks suspended in the entry's
queue.

Note that Ada uses Algol-style syntax here: "=" is used for comparison
and ":=" is used for assignment.
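
For comparison, a rough C/pthreads sketch of the same "release in groups
of 5" idea could look like the following. This is illustrative only (the
names are invented), and it does not reproduce Ada's rule that already
queued callers are serviced before newly arriving ones:

/* Illustrative sketch only: a pthreads analogue of the Blocker object.
 * Threads block in proceed() until five are waiting; that group is then
 * released together. */
#include <pthread.h>

#define GROUP_SIZE 5

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int waiting = 0;   /* rough analogue of Proceed'Count    */
static int release = 0;   /* rough analogue of the Release flag */

void proceed(void)
{
    pthread_mutex_lock(&lock);
    waiting++;
    if (waiting == GROUP_SIZE) {          /* fifth caller opens the barrier */
        release = 1;
        pthread_cond_broadcast(&cond);
    } else {
        while (!release)                  /* loop re-tests the predicate    */
            pthread_cond_wait(&cond, &lock);
    }
    waiting--;
    if (waiting == 0)                     /* last one out closes the barrier */
        release = 0;
    pthread_mutex_unlock(&lock);
}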

I hope I have answered your question.

Jim Rogers

Alexander Terekhov

Sep 4, 2002, 5:21:00 AM


Yep. Thank you, Jim. I'm somewhat disappointed that "Ada processes
all the tasks in an entry queue before adding additional tasks, as
long as the entry predicate is true", though. That isn't throughput
and performance "friendly", AFAICS. Anyway, I have one more
question: < "Readers/Writers IV"/Acrobat:"78 of 100" >

protected body Control is

   entry Start_Read when not Writers and Request_Write'Count = 0 is
   begin Readers := Readers + 1; end Start_Read;

   procedure Stop_Read is
   begin Readers := Readers - 1; end Stop_Read;

   entry Request_Write when not Writers is
   begin Writers := True; end Request_Write;

   entry Start_Write when Readers = 0 is
   begin null; end Start_Write;

   procedure Stop_Write is
   begin Writers := False; end Stop_Write;

end Control;

Why not (get rid of Request_Write entry):

protected body Control is

   entry Start_Read when not Writers and Start_Write'Count = 0 is
                                         ^^^^^^^^^^^^^^^^^
   begin Readers := Readers + 1; end Start_Read;

   procedure Stop_Read is
   begin Readers := Readers - 1; end Stop_Read;

   entry Start_Write when not Writers and Readers = 0 is
   begin Writers := True; end Start_Write;

   procedure Stop_Write is
   begin Writers := False; end Stop_Write;

end Control;

<?> TIA.

regards,
alexander.

Jim Rogers

Sep 4, 2002, 10:28:46 AM

Alexander Terekhov wrote:


This rule was placed in Ada for reliability. Let's compare with the
Java approach. Java forces you to use wait/notifyAll pairs to handle
condition variables. There is absolutely no guarantee that a particular
thread will be able to proceed when many threads are waiting. In effect
the system randomly chooses which thread will execute next. If more
threads wait before the waiting thread of interest gets a chance to
continue, then the chances go up again that this poor waiting thread
will not proceed.

Ada tasking was designed for use in hard real time systems. There must
be some guarantee that every task waiting for a condition variable will
be able to proceed in a timely manner. This is, in fact, very friendly
to performance. It prevents your system from randomly failing due to
uncontrollable delays.


>
> protected body Control is
>
> entry Start_Read when not Writers and Request_Write'Count = 0 is
> begin Readers := Readers + 1; end Start_Read;
>
> procedure Stop_Read is
> begin Readers := Readers - 1; end Stop_Read;
>
> entry Request_Write when not Writers is
> begin Writers := True; end Request_Write;
>
> entry Start_Write when Readers = 0 is
> begin null; end Start_Write;
>
> procedure Stop_Write is
> begin Writers := False; end Stop_Write;
>
> end Control;
>
> Why not (get rid of Request_Write entry):


This does appear to be a poorly written protected object.
The Request_Write allows a task to proceed when there are no
active writers. The Start_Write allows a task to proceed when
there are no active readers.

This solution effectively places a higher priority on writing than
on reading. Reading can only proceed when no task is waiting to
write. Once tasks start reading, the writers must wait.

I agree that either Request_Write or Start_Write can be eliminated.
See my example below:

   entry Request_Write when not Writers and Readers = 0 is
   begin
      Writers := True;
   end Request_Write;

This ensures that a writer has exclusive access to the resource.
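
For comparison with the POSIX side of this thread, a rough C/pthreads
version of the same writer-priority scheme might look like the sketch
below. It is illustrative only (the names are invented); waiting_writers
plays roughly the role of Request_Write'Count:

/* Illustrative sketch: writer-priority readers/writer control in pthreads. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int readers = 0;          /* active readers                  */
static int writer = 0;           /* 1 while a writer holds the data */
static int waiting_writers = 0;  /* writers queued for access       */

void start_read(void)
{
    pthread_mutex_lock(&m);
    while (writer || waiting_writers > 0)   /* same test as the Start_Read barrier */
        pthread_cond_wait(&c, &m);
    readers++;
    pthread_mutex_unlock(&m);
}

void stop_read(void)
{
    pthread_mutex_lock(&m);
    if (--readers == 0)
        pthread_cond_broadcast(&c);         /* last reader lets a writer in */
    pthread_mutex_unlock(&m);
}

void start_write(void)
{
    pthread_mutex_lock(&m);
    waiting_writers++;
    while (writer || readers > 0)           /* combined Request_Write/Start_Write test */
        pthread_cond_wait(&c, &m);
    waiting_writers--;
    writer = 1;
    pthread_mutex_unlock(&m);
}

void stop_write(void)
{
    pthread_mutex_lock(&m);
    writer = 0;
    pthread_cond_broadcast(&c);             /* wake readers and writers; they re-test */
    pthread_mutex_unlock(&m);
}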


>
> protected body Control is
>
> entry Start_Read when not Writers and Start_Write'Count = 0 is
> ^^^^^^^^^^^^^^^^^
> begin Readers := Readers + 1; end Start_Read;


This adjustment insists that you cannot read if another task
is writing and another task is waiting to write. You can develop
deadlock with this approach (similar problem in Start_Write below).
What happens when several tasks want to write to the resource while
several tasks want to read from the resource? You get a very nasty
deadlock.

Start_Read will also allow you to read if another task is writing
but no more tasks are waiting in the Start_Write entry queue.
This violates the exclusive access needed for the write operation.

The requeue solution above eliminates such a deadlock by making
the condition basically a two step operation.


>
> procedure Stop_Read is
> begin Readers := Readers - 1; end Stop_Read;
>
> entry Start_Write when not Writers and Readers = 0 is
> begin Writers := True; end Start_Write;
>
> procedure Stop_Write is
> begin Writers := False; end Stop_Write;
>
> end Control;
>


Jim Rogers


Alexander Terekhov

Sep 4, 2002, 11:02:31 AM


Jim Rogers wrote:
>
> Alexander Terekhov wrote:

[...I'm somewhat disappointed that "Ada processes all the tasks
in an entry queue before adding additional tasks, as long as
the entry predicate is true", though...]

> This rule was placed in Ada for reliability. Let's compare with the
> Java approach.

Let's compare it with POSIX Threads conditions/monitors under realtime
scheduling...

http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_signal.html

[...]


> Ada tasking was designed for use in hard real time systems. There must
> be some guarantee that every task waiting for a condition variable will
> be able to proceed in a timely manner. This is, in fact, very friendly
> to performance.

I don't think so. What's wrong with the POSIX approach?

[...]


> > protected body Control is
> >
> > entry Start_Read when not Writers and Start_Write'Count = 0 is
> > ^^^^^^^^^^^^^^^^^
> > begin Readers := Readers + 1; end Start_Read;
>
> This adjustment insists that you cannot read if another task
> is writing and another task is waiting to write. You can develop
> deadlock with this approach (similar problem in Start_Write below).
> What happens when several tasks want to write to the resource while
> several tasks want to read from the resource? You get a very nasty
> deadlock.

Uhmm. Could you please provide some illustration (with a couple of
threads) demonstrating that we really can get very nasty *deadlock*
using the predicates/barriers from the modified version? Here we go:

   entry Start_Read when not Writers and Start_Write'Count = 0 is
   begin Readers := Readers + 1; end Start_Read;

   procedure Stop_Read is
   begin Readers := Readers - 1; end Stop_Read;

   entry Start_Write when not Writers and Readers = 0 is
   begin Writers := True; end Start_Write;

   procedure Stop_Write is
   begin Writers := False; end Stop_Write;

regards,
alexander.

Jim Rogers

Sep 4, 2002, 12:24:15 PM

Alexander Terekhov wrote:

> Jim Rogers wrote:
>
>>Alexander Terekhov wrote:
>>
>
> [...I'm somewhat disappointed that "Ada processes all the tasks
> in an entry queue before adding additional tasks, as long as
> the entry predicate is true", though...]
>
>
>>This rule was placed in Ada for reliability. Let's compare with the
>>Java approach.
>>
>
> Let's compare it with POSIX Threads conditions/monitors under realtime
> scheduling...
>
> http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_signal.html
>
> [...]


From the reference above:

The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by
a thread whether or not it currently owns the mutex that threads calling
pthread_cond_wait() or pthread_cond_timedwait() have associated with the
condition variable during their waits; however, if predictable scheduling
behavior is required, then that mutex shall be locked by the thread calling
pthread_cond_broadcast() or pthread_cond_signal().


Please explain to me how locking a mutex prior to calling pthread_cond_signal()
or pthread_cond_broadcast() ensures predictable scheduling. This still causes
the waiting threads to contend for the mutex. This contention is in itself
not performance friendly because it can cause a number of threads to be
scheduled, blocking other threads on the same processor. Most of the
scheduled threads will simply have to call pthread_cond_wait() or
pthread_cond_timedwait() again. This is a lot of wasted cycles for the
threads that simply must wait again.


>
>>Ada tasking was designed for use in hard real time systems. There must
>>be some guarantee that every task waiting for a condition variable will
>>be able to proceed in a timely manner. This is, in fact, very friendly
>>to performance.
>>
>
> I don't think so. What's wrong with POSIX approach?


The POSIX approach does not produce a provable result.
You cannot prove that a particular thread will be able to proceed.
It may always lose the contention for the mutex.

There is no way to enforce a FIFO wait for a resource.


>
> [...]
>
>>>protected body Control is
>>>
>>> entry Start_Read when not Writers and Start_Write'Count = 0 is
>>> ^^^^^^^^^^^^^^^^^
>>> begin Readers := Readers + 1; end Start_Read;
>>>
>>This adjustment insists that you cannot read if another task
>>is writing and another task is waiting to write. You can develop
>>deadlock with this approach (similar problem in Start_Write below).
>>What happens when several tasks want to write to the resource while
>>several tasks want to read from the resource? You get a very nasty
>>deadlock.
>>
>
> Uhmm. Could you please provide some illustration (with a couple of
> threads) demonstrating that we really can get very nasty *deadlock*
> using the predicated/barriers from the modified version? Here we go:
>
> entry Start_Read when not Writers and Start_Write'Count = 0 is
> begin Readers := Readers + 1; end Start_Read;
>
> procedure Stop_Read is
> begin Readers := Readers - 1; end Stop_Read;
>
> entry Start_Write when not Writers and Readers = 0 is
> begin Writers := True; end Start_Write;
>
> procedure Stop_Write is
> begin Writers := False; end Stop_Write;


Sorry about that. I see that I misread the code (I just woke up this
morning). I thought it said

entry Start_Write when not Writers and Start_Readers'Count = 0

I agree there is no deadlock as written.

How would you achieve this using POSIX?
Does POSIX allow you to determine the number of threads waiting for
a condition to become true?
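
(As far as I know, POSIX exposes no such count; a portable program has to
keep its own counter next to the condition variable. A minimal illustrative
sketch, with invented names, might look like this:)

/* Illustrative sketch: a condition variable with an application-maintained
 * waiter count, a rough analogue of Ada's E'Count attribute. */
#include <pthread.h>

struct counted_cond {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             waiters;   /* threads currently blocked in counted_wait() */
};

void counted_wait(struct counted_cond *cc)
{
    /* Caller must already hold cc->mutex and must re-test its own
     * predicate in a loop around this call, as usual with pthreads. */
    cc->waiters++;
    pthread_cond_wait(&cc->cond, &cc->mutex);
    cc->waiters--;
}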

Jim Rogers

Dale Stanbrough

Sep 4, 2002, 5:47:14 PM

Jim Rogers wrote:

> The POSIX approach does not produce a provable result.
> You cannot prove that a particular thread will be able to proceed.
> It may always loose the contention for the mutex.
>
> There is no way to enforce a FIFO wait for a resource.


Unless of course you use SCHED_FIFO.

Dale

Jim Rogers

Sep 4, 2002, 8:57:34 PM

Dale Stanbrough wrote:


Please explain how SCHED_FIFO works.
To me this implies some sort of tracking of the order that different
threads attempt to access a monitor.

Jim Rogers

Dale Stanbrough

Sep 4, 2002, 11:24:45 PM

Jim Rogers wrote:

> Please explain how SCHED_FIFO works.
> To me this implies some sort of tracking of the order that different
> threads attempt to access a monitor.


If you specify the scheduling algorithm as being SCHED_FIFO you
get much the same scheduling as Ada promises with its real-time
annex, i.e. priority-based, first-in first-out, non-preemptive,
run-till-blocked semantics (I don't think it implies priority
inheritance though).
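
For what it's worth, requesting SCHED_FIFO for a new thread looks roughly
like the sketch below. It is illustrative only, and assumes the
implementation supports _POSIX_THREAD_PRIORITY_SCHEDULING and that the
caller has sufficient privileges:

/* Illustrative sketch: create a thread with explicit SCHED_FIFO scheduling. */
#include <pthread.h>
#include <sched.h>

int start_fifo_thread(pthread_t *t, void *(*fn)(void *), void *arg, int prio)
{
    pthread_attr_t attr;
    struct sched_param sp;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    sp.sched_priority = prio;
    pthread_attr_setschedparam(&attr, &sp);

    return pthread_create(t, &attr, fn, arg);
}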

Dale

Steve Watt

Sep 5, 2002, 12:36:00 AM

In article <dstanbro-91DC10...@news-server.bigpond.net.au>,

Well, sorta. The _POSIX_THREAD_PRIORITY_SCHEDULING option provided
by the implementation (in <unistd.h>) means that the wakeup sequence
between threads of different priority with either SCHED_FIFO or SCHED_RR
will be well defined, a.k.a. FIFO.

SCHED_FIFO, all by itself, does *not* do that -- it only implies a
"run to completion" model for each thread, with only higher priority
threads preempting. At least on a uniprocessor. Hard to say what
it means on a multiprocessor...
--
Steve Watt KD6GGD PP-ASEL-IA ICBM: 121W 56' 57.8" / 37N 20' 14.9"
Internet: steve @ Watt.COM Whois: SW32
Free time? There's no such thing. It just comes in varying prices...

Alexander Terekhov

Sep 5, 2002, 5:03:37 AM


Jim Rogers wrote:
[...]

> Please explain how SCHED_FIFO works.

http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_08.html#tag_02_08_04_01
(Scheduling Policies)

http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_09.html#tag_02_09_04
(Thread Scheduling)

regards,
alexander.

Alexander Terekhov

Sep 5, 2002, 5:46:30 AM


Jim Rogers wrote:
[...]

> The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by
> a thread whether or not it currently owns the mutex that threads calling
> pthread_cond_wait() or pthread_cond_timedwait() have associated with the
> condition variable during their waits; however, if predictable scheduling
> behavior is required, then that mutex shall be locked by the thread calling
> pthread_cond_broadcast() or pthread_cond_signal().
>
> Please explain to me how locking a mutex prior to calling pthread_cond_signal()
> or pthread_cond_broadcast() ensures predictable scheduling.

http://groups.google.com/groups?selm=3A70296F.B651AE34%40compaq.com
(Subject: Re: pthread_cond_* implementation questions)

> This still causes the waiting threads to contend for the mutex. This

> contention is in itself not performance friendly...

http://groups.google.com/groups?selm=379C79ED.2CE07AB2%40zko.dec.com
("wait morphing", etc. Subject: Re: scheduling after cond_broadcast)

> There is no way to enforce a FIFO wait for a resource.

Uhmm, do you mean FIFO "ownership" (i.e. sort of "fairness")? If so,
check out the following articles:

http://groups.google.com/groups?selm=JUEn8.1441%24fL6.28666%40news.cpqcorp.net
http://groups.google.com/groups?selm=KiZn8.1500%24fL6.29599%40news.cpqcorp.net

regards,
alexander.

Jim Rogers

Sep 5, 2002, 9:37:14 AM

Alexander Terekhov wrote:

> Jim Rogers wrote:
> [...]
>
>>The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by
>>a thread whether or not it currently owns the mutex that threads calling
>>pthread_cond_wait() or pthread_cond_timedwait() have associated with the
>>condition variable during their waits; however, if predictable scheduling
>>behavior is required, then that mutex shall be locked by the thread calling
>>pthread_cond_broadcast() or pthread_cond_signal().
>>
>>Please explain to me how locking a mutex prior to calling pthread_cond_signal()
>>or pthread_cond_broadcast() ensures predictable scheduling.
>>
>
> http://groups.google.com/groups?selm=3A70296F.B651AE34%40compaq.com
> (Subject: Re: pthread_cond_* implementation questions)
>
>
>>This still causes the waiting threads to contend for the mutex. This
>>contention is in itself not performance friendly...
>>
>
> http://groups.google.com/groups?selm=379C79ED.2CE07AB2%40zko.dec.com
> ("wait morphing", etc. Subject: Re: scheduling after cond_broadcast)


This document supports my point. A broadcast stimulates all waiting
threads to attempt to acquire the mutex. One will succeed and the others
will be placed back in the synchronization wait queue. Furthermore, just
because a thread acquires the mutex does not mean that it succeeds. It must
then evaluate the predicate. If the predicate is false the thread must
release the mutex and rejoin the synchronization wait queue.
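
For reference, the waiting pattern being described here is the usual
pthreads predicate loop. A minimal illustrative sketch (the predicate is
just a placeholder flag):

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int predicate = 0;   /* placeholder shared condition */

void wait_for_predicate(void)
{
    pthread_mutex_lock(&mutex);
    while (!predicate)                     /* re-test after every wakeup   */
        pthread_cond_wait(&cond, &mutex);  /* atomically unlocks and waits */
    /* ... act on the shared state while still holding the mutex ... */
    pthread_mutex_unlock(&mutex);
}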


>
>
>>There is no way to enforce a FIFO wait for a resource.
>>
>
> Uhmm, do you mean FIFO "ownership" (i.e. sort of "fairness")? If so,
> check out the following articles:
>
> http://groups.google.com/groups?selm=JUEn8.1441%24fL6.28666%40news.cpqcorp.net
> http://groups.google.com/groups?selm=KiZn8.1500%24fL6.29599%40news.cpqcorp.net


Note that these URLs do talk about a synchronization wait queue.
There is also a set of scheduling queues defined in POSIX.

I noted that the pthread signal is used in special cases where all
threads are waiting for the same predicate. The existence of the signal
is for efficiency in this special case. The Ada model provides a queue
for each entry. Each entry corresponds to a single predicate. What Ada
has done is provide an additional queue of tasks (threads) waiting for
a specified predicate. When the predicate is evaluated to be true an
equivalent to a pthread signal is sent to the entry queue. This relieves
the need to awaken unrelated tasks, saving the processing overhead
associated with awakening a thread that will not be allowed to proceed.

For Ada, there are two defined queuing policies for real time systems.
They correspond to POSIX policies: FIFO and Priority. In Ada these
policies are used by entry queues to determine which task to awaken
on an entry queue. Efficiency is also gained in Ada by forbidding
blocking calls during a protected operation. This means that the
overall locking time for a protected object is typically extremely
short.

Jim Rogers

Alexander Terekhov

Sep 5, 2002, 11:11:52 AM


Jim Rogers wrote:
[...]

> A broadcast stimulates all waiting threads to attempt to acquire the mutex.

Well, I'd say that broadcast stimulates all waiting threads to stop
waiting, reacquire the mutex, and reevaluate the predicate upon return
from the *wait() calls on the condition variable.

> One will succeed and the others will be placed back in the
> synchronization wait queue.

Yeah, others will have to "wait", but *on mutex*.

> Furthermore, just
> because a thread acquires the mutex does not mean that it succeeds. It must
> then evaluate the predicate. If the predicate is false the thread must
> release the mutex and rejoining the synchronization wait queue.

Right. "so what"?

[...]


> I noted that the pthread signal is used in special cases where all
> threads are waiting for the same predicate. The existence of the signal
> is for efficiency in this special case. The Ada model provides a queue
> for each entry.

Pthreads model provides a queue for each condition variable.

> Each entry corresponds to a single predicate.

Well, a condition variable may correspond to multiple predicates
(and broadcast is the way to handle it), but ``usually'', each
condition variable corresponds to a single predicate.

> What Ada has done is provide an additional queue of tasks (threads)
> waiting for a specified predicate.

Well, to me, what Ada has done is this:

A) Protected object is "protected" via a sort of read/write lock.

B) This lock is released (after it was acquired in write mode by
some task that called some entry or procedure) only when the
predicates of all entries WITH WAITERS (inside the "eggshell")
are false. Essentially, Ada implementation ought to "hand off"
lock ownership from one task to another (while all other tasks
contending for the eggshell's lock remain blocked).

Is this correct?

http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node17.htm#SECTION00512000000000000000
("The eggshell model")

regards,
alexander < still disappointed ;-) >

Mark Lorenzen

Sep 5, 2002, 12:12:55 PM


"Alexander Terekhov" <tere...@web.de> wrote in message
news:3D75D07C...@web.de...

[cut]

> Yep. Thank you, Jim. I'm somewhat disappointed that "Ada processes
> all the tasks in an entry queue before adding additional tasks, as
> long as the entry predicate is true", though. That isn't throughput
> and performance "friendly", AFAICS. Anyway, I have one more

[cut]

Have a look at the Ada 95 Rationale:

http://www.adapower.com/rationale/

Especially the part on protected types:

http://www.adapower.com/rationale/rat95-p2-9.html#1

The efficiency considerations of protected types are explained in section
9.1.3.

Generally the Rationale is a good place to look for an explanation of Ada
design decisions.

- Mark


Jim Rogers

Sep 5, 2002, 1:24:11 PM

Alexander Terekhov wrote:

>>What Ada has done is provide an additional queue of tasks (threads)
>>waiting for a specified predicate.
>>
>
> Well, to me, what Ada has done is this:
>
> A) Protected object is "protected" via a sort of read/write lock.
>
> B) This lock is released (after it was acquire in write mode by
> some task that called some entry or procedure) only when the
> predicates of all entries WITH WAITERS (inside the "eggshell")
> are false. Essentially, Ada implementation ought to "hand off"
> lock ownership from one task to another (while all other tasks
> contending for the eggshell's lock remain blocked).
>
> Is this correct?


It is mostly correct.
There are three kinds of protected operations.
Tasks calling Protected functions must acquire a read lock on the
protected object, but multiple tasks may simultaneously acquire
that lock. Protected functions are supposed to only report the state
of the protected object.

Tasks calling protected procedures must acquire the write lock on the
protected object. Protected procedures have no predicate. There is no
task queue associated with protected procedure calls. Lock acquisition
may be implemented exactly as is done for pthread mutex acquisition.
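
A rough pthreads analogue of just this locking aspect (illustrative only;
it covers functions and procedures, not entries and their barriers) would
be a read/write lock:

#include <pthread.h>

static pthread_rwlock_t po_lock = PTHREAD_RWLOCK_INITIALIZER;
static int state;   /* stand-in for the protected object's private data */

int protected_function(void)      /* read-only: shared (read) lock */
{
    int s;
    pthread_rwlock_rdlock(&po_lock);
    s = state;
    pthread_rwlock_unlock(&po_lock);
    return s;
}

void protected_procedure(int v)   /* may modify: exclusive (write) lock */
{
    pthread_rwlock_wrlock(&po_lock);
    state = v;
    pthread_rwlock_unlock(&po_lock);
}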

For protected entries I will quote from "Ada as a Second Language" by
Norman Cohen.

"When the barriers of a protected object are reevaluated following a
change in the state of the protected object, if there are any calls
queued on entry queues with open barriers one of these calls must be
serviced before any *new* protected-entry call is added to the entry
queue or any *new* protected operation is started.

After a previously queued entry call is serviced in accordance with
this rule, the barriers are reevaluated and the rule is applied once
again. This repeats until all entry queues are either closed or empty;
then new calls are considered. The effect of the rule is that a
protected object makes as much progress as possible by servicing already
queued (or requeued) entry calls before any new calls are allowed to
influence the protected object. From outside a protected object, calls
on visible protected procedures or protected entries form a sequence of
stimuli each potentially causing the protected object to undergo a
transition from one state to another. If we look inside the protected
object, we may see that some of these transitions actually consist of
a series of internal transitions, resulting from the protected entry or
protected procedure call that formed the original stimulus, plus the
already queued entry calls that were serviced before another external
stimulus was considered. The sequence of protected operations that
constitute an external transition is called a *protected action*."

Jim Rogers

Alexander Terekhov

Sep 5, 2002, 1:30:05 PM


Indeed. Okay. Here's what I was missing:

"....The barriers may be evaluated,
and the entry bodies executed, by any convenient thread of control.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It need not be the thread of the original caller."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Thanks. I guess, it would be really nice to have some mechanism (e.g.
pthread_protectedobject_t ;-) ) that would provide the same "mechanics"
in Pthreads/C... and, perhaps, something using function objects and
templates (sort of wrapper) in hypothetical Pthreads/C++... (yeah, C++
"boosters" at boost.org should definitely consider it ;-) ;-) )

regards,
alexander.

mvitalo

Sep 5, 2002, 10:25:30 PM

Alexander Terekhov <tere...@web.de> wrote in message news:<3D77949D...@web.de>...

[snip]

>
> Indeed. Okay. Here's what I was missing:
>
> "....The barriers may be evaluated,
> and the entry bodies executed, by any convenient thread of control.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> It need not be the thread of the original caller."
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Thanks. I guess, it would be really nice to have some mechanism (e.g.
> pthread_protectedobject_t ;-) ) that would provide the same "mechanics"
> in Pthreads/C... and, perhaps, something using function objects and
> templates (sort of wrapper) in hypothetical Pthreads/C++... (yeah, C++
> "boosters" at boost.org should definitely consider it ;-) ;-) )

I'm curious, given the issues with the Boost threads approach that I've read
about, is there another threading implementation or competing library being
considered for the C++ threads library? Is someone working on a Pthreads/C++
library actively?

thanks and take care,
mike vitalo

Toshitaka Kumano

Sep 6, 2002, 4:28:44 PM

Alexander Terekhov wrote:
> > I noted that the pthread signal is used in special cases where all
> > threads are waiting for the same predicate. The existence of the signal
> > is for efficiency in this special case. The Ada model provides a queue
> > for each entry.
>
> Pthreads model provides a queue for each condition variable.

What "standard" makes you think Pthreads model provides a "queue",
in other words, First Come First Served ?

Imagine that several threads of same priority are blocked with
pthread_cond_(timed)wait() on a CV.
When a thread issues pthread_cond_signal/broadcast() on the CV,
which thread among waiting is expected to run FIRST ?

In my understanding, that is "undefined" or "implementation-defined".

If any statement in standards, e.g. "IEEE Std 1003.1-2001", specifies
the sitiuation clealy, please show me. I am eager to know.


In Ada, "First Blocked First Enter" is *required* by ISO/IEC 8652:1995(E)
as follows:
http://www.adaic.org/standards/95lrm/html/RM-9-5-3.html
"The entry queuing policy controls selection among queued calls both
for task and protected entry queues.
The default entry queuing policy is to select calls on a given entry queue
in order of arrival."

Steve Watt

Sep 7, 2002, 3:00:10 AM

In article <alb340$g84$1...@newsreader.mailgate.org>,

Toshitaka Kumano <kum...@cl.cilas.net> wrote:
>Alexander Terekhov wrote:
>> > I noted that the pthread signal is used in special cases where all
>> > threads are waiting for the same predicate. The existence of the signal
>> > is for efficiency in this special case. The Ada model provides a queue
>> > for each entry.
>>
>> Pthreads model provides a queue for each condition variable.
>
>What "standard" makes you think Pthreads model provides a "queue",
>in other words, First Come First Served ?

POSIX 1003.1-1994, a.k.a. POSIX.1b. When threads have SCHED_FIFO
or SCHED_RR as their scheduling policy, the highest priority thread
waiting the longest time will be awakened first.

>Imagine that several threads of same priority are blocked with
>pthread_cond_(timed)wait() on a CV.
>When a thread issues pthread_cond_signal/broadcast() on the CV,
>which thread among waiting is expected to run FIRST ?

The longest waiter, assuming scheduling policy is set to a realtime
one.

>In my understanding, that is "undefined" or "implementation-defined".
>
>If any statement in standards, e.g. "IEEE Std 1003.1-2001", specifies
>the sitiuation clealy, please show me. I am eager to know.

It's only implementation defined for SCHED_OTHER (which is, naturally,
the default scheduling policy). It is clearly defined in the other
cases.

I would be *VERY* surprised if this changed from 1003.1-1994 to
1003.1-2001, but I haven't looked at the latter nearly as closely.

Alexander Terekhov

Sep 7, 2002, 9:48:16 AM


Toshitaka Kumano wrote:
>
> Alexander Terekhov wrote:
> > > I noted that the pthread signal is used in special cases where all
> > > threads are waiting for the same predicate. The existence of the signal
> > > is for efficiency in this special case. The Ada model provides a queue
> > > for each entry.
> >
> > Pthreads model provides a queue for each condition variable.
>
> What "standard" makes you think Pthreads model provides a "queue",
> in other words, First Come First Served ?
>
> Imagine that several threads of same priority are blocked with
> pthread_cond_(timed)wait() on a CV.
> When a thread issues pthread_cond_signal/broadcast() on the CV,
> which thread among waiting is expected to run FIRST ?
>
> In my understanding, that is "undefined" or "implementation-defined".
>
> If any statement in standards, e.g. "IEEE Std 1003.1-2001", specifies
> the sitiuation clealy, please show me. I am eager to know.

http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_286
("Priority Scheduling"...)

http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_330
("Scheduling" and "Scheduling Allocation Domain")

http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_08.html#tag_02_08_04
("Process Scheduling"...)

http://www.opengroup.org/onlinepubs/007904975/xrat/xsh_chap02.html#tag_03_02_08_16
("Process Scheduling"...)

http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_09.html#tag_02_09_04
("Thread Scheduling"...)

http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_wait.html

"....
Features of Mutexes and Condition Variables

It had been suggested that the mutex acquisition and release
be decoupled from condition wait. This was rejected because
it is the combined nature of the operation that, in fact,
facilitates realtime implementations. Those implementations
can atomically move a high-priority thread between the
condition variable and the mutex in a manner that is
transparent to the caller. This can prevent extra context
switches and provide more deterministic acquisition of a
mutex when the waiting thread is signaled.

Thus, fairness and priority issues can be dealt with directly
by the scheduling discipline.
....
Scheduling Behavior of Mutexes and Condition Variables

Synchronization primitives that attempt to interfere with
scheduling policy by specifying an ordering rule are
considered undesirable. Threads waiting on mutexes and
condition variables are selected to proceed in an order
dependent upon the scheduling policy rather than in some
fixed order (for example, FIFO or priority). Thus, the
scheduling policy determines which thread(s) are awakened
and allowed to proceed.
...."

http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_signal.html

"....
If more than one thread is blocked on a condition variable,
the scheduling policy shall determine the order in which
threads are unblocked. When each thread unblocked as a
result of a pthread_cond_broadcast() or pthread_cond_signal()
returns from its call to pthread_cond_wait() or
pthread_cond_timedwait(), the thread shall own the mutex
with which it called pthread_cond_wait() or
pthread_cond_timedwait(). The thread(s) that are unblocked
shall contend for the mutex according to the scheduling
policy (if applicable), and as if each had called
pthread_mutex_lock().

The pthread_cond_broadcast() or pthread_cond_signal()
functions may be called by a thread whether or not it
currently owns the mutex that threads calling
pthread_cond_wait() or pthread_cond_timedwait() have
associated with the condition variable during their waits;
however, if predictable scheduling behavior is required,
then that mutex shall be locked by the thread calling
pthread_cond_broadcast() or pthread_cond_signal().

...."

regards,
alexander.

t...@cs.ucr.edu

Sep 9, 2002, 12:11:50 AM

Alexander Terekhov <tere...@web.de> wrote:
[...]
+ http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_286
+ ("Priority Scheduling"...)

+ http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_330
+ ("Scheduling" and "Scheduling Allocation Domain")

+ http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_08.html#tag_02_08_04
+ ("Process Scheduling"...)

+ http://www.opengroup.org/onlinepubs/007904975/xrat/xsh_chap02.html#tag_03_02_08_16
+ ("Process Scheduling"...)

+ http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_09.html#tag_02_09_04
+ ("Thread Scheduling"...)

+ http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_wait.html

+ "....
+ Features of Mutexes and Condition Variables

+ It had been suggested that the mutex acquisition and release
+ be decoupled from condition wait. This was rejected because
+ it is the combined nature of the operation that, in fact,
+ facilitates realtime implementations. Those implementations
+ can atomically move a high-priority thread between the
+ condition variable and the mutex in a manner that is
+ transparent to the caller. This can prevent extra context
+ switches and provide more deterministic acquisition of a
+ mutex when the waiting thread is signaled.

+ Thus, fairness and priority issues can be dealt with directly
+ by the scheduling discipline.
+ ....
+ Scheduling Behavior of Mutexes and Condition Variables

+ Synchronization primitives that attempt to interfere with
+ scheduling policy by specifying an ordering rule are
+ considered undesirable. Threads waiting on mutexes and
+ condition variables are selected to proceed in an order
+ dependent upon the scheduling policy rather than in some
+ fixed order (for example, FIFO or priority). Thus, the
+ scheduling policy determines which thread(s) are awakened
+ and allowed to proceed.
+ ...."

+ http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_signal.html

+ "....
+ If more than one thread is blocked on a condition variable,
+ the scheduling policy shall determine the order in which
+ threads are unblocked. When each thread unblocked as a
+ result of a pthread_cond_broadcast() or pthread_cond_signal()
+ returns from its call to pthread_cond_wait() or
+ pthread_cond_timedwait(), the thread shall own the mutex
+ with which it called pthread_cond_wait() or
+ pthread_cond_timedwait(). The thread(s) that are unblocked
+ shall contend for the mutex according to the scheduling
+ policy (if applicable), and as if each had called
+ pthread_mutex_lock().

+ The pthread_cond_broadcast() or pthread_cond_signal()
+ functions may be called by a thread whether or not it
+ currently owns the mutex that threads calling
+ pthread_cond_wait() or pthread_cond_timedwait() have
+ associated with the condition variable during their waits;
+ however, if predictable scheduling behavior is required,
+ then that mutex shall be locked by the thread calling
+ pthread_cond_broadcast() or pthread_cond_signal().
+ ...."

Either mutex acquisition following signals and broadcasts is
deterministic or it isn't. From the above quotes I still can't tell.
For example, does "Those implementations can atomically move a
high-priority thread between the condition variable and the mutex in a
manner that is transparent to the caller" mean that conforming
implementations must atomically move an awakened thread between the
condition variable and the mutex (provided that the signaller holds
the lock)? If so, why is this feature restricted to "high-priority"
threads, and how high must priority be in order for a signallee to
receive such deterministic signalling?

The last time I tried to understand the Pthreads protocol, I got the
(mis)impression that, at least in the multiprocessor case, post-signal
mutex acquisition was non-deterministic. Also, I got the impression
that conforming implementations are allowed to inject "spurious
signals", which further complicate the problem of bounding latencies.

Tom Payne

t...@cs.ucr.edu

Sep 9, 2002, 1:47:36 AM

Jim Rogers <jimmaure...@worldnet.att.net> wrote:
[...]
+ This document supports my point. A broadcast stimulates all waiting
+ threads to attempt to acquire the mutex. One will succeed and the others
+ will be placed back in the synchronization wait queue.

Unfortunately, no --- instead, they will contend for the mutex, which
is likely implemented via blocking interrupts/signals on a
uniprocessor and via a spinlock or a semaphore on a multiprocessor.

+ Furthermore, just
+ because a thread acquires the mutex does not mean that it succeeds. It must
+ then evaluate the predicate. If the predicate is false the thread must
+ release the mutex and rejoining the synchronization wait queue.

In a well designed application that should happen infrequently.

[...]
+ I noted that the pthread signal is used in special cases where all
+ threads are waiting for the same predicate. The existence of the signal
+ is for efficiency in this special case.

I don't think that "same predicate" is a requirement; rather it is
simply the common case. The basic idea behind signal is that much of
thread coordination involves allocation of resources. When a unit of
some resource becomes available, the waiting thread with the highest
priority for such a resource should be allowed to proceed. The rest
will have to wait until another unit is freed.
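
A minimal illustrative sketch of that resource-allocation pattern in
pthreads (the names are invented):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int units = 0;   /* units of the resource currently available */

void acquire_unit(void)
{
    pthread_mutex_lock(&m);
    while (units == 0)
        pthread_cond_wait(&c, &m);
    units--;
    pthread_mutex_unlock(&m);
}

void release_unit(void)
{
    pthread_mutex_lock(&m);
    units++;
    pthread_cond_signal(&c);   /* one freed unit: waking one waiter is enough */
    pthread_mutex_unlock(&m);
}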

+ The Ada model provides a queue
+ for each entry. Each entry corresponds to a single predicate. What Ada
+ has done is provide an additional queue of tasks (threads) waiting for
+ a specified predicate. When the predicate is evaluated to be true an
+ equivalent to a pthread signal is sent to the entry queue.

Oops! I was of the impression that broadcasts (not signals) get sent
to the queues associated with the suddenly true predicates. Am I
incorrect?

+ This relieves
+ the need to awaken unrelated tasks, saving the processing overhead
+ associated with awakening a thread that will not be allowed to proceed.

It's common to have a one-to-one correspondence between conditions and
predicates. That's not a problem.

+ For Ada, there are two defined queuing policies for real time systems.
+ They correspond to POSIX policies: FIFO and Priority. In Ada these
+ policies are used by entry queues to determine which task to awaken
+ on an entry queue.

So, back to my confusion: when a predicate becomes true do *all* of
its entry-calls fire, thereby resuming their callers (tasks)? Or, is
it only the first (in the sense of priority or fifo, whichever
applies)? I had understood some of the material you quoted as
indicating that they all fire.

+ Efficiency is also gained in Ada by forbidding blocking calls during
+ a protected operation.

Hmmmm. What's a "blocking call", and what does it mean to "forbid"
them?

Tom Payne

t...@cs.ucr.edu

Sep 9, 2002, 3:48:43 AM

t...@cs.ucr.edu wrote:
[...]
+ So, back to my confusion: when a predicate becomes true do *all* of
+ its entry-calls fire threreby resuming their callers (tasks)? Or, is
+ it only the first (in the sense of priority or fifo, whichever
+ applies)? I had understood some of the material you quoted as
+ indicating that they all fire.

Perhaps the following question better expresses my confusion. To what
extent is invoking the Ada entry of the form

   entry whatever( <parameters> ) when <predicate> is
   begin
      <statements>
   end

semantically equivalent to invoking a C++ monitor procedure of the
form

   void whatever( <parameters> ) {
      EXCLUSION                // acquiring the monitor's lock via initialization
      while ( ! <predicate> ) whateverCondition.wait();
      <statements>             // possibly making predicate false
      if ( <predicate> ) whateverCondition.signal();
   }

Tom Payne

Alexander Terekhov

Sep 9, 2002, 4:44:29 AM


http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node18.htm#SECTION00521000000000000000
("Self-service versus proxy")

You might also want to take a look at the following article (note that
it is NOT entirely "correct" w.r.t. Pthreads "features", AFAICS):

http://citeseer.nj.nec.com/giering93implementing.html
("Implementing Ada 9X features using POSIX Threads: Design Issues (1993)")

regards,
alexander.

Jim Rogers

Sep 9, 2002, 9:11:36 AM

t...@cs.ucr.edu wrote:


There are a lot of similarities and some subtle differences.

In Ada, when a task calls a protected entry while the predicate is false
the task is blocked and the call is queued in the entry queue. The task will
not continue until either the call completes, or an alternate condition in
a selective entry call is executed.

If another task, through another entry call, or through a protected procedure
call, causes the predicate to evaluate to TRUE, that task will process all the
calls in the entry queue. The order they are processed depends upon the queuing
policy in effect. The waiting tasks do not directly execute the entry call.
The entry call is executed by the task that changed the protected object state,
by proxy. This allows the processing to continue with only the overhead of
a function call, avoiding the full overhead of a task or thread switch.

It is my understanding that pthreads does not specify this kind of proxy
execution. Instead, each waiting thread must be awakened to do its own work.

Jim Rogers

Jim Rogers

Sep 9, 2002, 9:24:38 AM

t...@cs.ucr.edu wrote:

> Jim Rogers <jimmaure...@worldnet.att.net> wrote:
> + Efficiency is also gained in Ada by forbidding blocking calls during
> + a protected operation.
>
> Hmmmm. What's a "blocking call", and what does it mean to "forbid"
> them.


Quoting from the Ada 95 Language Reference Manual:

Bounded (Run-Time) Errors

During a protected action, it is a bounded error to invoke an operation that is
potentially blocking. The following are defined to be potentially blocking
operations:

· a select_statement;

· an accept_statement;

· an entry_call_statement;

· a delay_statement;

· an abort_statement;

· task creation or activation;

· an external call on a protected subprogram (or an external requeue) with the
same target object as that of the protected action;

· a call on a subprogram whose body contains a potentially blocking operation.

If the bounded error is detected, Program_Error is raised. If not detected, the
bounded error might result in deadlock or a (nested) protected action on the
same target object.

Certain language-defined subprograms are potentially blocking. In particular,
the subprograms of the language-defined input-output packages that manipulate
files (implicitly or explicitly) are potentially blocking. Other potentially
blocking subprograms are identified where they are defined. When not specified
as potentially blocking, a language-defined subprogram is nonblocking.

Jim Rogers

t...@cs.ucr.edu

Sep 9, 2002, 10:28:06 AM

Jim Rogers <jimmaure...@worldnet.att.net> wrote:
+ t...@cs.ucr.edu wrote:

+> t...@cs.ucr.edu wrote:
+> [...]
+> + So, back to my confusion: when a predicate becomes true do *all* of
+> + its entry-calls fire threreby resuming their callers (tasks)? Or, is
+> + it only the first (in the sense of priority or fifo, whichever
+> + applies)? I had understood some of the material you quoted as
+> + indicating that they all fire.
+>
+> Perhaps the following question better expresses my confusion. To what
+> extent is invoking the Ada entry of the form
+>
+> entry whatever( <parameters> ) when <predicate> is
+> begin
+> <statements>
+> end
+>
+> semantically equivalent to invoking a C++ monitor procedure of the
+> form
+>
+> void whatever( <parameters> ) {
+> EXCLUSION // acquiring the monitor's lock via initialization
+> while ( ! <predicate> ) whateverCondition.wait();
+> <statements> // possibly making predicate false
+> if ( <predicate> ) whateverCondition.signal();
+> }


+ Three are a lot of similarities and some subtle differences.

+ In Ada, when a task calls a protected entry while the predicate is false
+ the task is blocked and the call is queued in the entry queue. The task will
+ not continue until either the call completes,

Similarly, the C++ thread will not continue on with the rest of its
business until it returns from the monitor procedure.

+ or an alternate condition in a selective entry call is executed.

For the sake of simplicity, I'm assuming non-conditional calls for the
moment.

+ If another task, through another entry call, or through a protected procedure
+ call, causes the predicate to evaluate to TRUE, that task will process all the
+ calls in the entry queue.

It's that "all" that has me concerned. What if processing the first
entry of that queue suddenly makes the predicate false? Does the
second entry get processed anyway? Or, must further processing of the
queue wait until further activity again makes the predicate true?

[...]
+ The waiting tasks do not directly execute the entry call. The entry call is
+ executed by the task that changed the protected object state,
+ by proxy. This allows the processing to continue with only the overhead of
+ a function call, avoiding the full overhead of a task or thread switch.

+ It is my understanding that pthreads does not specify this kind of proxy
+ execution. Instead, each waiting thread must be awakened to do its own work.

I agree that this proxy execution can make a significant difference in
terms of performance. But, because Ada restricts the statements and
predicate to in-object variables and call parameters it should make no
*semantic* difference. Right?

Tom Payne

Jim Rogers

Sep 9, 2002, 12:53:13 PM

t...@cs.ucr.edu wrote:

> Jim Rogers <jimmaure...@worldnet.att.net> wrote:
> + If another task, through another entry call, or through a protected procedure
> + call, causes the predicate to evaluate to TRUE, that task will process all the
> + calls in the entry queue.
>
> It's that "all" that has me concerned. What if processing the first
> entry of that queue suddenly makes the predicated false? Does the
> second entry get processed anyway? Or, must further processing of the
> queue wait until further activity again makes the predicate true?
>


Good question. I was assuming, for simplicity, that the queued calls did
not change the predicate. In fact, the predicate is reevaluated after
each call is executed. If the predicate is false, then no more calls
are executed, and additional calls can be added to the queue.


> [...]
> + The waiting tasks do not directly execute the entry call. The entry call is
> + executed by the task that changed the protected object state,
> + by proxy. This allows the processing to continue with only the overhead of
> + a function call, avoiding the full overhead of a task or thread switch.
>
> + It is my understanding that pthreads does not specify this kind of proxy
> + execution. Instead, each waiting thread must be awakened to do its own work.
>
> I agree that this proxy execution can make a significant difference in
> terms of performance. But, because Ada restricts the statements and
> predicate to in-object variables and call parameters it should make no
> *semantic* difference. Right?


Correct. The difference is primarily one of efficiency. This proxy
capability is completely transparent to the Ada programmer. It appears as
though it was executed by the calling task.

Jim Rogers


t...@cs.ucr.edu

Sep 9, 2002, 4:19:45 PM

Jim Rogers <jimmaure...@worldnet.att.net> wrote:
+ t...@cs.ucr.edu wrote:

+> Jim Rogers <jimmaure...@worldnet.att.net> wrote:
+> + If another task, through another entry call, or through a protected procedure
+> + call, causes the predicate to evaluate to TRUE, that task will process all the
+> + calls in the entry queue.
+>
+> It's that "all" that has me concerned. What if processing the first
+> entry of that queue suddenly makes the predicated false? Does the
+> second entry get processed anyway? Or, must further processing of the
+> queue wait until further activity again makes the predicate true?
+>

+ Good question. I was assuming, for simplicity, that the queued calls did
+ not change the predicate. In fact, the predicate is reevaluated after
+ each call is executed. If the predicate is false, then no more calls
+ are executed, and additional calls can be added to the queue.

Whew! I can sleep better now. The reason that question is important
is that monitors (and protected objects) are often used as resource
allocators. After the first waiting task has grabbed a recently freed
resource, it's pointless and perhaps incorrect to awaken the rest of
the waiting tasks. That's why conditions usually have both signal and
broadcast operations.

+> + The waiting tasks do not directly execute the entry call. The entry call is
+> + executed by the task that changed the protected object state,
+> + by proxy. This allows the processing to continue with only the overhead of
+> + a function call, avoiding the full overhead of a task or thread switch.
+>
+> + It is my understanding that pthreads does not specify this kind of proxy
+> + execution. Instead, each waiting thread must be awakened to do its own work.
+>
+> I agree that this proxy execution can make a significant difference in
+> terms of performance. But, because Ada restricts the statements and
+> predicate to in-object variables and call parameters it should make no
+> *semantic* difference. Right?

+ Correct. The difference is primarily one of efficiency. This proxy
+ capability is completely transparent to the Ada programmer. It appears as
+ though it was executed by the calling task.

Both elegant and efficient!!

I do have one remaining concern. Suppose that a protected procedure
updates multiple variables in a multi-variable predicate like say
"i<j". In a monitor-based system, the procedure would update both
variables before signalling the predicate's condition, after which the
signallee would test the predicate. In Ada, however, if i and j both
have the value three, then the order in which I, say, increment them
will make a difference. I see huge opportunities for astonishment
in a situation where the updating procedure does not choose exactly
when predicates get evaluated. Am I missing something? (I can't be
the first person that has worried about this.)

Tom Payne


Jim Rogers

Sep 9, 2002, 4:44:48 PM

t...@cs.ucr.edu wrote:

> I do have one remaining concern. Suppose that a protected procedure
> updates multiple variables in a multi-variable predicate like say
> "i<j". In a monitor-based system, the procedure would update both
> variables before signalling the predicate's condition, after which the
> signallee would test the predicate. In Ada, however, if i and j both
> have the value three, then the order in which I, say, increment them
> will make a difference. I see huge opportunities for astonishment
> in a situation where the updating procedure does not choose exactly
> when predicates get evaluated. Am I missing something? (I can't be
> the first person that has worried about this.)


It is clearly a timing issue.
The way Ada does this is pretty simple.
The procedure will update as many variables as necessary. Only when the
procedure completes all the operations it needs to do will it evaluate
the state of the protected object. That evaluation step is transparent
to the programmer. The compiler supplies all the code for state
and predicate evaluation.

The point is that updates are not interrupted. They are effectively
atomic. Once the updates complete the state is evaluated. This avoids
all those interesting astonishment issues.

Jim Rogers

t...@cs.ucr.edu

Sep 9, 2002, 6:14:33 PM

Jim Rogers <jimmaure...@worldnet.att.net> wrote:
[...]
+ The way Ada does this is pretty simple.
+ The procedure will update as many variables as necessary. Only when the
+ procedure completes all the operations it needs to do will it evaluate
+ the state of the protected object. That evaluation step is transparent
+ to the programmer. The compiler supplies all the code for state
+ and predicate evaluation.

+ The point is that updates are not interrupted. They are effectively
+ atomic. Once the updates complete the state is evaluated. This avoids
+ all those interesting astonishment issues.

I presume then that an entry fires when and only when the updating
procedure has completed its last update that will affect the predicate
and/or body of the entry.

Tom Payne

Jim Rogers

unread,
Sep 9, 2002, 9:16:06 PM9/9/02
to
t...@cs.ucr.edu wrote:


Exactly. The procedure completes its updates. The protected object state
is evaluated to determine if any previously closed barriers are now open.
The next waiting call on the queue with the open barrier is executed.
Following that execution, the state of the protected object is again
evaluated. If the barrier is still open the next call waiting in the
entry queue is evaluated. If the barrier is false no more actions are
performed on the protected object by the task originally calling the
protected procedure.

Jim Rogers

t...@cs.ucr.edu

unread,
Sep 10, 2002, 12:35:30 AM9/10/02
to
Jim Rogers <jimmaure...@worldnet.att.net> wrote:
[...]
+ Exactly. The procedure completes its updates. The protected object state
+ is evaluated to determine if any previously closed barriers are now open.
+ The next waiting call on the queue with the open barrier is executed.
+ Following that execution, the state of the protected object is again
+ evaluated. If the barrier is still open the next call waiting in the
+ entry queue is evaluated. If the barrier is false no more actions are
+ performed on the protected object by the task originally calling the
+ protected procedure.

So, I apologize, but I have one more question. Does the procedure
that makes the predicate true execute up until it is about to return
before firing the entries, or does it fire them immediately as soon as
it has updated the last relevant variable? In some situations, that
seemingly subtle distinction might be relevant, at least to the order
in which tasks are reactivated.

Tom

Jim Rogers

unread,
Sep 10, 2002, 1:27:51 AM9/10/02
to
t...@cs.ucr.edu wrote:


It executes until it completes its timeslice or blocks.
After updating the last relevant variable, and completing all the user-written
code for the protected procedure the task calling the protected procedure
evaluates the predicates. If one of them becomes true, and there are
calls in the corresponding entry queue, the procedure executes the first
call (either in fifo order, or by priority of the calling task). After
executing that entry the procedure again evaluates the predicate. If it
is still true, it executes the next call in the entry queue, if there are
any. If it is false, or no more calls are in the entry queue, the task
executes the next statement in its own task body. This is how the
procedure executes the entry calls by proxy: it does the work itself,
avoiding the context switches that would otherwise be needed to wake
each waiting task.
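
A rough sketch of that servicing loop (Gate, Wait, and Open are made-up
names, not from any of the references): when Open completes, the task
that called it keeps executing queued Wait calls, by proxy, until the
queue is empty, all within the same protected action.

protected Gate is
   entry Wait;
   procedure Open;
private
   Is_Open : Boolean := False;
end Gate;

protected body Gate is
   entry Wait when Is_Open is
   begin
      null;                     -- the barrier did all the work
   end Wait;

   procedure Open is
   begin
      Is_Open := True;          -- after this procedure completes, its caller
   end Open;                    -- services every queued Wait before releasing
end Gate;                       -- the protected object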


Jim Rogers

t...@cs.ucr.edu

unread,
Sep 10, 2002, 1:58:35 AM9/10/02
to
Jim Rogers <jimmaure...@worldnet.att.net> wrote:
[...]
+ It executes until it completes its timeslice or blocks.
+ After updating the last relevant variable, and completing all the user-written
+ code for the protected procedure the task calling the protected procedure
+ evaluates the predicates.

... i.e., whenever a protected procedure returns, the calling task
evaluates *all* predicates it might have affected, one at a time.
Right?

+ If one of them becomes true, and there are
+ calls in the corresponding entry queue, the procedure executes the first
+ call (either in fifo order, or by priority of the calling task). After
+ executing that entry the procedure again evaluates the predicate. If it
+ is still true, it executes the next call in the entry queue, if there are
+ any. If it is false, or no more calls are in the entry queue, the task
+ executes the next statement in its own task body.

Hmmmmm. At this point, I would expect the task to evaluate the next
predicate that it might have affected, and so on. That is to say, a
task invoking a protected procedure waits until that invocation
returns before evaluating the queues (all of them) associated with
predicates whose variables got updated. Right?

My apologies for being so pedantic, but "the devil is in the details".

Tom Payne

Jim Rogers

unread,
Sep 10, 2002, 8:22:56 AM9/10/02
to
t...@cs.ucr.edu wrote:


No problem. I am posting section 9.5.3 from the Ada 95 Language Reference
Manual. I hope this explains what you want to know.

9.5.3 Entry Calls

An entry_call_statement (an entry call) can appear in various contexts. A simple
entry call is a stand-alone statement that represents an unconditional call on
an entry of a target task or a protected object. Entry calls can also appear as
part of select_statements (see 9.7).

Syntax

entry_call_statement ::= entry_name [actual_parameter_part];

Name Resolution Rules

The entry_name given in an entry_call_statement shall resolve to denote an
entry. The rules for parameter associations are the same as for subprogram calls
(see 6.4 and 6.4.1).

Static Semantics

The entry_name of an entry_call_statement specifies (explicitly or implicitly)
the target object of the call, the entry or entry family, and the entry index,
if any (see 9.5).

Dynamic Semantics

Under certain circumstances (detailed below), an entry of a task or protected
object is checked to see whether it is open or closed:

· An entry of a task is open if the task is blocked on an accept_statement that
corresponds to the entry (see 9.5.2), or on a selective_accept (see 9.7.1) with
an open accept_alternative that corresponds to the entry; otherwise it is closed.

· An entry of a protected object is open if the condition of the entry_barrier
of the corresponding entry_body evaluates to True; otherwise it is closed. If
the evaluation of the condition propagates an exception, the exception
Program_Error is propagated to all current callers of all entries of the
protected object.

For the execution of an entry_call_statement, evaluation of the name and of the
parameter associations is as for a subprogram call (see 6.4). The entry call is
then issued: For a call on an entry of a protected object, a new protected
action is started on the object (see 9.5.1). The named entry is checked to see
if it is open; if open, the entry call is said to be selected immediately, and
the execution of the call proceeds as follows:

· For a call on an open entry of a task, the accepting task becomes ready and
continues the execution of the corresponding accept_statement (see 9.5.2).

· For a call on an open entry of a protected object, the corresponding
entry_body is executed (see 9.5.2) as part of the protected action.

If the accept_statement or entry_body completes other than by a requeue (see
9.5.4), return is made to the caller (after servicing the entry queues -- see
below); any necessary assigning back of formal to actual parameters occurs, as
for a subprogram call (see 6.4.1); such assignments take place outside of any
protected action.

If the named entry is closed, the entry call is added to an entry queue (as part
of the protected action, for a call on a protected entry), and the call remains
queued until it is selected or cancelled; there is a separate (logical) entry
queue for each entry of a given task or protected object (including each entry
of an entry family).

When a queued call is selected, it is removed from its entry queue. Selecting a
queued call from a particular entry queue is called servicing the entry queue.
An entry with queued calls can be serviced under the following circumstances:

· When the associated task reaches a corresponding accept_statement, or a
selective_accept with a corresponding open accept_alternative;

· If after performing, as part of a protected action on the associated protected
object, an operation on the object other than a call on a protected function,
the entry is checked and found to be open.

If there is at least one call on a queue corresponding to an open entry, then
one such call is selected according to the entry queuing policy in effect (see
below), and the corresponding accept_statement or entry_body is executed as
above for an entry call that is selected immediately.

The entry queuing policy controls selection among queued calls both for task and
protected entry queues. The default entry queuing policy is to select calls on a
given entry queue in order of arrival. If calls from two or more queues are
simultaneously eligible for selection, the default entry queuing policy does not
specify which queue is serviced first. Other entry queuing policies can be
specified by pragmas (see D.4).

For a protected object, the above servicing of entry queues continues until
there are no open entries with queued calls, at which point the protected action
completes.

For an entry call that is added to a queue, and that is not the
triggering_statement of an asynchronous_select (see 9.7.4), the calling task is
blocked until the call is cancelled, or the call is selected and a corresponding
accept_statement or entry_body completes without requeuing. In addition, the
calling task is blocked during a rendezvous.

An attempt can be made to cancel an entry call upon an abort (see 9.8) and as
part of certain forms of select_statement (see 9.7.2, 9.7.3, and 9.7.4). The
cancellation does not take place until a point (if any) when the call is on some
entry queue, and not protected from cancellation as part of a requeue (see
9.5.4); at such a point, the call is removed from the entry queue and the call
completes due to the cancellation. The cancellation of a call on an entry of a
protected object is a protected action, and as such cannot take place while any
other protected action is occurring on the protected object. Like any protected
action, it includes servicing of the entry queues (in case some entry barrier
depends on a Count attribute).

A call on an entry of a task that has already completed its execution raises the
exception Tasking_Error at the point of the call; similarly, this exception is
raised at the point of the call if the called task completes its execution or
becomes abnormal before accepting the call or completing the rendezvous (see
9.8). This applies equally to a simple entry call and to an entry call as part
of a select_statement.

Implementation Permissions

An implementation may perform the sequence of steps of a protected action using
any thread of control; it need not be that of the task that started the
protected action. If an entry_body completes without requeuing, then the
corresponding calling task may be made ready without waiting for the entire
protected action to complete.

When the entry of a protected object is checked to see whether it is open, the
implementation need not reevaluate the condition of the corresponding
entry_barrier if no variable or attribute referenced by the condition (directly
or indirectly) has been altered by the execution (or cancellation) of a
protected procedure or entry call on the object since the condition was last
evaluated.

An implementation may evaluate the conditions of all entry_barriers of a given
protected object any time any entry of the object is checked to see if it is open.

When an attempt is made to cancel an entry call, the implementation need not
make the attempt using the thread of control of the task (or interrupt) that
initiated the cancellation; in particular, it may use the thread of control of
the caller itself to attempt the cancellation, even if this might allow the
entry call to be selected in the interim.

NOTES

26 If an exception is raised during the execution of an entry_body, it is
propagated to the corresponding caller (see 11.4).

27 For a call on a protected entry, the entry is checked to see if it is open
prior to queuing the call, and again thereafter if its Count attribute (see 9.9)
is referenced in some entry barrier.

28 In addition to simple entry calls, the language permits timed, conditional,
and asynchronous entry calls (see 9.7.2, 9.7.3, and see 9.7.4).

29 The condition of an entry_barrier is allowed to be evaluated by an
implementation more often than strictly necessary, even if the evaluation might
have side effects. On the other hand, an implementation need not reevaluate the
condition if nothing it references was updated by an intervening protected
action on the protected object, even if the condition references some global
variable that might have been updated by an action performed from outside of a
protected action.

Jim Rogers

t...@cs.ucr.edu

unread,
Sep 10, 2002, 10:57:28 AM9/10/02
to
Jim Rogers <jimmaure...@worldnet.att.net> wrote:
+ t...@cs.ucr.edu wrote:
[...]
+> Hmmmmm. At this point, I would expect the task to evaluate the next
+> predicate that it might have affected, and so on. That is to say, a
+> task invoking a protected procedure waits until that invocation
+> returns before evaluating the queues (all of them) associated with
+> predicates whose variables got updated. Right?
+>
+> My apologies for being so pedantic, but "the devil is in the details".

+ No problem. I am posting section 9.5.3 from the Ada 95 Language Reference
+ Manual. I hope this explains what you want to know.

Unfortunately, it seems to talk around the matter. (See below.)

[...]
+ · An entry of a protected object is open if the condition of the entry_barrier
+ of the corresponding entry_body evaluates to True; otherwise it is closed. If
+ the evaluation of the condition propagates an exception, the exception
+ Program_Error is propagated to all current callers of all entries of the
+ protected object.

[...]
+ An entry with queued calls can be serviced under the following circumstances:

+ · When the associated task reaches a corresponding accept_statement, or a
+ selective_accept with a corresponding open accept_alternative;

+ · If after performing, as part of a protected action on the associated protected
+ object, an operation on the object other than a call on a protected function,
+ the entry is checked and found to be open.

[...]
+ 29 The condition of an entry_barrier is allowed to be evaluated by an
+ implementation more often than strictly necessary, even if the evaluation might
+ have side effects.

So, "an entry with a queued call can be serviced ... if after
performing ... an operation on the object, the entry is checked and
found to be open." Moreover, such "an entry_barrier is allowed to be


evaluated by an implementation more often than strictly necessary,
even if the evaluation might have side effects."

Nevertheless:
- When *must* the barrier be checked?
- When *must* an entry with a queued call be serviced?
- And, why allow non-deterministic evaluation of constructs
having side effects?

Tom Payne

Jim Rogers

unread,
Sep 10, 2002, 11:45:38 AM9/10/02
to
t...@cs.ucr.edu wrote:

> Jim Rogers <jimmaure...@worldnet.att.net> wrote:
> + t...@cs.ucr.edu wrote:
> [...]
> +> Hmmmmm. At this point, I would expect the task to evaluate the next
> +> predicate that it might have affected, and so on. That is to say, a
> +> task invoking a protected procedure waits until that invocation
> +> returns before evaluating the queues (all of them) associated with
> +> predicates whose variables got updated. Right?
> +>
> +> My apologies for being so pedantic, but "the devil is in the details".
>
> + No problem. I am posting section 9.5.3 from the Ada 95 Language Reference
> + Manual. I hope this explains what you want to know.
>
> Unfortunately, it seems to talk around the matter. (See below.)


If the standard does not specify a behavior, that behavior is
implementation-defined. That is, it is left up to the compiler vendor
to decide exactly how things will work.


>
> [...]
> + · An entry of a protected object is open if the condition of the entry_barrier
> + of the corresponding entry_body evaluates to True; otherwise it is closed. If
> + the evaluation of the condition propagates an exception, the exception
> + Program_Error is propagated to all current callers of all entries of the
> + protected object.
>
> [...]
> + An entry with queued calls can be serviced under the following circumstances:
>
> + · When the associated task reaches a corresponding accept_statement, or a
> + selective_accept with a corresponding open accept_alternative;
>
> + · If after performing, as part of a protected action on the associated protected
> + object, an operation on the object other than a call on a protected function,
> + the entry is checked and found to be open.
>
> [...]
> + 29 The condition of an entry_barrier is allowed to be evaluated by an
> + implementation more often than strictly necessary, even if the evaluation might
> + have side effects.
>
> So, "an entry with a queued call can be serviced ... if after
> performing ... an operation on the object, the entry is checked and
> found to be open." Moreover, such "an entry_barrier is allowed to be
> evaluated by an implementation more often than strictly necessary,
> even if the evaluation might have side effects."
>
> Nevertheless:
> - When *must* the barrier be checked?


The barrier *must* be checked after the completion of any protected
procedure or protected entry. It *must* be checked when a task
calls the entry. If the barrier is false, the task must be queued.
If it is true, the entry body is executed.


> - When *must* an entry with a queued call be serviced?


It must be serviced when the entry barrier is found to be true and
no other task holds the lock on the protected object.

There is no rule about the order in which entry queues are serviced if
more than one barrier evaluates to true. This detail is left to the compiler
developers.


> - And, why allow non-deterministic evaluation of constructs
> having side effects?


I was not involved in the language definition discussions. I suspect
this is a concession to some of the compiler developers represented on
the standards committee.

Note that permission is also given to be very efficient about the
evaluation of barriers. The implication is that you can create a poor
compiler, but if you do you will have to compete with good compilers.
This is often sufficient motivation to get things right. Why lose
market share because you chose a compiler implementation shortcut?

The Ada development community is very unforgiving of poor compilers.

Jim Rogers

Alexander Terekhov

unread,
Sep 10, 2002, 12:08:11 PM9/10/02
to

t...@cs.ucr.edu wrote:
[...]

> Nevertheless:
> - When *must* the barrier be checked?

Uhmm. What do you mean, exactly?

http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node17.htm#SECTION00512000000000000000
("The eggshell model")

> - When *must* an entry with a queued call be serviced?

"A queued entry call has precedence over other operations
on the protected object. .... Ada does not specify which
task executes the entry barrier and the entry body.
Protected entry bodies can be executed by any task,
regardless of which task made the corresponding entry call
[GB94a, Section 5(3)]. Therefore two obvious candidates
are the task that called the entry, and the task that opens the
barrier of the entry [GMB93, Section 3.4]. They are
referred to as the Self-Service Model and the Proxy Model of
protected entry execution, respectively."

> - And, why allow non-deterministic evaluation of constructs
> having side effects?

"When the entry of a protected object is checked to see whether

it is open, the implementation need not reevaluate the condition
of the corresponding entry_barrier if no variable or attribute
referenced by the condition (directly or indirectly) has been
altered by the execution (or cancellation) of a protected
procedure or entry call on the object since the condition was
last evaluated.

An implementation may evaluate the conditions of all
entry_barriers of a given protected object any time any entry
of the object is checked to see if it is open."

regards,
alexander.

t...@cs.ucr.edu

unread,
Sep 10, 2002, 12:35:27 PM9/10/02
to
Alexander Terekhov <tere...@web.de> wrote:

+ t...@cs.ucr.edu wrote:
+ [...]
+> Nevertheless:
+> - When *must* the barrier be checked?

+ Uhmm. What do you mean, exactly?

At what points *must* the task that holds the object's lock check to
see if a given entry is open? I found language saying (roughly) that
it *may* be checked whenever the task holding the lock pleases, even
if doing so has side effects.

+ http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node17.htm#SECTION00512000000000000000
+ ("The eggshell model")

+> - When *must* an entry with a queued call be serviced?

+ "A queued entry call has precedence over other operations
+ on the protected object.

Ah so! That's what I was looking for. So, the rule is that any open
entry's queue *must* be serviced before the next operation on the
object can begin. Thanks.

[...]
+> - And, why allow non-deterministic evaluation of constructs
+> having side effects?

+ "When the entry of a protected object is checked to see whether
+ it is open, the implementation need not reevaluate the condition
+ of the corresponding entry_barrier if no variable or attribute
+ referenced by the condition (directly or indirectly) has been
+ altered by the execution (or cancellation) of a protected
+ procedure or entry call on the object since the condition was
+ last evaluated.

+ An implementation may evaluate the conditions of all
+ entry_barriers of a given protected object any time any entry
+ of the object is checked to see if it is open."

So, I guess that the specification is simply saying: Don't use
conditions that have side effects because the implementation *may*
check barriers whenever it pleases.
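
As a contrived illustration (Noisy, Check, and Poke are names I am
making up, and this is deliberately bad style), a barrier that calls a
side-effecting function is legal, but how many times the side effect
happens is left entirely to the implementation:

Evaluations : Natural := 0;

function Check return Boolean is
begin
   Evaluations := Evaluations + 1;   -- side effect on every barrier evaluation
   return True;
end Check;

protected Noisy is
   entry Wait;
   procedure Poke;
end Noisy;

protected body Noisy is
   entry Wait when Check is          -- legal, but Evaluations may grow
   begin                             -- unpredictably from one compiler
      null;                          -- (or one run) to the next
   end Wait;

   procedure Poke is
   begin
      null;                          -- any protected call may trigger
   end Poke;                         -- another evaluation of Check
end Noisy;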

Tom Payne

Alexander Terekhov

unread,
Sep 10, 2002, 12:59:40 PM9/10/02
to

t...@cs.ucr.edu wrote:
[...]

> + "A queued entry call has precedence over other operations
> + on the protected object.
>
> Ah so! That's what I was looking for. So, the rule is that any open
> entry's queue *must* be serviced before the next operation on the
> object can begin. ....

Yeah, now... ;-)

http://groups.google.com/groups?selm=3D75D07C.288E66AD%40web.de
(...protected Blocker is...)

http://groups.google.com/groups?selm=3D77949D.B43DFF0F%40web.de
(...Okay. Here's what I was missing:...)

http://groups.google.com/groups?selm=3D790991.4E6C20B6%40web.de
(...``unnecessary'' context switching due to "hand off" of *output data*...)

regards,
alexander.

t...@cs.ucr.edu

unread,
Sep 10, 2002, 1:20:40 PM9/10/02
to
Alexander Terekhov <tere...@web.de> wrote:

+ t...@cs.ucr.edu wrote:
+ [...]

+> + "A queued entry call has precedence over other operations
+> + on the protected object.
+>
+> Ah so! That's what I was looking for. So, the rule is that any open
+> entry's queue *must* be serviced before the next operation on the
+> object can begin. ....

+ Yeah, now... ;-)

That's still troubling. An entry call can fire at any time between
the time its predicate becomes true and the beginning of the next
operation on the protected object. So, what if my protected procedure
makes that predicate true and then changes some variable that affects
the behavior of the entry's body, e.g., it increments a variable that
is used as the upper bound on a loop? That loop will execute a
different number of times depending on whether the implementation
fires the entry call as soon as the predicate becomes true or waits
until some time after I've incremented that bounds variable. That's
not a good thing for portability. :-(

Tom Payne

Alexander Terekhov

unread,
Sep 10, 2002, 1:27:39 PM9/10/02
to

Uhmm. Your protected procedure holds the lock and *nothing*
can "interrupt" it (abort/exceptions aside). So, I don't see
"not a good thing for portability" problems in your example.

regards,
alexander.

t...@cs.ucr.edu

unread,
Sep 10, 2002, 2:38:16 PM9/10/02
to
Alexander Terekhov <tere...@web.de> wrote:

+ t...@cs.ucr.edu wrote:
[...]
+> That's still troubling. An entry call can fire at any time between
+> the time its predicate becomes true and the beginning of the next
+> operation on the protected object. So, what if my protected procedure
+> makes that predicate true and then changes some variable that affects
+> the behavior of the entry's body, e.g., it increments a variable that
+> is used as the upper bound on a loop? That loop will execute a
+> different number of times depending on whether the implementation
+> fires the entry call as soon as the predicate becomes true or waits
+> until some time after I've incremented that bounds variable. That's
+> not a good thing for portability. :-(

+ Uhmm. Your protected procedure holds the lock and *nothing*
+ can "interrupt" it (abort/exceptions aside). So, I don't see
+ "not a good thing for portability" problems in your example.

Everything would be fine if the body of the protected entry had to
acquire the lock -- perhaps I missed that part. Here is what
I managed to distill out:

... an entry with a queued call can be serviced ... if after
performing ... an operation on the object, the entry is checked
and found to be open.

Upon reading this once again, I note the words "after performing
... an operation". I guess that means the now-open entry cannot be
serviced until *after* my protected-procedure operation. If so, that
eliminates the indeterminacy that concerned me.

Tom Payne

Jim Rogers

unread,
Sep 10, 2002, 2:54:00 PM9/10/02
to
t...@cs.ucr.edu wrote:

>
> Upon reading this once again, I note the words "after performing
> ... an operation". I guess means that the now-open entry cannot be
> serviced until *after* my protected-procedure operation. If so, that
> eliminates the indeterminacy that concerned me.


That is correct. *After* the completion of a protected procedure,
the entry barrier is evaluated. If, at that time, the barrier is true,
a call from the entry queue is executed. Following the execution of
that entry call, the barrier is again evaluated. If it is still true
another queued entry call is executed.
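
In other words (Work, Produce, Consume, and Limit below are invented
names, just to illustrate the point), even though Produce opens the
barrier before it changes the loop bound, the queued Consume call is
only serviced after Produce has completed, so its body always sees the
final value of Limit:

protected Work is
   entry Consume (Total : out Natural);
   procedure Produce;
private
   Ready : Boolean := False;
   Limit : Natural := 0;
end Work;

protected body Work is
   entry Consume (Total : out Natural) when Ready is
   begin
      Total := 0;
      for K in 1 .. Limit loop      -- always sees Limit = 10, never 0
         Total := Total + 1;
      end loop;
   end Consume;

   procedure Produce is
   begin
      Ready := True;                -- barrier not checked yet...
      Limit := 10;                  -- ...so this assignment is always
   end Produce;                     -- visible to the Consume body
end Work;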

Jim Rogers

Alexander Terekhov

unread,
Sep 10, 2002, 2:57:15 PM9/10/02
to

t...@cs.ucr.edu wrote:
[...]

> Everything would be fine if the body of the protected entry must
> acquire the lock -- perhaps I missed that part. Here is what
> I managed to distill out:
>
> ... an entry with a queued call can be serviced ... if after
> performing ... an operation on the object, the entry is checked
> and found to be open.
>
> Upon reading this once again, I note the words "after performing
> ... an operation". I guess means that the now-open entry cannot be
> serviced until *after* my protected-procedure operation. If so, that
> eliminates the indeterminacy that concerned me.

Yep. However, that doesn't eliminate the inefficiency [output data
"hand off" to selected waiter(s); somewhat reduced concurrency in
proxy model and "schedulability analysis is compromised by this
model since a task attempting to exit an eggshell must first
execute all of the waiting entry calls whose barriers are open,
and there is no language-imposed restriction on the number of such
calls that can be pending" aside] that concerned me. ;-)

http://www.iuma.ulpgc.es/users/jmiranda/gnat-rts/node18.htm#SECTION00521000000000000000
("Self-service versus proxy", the 4th paragraph is about proxy model:
The principal advantage of the self-s^H^H^H^H^H^H proxy model is...)

regards,
alexander.

t...@cs.ucr.edu

unread,
Sep 10, 2002, 6:20:21 PM9/10/02
to
Alexander Terekhov <tere...@web.de> wrote:

+ t...@cs.ucr.edu wrote:
+ [...]
+> Everything would be fine if the body of the protected entry must
+> acquire the lock -- perhaps I missed that part. Here is what
+> I managed to distill out:
+>
+> ... an entry with a queued call can be serviced ... if after
+> performing ... an operation on the object, the entry is checked
+> and found to be open.
+>
+> Upon reading this once again, I note the words "after performing
+> ... an operation". I guess means that the now-open entry cannot be
+> serviced until *after* my protected-procedure operation. If so, that
+> eliminates the indeterminacy that concerned me.

+ Yep. However, that doesn't eliminate the inefficiency [output data
+ "hand off" to selected waiter(s); somewhat reduced concurrency in
+ proxy model and "schedulability analysis is compromised by this
+ model since a task attempting to exit an eggshell must first
+ execute all of the waiting entry calls whose barriers are open,
+ and there is no language-imposed restriction on the number of such
+ calls that can be pending" aside] that concerned me. ;-)

The proxy model has the advantage of lower overhead -- less lock and
context juggling. Moreover, the self-service version of this
protocol, which corresponds to the tail-signalling version of Hoare
semantics, has some problems of its own.

On a uniprocessor system, a bottleneck occurs if the task running the
protected procedure passes the lock but not the CPU to the awakened
self-server. A suspended thread holding a lock is A BAD THING. If,
on the other hand, the task running the protected procedure must pass
his CPU to the awakened thread (along with the lock), his good deed is
being punished via a premature aborting of his timeslice.

FWIW under the usual implementation of Hoare semantics (not the
tail-signalling version), the signallee passes the CPU back to the
signaller as soon as the signallee waits or leaves the monitor -- a
cost of two context switches per signal but the CPU ends up in the
right hands.

Tom Payne
