for example:
thread 1 has 3 functions: a--b--c
thread 2 has 3 functions: d--e--f
we allow the execution order: a-d-b-e-c-f.
My problem is how to know that "a" will not be interrupted before it finishes.
Hope I have stated the problem clearly; thanks for reading.
- -
Best Regards
CuiBin
Unless you have some documentation to the contrary, always assume that they
are *not* atomic.
Generally speaking, function calls are not atomic; only individual assembly
instructions are.
The only way to guarantee that your function will not be pre-empted is
to run the program in a non-multitasking environment, which defeats the
purpose of coding it with threads. Perhaps you should code your program
as a single thread if you need the order of execution of functions to be
fixed.
-Missaka
--
============================================================================
Languages: UNIX,C,C++,Java,PERL,HTML,sh,PHP
Specifications: HTTP,SMTP,DNS,MIME,T.30,T.37,FAX CLASS 2/2.0
Applications: Sendmail,Bind,Apache
Networking: TCP/IP,IP CHAINS/IP TABLES,Routing,Firewalling,NAT
OS: Solaris,Linux,Windows
Missaka Wijekoon, Troubleshooter/Programmer/Network Admin/UNIX Admin
GR8TECHS Consulting
missaka_...@yahoo.com
949-280-7982
============================================================================
> Actually I want to implement concurrent operations on an index, such as a
> B-tree or R-tree. So I must run some operations concurrently. Does anyone
> have experience with this?
The point is that you don't make this work by preventing preemption of your
functions. You make it work by PROTECTING access to your data with
appropriate synchronization mechanisms (mutexes, semaphores, condition
variables, barriers, read-write locks, etc.) so that no thread will see
broken data invariants. You want to design this in a way that will allow
your threads to run simultaneously on different processors in a
multiprocessor system -- quite the opposite of "preventing preemption".
Please study synchronization and invariants. The concepts are thoroughly
covered in any basic book on programming with threads; such as, for
example, my own (Programming with POSIX Threads), or Bil Lewis'
Multithreaded Programming with Pthreads, or even "the others". (There are a
number out there).
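The idea can be sketched in C with POSIX threads (the names and the invariant here are invented for illustration): the invariant "a + b == TOTAL" is restored before every unlock, so no thread ever observes it broken, yet threads still run in parallel between critical sections.

```c
#include <pthread.h>

#define TOTAL 100

/* Invariant: a + b always equals TOTAL. */
static int a = TOTAL, b = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Move amount from a to b under the lock, so no other thread can
   observe the invariant broken halfway through the update. */
void transfer(int amount)
{
    pthread_mutex_lock(&lock);
    a -= amount;
    b += amount;   /* invariant restored before unlock */
    pthread_mutex_unlock(&lock);
}

/* Read a consistent snapshot of the pair. */
int total(void)
{
    int t;
    pthread_mutex_lock(&lock);
    t = a + b;
    pthread_mutex_unlock(&lock);
    return t;
}
```

Whether the OS preempts a thread in the middle of transfer() is irrelevant: any other thread touching the data must first acquire the lock, and so can never see the half-done state.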
--
/--------------------[ David.B...@hp.com ]--------------------\
| Hewlett-Packard Company Tru64 UNIX & VMS Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\-------------[ http://homepage.mac.com/~dbutenhof ]--------------/
Only kernel functions can be atomic.
Thread libraries are not kernel functions.
They provide synchronisation and safe
access to data structures - which is enough
for most applications - but not atomicity.
Ciao,
Oliver
--
Normally, OO-programming is annoying; but with Ocaml it is fun! :)
If I understand correctly, you'd like to write a portable (no "machine
specific assembly instructions") preemptive threads library that is
POSIX compliant. For exactly the reasons you cite, I doubt that it is
possible to write such a POSIX-compliant preemptive library. On the
other hand, Ralf Engelschall has written a non-preemptive library,
GNU's Pth, that he claims to be portable and (I presume) POSIX
compliant -- FWIW, Dave Butenhof disagrees. Engelschall presented a
paper on his approach at one of the recent Usenix technical
conferences, and IIRC it's available from his web site.
I'm not familiar with the restrictions on what signal handlers are
allowed to do under POSIX. Under the C and C++ standards, they are
allowed to set flags and that's about it:
... The [signal handler] function func may terminate by executing a
return statement or by calling the abort, exit, or longjmp function.
If func executes a return statement and the value of sig was SIGFPE or
any other implementation-defined value corresponding to a computational
exception, the behavior is undefined. ...
... the behavior is undefined if the signal handler ... refers to
any object with static storage duration other than by assigning a
value to a static storage duration variable of type volatile
sig_atomic_t. [C89 7.7.2.1]
Notice that signal handlers are allowed to call longjmp, which may
give you some hope of handing off execution from a signal handler.
But, although a signal handler "may terminate by executing a
... longjmp", the jmp_buf argument has to be local to the handler --
which means that execution ain't going nowhere.
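The flag-setting idiom the standards do permit can be sketched in C (SIGINT and the names here are arbitrary choices, not from the standard):

```c
#include <signal.h>

/* The only object a portable handler may touch: a volatile
   sig_atomic_t that it assigns to. */
static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;   /* set the flag and return; nothing else */
}

void install(void)
{
    signal(SIGINT, handler);
}
```

The main loop then polls got_signal at convenient points; all the real work happens outside the handler.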
Tom Payne
You have equated "atomicity" with "non-preemptability". Although
non-preemptability is one way to achieve atomicity, it is not the only
way, nor even the best way. Simply acquire a lock at the beginning of
each function and release it at the end. You'll typically find
locks to be of one of the following kinds:
- locks that are implemented by blocking preemption,
- spinlocks, where the acquiring thread busy waits on a flag,
- passive locks where waiting threads give up their CPU (e.g.,
a semaphore initialized to one).
Which is best depends on how long your functions run and what's available
in your environment.
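The spinlock variety, for instance, can be sketched with C11 atomics (a toy illustration with invented names; real code should usually prefer a pthread mutex):

```c
#include <stdatomic.h>

/* A trivial spinlock: acquirers busy-wait on a flag. */
static atomic_flag flag = ATOMIC_FLAG_INIT;
static int shared = 0;

void spin_lock(void)
{
    /* test_and_set returns the previous value; loop until we are
       the one who flipped it from clear to set */
    while (atomic_flag_test_and_set(&flag))
        ;  /* busy wait */
}

void spin_unlock(void)
{
    atomic_flag_clear(&flag);
}

/* The lock-at-entry, unlock-at-exit pattern applied to a function. */
int locked_increment(void)
{
    spin_lock();
    int v = ++shared;
    spin_unlock();
    return v;
}
```

Busy waiting only pays off when critical sections are very short and the contending threads run on different processors; otherwise a passive lock that yields the CPU is the better trade.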
Tom Payne
Another possible implementation path is to use the Ada compiler (GNAT) on
your Linux system. Ada implements full concurrency capabilities, although
you will work at an abstraction level above the POSIX standard.
Ada also provides a reliable model for handling signals in a concurrent
design.
The following code demonstrates a simple signal handler using Ada.
The first part is a package specification, defining the interface to
the "handlers" package
with Ada.Interrupts.Names; use Ada.Interrupts.Names;

package handlers is
   pragma Unreserve_all_interrupts;

   -- Declare a protected object to handle SIGINT
   protected handler is
      procedure Counter;
      entry wait_for_interrupt;
      pragma Attach_Handler(Counter, SigINT);
   private
      Count : Natural := 0;
   end handler;

   -- Declare a task to wait for the signal to be handled
   task int_handler is
      entry stop;
   end int_handler;
end handlers;
============
The next part is the package body for "handlers", containing the actual
implementation.
with Ada.Text_Io; use Ada.Text_Io;

package body handlers is

   -- Define the signal handler protected object
   protected body handler is
      procedure Counter is
      begin
         Count := Count + 1;
      end Counter;

      entry wait_for_interrupt when Count > 0 is
      begin
         Count := Count - 1;
      end wait_for_interrupt;
   end handler;

   -- Define the task that will respond to signals
   task body int_handler is
      num_interrupts : Natural := 0;
   begin
      loop
         select
            handler.wait_for_interrupt;
            num_interrupts := num_interrupts + 1;
            Put_Line("handled interrupt #" &
                     Natural'Image(num_interrupts));
         or
            delay 0.001;
            select
               accept Stop;
               exit;
            or
               delay 0.0;
            end select;
         end select;
      end loop;
   end int_handler;
end handlers;
Jim Rogers
Are there non-asynchronous signals?
IMHO all signals are asynchronous.
And normally it's best not to mix Unix signals with threaded
code. POSIX threads provide their own signalling mechanisms,
and you'd better use only those.
I have not used swapcontext, so I cannot tell you details
about it. But no matter which POSIX signal functions
you call, don't rely on mixing POSIX threads with Unix signals.
Even if it works on one system, you may have problems when
porting to another system.
Better to use defensive programming for creating reliable code.
Ciao,
Oliver
No, not clear enough.
What do you mean by "not interrupted"?
You have to be clear about some things:
- Which tasks are performed in that function?
- Which parts of that function must be protected against "interruption"?
Like most other problems in programming, the answer
can only be: "it depends...".
But I think you do not really mean atomicity.
The best idea is to be more verbose in the description
of your programming problem, so that the people in this group
can give you better answers.
Ciao,
Oliver
> Are there non-asynch-signals?
> IMHO all signals are asynch.
Signals such as system-generated SEGVs, and anything raise()d, are
synchronous - although most code will still be unsafe to call in a SEGV
handler.
--
Arnold Hendriks <a.hen...@b-lex.com>
B-Lex Information Technologies, http://www.b-lex.com/
Perhaps it's me. When I read about Ada, the world of concurrency and
asynchrony seems sane. When I read about such things in C and its
extensions, the world seems weird. What am I missing here? Does Ada
pay a huge price in overhead or something for this (perceived) sanity?
Tom Payne
Not at all. Ada is intended to be used in embedded systems, as well as
more common applications. This forces Ada to be efficient. If your
Ada compiler is targeted to an OS that supports threads, then Ada
concurrency is usually mapped to the OS threads. If the Ada compiler is
targeted to an OS without thread support, compiler vendors have been
known to map Ada concurrency to OS processes. If the Ada compiler is
targeted to a bare chip without any OS, the Ada compiler will produce
its own executive, including concurrency support. What you get depends
upon which compiler you use.
Since the concepts of concurrency are not necessarily tied to threads
in Ada, you, as the programmer, do not need to worry about new syntax
for different platforms. Performance will vary depending on which
implementation you use, but the source code will remain unchanged.
I believe concurrency looks more sane for Ada because Ada provides you
a higher level of abstraction for concurrency than do C and its extensions.
The compiler generates the low level code containing most of the weird
details. This is what a compiler is for. If you want to live at the level
of such fine grained details for single threaded code you can use assembler.
If you want to live at that low level for multi-threaded code you can use
C or its extensions.
It has been shown that you can implement semaphores in Ada for use in a
single process with multiple tasks (concurrent threads of execution) such
that those semaphores are more efficient than OS semaphores intended for
use between processes. The biggest difference is the light weight context
switching for the Ada semaphores compared to the OS semaphores.
The Ada Rationale provides the following example of a counting semaphore
implemented using Ada's protected type:
protected type Counting_Semaphore(Start_Count : Integer := 1) is
   entry Secure;
   procedure Release;
   function Count return Integer;
private
   Current_Count : Integer := Start_Count;
end Counting_Semaphore;

protected body Counting_Semaphore is
   entry Secure when Current_Count > 0 is
   begin
      Current_Count := Current_Count - 1;
   end Secure;

   procedure Release is
   begin
      Current_Count := Current_Count + 1;
   end Release;

   function Count return Integer is
   begin
      return Current_Count;
   end Count;
end Counting_Semaphore;
This implements the general form of Dijkstra's semaphore. It illustrates the
use of all three forms of protected operations: a function, a procedure, and
an entry. The entry "Secure" and the procedure "Release" correspond to the
P and V operations, and the function "Count" gives the current value of the
semaphore.
Any task calling the "Secure" entry will be suspended until Current_Count is
greater than 0. The "Release" procedure and the "Count" function can be called
without conditional suspension. Only one of these operations is allowed to
execute at any moment. Thus, Current_Count cannot be changed while the
"Count" function is executing. All changes to Current_Count in both "Secure"
and "Release" will appear to be atomic. No other access to the protected object
will be allowed until these operations complete.
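A rough pthread equivalent of this counting semaphore might look as follows (the names are mine, and the entry barrier becomes an explicit condition-variable wait loop):

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int count;
} csem;

void csem_init(csem *s, int start)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = start;
}

/* P / "Secure": wait until count > 0, then decrement. */
void csem_secure(csem *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count <= 0)                  /* the "entry barrier" */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

/* V / "Release": increment and wake one waiter. */
void csem_release(csem *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

/* "Count": a consistent read of the current value. */
int csem_count(csem *s)
{
    pthread_mutex_lock(&s->lock);
    int c = s->count;
    pthread_mutex_unlock(&s->lock);
    return c;
}
```

Notice how much of what the Ada compiler generates automatically (the lock, the wait loop, the re-check of the barrier) must be written by hand here.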
Jim Rogers
That's a monitor. It's really nice that the language implementation can
figure out all the predicates and perform the waiting and signaling (I'm
just curious how you would express the signal vs. broadcast "difference").
It's probably fine, and definitely less error prone than low-level
Pthreads monitors.
However, I don't think "that those semaphores are more efficient
than OS semaphores". ;-)
regards,
alexander.
> That's a monitor. It's really nice that the language implementation can
> figure out all the predicates and perform the waiting and signaling (I'm
> just curious how you would express the signal vs. broadcast "difference").
> It's probably fine, and definitely less error prone than low-level
> Pthreads monitors.
>
> However, I don't think "that those semaphores are more efficient
> than OS semaphores". ;-)
That depends on the compiler's code generator and runtime. Because an OS
semaphore involves OS calls, an inline specially-generated Ada semaphore
CAN BE much more efficient.
The fact that Ada 95 understands and manipulates data predicates is
extremely powerful. If protected types had been in the original Ada
specification, Ada might well have continued its early momentum to rule the
world. ;-)
> Alexander Terekhov wrote:
>
>
>>That's a monitor. It's really nice that the language implementation can
>>figure out all the predicates and perform the waiting and signaling (I'm
>>just curious how you would express the signal vs. broadcast "difference").
>>It's probably fine, and definitely less error prone than low-level
>>Pthreads monitors.
>>
>>However, I don't think "that those semaphores are more efficient
>>than OS semaphores". ;-)
>>
>
> That depends on the compiler's code generator and runtime. Because an OS
> semaphore involves OS calls, an inline specially-generated Ada semaphore
> CAN BE much more efficient.
The term "inline" is very appropriate here. The protected object operation
executes within the context of the calling task. This means that the task
does not have to suspend during the actual execution of the protected
object operation. In many operating systems a thread will suspend until
an OS call completes.
>
> The fact that Ada 95 understands and manipulates data predicates is
> extremely powerful. If protected types had been in the original Ada
> specification, Ada might well have continued its early momentum to rule the
> world. ;-)
The original version of Ada provided only the "rendezvous" mechanism for
communication between tasks. The rendezvous is useful for simple communication
between two tasks. It is clumsy for anything else. It was possible to
simulate the capabilities of a protected object using tasks and the
rendezvous mechanism. It was also a very heavyweight workaround.
Ada 95 did not eliminate the rendezvous from the language. It simply added
protected types, allowing a much more efficient implementation of
asynchronous communication between tasks.
The "counting semaphore" example I showed uses all three kinds of operations
available to a protected type. You are free to use only those operations
you need. If you do not need all the operations, do not use them.
Protected functions are intended to only report the state of the protected
object. They should not change state. Because of this, many tasks may
simultaneously execute functions on the same protected object.
Protected procedures are intended to change the state of a protected
object. They must have exclusive access to the protected object. This
exclusivity is controlled by the language. You simply define the
operation to be a procedure and the low level details of mutual
exclusion are provided by the compiler.
Protected entries are intended to change the state of a protected
object, like procedures. Unlike protected procedures, protected entries
have a boundary condition. While the condition evaluates to false all
calling tasks are placed in an entry queue to await the opening of the
barrier (change of the condition to TRUE). The condition must evaluate
the state of the protected object, not something external to the
protected object.
Following is an example of a protected object used as a counter for
multiple tasks. Note that this example has only one operation, a
protected procedure.
type Modular_Num is mod 2**32;

protected Counter is
   procedure Get_Number(Value : out Integer);
private
   Num : Modular_Num := 0;
end Counter;

protected body Counter is
   procedure Get_Number(Value : out Integer) is
   begin
      Value := Integer(Num);
      Num := Num + 1;
   end Get_Number;
end Counter;
Every call to Counter.Get_Number will pass back a unique value.
The use of a modular type will cause the value to eventually
cycle back to 0. The minimum number will be 0 and the maximum
will be (2**32) - 1.
Note that this example is not a monitor. It simply manipulates
a mutex for the protected object's private data.
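A pthread sketch of the same counter (names are mine; the modular wraparound comes free with unsigned arithmetic in C, and the mutex plays the role of the protected object's implicit lock):

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t num = 0;   /* wraps back to 0 after 2**32 - 1 */

/* Hand back a unique value to each caller; the lock makes the
   read-then-increment pair atomic with respect to other callers. */
uint32_t get_number(void)
{
    pthread_mutex_lock(&lock);
    uint32_t v = num++;
    pthread_mutex_unlock(&lock);
    return v;
}
```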
Jim Rogers
>> Are there non-asynch-signals?
>> IMHO all signals are asynch.
> Signals such as system-generated SEGVs, and anything raise()d, are
> synchronous - although most code will still be unsafe to call in a SEGV
> handler.
Oh, can you please explain what is meant by "asynchronous signals"?
When people say "signals" I normally think of Unix signals. If you say
those are synchronous signals, which signals are asynchronous, and
on which systems are they available?
Is this special POSIX stuff, or what?
I have only heard of asynchronous I/O, not of asynchronous signals,
so I thought normal Unix signals were meant.
Can you explain the different signalling types in more detail?
Thanks.
Ciao,
Oliver
> Arnold Hendriks <a.hen...@b-lex.com> wrote:
> > Oliver Bandel <oli...@first.in-berlin.de> wrote:
> >>> pre-emptive scheduling. as for instance GNU manual says that I cannot call
> >>> swapcontext from the signal handler as it is NOT asynch-signal safe.
>
> >> Are there non-asynch-signals?
> >> IMHO all signals are asynch.
> > Signals such as system-generated SEGVs, and anything raise()d, are
> > synchronous - although most code will still be unsafe to call in a SEGV
> > handler.
>
> Oh, can you please explain what is meant by "asynchronus signals"?
> When said "signals" I normally think of Unix-Signals. If you say,
> they are synchronus signals, what signals are asynchron and
> on which systems are they available?
> Is this special POSIX-stuff, or what?
This might help you (from signal(3HEAD) man page on Solaris):
Signals can be generated synchronously or asynchronously.
Events directly caused by the execution of code by a thread,
such as a reference to an unmapped, protected, or bad memory,
can generate SIGSEGV or SIGBUS; a floating point exception
can generate SIGFPE; and the execution of an illegal
instruction can generate SIGILL. Such events are referred to
as traps; signals generated by traps are said to be
synchronously generated. Synchronously generated signals are
initiated by a specific thread and are delivered to and
handled by that thread.
All other signals are asynchronous and (in a multi-threaded environment) can
be delivered to any thread within the process.
HTH, Dragan
--
Dragan Cvetkovic,
To be or not to be is true. G. Boole No it isn't. L. E. J. Brouwer
: Protected functions are intended to only report the state of the protected
: object. They should not change state. Because of this, many tasks may
: simultaneously execute functions on the same protected object.
: Protected procedures are intended to change the state of a protected
: object. They must have exclusive access to the protected object. This
: exclusivity is controlled by the language. You simply define the
: operation to be a procedure and the low level details of mutual
: exclusion are provided by the compiler.
Sounds like a class-object protected by an instances-specific shared
(read/write) lock:
* functions have shared (i.e., read) access, while
* procedures have exclusive (i.e., write) access.
: Protected entries are intended to change the state of a protected
: object, like procedures. Unlike protected procedures, protected entries
: have a boundary condition. While the condition evaluates to false all
: calling tasks are placed in an entry queue to await the opening of the
: barrier (change of the condition to TRUE). The condition must evaluate
: the state of the protected object, not something external to the
: protected object.
Sounds like a wait on a condition: while ( <predicate> ) <cond>.wait();
: Following is an example of a protected object used as a counter for
: multiple tasks. Note that this example has only one operation, a
: protected procedure.
: type Modular_Num is mod 2**32;
: protected Counter is
: procedure Get_Number(Value : out Integer);
: private
: Num : Modular_Num := 0;
: end Counter;
: protected body Counter is
: procedure Get_Number(Value : out Integer) is
: begin
: Value := Integer(Num);
: Num := Num + 1;
: end Get_Number;
: end Counter;
: Every call to Counter.Get_Number will pass back a unique value.
: The use of a modular type will cause the value to eventually
: cycle back to 0. The minimum number will be 0 and the maximum
: will be (2**32) - 1.
: Note that this example is not a monitor. It simply manipulates
: a mutex for the protected object's private data.
Sounds like a monitor to me, but a monitor with no conditions.
Tom Payne
> The fact that Ada 95 understands and manipulates data predicates is
> extremely powerful. If protected types had been in the original Ada
> specification, Ada might well have continued its early momentum to rule the
> world. ;-)
Did it have an early momentum? Curious.
Dale
>
> : Protected entries are intended to change the state of a protected
> : object, like procedures. Unlike protected procedures, protected entries
> : have a boundary condition. While the condition evaluates to false all
> : calling tasks are placed in an entry queue to await the opening of the
> : barrier (change of the condition to TRUE). The condition must evaluate
> : the state of the protected object, not something external to the
> : protected object.
>
> Sounds like a wait on a condition: while ( <predicate> ) <cond>.wait();
Maybe. The fact that the compiler has knowledge about the interesting
state changes, however, allows for all sorts of interesting optimisations
that simply aren't possible using the POSIX library.
Dale
Hmmmmm. Sounds like there's no broadcast or signal on this condition.
Rather, compiler generated code issues a broadcast or signal (?) the
instant the predicate becomes true. If so, very cool.
Tom Payne
Actually it is a bit different. There is a queue of waiting threads for
each entry. When the predicate becomes true the tasks in the entry
queue are allowed access before any additional tasks are allowed access.
This is known in Ada as the "internal progress first" rule. Any task may
remove itself from an entry queue by calling the entry with a conditional
entry call.
The general form of a conditional entry call is:
select
   entry-call-statement
   [optional sequence-of-statements]
else
   sequence-of-statements
end select;
The call on the protected entry is selected immediately if the predicate is
true. If this happens the entry call and the statements following it are
executed. If the predicate is false the statements following the "else"
are executed instead.
This arrangement allows you to effectively poll for a condition in the
protected object, and then respond to that condition.
The point, however, is that there is no broadcast or signal associated
with a protected entry. The protected entry itself handles the processing
of the entry queue. All the code for this is generated by the compiler without
involvement of the programmer.
Jim Rogers
Here is an equivalent thread-safe counter implemented in the join-calculus
language (I think I have this pretty close to right...):
# let create_counter() =
#   let count(n) | get0() = count(n+1) | reply n to get0 in
#   count(0) | reply get0
# ;;
#
# let my_counter = create_counter()
# ;;
val create_counter : <> -> <<> -> <int>>
val my_counter : <> -> <int>
Polyphonic C# example (from
http://research.microsoft.com/~nick/polyphony/intro.htm ):
public class Buffer {
    public String get() & public async put(String s) {
        return s;
    }
}
hys
"Jim Rogers" <jimmaure...@worldnet.att.net> wrote in message
news:3D6E226C...@worldnet.att.net...
> type Modular_Num is mod 2**32;
>
> protected Counter is
> procedure Get_Number(Value : out Integer);
> private
> Num : Modular_Num := 0;
> end Counter;
>
> protected body Counter is
> procedure Get_Number(Value : out Integer) is
> begin
> Value := Integer(Num);
> Num := Num + 1;
> end Get_Number;
> end Counter;
>
> Every call to Counter.Get_Number will pass back a unique value.
> The use of a modular type will cause the value to eventually
> cycle back to 0. The minimum number will be 0 and the maximum
> will be (2**32) - 1.
>
> Note that this example is not a monitor. It simply manipulates
> a mutex for the protected object's private data.
>
> Jim Rogers
>
>
>
--
Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com
> Signals can be generated synchronously or asynchronously.
> Events directly caused by the execution of code by a thread,
> such as a reference to an unmapped, protected, or bad memory
[...]
> All other signals are asynchronous and (in multi-threaded environment) can
> be delivered to any thread within the process.
OK, then synchronous/asynchronous does not describe how these
are scheduled in time; it describes which thread can be addressed
by the signals.
OK, I now see this. The words were a little bit confusing.
Thanks for that info.
Ciao,
Oliver
P.S.: I found some stuff about thread signalling in Butenhof's
book (which is very good, btw). But it's in chapters I haven't
reached yet. I recently unblocked ;-) my thread-programming
interests after some blocking events some months ago. ;-)
You might also want to take a look at its "native C-pound[1]" implementation...
those MS-research{/UK} folks (and C# "designers" too, AFAICS) have no idea what
"a monitor" is, to begin with[2]. ;-) ;-)
regards,
alexander.
[1] http://groups.google.com/groups?selm=3D3BD75A.4F886239%40web.de
(Subject: Re: c#, Newsgroups: comp.std.c)
[2] http://research.microsoft.com/~nick/polyphony/polyphonypaged.pdf
("The code below implements synchronization using monitors,
the low-level interface to object locks in C# ... The
specification of monitors guarantees that an interrupt
on a non-sleeping thread does not happen until the thread
actually does call Thread.Sleep, hence it is correct to
release the lock before entering the try catch statement.")
Regardless of the quality of the underlying implementation, you gotta admit
the concept is pretty cool!
public class Buffer {
    public String get() & public async put(String s) {
        return s;
    }
}
hys
--
That seems a bit unlikely as a blanket statement, given that Tony
Hoare is one of those MS-research{/UK} folks.
http://groups.google.com/groups?threadm=3D0B3459.7A7D7998%40web.de
(Subject: Re: C# threading model?)
"....
http://groups.google.com/groups?selm=3CFE1D7F.96EC0C0F%40web.de
("This led Brinch-Hansen to express a great disappointment...."
Heck, MS-folks can't even comprehend what >>NOT<< to steal from
SUN's brain-dead Java)
...."
> given that Tony Hoare
>>The Knight<<(*)
> is one of those MS-research{/UK} folks.
Right. THAT makes it even FUNNIER! ;-) ;-)
regards,
alexander.
(*) Sir Charles Antony Richard {CAR} Hoare, known as Tony b. 1934, knighted
in 2000. "In 1959, as a graduate student at Moscow State University, he
studied the machine translation of languages (together with probability
theory, in the school of Kolmogorov). To assist in efficient look-up of
words in a dictionary, he discovered the well-known sorting algorithm
Quicksort. ...." http://www.research.microsoft.com/~thoare
:> Dale Stanbrough <dsta...@bigpond.net.au> wrote:
:> : t...@cs.ucr.edu wrote:
:>
:> :>
:> :> : Protected entries are intended to change the state of a protected
:> :> : object, like procedures. Unlike protected procedures, protected entries
:> :> : have a boundary condition. While the condition evaluates to false all
:> :> : calling tasks are placed in an entry queue to await the opening of the
:> :> : barrier (change of the condition to TRUE). The condition must evaluate
:> :> : the state of the protected object, not something external to the
:> :> : protected object.
:> :>
:> :> Sounds like a wait on a condition: while ( <predicate> ) <cond>.wait();
:>
:> : Maybe. The fact that the compiler has knowledge about the interesting
:> : state changes however allow for all sorts of interesting optimisations
:> : that simply aren't possible using the Posix library.
:>
:> Hmmmmm. Sounds like there's no broadcast or signal on this condition.
:> Rather, compiler generated code issues a broadcast or signal (?) the
:> instant the predicate becomes true. If so, very cool.
:>
: Actually it is a bit different. There is a queue of waiting threads for
: each entry. When the predicate becomes true the tasks in the entry
: queue are allowed access before any additional tasks are allowed access.
So when one goes, they all go -- like a broadcast.
: This is known in Ada as the "internal progress first" rule.
Hoare's monitors (1974) worked along this line. Does the task that
caused the predicate to become true continue on before or after the
awakened threads do their business?
By the way, thanks for this information.
Tom Payne
I've just learned that the task that caused the predicate to become
true may actually "do business" *on behalf* of the "awakened" threads.
http://www.adapower.com/rationale/rat95-p2-9.html#1-3
(9.1.3 Efficiency of Protected Types)
Put(New_Item);
   lock
   Data := New_Item; Full := True;
   scan: and then, on behalf of Consumer:
      An_Item := Data; Full := False;
      set Consumer ready
   unlock

Put(New_Item);
   lock
   Data := New_Item; Full := True;
   scan: nothing to do
   unlock

Put(New_Item);
   lock
   queue
   unlock
   switch context
regards,
alexander.
> Hoare's monitors (1974) worked along this line. Does the task that
> caused the predicate to become true continue on before or after the
> awakened threads do their business?
Ideally, the task that caused the predicate to become true would get
priority. We want to minimize context switches, not maximize them.
DS
I disagree.
In a binary Producer / Consumer model with a shared buffer, the Consumer can
only read when there is new data in the buffer, and the producer cannot
write to the buffer until the old message has been read. In this case you
do not want priority to the task that caused the predicate to become true.
In reality context switching must occur because task scheduling does not
break down to a simple set of protected object calls.
Here is an Ada example of a protected object for the simple producer /
consumer model mentioned above.
protected Buffer is
   entry Put(X : in Item);
   entry Get(X : out Item);
private
   Data : Item;
   Full : Boolean := False;
end Buffer;

protected body Buffer is
   entry Put(X : in Item) when not Full is
   begin
      Data := X; Full := True;
   end Put;

   entry Get(X : out Item) when Full is
   begin
      X := Data; Full := False;
   end Get;
end Buffer;
This model produces a very fast exchange of data, but does involve
context switching. At any given moment either the producer or the consumer
can be blocked due to the state of its entry barrier.
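For comparison, the same one-slot buffer written directly against pthreads needs a mutex, a condition variable, and an explicit wait loop for each barrier (a sketch with invented names; in Ada the compiler generates all of this):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
static int data;
static int full = 0;   /* mirrors the Full barrier variable */

/* entry Put(X) when not Full */
void put(int x)
{
    pthread_mutex_lock(&lock);
    while (full)                        /* barrier: not Full */
        pthread_cond_wait(&changed, &lock);
    data = x;
    full = 1;
    pthread_cond_signal(&changed);      /* wake a waiting consumer */
    pthread_mutex_unlock(&lock);
}

/* entry Get(X) when Full */
int get(void)
{
    pthread_mutex_lock(&lock);
    while (!full)                       /* barrier: Full */
        pthread_cond_wait(&changed, &lock);
    int x = data;
    full = 0;
    pthread_cond_signal(&changed);      /* wake a waiting producer */
    pthread_mutex_unlock(&lock);
    return x;
}
```

With one producer and one consumer a single condition variable suffices; with several of each, separate "not full" and "not empty" conditions avoid spurious wakeups of the wrong party.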
Note that this example does not show any equivalent to the
a timed condition wait in pthreads. Ada delegates the responsibility
of applying a time-out to the calling task, not the protected object
itself. If I wanted to call the Put entry above with a time-out value
of 100 milliseconds I would use the following construct in the
producer task.
Timed_Out := False;
select
   Buffer.Put(Value);
or
   delay 0.1;  -- Delay expressed in seconds
   Timed_Out := True;
end select;
If the Put completes before the time-out expires, the variable Timed_Out will
be False. If the time-out occurs before the Put completes, the
task will be removed from the entry queue and the variable Timed_Out will
be set to True.
Ada also allows you to produce a conditional entry call:
select
   entry-call statement;
   optional sequence of statements;
else
   sequence-of-statements;
end select;
If the call on the entry is accepted immediately (the entry predicate is TRUE)
the entry call and the optional sequence of statements following it are
executed. If the entry call cannot be handled immediately (the entry predicate
is FALSE) the sequence of statements following the "else" is executed and the
task is removed from the entry queue.
Jim Rogers
Other things being equal, minimizing context switches is a good thing.
But the designers of Ada tend to care about things like the
predictability of response times, etc., so I wouldn't be surprised to
see them reverse those priorities.
Tom Payne
Context switching does rise to a level of concern for the Ada language
developers. The following quote is from the Ada 95 Rationale section 9.1.
"Efficiency. The size and initialization requirements of a protected object
are known at compile time of the client, because all entries and data
components are declared in the specification. This enables protected
objects to be allocated statically or directly on the stack, rather than
via dynamic allocation, and to be initialized in-line. No extra context
switches are required to service waiting clients, since the task changing
the state may directly execute entry bodies whose barriers become true.
Non-queued locking may be used to implement the mutual exclusion of a
protected object because no blocking is permitted during the execution of
a protected operation."
Jim Rogers
I can predict with near certainty that if you minimize context
switches, you'll get faster responses. Fairly horrible things can happen
if you allow the other task to proceed first. For example, the task
that caused the condition to be signalled frequently finishes right
after signalling it; switching tasks just before the signaller would
have put itself to sleep anyway is pointless.
A thread should be allowed to use up its timeslice until and unless it
blocks. And if you care about fairness, it's obviously not fair to
penalize a thread for helping other threads make forward progress. You
can still implement FIFO queues and bound the wait time if you must --
the maximum wait time will be one timeslice more than the number of
threads ahead of you when you blocked.
DS
[...ADA monitor implementation...]
> Non-queued locking may be used to implement the mutual exclusion of a
> protected object because no blocking is permitted during the execution of
> a protected operation."
>
What is non-queued locking?
Herman
--
K.U.Leuven, Mechanical Engineering, Robotics Research Group
<http://people.mech.kuleuven.ac.be/~bruyninc> Tel: +32 16 322480
This is, in fact, what happens. The task changing the state of the protected
object causes the entry barriers to be reevaluated. That does not mean that this
task must give up its time slice. The concept of forward progress simply
ensures that the next task to access the protected entry will be one of
the tasks in the entry queue when the barrier was evaluated and found to
be true. This forces forward progress of the tasks waiting in the entry
queue without adding unnecessary context switching.
Ada allows you to specify either of two possible queuing policies.
FIFO queuing provides a strict sequential order. Priority queuing
allows higher priority tasks in a queue to execute before lower priority
tasks. Forward progress is required regardless of the queuing policy.
Jim Rogers
> On Fri, 6 Sep 2002, Jim Rogers wrote:
>
> [...ADA monitor implementation...]
>
>>Non-queued locking may be used to implement the mutual exclusion of a
>>protected object because no blocking is permitted during the execution of
>>a protected operation."
>>
>>
> What is non-queued locking?
One example often used on simple uniprocessor systems is priority
ceilings. Locking can be effected by raising a task executing a
protected operation to the highest priority in the system. This would
be a temporary priority, which can be reduced at the conclusion of the
protected operation.
Note that the mutual exclusion can also be achieved by using a pthread
mutex if that is available on the system. The Ada concurrency model does
not specify how the lock is to be implemented.
Jim Rogers
I can imagine situations where a reasonable response time would be
considerably less than a timeslice.
Tom Payne
Yeah, in "proxy" model (vs. "self-service" model). However, even using
proxy model, one might still get ``unnecessary'' context switching due
to "hand off" of *output data* ("hand off" of lock ownership in self-
service model could result in even more overhead), I guess.
Unless I'm missing something, a better way to implement the proxy model
would be to extend the notion of barriers for "output" entries that are
meant to provide some input to their callers (e.g. "Dequeue"/"Get" entries,
where "out" stuff doesn't depend on other entry call arguments, if any)
with additional "HOT" state. The idea is to simply queue the data using
an extra queue associated with each "output" entry (instead of doing data
"hand off"), unblock queued threads (signal(s) or broadcast depending on
"how much data has been made ready") and do NOT give unblocked waiters
any precedence (other than scheduling policy, if any) w.r.t. picking up
that data. Other "hot" threads may then race ahead and pick up that data;
"original"/"cold"/"too late" waiter(s) would have to reevaluate now-not-
HOT barrier and may even block again, then.
Or am I {again} missing and/or misunderstanding something?
< one task is queued on Get entry/barrier -- "Consumer" task >

Put(in New_Item);
   lock
   Data := New_Item; Full := True;
   scan: and then on behalf of Consumer
      Ready_Item := Data; Full := False;
      set Get barrier HOT
   unlock
   unblock Consumer because now its barrier is HOT

Put(in New_Item);
   lock
   Data := New_Item; Full := True;
   scan: nothing to do
   unlock

Get(out An_Item);
   lock
   check: and then since Get barrier is HOT
      An_Item := Ready_Item;
      set Get barrier !HOT
   unlock

Put(in New_Item);
   lock
   queue
   unlock
   switch context
.
.
.
regards,
alexander.
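For what it's worth, the "no hand-off" behaviour described above is roughly
what a plain pthreads queue already exhibits: Put deposits the item and
signals, and whichever thread seizes the lock first -- the woken waiter or a
late-arriving "hot" thread -- takes the data, while a "cold" waiter that finds
the queue empty again simply re-blocks. A sketch (my own naming; Put assumes
the queue is not full):

```c
#include <assert.h>
#include <pthread.h>

#define QSIZE 8

/* "No hand-off" queue: the signaller queues the data instead of handing
   it to a specific waiter, and unblocked waiters get no precedence. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    int             items[QSIZE];
    int             head, count;
} queue_t;

void queue_put(queue_t *q, int x) {   /* caller ensures count < QSIZE */
    pthread_mutex_lock(&q->lock);
    q->items[(q->head + q->count) % QSIZE] = x;
    q->count++;
    pthread_cond_signal(&q->nonempty); /* wake a waiter, no hand-off */
    pthread_mutex_unlock(&q->lock);
}

int queue_get(queue_t *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)              /* may re-block if raced past */
        pthread_cond_wait(&q->nonempty, &q->lock);
    int x = q->items[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return x;
}
```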
I think this could work under appropriate conditions.
The biggest problem you might encounter is that a protected object can have
protected procedures as well as protected entries. Protected procedures have
no predicate. They must acquire the write lock, but may execute regardless of
the internal state of the protected object. Frequently protected procedures
will modify the internal state, changing the value of a protected entry's
predicate. Note that protected procedures are not limited by the internal
progress first rule. That rule only limits access through protected entries.
The following example from the Ada 95 Rationale illustrates the problem:
protected Event is
   entry Wait;
   procedure Signal;
private
   Occurred: Boolean := False;
end Event;

protected body Event is
   entry Wait when Occurred is
   begin
      Occurred := False;
   end Wait;

   procedure Signal is
   begin
      Occurred := True;
   end Signal;
end Event;
Many tasks may be queued on the Wait entry. They will not be
able to execute that entry until some task calls the Signal
procedure. As soon as one task executes the Wait entry, all other
tasks remain suspended until some task again calls Signal.
This simulates acquisition and release of a binary semaphore.
Jim Rogers
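The Event object above has a direct pthreads rendering: a flag guarded by a
mutex, with Wait looping on a condition variable and consuming the flag, so
that each Signal releases at most one waiter (names below are mine):

```c
#include <assert.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             occurred;
} event_t;

void event_signal(event_t *e) {       /* procedure Signal */
    pthread_mutex_lock(&e->lock);
    e->occurred = 1;
    pthread_cond_signal(&e->cond);    /* wake at most one waiter */
    pthread_mutex_unlock(&e->lock);
}

void event_wait(event_t *e) {         /* entry Wait when Occurred */
    pthread_mutex_lock(&e->lock);
    while (!e->occurred)
        pthread_cond_wait(&e->cond, &e->lock);
    e->occurred = 0;                  /* body: Occurred := False */
    pthread_mutex_unlock(&e->lock);
}
```

As in the Ada version, only one queued waiter gets through per Signal, so the
pair behaves like a binary semaphore.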
1) All thread priorities are equal. In this case, if you have a single
CPU and five waiting threads with a FIFO, you will have to wait for the
threads ahead of you to use up their timeslices. Nothing you can do
about that.
2) The thread that unblocks the waiting thread has a higher priority.
In this case, it would be clearly wrong to let the lower priority thread
run when a higher priority thread is ready-to-run.
3) The thread that unblocks the waiting thread has a lower priority. In
this case, you have a priority-inversion problem however you slice it.
The higher priority thread can't proceed until a lower-priority thread
allows it. In this case, you generally would raise the priority of the
waking thread, returning you to case '1' above. If you don't, you lose
latency guarantees anyway because you wait for all the other
higher-priority threads to run.
DS
+ 1) All thread priorities are equal. [...]
+ 2) The thread that unblocks the waiting thread has a higher priority. [...]
+ 3) The thread that unblocks the waiting thread has a lower priority. [...]
I was concerned that awakened threads would tie up monitors (i.e.,
protected object) by holding priority for the monitor's lock.
Apparently, however, under the Ada approach whatever high-priority
stuff a signallee (or broadcastee) might need to do inside a monitor
immediately gets done for it by the signaller (i.e., the thread that
makes the predicate true). That stuff gets *top* priority, and the
signallee simply becomes runnable without any need to inherit or
compete for the monitor's lock. Right?
Tom Payne
Quoting directly from the Ada 95 Rationale:
9.1.3 Efficiency of Protected Types
Protected types provide an extremely efficient mechanism; the ability to use the
thread of control of one task to execute a protected operation on behalf of
another task reduces the overhead of context switching compared with other
paradigms. Protected types are thus not only much more efficient than the use of
an agent task and associated rendezvous, they are also more efficient than
traditional monitors or semaphores in many circumstances.
As an example consider the following very simple protected object which
implements a single buffer between a producer and a consumer task.
protected Buffer is
   entry Put(X: in Item);
   entry Get(X: out Item);
private
   Data: Item;
   Full: Boolean := False;
end;

protected body Buffer is
   entry Put(X: in Item) when not Full is
   begin
      Data := X; Full := True;
   end Put;

   entry Get(X: out Item) when Full is
   begin
      X := Data; Full := False;
   end Get;
end Buffer;
This object can contain just a single buffered value of the type Item in the
variable Data; the boolean Full indicates whether or not the buffer contains a
value. The barriers ensure that reading and writing of the variable is
interleaved so that each value is only used once. The buffer is initially empty
so that the first call that will be processed will be of Put.
A producer and consumer task might be
task body Producer is
begin
   loop
      ... -- generate a value
      Buffer.Put(New_Item);
   end loop;
end Producer;

task body Consumer is
begin
   loop
      Buffer.Get(An_Item);
      ... -- use a value
   end loop;
end Consumer;
In order to focus the discussion we will assume that both tasks have the same
priority and that a run until blocked scheduling algorithm is used on a single
processor. We will also start by giving the processor to the task Consumer.
The task Consumer will issue a call of Get, acquire the lock and then find that
the barrier is false thereby causing it to be queued and to release the lock.
The Consumer is thus blocked and so a context switch occurs and control passes
to the task Producer. This sequence of actions can be symbolically described by
Get(An_Item);
   lock
   queue
   unlock
   switch context
The task Producer issues a first call of Put, acquires the lock, successfully
executes the body of Put thereby filling the buffer and setting Full to True.
Before releasing the lock, it reevaluates the barriers and checks the queues to
see whether a suspended operation can now be performed. It finds that it can and
executes the body of the entry Get thereby emptying the buffer and causing the
task Consumer to be marked as no longer blocked and thus eligible for
processing. Note that the thread of control of the producer has effectively
performed the call of Get on behalf of the consumer task; the overhead for doing
this is essentially that of a subprogram call and a full context switch is not
required. This completes the sequence of protected actions and the lock is released.
However, the task Producer still has the processor and so it cycles around its
loop and issues a second call of Put. It acquires the lock again, executes the
body of Put thereby filling the buffer again. Before releasing the lock it
checks the barriers but of course no task is queued and so nothing else can be
done; it therefore releases the lock.
The task Producer still has the processor and so it cycles around its loop and
issues yet a third call of Put. It acquires the lock but this time it finds the
barrier is false since the buffer is already full. It is therefore queued and
releases the lock. The producer task is now blocked and so a context switch
occurs and control at last passes to the consumer task. The full sequence of
actions performed by the producer while it had the processor are
Put(New_Item);
   lock
   Data := New_Item; Full := True;
   scan: and then on behalf of Consumer
      An_Item := Data; Full := False;
      set Consumer ready
   unlock

Put(New_Item);
   lock
   Data := New_Item; Full := True;
   scan: nothing to do
   unlock

Put(New_Item);
   lock
   queue
   unlock
   switch context
Jim Rogers
+ Quoting directly from the Ada 95 Rationale:
+ 9.1.3 Efficiency of Protected Types
[...]
Thanks. That example shows that even though the protected entry gets
executed by the task that makes the predicate true (the broadcaster)
on behalf of the awakened tasks (the broadcastees), it has access to
arguments passed to the protected entry by the broadcastees, even
arguments that have been passed by reference or by pointer. (I've
done such restructuring at the source level when I needed to
coordinate via message passing instead of monitors. Without the help
of a compiler, it gets very tedious.)
It appears that protected entries offer the benefits of Hoare
semantics for signalling (where the signaller immediately defers to
the signallee, who in turn defers back to the signaller upon exiting
the monitor) without the overhead of the double context switch. IMHO, the
benefits of Hoare semantics over Mesa semantics (used by Pthreads)
are:
* Responsiveness -- awakened threads immediately take care of their
lock-protected business, rather than waiting for the awakener's
time-slice to end.
* Determinacy -- there's no question which thread gets the lock and
when, so fewer rude surprises.
Tom Payne