Is there a simpler solution?
Russell
> search thread needs to stop searching. Creating a mutex for that seems like
> overkill.
Any inter-thread communication will generally require some form of locking, and
the mutex is possibly the lightest lock (only some platform-specific
interlocked integer operators might be faster). You should probably go for
the mutex, and see if it really is too slow.
But if you
- really, really, really don't want to use a mutex,
- are prepared to properly document and explain it to any other
fellow programmers,
- are willing to take the risk of future breakage, (race conditions!)
- don't require guaranteed or timely cancelling of the searching thread,
- and memory granularity doesn't get in your way,
then you can *maybe* get away with setting a 'bool cancel' flag
to 'false' before your search thread starts, setting it to 'true' in the
console thread when you want the search to stop, and never doing anything
else with it. Then the search thread can test the flag without locking, and
when the flag is changed by the console thread, the change will probably
be seen sooner or later by the searching thread.
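A minimal sketch of that scheme (names invented for illustration; and, to
repeat, no timeliness or visibility guarantee is implied):

    /* Shared flag: set to 0 before the search thread starts, written
       exactly once (to 1) by the console thread, only read elsewhere. */
    static int cancel = 0;

    void *search_thread(void *arg)
    {
        while (!cancel)                /* unlocked read */
            search_one_iteration();    /* hypothetical unit of work */
        return NULL;
    }

    void request_stop(void)            /* called from the console thread */
    {
        cancel = 1;                    /* unlocked write: *probably* seen
                                          sooner or later by the searcher */
    }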
--
Arnold Hendriks <a.hen...@b-lex.com>
B-Lex Information Technologies, http://www.b-lex.com/
That's good to hear :-)
[snip]
> But if you
> - are prepared to properly document and explain it to any other
> fellow programmers,
I'm a loner, just programming for fun :-)
[snip]
> then you can *maybe* get away with setting a 'bool cancel' flag
> to 'false' before your search thread starts, setting it to 'true' in the
> console thread when you want the search to stop, and never doing anything
> else with it. Then the search thread can test the flag without locking, and
> when the flag is changed by the console thread, the change will probably
> be seen sooner or later by the searching thread.
If I tried this method, should I go with a 'volatile' variable? Even if the
operations involved aren't atomic, it should still work (according to my
thinking, you tell me if it's wrong). Let's say the search thread checks the
'volatile bool cancel' variable maybe once every 200 ms (is this too often?
too seldom?). So let's say that the instructions involved in the input
thread that say 'cancel = true;' are not atomic, so when the search thread
tests it, it *might* not be the "correct" value (yet). But in another 200 ms
(or however long) the search thread will get the message to stop.
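In code, what I have in mind is roughly this (a sketch; the names are made
up):

    static volatile int cancel = 0;   /* 'volatile' so the compiler re-reads
                                         it on each test -- but it gives no
                                         atomicity or ordering guarantee */

    void *search_thread(void *arg)
    {
        while (!cancel) {
            search_for_a_while();     /* hypothetical: roughly 200 ms of
                                         work between checks of the flag */
        }
        return NULL;
    }

    void request_stop(void)           /* input thread, on a quit command */
    {
        cancel = 1;
    }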
Now, is this safe to do? It seems like it would work, but I'm by no means an
expert on synchronization techniques and race conditions.
Thanks for the help!
Russell
Yes. Is there a simpler *good* solution? No. Use a mutex.
It's true that mutexes are "quite slow" when compared to, say,
the speed of quark-quark reactions inside a proton. They're even
slow compared to some laboratory-generated optical phenomena with
durations measured in femtoseconds. But compared to the time it
takes to generate just ONE ply's worth of chess positions, a mutex
is so fast it'll take your breath away.
You're polling the flag at the rate of FIVE >>gasp!<< HERTZ
and you're worried about the overhead of locking and unlocking a
no-contention mutex? FIVE HERTZ?
An old friend used to refer to this sort of thing as clearing
the bottle caps off the beach so the sand would be nice and clean
next to the beached whales.
> I've heard
> mutexes are quite slow, and I really only need one flag to determine if the
> search thread needs to stop searching. Creating a mutex for that seems like
> overkill.
>
> Is there a simpler solution?
Mutexes are the appropriate solution. What do you care about speed in
this case? How often are you planning on checking the flag?!
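Concretely, the mutex version is just this (a sketch; names invented, error
checking omitted):

    #include <pthread.h>

    static pthread_mutex_t cancel_lock = PTHREAD_MUTEX_INITIALIZER;
    static int cancel = 0;            /* protected by cancel_lock */

    static int should_stop(void)
    {
        int stop;
        pthread_mutex_lock(&cancel_lock);
        stop = cancel;
        pthread_mutex_unlock(&cancel_lock);
        return stop;
    }

    void *search_thread(void *arg)
    {
        while (!should_stop())
            search_some_more();       /* hypothetical unit of work */
        return NULL;
    }

    void request_stop(void)           /* console thread */
    {
        pthread_mutex_lock(&cancel_lock);
        cancel = 1;
        pthread_mutex_unlock(&cancel_lock);
    }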
DS
"David Schwartz" <dav...@webmaster.com> wrote in message
news:3CB61BD7...@webmaster.com...
hys
--
Hillel Y. Sims
hsims AT factset.com
Yeah, and given the nature of work in the "searcher thread",
it is likely that you could have A LOT of *async-cancel-safe*
places (with loops; long computations) in it. Just set the
cancel type to PTHREAD_CANCEL_ASYNCHRONOUS on entry and set
it back to PTHREAD_CANCEL_DEFERRED on exit of such region
(and please, *don't* code anything non-async-cancel-safe
inside it ;-)). No "periodically checking" then needed
(inside async-cancel-regions)... and it even works on
WinBlahBlah "platforms":
http://sources.redhat.com/pthreads-win32/
;-)
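In outline (a sketch; function names invented; everything inside the async
region must really be async-cancel-safe):

    #include <pthread.h>

    void *search_thread(void *arg)
    {
        int oldtype;

        /* Enter an async-cancel-safe region: pure computation only --
           no locks, no malloc(), nothing non-async-cancel-safe. */
        pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype);
        crunch_positions();           /* hypothetical long computation */
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);

        return NULL;
    }

    void stop_search(pthread_t searcher)  /* console thread */
    {
        pthread_cancel(searcher);     /* takes effect immediately while the
                                         searcher is in the async region */
        pthread_join(searcher, NULL);
    }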
Aahh, well... it would be *really* GREAT if *Boost.Threads*
would have/provide "something like that" as well! ;-) ;-)
IMHO.
regards,
alexander.
Arnold Hendriks wrote:
...
> Any inter-thread communication will generally require some form of locking, and
> the mutex is possibly the lightest lock (only some platform-specific
> interlocked integer operators might be faster). You should probably go for
> the mutex, and see if it really is too slow.
>
> But if you
> - really, really, really don't want to use a mutex,
> - are prepared to properly document and explain it to any other
> fellow programmers,
> - are willing to take the risk of future breakage, (race conditions!)
> - don't require guaranteed or timely cancelling of the searching thread,
> - and memory granularity doesn't get in your way,
>
> then you can *maybe* get away with setting a 'bool cancel' flag
> to 'false' before your search thread starts, setting it to 'true' in the
> console thread when you want the search to stop, and never doing anything
> else with it. Then the search thread can test the flag without locking, and
> when the flag is changed by the console thread, the change will probably
> be seen sooner or later by the searching thread.
Would a mutex necessarily guarantee anything significant? Suppose a mutex was used
and the search thread did not see the changed cancel flag value for a significant
amount of time after the flag was set. Could you prove the posix implementation was
broken on that basis alone? No. Wait, you're going to say that if the search
thread gets the mutex after the main thread released the mutex after changing
the cancel flag then it must see the changed value. But the only way you have
of knowing that "after" occurred is by seeing the changed flag value, so there
is a little problem there.
There is also the problem of forward progress in posix. The main thread is allowed
to be blocked indefinitely waiting for the mutex so it can change the flag. And this
is not entirely unlikely. We've seen numerous examples in this news group where
one very active thread repeatedly getting and releasing a mutex would block out
another thread, usually "fixed" by doing a yield or sleep.
Now, I would love to see a general purpose machine that did have a really
aggressive storage architecture. That would be way fun to play with. But I
don't think it is very likely for reasons of practicality.
Joe Seigh
[...]
% thread I handle the search. In my search thread I need to periodically check
% whether the search thread needs to bail (e.g. if I receive a quit command or
[...]
% the communication from the input thread to the search thread? I've heard
% mutexes are quite slow, and I really only need one flag to determine if the
My experience is that mutexes are only slow when there's a lot of contention
on them. Create the mutex, lock it when you want to check if it's time to
stop, unlock it when you're done with the shared data. It's simple, fast,
and if you have performance problems, it's probably because of something
else you're doing.
Another alternative is to cancel the search thread when you don't want
it any more. There are reliable mechanisms for doing this, but they're
less portable in the sense that not all platforms implement them well,
and they're marginally more complicated to use than using a flag.
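For example, with the default (deferred) cancellation type it might look
like this (a sketch; names invented, error checking omitted):

    #include <pthread.h>

    void *search_thread(void *arg)
    {
        for (;;) {
            pthread_testcancel();     /* explicit cancellation point */
            examine_next_move();      /* hypothetical unit of work */
        }
        return NULL;                  /* not reached */
    }

    void stop_search(pthread_t searcher)
    {
        pthread_cancel(searcher);
        pthread_join(searcher, NULL); /* wait until it's really gone */
    }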
--
Patrick TJ McPhee
East York Canada
pt...@interlog.com
If your search thread reads the variable, and more than one other thread
writes the variable, then the writers may need to use a mutex to serialise
the writes.
"Patrick TJ McPhee" <pt...@interlog.com> wrote in message
news:a9lipg$2vd5$1...@news.ca.inter.net...
% If your thread is only reading a flag, and another thread writes it, then a
% mutex is unnecessary.
It depends. Do you want to reliably read the value or not? If you never
write the flag, then the mutex is not needed. If only one thread ever
uses the flag, then the mutex is not necessary. Otherwise it is.
http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_10
"Memory Synchronization
Applications shall ensure that access to any memory location
by more than one thread of control (threads or processes) is
restricted such that no thread of control can read or modify
a memory location while another thread of control may be
modifying it. Such access is restricted using functions that
synchronize thread execution and also synchronize memory with
respect to other threads. The following functions synchronize
memory with respect to other threads:
fork()
pthread_barrier_wait()
pthread_cond_broadcast()
pthread_cond_signal()
pthread_cond_timedwait()
pthread_cond_wait()
pthread_create()
pthread_join()
pthread_mutex_lock()
pthread_mutex_timedlock()
pthread_mutex_trylock()
pthread_mutex_unlock()
pthread_spin_lock()
pthread_spin_trylock()
pthread_spin_unlock()
pthread_rwlock_rdlock()
pthread_rwlock_timedrdlock()
pthread_rwlock_timedwrlock()
pthread_rwlock_tryrdlock()
pthread_rwlock_trywrlock()
pthread_rwlock_unlock()
pthread_rwlock_wrlock()
sem_post()
sem_trywait()
sem_wait()
wait()
waitpid()"
Unless explicitly stated otherwise, if one of the above
functions returns an error, it is unspecified whether the
invocation causes memory to be synchronized.
Applications may allow more than one thread of control
to read a memory location simultaneously."
And, BTW, somewhat related to that XBD section:
http://www.opengroup.org/austin/aardvark/finaltext/xbdbug.txt
(see "rdvk# 26", "rdvk# 27" and "rdvk# 28")
regards,
alexander.
> If your thread is only reading a flag, and another thread writes it, then a
> mutex is unnecessary. You only need mutexes when two or more threads
> write a variable, especially when they read then write the variable.
This is true IFF:
1) The flag is read by a thread created by the thread that previously wrote
the new flag value... and the flag is never changed; or
2) The hardware supports atomic access to the data type (and alignment) used
for the flag, AND the compiler chooses to generate the appropriate atomic
access. (Otherwise you might get inconsistent and possibly misleading
results.) AND you don't care whether you read the latest possible value of
the flag. Or,
3) Your access to the flag is trivial and you don't really care what value
you get... even if it's completely corrupted due to inconsistencies caused
by the non-atomic write and read operations. (But in that case, why
bother?)
Otherwise... just use the mutex.
/------------------[ David.B...@compaq.com ]------------------\
| Compaq Computer Corporation POSIX Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\-----[ http://home.earthlink.net/~anneart/family/dave.html ]-----/
David Butenhof wrote:
>
> Chris Bevan wrote:
>
> > If your thread is only reading a flag, and another thread writes it, then a
> > mutex is unnecessary. You only need mutexes when two or more threads
> > write a variable, especially when they read then write the variable.
>
> This is true IFF:
>
> 1) The flag is read by a thread created by the thread that previously wrote
> the new flag value... and the flag is never changed; or
>
> 2) The hardware supports atomic access to the data type (and alignment) used
> for the flag, AND the compiler chooses to generate the appropriate atomic
> access. (Otherwise you might get inconsistent and possibly misleading
> results.) AND you don't care whether you read the latest possible value of
> the flag. Or,
>
> 3) Your access to the flag is trivial and you don't really care what value
> you get... even if it's completely corrupted due to inconsistencies caused
> by the non-atomic write and read operations. (But in that case, why
> bother?)
>
Atomicity shouldn't matter in the case of a simple boolean. Whatever predicate
you test the boolean with will eventually toggle.
Joe Seigh
Please define "eventually"... does it include "never"... why not? ;-)
regards,
alexander.
Alexander Terekhov wrote:
>
> Joe Seigh wrote:
...
> >
> > Atomicity shouldn't matter in the case of a simple boolean. Whatever predicate
> > you test the boolean with will eventually toggle.
>
> Please define "eventually"... does it include "never"... why not? ;-)
>
Sooner than you are guaranteed to acquire a lock in a timely manner. There are
no forward progress guarantees in POSIX so just using a "pure" POSIX threads
solution won't buy you anything here.
Joe Seigh
Memory synchronization rules aside for a moment, how about:
http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_286
("Priority Scheduling"...)
http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_330
("Scheduling" and "Scheduling Allocation Domain")
http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_08.html#tag_02_08_04
("Process Scheduling"...)
http://www.opengroup.org/onlinepubs/007904975/xrat/xsh_chap02.html#tag_03_02_08_16
("Process Scheduling"...)
*if you really need it* <?>
regards,
alexander.
Sorry, but atomicity IS an issue. Very simply, without synchronization, you
don't know what you're getting and therefore cannot interpret it. Atomicity
is a particular low-level kind of synchronization guarantee that can be
used profitably if you're careful. Without it, you're counting on much
lower level (less well defined, less predictable, and less direct) forms of
synchronization in the hardware.
Of course, if you don't CARE what you get, then synchronization doesn't
matter... but then why bother?
David Butenhof wrote:
>
> Joe Seigh wrote:
> >
> > Atomicity shouldn't matter in the case of a simple boolean. Whatever
> > predicate you test the boolean with will eventually toggle.
>
> Sorry, but atomicity IS an issue. Very simply, without synchronization, you
> don't know what you're getting and therefore cannot interpret it. Atomicity
> is a particular low-level kind of synchronization guarantee that can be
> used profitably if you're careful. Without it, you're counting on much
> lower level (less well defined, less predictable, and less direct) forms of
> synchronization in the hardware.
>
> Of course, if you don't CARE what you get, then synchronization doesn't
> matter... but then why bother?
>
The point is the predicate works fine even in the presence of indeterminate
values. Suppose your boolean is defined for two values, say A and B, which
can be anything as long as they're distinct from each other. And say A is
the initial value set before the work thread is started, and B is the "halt"
value. So the worker thread basically uses either
while (flag == A) {...}
or
while (flag != B) {...}
No matter what intermediate values the flags may have, the predicates work
correctly and terminate the loop.
Joe Seigh
Alexander Terekhov wrote:
>
> Joe Seigh wrote:
...
> >
> > Sooner than you are guaranteed to acquire a lock in a timely manner. There are
> > no forward progress guarantees in POSIX so just using a "pure" POSIX threads
> > solution won't buy you anything here.
>
> Memory synchronization rules aside for a moment, how about:
>
> http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_286
> ("Priority Scheduling"...)
>
> http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap03.html#tag_03_330
> ("Scheduling" and "Scheduling Allocation Domain")
>
> http://www.opengroup.org/onlinepubs/007904975/functions/xsh_chap02_08.html#tag_02_08_04
> ("Process Scheduling"...)
>
> http://www.opengroup.org/onlinepubs/007904975/xrat/xsh_chap02.html#tag_03_02_08_16
> ("Process Scheduling"...)
>
> *if you really need it* <?>
>
Scheduling policy has little or nothing to do with forward progress; you shouldn't
confuse the two. Basically, forward progress for a thread means that the probability
of the thread advancing past a given point increases with time. It's a rather
necessary property if you want to prove that a program actually terminates or
finishes. Pthreads, and Java especially, don't guarantee forward progress. This is why
they both emphasize a cooperative processing model rather than the competitive processing
model. Sometimes people find this out the hard way when they discover that pthread locks
aren't always FIFO or "fair" locks.
Joe Seigh
Given: SCHED_FIFO (the same scheduling allocation domain of size 1);
Two "worker" threads: A-HI.PRTY, B-LO.PRTY; SAME "JOB"; Both
created/started by another thread: C-HI+.PRTY, which simply
exits after launching A and B.
Question: Which thread (A or B) is going to complete the job first
(make "more" 'forward progress' from a point of C-thread
termination 'as increasing with time')?
[...]
> Sometimes people find this out the hard way when they discover that pthread locks
> aren't always FIFO or "fair" locks.
Been there, done that! ;-) But what does this have to
do with "forward progress" under *priority scheduling*?
regards,
alexander.
I've had discussions with my colleagues about this before--in fact,
some heated discussions ;-) .. I will try to give my point of view
here, in favor of using synchronization in this case.
A computer is an extremely complex piece of machinery. Computers
nowadays have many processors, several levels of caches, and other
features that enhance performance, and often make them more complex.
Because of cache consistency/coherency issues, and the fact that there
are different models of cache coherency, on an SMP machine simply
spinning like you do above may not give you the results you want.
The most compelling reason, IMHO, to use mutexes when accessing/
modifying shared memory is the fact that mutexes *guarantee* atomicity,
using hardware-provided instructions. The fact is that you don't know
what kind of hardware you're running on top of, so why make assumptions?
Why not use the mechanism provided by the hardware to be 100% sure? You
will make your application more predictable.
--
Joshua Jones :: jajones(at)cc.gatech.edu :: http://www.intmain.net
College of Computing @ Georgia Tech, Atlanta, Georgia, US
"Quotes in .sigs are useless." -- Me
If you needed atomicity this might be a consideration, but you don't
need atomicity in this case.
Joe Seigh
**if** a thread makes forward progress, then you may infer *somewhat* from the scheduling
policy the *relative* forward progress the threads may make. A scheduling policy does not
guarantee forward progress. And, AFAIK, pthreads has no forward progress guarantees whatsoever.
This is why they emphasize the cooperative processing model. Didn't you ever wonder why
they emphasize this so much?
Specifically, this lack of a forward progress guarantee is noticeable in the "fairness" of
some lock implementations. Under some circumstances, a thread can block indefinitely trying
to acquire a lock.
Generally, this means you can't prove that a thread won't indefinitely block trying to acquire
a lock. "highly unlikely" isn't the same as "never" as far as proofs are concerned.
Anyway, I'll agree that never acquiring a lock is highly unlikely if you'll agree that never
seeing an updated value in shared memory is highly unlikely. And I'm being more than fair
here, since we've seen examples of noticeable delays in acquiring a lock and no
reported delays in seeing updated memory values.
I'll also agree not to posit hypothetical pathological pthread implementations if you agree not to
posit hypothetical pathological shared memory implementations.
Joe Seigh
This is silly. FIFO locks don't guarantee progress, either; not of anything
remotely useful, anyway. They just guarantee inefficiency and unproductive
context switches. No synchronization or scheduling mechanism or policy can
ever guarantee that an application will (normally) terminate; that's up to
the application designer. (And it's nearly impossible except in an
embedded, monolithic application.) Threads share resources. All resources,
at a fine granularity. There's no way that correctness/progress can be
externally imposed. THAT'S why POSIX emphasizes cooperation. Threads that
compete can only hurt the entire application. People persist in thinking of
threads within a process as independent "mini applications". That's
absolutely wrong, and a certain path to doom and destruction. Threads are
nothing but asynchronous procedure calls; if they mess with each other,
they mess with themselves.
> Specifically, this lack of a forward progress guarantee is noticeable in
> the "fairness" of some lock implementations. Under some circumstances, a
> thread can block indefinitely trying to acquire a lock.
This is true only if either the function of the thread was redundant in the
first place, or if the application was inappropriately designed. Again,
threads share at such a fine granularity that they cannot effectively
compete for resources. If one thread hogs the heap, or locks stdout, the
others are hosed. That's life, and it's an important part of the threaded
programming model. If you don't LIKE that, that's fine; but the solution is
to avoid threads and use something heavier weight and more independent,
such as processes with small regions of shared memory. (Or, better, go for
total independence and use some form of IPC messaging.)
> Generally, this means you can't prove that a thread won't indefinitely
> block trying to acquire a lock. "highly unlikely" isn't the same as
> "never" as far as proofs are concerned.
Absolutely. Unless, of course, you've designed your application to control
contention so that you CAN reason about the progress. Again, this is
probably only possible in a monolithic application, since you can't prove
that glibc's malloc() will always return in a bounded time. (For example.)
Still, expecting the thread model to guarantee progress of your thread is
exactly like expecting the compiler to guarantee that function A will make
progress... even when it's called function B that doesn't return. The
dependencies may be less obvious, but they're just as real.
> Anyway, I'll agree that never acquiring a lock is highly unlikely if
> you'll agree that never
> seeing an updated value in shared memory is highly unlikely. And I'm
> being more than fair here since we've seen examples of noticeable delays in
> acquiring a lock and no reported delays in seeing updated memory
> values.
Nobody's been arguing that you should presume updates will never be seen.
Frankly, it's highly unlikely that you could ever go more than one hardware
clock tick in each thread before they'll have synchronized views, because
the interrupts almost have to do that on any likely architecture. But
there's a really big distance between telling people they ought to count on
side effects like that (without understanding, in most cases, exactly what
their needs are) and explaining to them that there ARE no real guarantees
even though "probably", "most of the time", they'll get something
reasonable. It's all about education. Because even if the cheap shortcut
approximation gets them what they need THIS time, it might not NEXT time.
> I'll also agree not posit hypothetical pathological pthread
> implementations if you agree not to posit hypothetical pathological shared
> memory implementations.
There's a useful middle ground between short term rationalizations and
flights of philosophical fancy. Granted, it's not always easy to find. But
there are books full of things said by intelligent people that made a lot
of sense at the time but look really foolish with a bit of historical
perspective. So, sure, MAYBE nobody ever would or ever could build a useful
computer where unsynchronized memory references could ever remain
inconsistent between processors for any appreciable period of time. YOU
want to be the one with your name on that quote in 5 or 10 years, after
someone's figured out how and why to do it? Go ahead. It wasn't that long
ago nobody could imagine how anyone could possibly support a "home
computer"; much less why anyone would ever want one.
> David Butenhof wrote:
>>
>> Joe Seigh wrote:
>> >
>> > Atomicity shouldn't matter in the case of a simple boolean. Whatever
>> > predicate you test the boolean with will eventually toggle.
>>
>> Sorry, but atomicity IS an issue. Very simply, without synchronization,
>> you don't know what you're getting and therefore cannot interpret it.
>> Atomicity is a particular low-level kind of synchronization guarantee
>> that can be used profitably if you're careful. Without it, you're
>> counting on much lower level (less well defined, less predictable, and
>> less direct) forms of synchronization in the hardware.
>
> The point is the predicate works fine even in the presence of
> indeterminate values. Suppose your boolean is defined for two values, say
> A and B, which can be anything as long as they're distinct from each
> other. And say A is the initial value set before the work thread is
> started, and B is the "halt" value. So the worker thread basically uses
> either
>
> while (flag == A) {...}
> or
> while (flag != B) {...}
>
> No matter what intermediate values the flags may have, the predicates
> work correctly and terminate the loop.
If you narrow the scope sufficiently, yes, you can overlook intermediate
states, "memory leaks", word tearing, reordering, and so forth. In many
cases, if you're careful enough in designing your predicates and
implementing the tests, and if memory is properly laid out, you can write
code that's moderately portable and is relatively immune to the dangers.
However, a simple answer "do this, ignore that" does beginners an injustice,
because without full understanding of what they're ignoring and why they're
doing this, you risk leading them into "obvious" (but dangerous)
generalizations of that shortcut. "If I can do it with 2-valued flags,
surely I can do it with a 3-valued enum. And if that, then why not with a
general integer?" You've pointed them on a path, not just lead them a
single step. You can't expect them to know when to turn back unless they
understand the dangers. The first step is reasonably safe and sturdy; it's
the second step you need to watch out for. And, in most applications, the
value of that first step is so small that it's simply not worth the risk.
David Butenhof wrote:
>
> Joe Seigh wrote:
...
> >
> > **if** a thread makes forward progress, then you may infer *somewhat*
> > from the scheduling policy the *relative* forward progress the threads
> > may make. A scheduling policy does not guarantee forward progress. And,
> > AFAIK, pthreads has no forward progress guarantees whatsoever. This is
> > why they emphasize the cooperative processing model. Didn't you ever
> > wonder why they emphasize this so much?
>
> This is silly. FIFO locks don't guarantee progress, either; not of anything
> remotely useful, anyway. They just guarantee inefficiency and unproductive
> context switches. No synchronization or scheduling mechanism or policy can
> ever guarantee that an application will (normally) terminate; that's up to
> the application designer. (And it's nearly impossible except in an
> embedded, monolithic application.) Threads share resources. All resources,
> at a fine granularity. There's no way that correctness/progress can be
> externally imposed. THAT'S why POSIX emphasizes cooperation. Threads that
> compete can only hurt the entire application. People persist in thinking of
> threads within a process as independent "mini applications". That's
> absolutely wrong, and a certain path to doom and destruction. Threads are
> nothing but asynchronous procedure calls; if they mess with each other,
> they mess with themselves.
Forward progress at a specific program point isn't the same as overall
program progress, usefulness, or efficiency. I agree with you on the latter.
When I referred to FIFO locks as guaranteeing forward progress, I wasn't
implying that they were efficient especially in terms of overall system
resource utilization. I'm well aware of why some implementations choose
to use "darwinian" mechanisms. The lack of a forward progress guarantee
is at least convenient in this regard.
>
> > Specifically, this lack of a forward progress guarantee is noticeable
> > in the "fairness" of some lock implementations. Under some
> > circumstances, a thread can block indefinitely trying to acquire a lock.
>
> This is true only if either the function of the thread was redundant in the
> first place, or if the application was inappropriately designed. Again,
> threads share at such a fine granularity that they cannot effectively
> compete for resources. If one thread hogs the heap, or locks stdout, the
> others are hosed. That's life, and it's an important part of the threaded
> programming model. If you don't LIKE that, that's fine; but the solution is
> to avoid threads and use something heavier weight and more independent,
> such as processes with small regions of shared memory. (Or, better, go for
> total independence and use some form of IPC messaging.)
I didn't say I disliked it. "fairness" was the old terminology before it
was realized that forward progress was a better term.
I'm arguing a special case here. Nobody is arguing any generalizations
from it. I'm also not arguing that there will never be an implementation
where it doesn't hold. Just that it's somewhat unlikely, just as unlikely
that a pthreads implementation will exhibit really pathological behavior
with regards to forward progress. Maybe less so.
Joe Seigh
What if the compiler produces:
r0 = flag;
jump PC + r0 * (endloop-startloop)
startloop:
...
endloop:
or, rather more plausibly:
cmp #0, flag;
jumpeq startloop;
cmp #1, flag;
jumpeq endloop;
trap #MEM_CORRUPT
or any other sequence of instructions which assumes that the only
values it will find in a variable are those that can legitimately be
put into it.
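Or, for that matter, the compiler may simply hoist the read out of the
loop when nothing in the loop (as far as it can tell) writes the flag.
A sketch in C:

    int flag;                 /* no volatile, no synchronization */

    while (!flag)             /* compiler may legally load 'flag' once... */
        do_work();

    /* ...and transform the loop into: */
    if (!flag)
        for (;;)
            do_work();        /* the flag is never re-read */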
The reason for having rules specifying the correct way to do things
like this is so that you can write a correct program regardless of
whether or not you can imagine every possible implementation, which
is a very difficult thing to do in general.
--
Eppur si muove