2013/4/15 Kevin Gillette <extempor...@gmail.com>:
> Indeed, if any kind of synchronization occurs on shared-state access (and
> that shared state is modified such that it is consistent and valid on each
> access), then the system as a whole is deterministic, as well as each
> independent computation being deterministic. Unless you're trying to create
> a system that discovers the meaning of life by accident, or are using
> qubits, then you really want to use synchronization, therefore making your
> software, whether or not anything happens in parallel, fully deterministic.
Synchronization doesn't guarantee determinism, although it might
appear to. The problem is that there exist many cases in which
expected deterministic behavior is impossible due to hardware,
operating system, operating environment, etc. Indeed, even fair
synchronization primitives (for example, fair spinlocks such as ticket
locks, MCS locks, and other queue-based locks) don't guarantee an
evenly distributed workload. So even with synchronization, you don't
know who is waiting and who is actually inside the critical section.
And if your synchronized path makes syscalls, if your locks put you to
sleep, or if pre-emption exists, all bets are off.
Even assuming syscalls, sleep locks, and pre-emption all work fine
every time, synchronization by itself doesn't create determinism in a
non-deterministic system. I think you're using "deterministic" in a
way that differs from how I typically see it defined. In a
deterministic system, a given input always results in
the same output and does so by traversing through the same sequence of
states. Synchronization does not guarantee sequence (implicitly or
explicitly). You can have a deterministic parallel system only when
you can guarantee a total order of the intermediate states. Simply
synchronizing around a machine that contains valid states does not do
this. Consider:
    var (
        mu          sync.Mutex
        accumulator int
        wg          sync.WaitGroup
    )

    func threadAdd(n int) {
        defer wg.Done()
        mu.Lock()
        accumulator += n
        mu.Unlock()
    }

    func add3(a, b, c int) {
        wg.Add(3)
        go threadAdd(a)
        go threadAdd(b)
        go threadAdd(c)
        wg.Wait()
    }
In this system, add3(5, 7, 11) will always leave accumulator with the
value 23. But in between, it may have the states (11, 18), (5, 12),
(7, 12), etc. And this can be a problem if you're dealing with a
system that expects a total order. The above example is too simple to
demonstrate this, but it is easy to conceive of a system where a
demonstrate this, but it is easy to conceive of a system where a
consistent state isn't the same thing as a valid state. And an invalid
(but consistent) state breaks determinism if it can happen as a
side-effect of parallel scheduling. Simply synchronizing doesn't solve
this. You also have to schedule properly and you have to guarantee
total ordering of state sequence.
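For instance (a contrived sketch; the pair type and its x == y invariant are invented for illustration): give two fields their own mutexes, so every individual access is synchronized and each field is always consistent, yet a concurrently scheduled reader can still observe the invariant broken.

```go
package main

import (
	"fmt"
	"sync"
)

// Intended invariant: x == y. Each field has its own mutex, so every
// access is synchronized and each field on its own is consistent.
type pair struct {
	mx, my sync.Mutex
	x, y   int
}

func (p *pair) bump() {
	p.mx.Lock()
	p.x++
	p.mx.Unlock()
	// A reader scheduled here sees x == y+1: every access so far was
	// synchronized and consistent, but the observed state is invalid.
	p.my.Lock()
	p.y++
	p.my.Unlock()
}

// read locks in the same order as bump (mx, then my) to avoid deadlock.
func (p *pair) read() (int, int) {
	p.mx.Lock()
	p.my.Lock()
	defer p.my.Unlock()
	defer p.mx.Unlock()
	return p.x, p.y
}

func main() {
	p := &pair{}
	p.bump()
	x, y := p.read()
	fmt.Println(x, y) // 1 1 once bump completes; mid-bump reads may not match
}
```

Whether an invalid-but-consistent state is ever observed depends purely on scheduling, which is the sense in which synchronization alone doesn't buy determinism.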
Maybe that's what you meant? Or maybe I'm just being too much of a
pedantic jerk.
--dho
> On Monday, April 15, 2013 9:33:32 AM UTC-6, ron minnich wrote:
>>
>> On Fri, Apr 12, 2013 at 9:26 PM, Jens Alfke <je...@mooseyard.com> wrote:
>> >But any code that uses
>> > parallelism is nondeterministic.
>>
>> Actually, that's not really correct, it is highly dependent on many
>> factors. If enough effort is expended on software, architecture, and
>> hardware, as in e.g. the Blue Gene systems, parallel execution can be
>> as deterministic as you wish. But your statement is true in many cases
>> -- just not all.
>>
>> ron