max := array first.
array do: [ :each | max < each ifTrue: [ max := each ]].
It can easily be parallelised by splitting the loop into a number of smaller loops.
But you then lose most of the benefit of parallelism to contention:
every worker reads and writes the shared max value, which hurts more and
more as the number of cores grows.
My guess is that the following code will beat the above, because it
eliminates that contention:
max := array first.
ranges := array getSlicedRanges: System optimalNumberOfCores.
intermediateMaximums := Array new: ranges size.
intermediateMaximums atAllPut: max.
ranges parallelWithIndexDo: [ :i :range |
    range start to: range end do: [ :j |
        (intermediateMaximums at: i) < (array at: j)
            ifTrue: [ intermediateMaximums at: i put: (array at: j) ]]].
intermediateMaximums do: [ :each | max < each ifTrue: [ max := each ]].
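For anyone who wants to run the idea, here is the same two-phase reduction sketched in Python (the Smalltalk selectors above, like getSlicedRanges: and parallelWithIndexDo:, are assumed to exist, so this is only an illustrative stand-in): each worker computes a local maximum over its own slice, and only the small array of per-slice results is reduced at the end, so no shared value is touched during the main loop.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_max(array):
    """Two-phase reduction: per-slice local maxima, then a final pass."""
    n_workers = os.cpu_count() or 1
    chunk = max(1, len(array) // n_workers)
    # Split the input into contiguous slices, roughly one per core.
    slices = [array[i:i + chunk] for i in range(0, len(array), chunk)]

    def local_max(sl):
        # Each worker writes only its own local variable: no contention.
        m = sl[0]
        for x in sl:
            if m < x:
                m = x
        return m

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        intermediate = list(pool.map(local_max, slices))

    # Final sequential reduction over the small intermediate array.
    result = intermediate[0]
    for m in intermediate:
        if result < m:
            result = m
    return result
```

(Note that in CPython the GIL limits the actual speedup for pure-Python loops; the sketch only demonstrates the contention-free structure, not a benchmark.)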
Now, let me ask: is it possible to write a compiler sophisticated
enough that, given the simple code in the first example, it generates
the more complex code of the second, which is better suited to
multicore? If compiler writers ever manage that, then the rest of us
will be out of a job, because they could adapt such a compiler to many
other computing problems besides parallelism :)