Thank you Tim, will check it carefully.
-Júlio
@everywhere function fill!(A::SharedArray)
    for idx in Base.localindexes(A)
        A[idx] = rand()
    end
end

function fill_array(m, n)
    A = SharedArray(Float64, (m, n))
    @sync begin
        for p in procs(A)
            @async remotecall_wait(p, fill!, A)
        end
    end
    A
end
fill_array(9, 60)
Hi Ismael,
MPI is for distributed memory; I'm trying to use all the cores of my single workstation with shared memory instead. Thanks for the link anyway.
-Júlio
Hi Sebastian, thanks for sharing your experience parallelizing Julia code. I used OpenMP in the past too; it was very convenient in my C++ codebase. I remember an initiative, OpenACC, that was trying to bring OpenMP and GPU accelerators together; I don't know its current status. It may be of interest to the Julia devs.
-Júlio
template <class Impl>
void TreeLattice<Impl>::stepback(Size i, const Array& values,
                                 Array& newValues) const {
    #pragma omp parallel for
    for (Size j=0; j<this->impl().size(i); j++) {
        Real value = 0.0;
        for (Size l=0; l<n_; l++) {
            value += this->impl().probability(i,j,l) *
                     values[this->impl().descendant(i,j,l)];
        }
        value *= this->impl().discount(i,j);
        newValues[j] = value;
    }
}