Hi Andrew,
> * Does foreach(serial) guarantee that only one process executes
> the loop?
Not "process", "thread" i.e. "serial" guarantees that only one thread
executes the loop (typically when using OpenMP). MPI processes are
independent from one another, and so each MPI process will execute the
loop in parallel (whether "serial" is specified or not).
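To illustrate, here is a minimal sketch (the field s[] and the
accumulator are just for illustration): with OpenMP, "serial" avoids a
data race on the plain (non-reduction) accumulator by running the loop
on a single thread, but with MPI each rank still sums only its own
local cells.
scalar s[];
double local = 0.;
foreach(serial) // one OpenMP thread, but every MPI rank runs the loop
  local += s[]; // with MPI, 'local' is only the per-rank sum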
> * Is it possible that the second serial loop begins before all
> parallel processes have completed in the first loop?
If by "first loop" you mean the loop containing the reduction()
operations, then no, the second loop will only be executed on each
process after the results from the first loop have been "reduced".
> * Should I use any MPI_Barrier() to ensure all the processes
> finish before starting the serial part?
No, see previous answer.
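As a sketch of the pattern (the field s[] and the output are just for
illustration): the reduction() clause is what combines the per-process
results and makes them available on every process before the next loop
starts, which is why no explicit MPI_Barrier() is needed.
scalar s[];
double sum = 0.;
foreach(reduction(+:sum)) // reduction() combines 'sum' across threads
  sum += s[];             // and MPI processes
// every process already holds the global 'sum' here: no MPI_Barrier()
foreach(serial) // single thread with OpenMP; each MPI rank still runs it
  fprintf (stderr, "%g %g %g\n", x, y, s[]/sum);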
> * More generally, how does Basilisk handle process synchronization
> in this context when using MPI?
Synchronisation is done by reduction() (as above) and/or by boundary
conditions. For example:
scalar a[], b[];
foreach()
  a[] = x*y;
// synchronisation is necessary here since boundary conditions
// must be applied on a[] before computing b[]
foreach()
  b[] = (a[1] - a[-1])/Delta;
> Any insights on using foreach(serial) properly with MPI would be
> greatly appreciated.
As mentioned above, foreach(serial) is irrelevant for MPI.
hope this helps,
Stephane