Calling a serial NLBVP solve within a parallel job: running into ValueError on mesh

Ben Brown

Feb 17, 2018, 1:09:37 AM
to dedalus-users
I have a script where I'd like to call a serial NLBVP solve within a larger parallel job.  I'm solving Lane-Emden equations for a background structure.  My problem is that the serial solve is seeing the larger processor mesh of the parallel job and is throwing a ValueError on mesh:

ValueError: Mesh must have lower dimension than domain.


The serial solve is in "lane_emden()".  Within the main parallel script, I tried the following:

# comm and rank come from the main script's MPI setup (e.g. comm = MPI.COMM_WORLD; rank = comm.rank)
if rank == 0:
    from structure import lane_emden
    structure = lane_emden(n_rho=1, m=1.5)
else:
    structure = None
structure = comm.bcast(structure, root=0)  # bcast returns the broadcast object

where structure.py has a lane_emden() function, based on the dedalus/examples/bvp script, like the following:

import numpy as np
from dedalus import public as de

def lane_emden(n_rho=5, m=1.5, nr=128):
    # Build domain
    r_basis = de.Chebyshev('r', nr, interval=(0, 1), dealias=2)
    domain = de.Domain([r_basis], np.float64)
    <... rest of dedalus/examples/bvp/1d_lane_emden ... >

How do I modify the domain call in lane_emden(), or what else can I do, to have lane_emden() run in serial, either on the rank == 0 core, or on all cores individually?  I've tried setting mesh=None or mesh=[1] in the lane_emden() domain call; neither worked.

Thanks in advance,
--Ben

Keaton Burns

Feb 17, 2018, 1:14:10 AM
to dedalu...@googlegroups.com
Hi Ben,

To do this, try passing MPI.COMM_SELF as the comm keyword on the NLBVP domain. 
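
For concreteness, a minimal sketch of that change against the v2 snippet above (same basis setup as in the question; only the Domain call gains the comm keyword):

import numpy as np
from mpi4py import MPI
from dedalus import public as de

def lane_emden(n_rho=5, m=1.5, nr=128):
    # Build the domain on COMM_SELF so the solve stays local to each core,
    # independent of the size of the parallel job's communicator.
    r_basis = de.Chebyshev('r', nr, interval=(0, 1), dealias=2)
    domain = de.Domain([r_basis], np.float64, comm=MPI.COMM_SELF)
    # ... rest of the Lane-Emden problem setup and solve ...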

-Keaton

Ben Brown

Feb 17, 2018, 1:21:33 AM
to dedalus-users
Keaton,
     That worked great!  Thank you!

As one note for anyone else: my attempt to bcast() the solution failed, owing to a pickle error on the Chebyshev bases.

So the final solution was to drop the rank == 0 check and the subsequent comm.bcast, and have each core compute the full NLBVP on its own:

from structure import lane_emden
structure = lane_emden(n_rho=1, m=1.5, comm=MPI.COMM_SELF)

and then within lane_emden():

def lane_emden(n_rho=5, m=1.5, nr=128, comm=None):

    # Build domain
    r_basis = de.Chebyshev('r', nr, interval=(0, 1), dealias=2)
    domain = de.Domain([r_basis], np.float64, comm=comm)

It's ripping along on 256 cores now, and producing the same results as the serial single-core run.  Thanks!
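
If you'd still rather broadcast from rank 0, one way around the pickle error (a sketch only, assuming a hypothetical lane_emden variant that returns the solved profile as plain NumPy arrays rather than Dedalus objects) would be:

from mpi4py import MPI

comm = MPI.COMM_WORLD

if comm.rank == 0:
    from structure import lane_emden
    # Hypothetical variant: returns plain NumPy arrays (e.g. the radial grid
    # and the solved profile), which pickle cleanly, so the broadcast succeeds.
    structure = lane_emden(n_rho=1, m=1.5, comm=MPI.COMM_SELF)
else:
    structure = None
structure = comm.bcast(structure, root=0)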

--Ben


Ju Dith

Jan 6, 2023, 11:04:54 AM
to Dedalus Users
Hello,

I am having much the same problem, but somehow using comm=MPI.COMM_SELF does not solve it in my case. I am using Dedalus v3; do I need to specify the mesh there?
I have to solve each problem in serial, because it's all in 1D.
For example, if I run my code with two cores (mpiexec -n 2 python myfile.py), I get the error
ValueError: Mesh ([2]) must have lower dimension than distributor (1)

I would appreciate any help!

Best,
Judith


Keaton Burns

Jan 8, 2023, 10:56:52 AM
to dedalu...@googlegroups.com
Hi Judith,

Please send a minimal working example script showing this behavior. If you do not specify a mesh, it should default to the size of the communicator, which should always be 1, not 2, for MPI.COMM_SELF.
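
For reference, a minimal Dedalus v3 sketch of the serial setup (assuming the 1D Coordinate/Distributor/Chebyshev interface; the key point is passing comm=MPI.COMM_SELF to the Distributor, which then defaults to a one-core mesh):

import numpy as np
from mpi4py import MPI
import dedalus.public as d3

# Build the 1D problem on COMM_SELF so each core solves its own serial copy,
# independent of how many ranks the job was launched with.
rcoord = d3.Coordinate('r')
dist = d3.Distributor(rcoord, dtype=np.float64, comm=MPI.COMM_SELF)
r_basis = d3.Chebyshev(rcoord, size=128, bounds=(0, 1), dealias=2)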

-Keaton


Ju Dith

Jan 11, 2023, 9:47:59 AM
to Dedalus Users
Hi Keaton,

I just noticed it was a silly mistake on my side. It does work, actually.

Thanks a lot!
Judith
