shmem_finalize


Palmer, Bruce J

Dec 2, 2025, 4:54:45 PM
to Open MPI Users
Hi,

Not sure if this is the right place for this inquiry, but I have a question about the implementation of OpenSHMEM that is distributed with Open MPI. As far as I can tell from the OpenSHMEM documentation, MPI_Finalize() and shmem_finalize() are relatively independent, and it should be possible to call shmem_finalize followed by MPI_Finalize without incurring an error. However, it looks like the shmem_finalize implementation calls ompi_mpi_finalize, and this causes MPI_Finalize to crash. I can fix this easily in my application, but it seems like I shouldn't have to.

Bruce

Pritchard Jr., Howard

Dec 3, 2025, 2:11:30 PM
to us...@lists.open-mpi.org

Hi Bruce,

Could you describe the crash in more detail? Does the application emit some sort of error message, or is there a segmentation fault and maybe a traceback?

Also, which version of Open MPI/OSHMEM are you using?

Howard

Palmer, Bruce J

Dec 3, 2025, 4:02:09 PM
to us...@lists.open-mpi.org
It appears to complain about a call to MPI_Comm_free being made after MPI_Finalize. I assume this is occurring because MPI_Finalize itself calls MPI_Comm_free (or its internal equivalent), and it gets triggered because our code calls shmem_finalize followed by MPI_Finalize. My understanding is that this ordering should be allowed.

Bruce


Pritchard Jr., Howard

Dec 4, 2025, 5:35:11 PM
to us...@lists.open-mpi.org

Hi Bruce,

I think this is a fuzzy area: whether or not an app can or should call the finalization methods for both OpenSHMEM and MPI.

The way oshmem is implemented, it's tightly coupled to Open MPI for startup and shutdown.

The best I can suggest is to invoke shmem_finalize first, then call MPI_Finalized(&flag); if flag comes back false, call MPI_Finalize, otherwise skip it.
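
In code, that guard would look roughly like the sketch below. It only illustrates the suggested call ordering; it isn't taken from the oshmem sources.

#include <mpi.h>
#include <shmem.h>

int main(int argc, char *argv[])
{
    MPI_Init(NULL, NULL);
    shmem_init();

    /* ... application work ... */

    shmem_finalize();

    /* shmem_finalize may already have torn MPI down, so only
       finalize MPI if that has not happened yet. */
    int flag = 0;
    MPI_Finalized(&flag);
    if (!flag) {
        MPI_Finalize();
    }
    return 0;
}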

Unfortunately I don’t think there’s any standardization here.

Palmer, Bruce J

Dec 5, 2025, 11:37:10 AM
to us...@lists.open-mpi.org
Yeah, that seems to be the only safe bet. 

Tomislav Janjusic US

Dec 5, 2025, 6:51:30 PM
to Open MPI users, Palmer, Bruce J
Hey Bruce, 

Do you have a minimal reproducer?
We tried with:

#include <mpi.h>
#include <shmem.h>

int main(int argc, char *argv[])
{
    MPI_Init(NULL, NULL);
    shmem_init();
    shmem_finalize();
    MPI_Finalize();

    return 0;
}

And we cannot reproduce it, though we do see some issues if we reorder the inits.
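
Assuming "reorder the inits" means calling shmem_init before MPI_Init, that sequence would look something like the following sketch (only an illustration of the ordering, not a recommended pattern):

#include <mpi.h>
#include <shmem.h>

int main(int argc, char *argv[])
{
    /* init order swapped relative to the reproducer above */
    shmem_init();
    MPI_Init(NULL, NULL);

    shmem_finalize();
    MPI_Finalize();

    return 0;
}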

Leonid Genkin

Dec 5, 2025, 6:51:33 PM
to Open MPI users, Palmer, Bruce J
Hi

Do you have a minimal & stable reproducer? What MPI version are you using?

I was unable to reproduce with:

#include <mpi.h>
#include <shmem.h>

int main(int argc, char *argv[])
{
    MPI_Init(NULL, NULL);
    shmem_init();
    shmem_finalize();
    MPI_Finalize();

    return 0;
}

Palmer, Bruce J

Dec 5, 2025, 6:51:37 PM
to Tomislav Janjusic US, Open MPI users
I tried running your code below on a workstation running Ubuntu 22.04.2, on 4 processes. I'm using openmpi-5.0.8. I get an error saying that the MPI_Finalize() function was called after MPI_FINALIZE was invoked.

MPI was configured with

configure --enable-oshmem --with-ucx=/my/local/UCX/install --prefix=/my/local/MPI/install

Bruce