Open MPI users
Conversations (1–30 of 30)
Sheppard, Raymond W; Pritchard Jr., Howard · 4 messages · Jan 14
OpenMPI issue reported to SPEC HPG group. Thoughts?
Hi Ray, I think you're correct about opening a new issue. I added a link to this email chain on …
Ryan Novosielski · Jan 12
Compiling OpenMPI 5.0.9 to be flexible with libevent dependency?
Hi there. We are in the process of migrating to RHEL9 and are working through how we plan to handle …
Cooper Burns; Edgar Gabriel · 2 messages · 12/10/25
Long file paths causing segfault
Thank you for the bug report. Could you please create an issue on here: https://github.com/open-mpi/ …
Collin Strassburger; …; George Bosilca · 16 messages · 12/10/25
Multi-host troubleshooting
I did more digging and you are correct; after updating another node (4), nodes 1 and 4 are happy to …
Palmer, Bruce J; …; Leonid Genkin · 8 messages · 12/5/25
shmem_finalize
I tried running your code below on a workstation with an Ubuntu 22.04.2 OS on 4 processes. I'm …
Pritchard Jr., Howard · 11/16/25
Open MPI BOF at SC25
Hi Open MPI users, Open MPI will be hosting our State of the Union Birds of a Feather (BOF) Session …
Konstantin Tokarev · 8/18/25
How to use only one NUMA node with mpirun?
Hello, I wonder what is the correct way to tell mpirun that all processes should be run on specific …
Kook Jin Noh; Gilles Gouaillardet · 2 messages · 8/2/25
An error while installing openmpi 5.0.1
Cliff, Is there any reason why you are not installing the latest version of the v5.0 series (eg 5.0.8 …
jde...@intec.unl.edu.ar; Gilles Gouaillardet · 5 messages · 7/30/25
Error when building openmpi-5.0.8.tar.gz
Dear Gilles, > Gilles Gouaillardet <gilles.go...@gmail.com> wrote: > > Jorge …
Achilles Vassilicos; George Bosilca · 9 messages · 7/5/25
Openmpi5.0.7 causes fatal timeout on last rank
This works: $ mpirun -x UCX_TLS=ib,rc -x UCX_NET_DEVICES=mlx4_0:1 -x MXM_RDMA_PORTS=mlx4_0:1 ./ …
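For context, mpirun's repeated -x flags export environment variables such as the UCX settings quoted above to every launched rank; a minimal sketch of that pattern, where my_app stands in for whatever MPI binary was actually being run:

    # my_app is a placeholder for the actual MPI executable
    mpirun -np 4 -x UCX_TLS=ib,rc -x UCX_NET_DEVICES=mlx4_0:1 ./my_app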
Shruti Sharma; …; George Bosilca · 4 messages · 6/4/25
Horovod Performance with OpenMPI
Please ignore my prior answer, I just noticed you are running single-node. In addition to …
Udo Ziegenhagel · 6/4/25
unsubscribe
-- Kind regards, Udo Ziegenhagel --------------------------------------- Dipl.-Phys. Udo …
Mike Adams; …; Tomislav Janjusic US · 9 messages · 6/3/25
CUDA-Aware on OpenMPI v4 with CUDA IPC buffers
add --mca pml_base_verbose 90 and you should see something like this: [rock18:3045236] select: component …
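For reference, that verbosity knob is an MCA parameter given on the mpirun command line; a minimal sketch, with my_app as a placeholder binary, that makes each process report which PML component it selects:

    # my_app is a placeholder; the output should include "select: component ..." lines
    # naming the chosen PML (for example ucx or ob1)
    mpirun --mca pml_base_verbose 90 -np 2 ./my_app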
Miroslav Iliaš; Jeff Squyres (jsquyres) · 7 messages · 5/29/25
openmpi5.0.7 with Intel2021 can not compile simple MPI program, error #6633
No, it is not fixed...when I get something, I will report here. M. On Wednesday, 28 May 2025 at 23:49: …
McCoy Smith; Jeff Squyres · 6 messages · 5/22/25
Open MPI license submitted for Open Source Initiative approval
Looks like Brian Barrett has chimed in, as an original developer of Open MPI: Josh – I'm one of …
Christoph Niethammer · 5/7/25
catching SIGTERM in Open MPI 5.x
Hello, I am trying to implement some graceful application shutdown in case mpirun receives a SIGTERM.
Suyash; Brice Goglin · 2 messages · 4/25/25
hwloc 2.12.0 received invalid information from the operating system.
Hello Indeed the L1d and L1i affinity of some cores is wrong. L1 are private to each core, hence …
Anna R.; George Bosilca · 2 messages · 4/23/25
MPI Tool Information Interface (MPI_T), details on collective communication
Anna, The monitoring PML tracks all activity on the PML but might choose to only expose that one that …
Mathieu Westphal; Pritchard Jr., Howard · 2 messages · 4/11/25
Supported MPI standard?
Hi Mathieu The statement at www.open-mpi.org is incorrect. It should read 3.1 Howard Get Outlook for …
Natasha Evans; …; Jeff Squyres (jsquyres) · 3 messages · 4/10/25
OpenMPI-5.0.7 libopen-rte.so.0 missing
Gilles raises a good point: if you think you're using Open MPI v5.0.x, but you're somehow not …
Sangam B; …; Gilles Gouaillardet · 8 messages · 3/31/25
OpenMPI-5.0.5 & 5.0.6 build failure: error: expected expression before ‘struct’
Nice catch Rainer! I absolutely forgot to include the btl/self component. Cheers, Gilles On Mon, Mar …
Matteo Guglielmi · 3/31/25
unsubscribe
unsubscribe -------- Matteo Guglielmi | DALCO AG | Industriestr. 28 | 8604 Volketswil | Switzerland | …
Thompson, Matt (GSFC-610.1)[SCIENCE SYSTEMS AND APPLICATIONS INC] · 3/25/25
Oddity with Open MPI 5, Multiple Nodes, and Malloc?
All, The subject line of this email is vague, but that's only because I'm not sure what is …
Reuti; Pritchard Jr., Howard · 2 messages · 3/24/25
OpenMPI with “Fort integer size: 8” for Dirac*
Hi Reuti, Sorry this missed 5.0.7. https://github.com/open-mpi/ompi/pull/13159 Hopefully it will show …
Sangam B · 3/20/25
OpenMPI-5.0.7 build with ucx, cuda & gdr_copy
Hi Team, My application fails with following error [compiled with openmpi-5.0.7, ucx-1.18.0, cuda- …
Saurabh T · 2/27/25
Re: Avoiding localhost as rank 0 with openmpi-default-hostfile
I asked this before but did not receive a reply. Now with openmpi 5, I tried doing this with prte- …
Joshua Strodtbeck; …; George Bosilca · 3 messages · 2/17/25
Disable PMPI bindings?
I'm not sure if I correctly understand the compiler complaint here, but I think it is complaining …
Sangam B; …; Patrick Begou · 7 messages · 2/14/25
OpenMPI-5.0.6: -x LD_LIBRARY_PATH not able to load shared objects
Thanks Gilles & Patrick. As Gilles mentioned, while OpenMPI spawns prted daemons on compute nodes …
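For context, -x only exports variables to the application processes, while the prted daemons that mpirun launches on the compute nodes must already be able to find their own Open MPI libraries; a commonly suggested pattern (a sketch with placeholder paths and names, not necessarily the fix adopted in this thread) is to also pass the installation prefix:

    # /opt/openmpi-5.0.6, hosts, and my_app are placeholders
    mpirun --prefix /opt/openmpi-5.0.6 -x LD_LIBRARY_PATH \
        -np 8 --hostfile hosts ./my_app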
Chandran, Arun; …; Jeff Hammond · 12 messages · 1/24/25
Regarding the usage of MPI-One sided communications in HPC applications
NWChem uses MPI RMA via ARMCI-MPI, which is a separate repo (https://github.com/pmodels/armci-mpi). …
Jeff Squyres (jsquyres) · 1/5/25
Migrated Open MPI mailing lists to Google Groups
The Open MPI community would like to publicly thank the New Mexico Consortium for hosting our GNU …