Open MPI users
Conversations 1–24 of 24

Konstantin Tokarev · Aug 18
How to use only one NUMA node with mpirun?
Hello, I wonder what is the correct way to tell mpirun that all processes should be run on specific
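
A minimal sketch of one common way to do this (an assumption, not an answer quoted from the thread; ./my_app and the NUMA node number are placeholders):

    # Pack 4 ranks onto a single NUMA domain and bind each rank to it
    mpirun --map-by ppr:4:numa --bind-to numa -np 4 ./my_app
    # Alternative on Linux: constrain a single-node job with numactl
    numactl --cpunodebind=0 --membind=0 mpirun -np 4 ./my_app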

Kook Jin Noh, Gilles Gouaillardet · 2 messages · Aug 2
An error while installing openmpi 5.0.1
Cliff, Is there any reason why you are not installing the latest version of the v5.0 series (eg 5.0.8

jde...@intec.unl.edu.ar, Gilles Gouaillardet · 5 messages · Jul 30
Error when building openmpi-5.0.8.tar.gz
Dear Gilles, > Gilles Gouaillardet <gilles.go...@gmail.com> wrote: > > Jorge

Achilles Vassilicos, George Bosilca · 9 messages · Jul 5
Openmpi5.0.7 causes fatal timeout on last rank
This works: $ mpirun -x UCX_TLS=ib,rc -x UCX_NET_DEVICES=mlx4_0:1 -x MXM_RDMA_PORTS=mlx4_0:1 ./
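
A hedged restatement of the working invocation quoted above, with a placeholder executable (./a.out; the snippet truncates the real name) and the ucx PML requested explicitly as an optional extra:

    # Export the UCX/MXM device selections to all ranks and ask for the ucx PML
    mpirun --mca pml ucx \
           -x UCX_TLS=ib,rc -x UCX_NET_DEVICES=mlx4_0:1 -x MXM_RDMA_PORTS=mlx4_0:1 \
           ./a.out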

Shruti Sharma, …, George Bosilca · 4 messages · Jun 4
Horovod Performance with OpenMPI
Please ignore my prior answer, I just noticed you are running single-node. In addition to

Udo Ziegenhagel · Jun 4
unsubscribe
-- Kind regards Udo Ziegenhagel --------------------------------------- Dipl.-Phys. Udo

Mike Adams, …, Tomislav Janjusic US · 9 messages · Jun 3
CUDA-Aware on OpenMPI v4 with CUDA IPC buffers
add --mca pml_base_verbose 90 And should see something like this: [rock18:3045236] select: component
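
A sketch of the debugging step suggested in that snippet (executable name and rank count are placeholders):

    # Raise PML selection verbosity so mpirun reports which component
    # (e.g. ob1 or ucx) each process selects
    mpirun --mca pml_base_verbose 90 -np 2 ./a.out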

Miroslav Iliaš, Jeff Squyres (jsquyres) · 7 messages · May 29
openmpi5.0.7 with Intel2021 can not compile simple MPI program, error #6633
No, it is not fixed...when I get something, I will report here. M. On Wednesday, 28 May 2025 at 23:49:

McCoy Smith, Jeff Squyres · 6 messages · May 22
Open MPI license submitted for Open Source Initiative approval
Looks like Brian Barrett has chimed in, as an original developer of Open MPI: Josh – I'm one of

Christoph Niethammer · May 7
catching SIGTERM in Open MPI 5.x
Hello, I am trying to implement some graceful application shutdown in case mpirun receives a SIGTERM.

Suyash, Brice Goglin · 2 messages · Apr 25
hwloc 2.12.0 received invalid information from the operating system.
Hello Indeed the L1d and L1i affinity of some cores is wrong. L1 are private to each core, hence

Anna R., George Bosilca · 2 messages · Apr 23
MPI Tool Information Interface (MPI_T), details on collective communication
Anna, The monitoring PML tracks all activity on the PML but might choose to only expose that one that

Mathieu Westphal, Howard Pritchard Jr. · 2 messages · Apr 11
Supported MPI standard ?
Hi Mathieu The statement at www.open-mpi.org is incorrect. It should read 3.1 Howard Get Outlook for
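
A quick, hedged way to check what standard level a given installation reports (not taken from the thread; the exact output label may differ between Open MPI versions):

    # ompi_info summarises the build, including the MPI standard it targets
    ompi_info | grep -i "mpi api"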

Natasha Evans, …, Jeff Squyres (jsquyres) · 3 messages · Apr 10
OpenMPI-5.0.7 libopen-rte.so.0 missing
Gilles raises a good point: if you think you're using Open MPI v5.0.x, but you're somehow not
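
A minimal sanity check along the lines of that advice (assumed commands, not quoted from the thread):

    # Confirm which mpirun is actually on PATH and what version it reports
    which mpirun
    mpirun --version
    # ompi_info prints the version and install prefix of the library it finds
    ompi_info | head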

Sangam B, …, Gilles Gouaillardet · 8 messages · Mar 31
OpenMPI-5.0.5 & 5.0.6 build failure: error: expected expression before ‘struct’
Nice catch Rainer! I absolutely forgot to include the btl/self component. Cheers, Gilles On Mon, Mar

Matteo Guglielmi · Mar 31
unsubscribe
unsubscribe -------- Matteo Guglielmi | DALCO AG | Industriestr. 28 | 8604 Volketswil | Switzerland |

Thompson, Matt (GSFC-610.1)[SCIENCE SYSTEMS AND APPLICATIONS INC] · Mar 25
Oddity with Open MPI 5, Multiple Nodes, and Malloc?
All, The subject line of this email is vague, but that's only because I'm not sure what is

Reuti, Howard Pritchard Jr. · 2 messages · Mar 24
OpenMPI with “Fort integer size: 8” for Dirac*
Hi Reuti, Sorry this missed 5.0.7. https://github.com/open-mpi/ompi/pull/13159 Hopefully it will show

Sangam B · Mar 20
OpenMPI-5.0.7 build with ucx, cuda & gdr_copy
Hi Team, My application fails with following error [compiled with openmpi-5.0.7, ucx-1.18.0, cuda-
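
A hedged sketch of a UCX- and CUDA-enabled build of that kind (paths and versions are placeholders; gdrcopy support is normally enabled when UCX itself is built):

    # Build Open MPI against an existing UCX and CUDA installation
    ./configure --prefix=$HOME/openmpi-5.0.7 \
                --with-ucx=/opt/ucx-1.18.0 \
                --with-cuda=/usr/local/cuda
    make -j all && make install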

Saurabh T · Feb 27
Re: Avoiding localhost as rank 0 with openmpi-default-hostfile
I asked this before but did not receive a reply. Now with openmpi 5, I tried doing this with prte-
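
One generic way to influence where rank 0 lands, sketched here as an assumption rather than the poster's setup (hostnames and slot counts are placeholders):

    # hosts.txt: list the preferred node first; with the default by-slot
    # mapping, rank 0 should be placed on the first listed host
    echo "node01 slots=4"  > hosts.txt
    echo "node02 slots=4" >> hosts.txt
    mpirun --hostfile hosts.txt -np 8 ./a.out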

Joshua Strodtbeck, …, George Bosilca · 3 messages · Feb 17
Disable PMPI bindings?
I'm not sure if I correctly understand the compiler complaint here, but I think it is complaining

Sangam B, …, Patrick Begou · 7 messages · Feb 14
OpenMPI-5.0.6: -x LD_LIBRARY_PATH not able to load shared objects
Thanks Gilles & Patrick. As Gilles mentioned, while OpenMPI spawns prted daemons on compute nodes
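
Two commonly suggested remedies, sketched here as assumptions (paths, hostfile, and executable are placeholders): -x exports a variable to the application ranks, while --prefix helps the remotely launched prted daemons find their own libraries.

    # Export the library path to the ranks explicitly
    mpirun -x LD_LIBRARY_PATH=/opt/myapp/lib --hostfile hosts.txt -np 4 ./a.out
    # Or point Open MPI at its install tree so remote daemons set paths themselves
    mpirun --prefix /opt/openmpi-5.0.6 --hostfile hosts.txt -np 4 ./a.out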

Arun Chandran, …, Jeff Hammond · 12 messages · Jan 24
Regarding the usage of MPI-One sided communications in HPC applications
NWChem uses MPI RMA via ARMCI-MPI, which is a separate repo (https://github.com/pmodels/armci-mpi).

Jeff Squyres (jsquyres) · Jan 5
Migrated Open MPI mailing lists to Google Groups
The Open MPI community would like to publicly thank the New Mexico Consortium for hosting our GNU