Hi PFlotran Dev Team,
We are currently attempting to build PFlotran 7.0 for Windows, but are struggling with the MPI calls made from PFlotran. Any insight you can share would be very valuable to us.
Full details of what we are doing are below:
Build environment: Windows Server 2022 x64
Build tools: MSYS2 with mingw64, GNU compiler collection v15.2.0 with MPI wrappers (mpicc, mpifort)
Libraries/dependencies:
Flex v2.6.4
Bison v3.8.2
BLAS/LAPACK: openblas v0.3.30
MS MPI v10.1.3 (https://learn.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
HDF5: v1.14.6
PT-Scotch: v7.0.3
PETSc: v3.21.4
PFlotran: v7.0
We build HDF5 (with parallel support), PT-Scotch and PETSc from source. All three are configured to build their Fortran bindings, and we use static linking throughout.
The configure/build steps for HDF5 and PT-Scotch only take the path to mpiexec.exe, which we supply as a short DOS path (no whitespace), e.g. "/C/PROGRA~1/MICROS~4/Bin/mpiexec". For PETSc we point the include and lib locations at MS MPI (also using short DOS paths).
Regression tests for HDF5 show 29 test failures. Nine of these have MPI in their name (all other MPI tests pass) and one specifically references Fortran; those failures are:
106:MPI_TEST_testphdf5_cchunk3
114:MPI_TEST_testphdf5_tldsc
120:MPI_TEST_t_bigio
121:MPI_TEST_t_cache
128:MPI_TEST_t_pmulti_dset
129:MPI_TEST_t_select_io_dset
131:MPI_TEST_t_filters_parallel
2356:FORTRAN_testhdf5_fortran
2366:MPI_TEST_FORT_async_test
2790:MPI_TEST_H5_f90_ph5_f90_filtered_writes_no_sel
Regression tests for PT-Scotch show 4 test failures (3 of these relate to file compression/zipping, which we are not concerned about):
1:test_common_file_compress_bz2
2:test_common_file_compress_gz
3:test_common_file_compress_lzma
5:test_common_random_1
The PETSc check step reports some warnings for tutorials ex19 and ex5f, but otherwise the C/C++ and Fortran tests using MPI processes pass.
PFlotran
PFlotran builds successfully, but in any calculation that invokes MPI_Allreduce(), the call does not behave as expected and always writes 0 into the returned data.
By adding some print-debugging statements, for example, we can see that temp_int on line 956 is correct for the supplied model, but after the call to MPI_Allreduce() temp_int is set to 0. This then fails the logic on line 959 and causes the calculation to fail. We see the same behaviour in all grid calculations that call MPI_Allreduce().
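In case it helps to narrow this down, below is a minimal standalone Fortran sketch of the pattern we are describing (the file name, variable names and the mpifort/mpiexec invocation in the comments are only illustrative; PFlotran itself obtains MPI through the PETSc build). If a small program like this also reports 0 when compiled with our mpifort wrapper and run under the MS MPI mpiexec, the problem would seem to sit in the MS MPI Fortran layer or our toolchain rather than in PFlotran itself.

! allreduce_check.F90 : minimal check of MPI_Allreduce from Fortran
! Build/run sketch (illustrative):
!   mpifort allreduce_check.F90 -o allreduce_check
!   mpiexec -n 4 allreduce_check
program allreduce_check
  use mpi
  implicit none

  integer :: ierr, rank, nproc
  integer :: temp_int, global_int

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

  ! Each rank contributes rank+1, so the sum over n ranks should be n*(n+1)/2.
  temp_int = rank + 1
  call MPI_Allreduce(temp_int, global_int, 1, MPI_INTEGER, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)

  if (rank == 0) then
    print *, 'ranks =', nproc, ' expected sum =', nproc*(nproc+1)/2, &
             ' MPI_Allreduce returned =', global_int
  end if

  call MPI_Finalize(ierr)
end program allreduce_check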

We believe PFlotran 7.0 was never formally released for general use – is it possible there was an underlying issue with using MS MPI in this version?
We would appreciate any help or guidance on what might be the cause of the problem here.
Thanks,
Chris