Installing Dedalus on HPC - FFTW Wrapper MPICC Error


jason.o...@gmail.com

Feb 28, 2018, 12:14:39 PM
to Dedalus Users
Good Morning,

I'm trying to install Dedalus on Westgrid Orcinus, an HPC system at UBC. It has Intel C compilers, so I've been trying to mostly follow along with the install notes for TACC/Stampede. I've managed to install all of the dependencies, including the FFTW libraries. However, when I attempt to install Dedalus proper using the "Install notes for TACC/Stampede", the install fails with the following errors (see below).

I've attempted using both the pre-installed FFTW module on the system and a locally compiled copy of FFTW. I've also attempted mpi4py version 2.0.0 and version 3.0.0. Any help would be greatly appreciated.

Thanks!

Jason

OPENMPI == 1.6.5
icc == 14.0.2
Python3 == 3.3.3
numpy == 1.8.0
hdf5 == 1.8.12
fftw == 3.3.3
h5py == 2.5.0
mpi4py == 3.0.0 --- also fails with 2.0.0
cython == 0.27.3 --- also fails with 0.20


[#USERNAME# dedalus]$ python3 setup.py build_ext --inplace

Looking for fftw prefix
Found env var FFTW_PATH = /home/#USERNAME#/build_intel
Looking for mpi prefix
Found env var MPI_PATH = /global/software/openmpi-1.6.5/intel/
Looking for fftw prefix
Found env var FFTW_PATH = /home/#USERNAME#/build_intel
dedalus/core/transposes.pyx: cannot find cimported module '..libraries.fftw'
Compiling dedalus/libraries/fftw/fftw_wrappers.pyx because it changed.
Compiling dedalus/core/transposes.pyx because it changed.
Compiling dedalus/core/polynomials.pyx because it changed.
[1/3] Cythonizing dedalus/core/polynomials.pyx
[2/3] Cythonizing dedalus/core/transposes.pyx
warning: dedalus/core/transposes.pyx:301:38: Index should be typed for more efficient access
[3/3] Cythonizing dedalus/libraries/fftw/fftw_wrappers.pyx
running build_ext
creating build
creating build/lib.linux-x86_64-3.3
creating build/lib.linux-x86_64-3.3/dedalus
building 'dedalus.libraries.fftw.fftw_wrappers' extension
creating build/temp.linux-x86_64-3.3
creating build/temp.linux-x86_64-3.3/dedalus
creating build/temp.linux-x86_64-3.3/dedalus/libraries
creating build/temp.linux-x86_64-3.3/dedalus/libraries/fftw
mpicc -Wno-unused-result -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -mkl -O3 -xHost -fPIC -ipo -fPIC -Idedalus/libraries/fftw/ -I/home/#USERNAME#/build_intel/lib/python3.3/site-packages/numpy/core/include -I/home/#USERNAME#/build_intel/lib/python3.3/site-packages/mpi4py/include -I/home/#USERNAME#/build_intel/include -I/global/software/openmpi-1.6.5/intel/include -I/home/#USERNAME#/build_intel/include/python3.3m -c dedalus/libraries/fftw/fftw_wrappers.c -o build/temp.linux-x86_64-3.3/dedalus/libraries/fftw/fftw_wrappers.o -Wno-error=declaration-after-statement
icc: command line warning #10006: ignoring unknown option '-Wno-unused-result'
In file included from /home/#USERNAME#/build_intel/lib/python3.3/site-packages/numpy/core/include/numpy/ndarraytypes.h(1760),
from /home/#USERNAME#/build_intel/lib/python3.3/site-packages/numpy/core/include/numpy/ndarrayobject.h(17),
from /home/#USERNAME#/build_intel/lib/python3.3/site-packages/numpy/core/include/numpy/arrayobject.h(4),
from dedalus/libraries/fftw/fftw_wrappers.c(561):
/home/#USERNAME#/build_intel/lib/python3.3/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h(15): warning #1224: #warning directive: "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION"
#warning "Using deprecated NumPy API, disable it by " \
^

dedalus/libraries/fftw/fftw_wrappers.c(1307): error: identifier "MPI_Message" is undefined
MPI_Message ob_mpi;
^

dedalus/libraries/fftw/fftw_wrappers.c(7097): warning #167: argument of type "int *" is incompatible with parameter of type "const fftw_r2r_kind={enum fftw_r2r_kind_do_not_use_me} *"
__pyx_v_self->forward_plan = fftw_plan_guru_r2r(__pyx_v_trans_rank, __pyx_v_trans_struct, __pyx_v_vec_rank, __pyx_v_vec_struct_f, __pyx_v_gdata, __pyx_v_cdata, __pyx_v_kind_f, (__pyx_v_intflags | __pyx_e_7dedalus_9libraries_4fftw_10fftw_c_api_FFTW_DESTROY_INPUT));
^

dedalus/libraries/fftw/fftw_wrappers.c(7106): warning #167: argument of type "int *" is incompatible with parameter of type "const fftw_r2r_kind={enum fftw_r2r_kind_do_not_use_me} *"
__pyx_v_self->backward_plan = fftw_plan_guru_r2r(__pyx_v_trans_rank, __pyx_v_trans_struct, __pyx_v_vec_rank, __pyx_v_vec_struct_b, __pyx_v_cdata, __pyx_v_gdata, __pyx_v_kind_b, (__pyx_v_intflags | __pyx_e_7dedalus_9libraries_4fftw_10fftw_c_api_FFTW_DESTROY_INPUT));
^

dedalus/libraries/fftw/fftw_wrappers.c(8678): warning #167: argument of type "int *" is incompatible with parameter of type "const fftw_r2r_kind={enum fftw_r2r_kind_do_not_use_me} *"
__pyx_v_self->forward_plan = fftw_plan_guru_r2r(__pyx_v_trans_rank, __pyx_v_trans_struct, __pyx_v_vec_rank, __pyx_v_vec_struct_f, __pyx_v_gdata, __pyx_v_cdata, __pyx_v_kind_f, (__pyx_v_intflags | __pyx_e_7dedalus_9libraries_4fftw_10fftw_c_api_FFTW_DESTROY_INPUT));
^

dedalus/libraries/fftw/fftw_wrappers.c(8687): warning #167: argument of type "int *" is incompatible with parameter of type "const fftw_r2r_kind={enum fftw_r2r_kind_do_not_use_me} *"
__pyx_v_self->backward_plan = fftw_plan_guru_r2r(__pyx_v_trans_rank, __pyx_v_trans_struct, __pyx_v_vec_rank, __pyx_v_vec_struct_b, __pyx_v_cdata, __pyx_v_gdata, __pyx_v_kind_b, (__pyx_v_intflags | __pyx_e_7dedalus_9libraries_4fftw_10fftw_c_api_FFTW_DESTROY_INPUT));
^

compilation aborted for dedalus/libraries/fftw/fftw_wrappers.c (code 2)
error: command 'mpicc' failed with exit status 2

Keaton Burns

Feb 28, 2018, 1:29:02 PM
to dedalu...@googlegroups.com
Good afternoon from the east coast!  If you open up a python3 terminal, can you run “from mpi4py import MPI” without any errors?  Also is there a newer version of openmpi available on the cluster?

-Keaton
--
You received this message because you are subscribed to the Google Groups "Dedalus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dedalus-user...@googlegroups.com.
To post to this group, send email to dedalu...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dedalus-users/fe29f544-e243-47af-b27a-c1a11ed99e49%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Jason Olsthoorn

Feb 28, 2018, 2:48:48 PM
to Daniel Lecoanet
Hey, thanks for the speedy reply! 

Yes, I can import MPI from mpi4py without errors. 

There is not a newer version of OpenMPI available for icc; however, openmpi-2.1.0 is available through gcc. If it's likely that's causing the problems, I can recompile numpy etc. using gcc instead of the Intel compilers.

Thanks!

Jason



Jason

Mar 2, 2018, 1:27:18 PM
to Dedalus Users
Hey,

I never did get the code working on that HPC, so I migrated over to a different system, where things appear to be running (Cedar on Compute Canada). However, I am getting the following warning. I noticed there was another comment about this on the discussion board that was never resolved. Could you let me know if this is an issue and/or if it is the result of some incorrectly compiled dependency?


--------------------------------------------------------------------------
A process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

Local host: ##############

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
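For reference, the MCA parameter the warning names can be set either in the environment or per-run; a sketch assuming Open MPI's standard `OMPI_MCA_` environment-variable convention (the script name is a placeholder):

```shell
# Silence the fork() warning only if you're confident the forked children are
# harmless -- this hides the message, it does not remove the underlying risk.

# Option 1: set in the environment (picked up by mpirun/mpiexec)
export OMPI_MCA_mpi_warn_on_fork=0

# Option 2: pass per-run on the mpirun command line:
#   mpirun --mca mpi_warn_on_fork 0 -np 4 python3 my_script.py
```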

python == 3.5.3
numpy == 1.9.2
mpi4py == 2.0.0
openmpi == 2.0.2
fftw-mpi == 3.3.6
hdf5-mpi == 1.8.18
mkl == 11.3.4.258


Thanks

Jason

Daniel Lecoanet

Mar 2, 2018, 1:34:01 PM
to dedalu...@googlegroups.com
Hi Jason,

I get warnings like that all the time.  Don't know what it means.  Hasn't stopped me from doing interesting work (or at least, I think it's interesting!).

Daniel

Jason Olsthoorn

Mar 2, 2018, 3:11:07 PM
to Daniel Lecoanet

Keaton Burns

Mar 5, 2018, 6:07:57 PM
to dedalu...@googlegroups.com
Hi Jason,

Sorry, I’m not sure about that issue and if it’s related to intel compilers or not.  If you get a chance to try with the GCC version of openmpi, it would be great to hear how it turns out, but no worries if you’d rather not spend more time on that system!

-Keaton

Jason Olsthoorn

Mar 5, 2018, 10:47:00 PM
to Daniel Lecoanet
I'll keep you posted if I get to it!

Thanks! I've got a couple simulations off the ground now :)

Jason
