dolfin Import Error, _common (Cray)


Drew Parsons

Feb 27, 2017, 8:51:50 AM
to fenics-support
I'm trying to get dolfin 2016.2.0 running on Cray.  I have a successful build of dolfin and other FEniCS components.  I've built on a compute node to keep the environment compatible with the runtime environment.  I also built numpy, scipy, sympy and petsc. All builds are apparently successful; all python numeric modules and all FEniCS components apart from dolfin import successfully in python 2.7.10.

But even though dolfin apparently built successfully in the same environment (and the new dolfin cray module loads successfully), it fails to import in python:


> python
Python 2.7.10 (default, Sep  6 2016, 10:44:23)
[GCC 4.3.4 [gcc-4_3-branch revision 152973]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dolfin
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/__init__.py", line 17, in <module>
    from . import cpp
  File "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/__init__.py", line 43, in <module>
    exec("from . import %s" % module_name)
  File "<string>", line 1, in <module>
  File "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/common.py", line 25, in <module>
    _common = swig_import_helper()
  File "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/common.py", line 24, in swig_import_helper
    return importlib.import_module('_common')
  File "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named _common



_common import problems like this have been reported on this mailing list previously, where they were diagnosed as misdirected libraries (Boost or libc) that might be fixed via LD_LIBRARY_PATH.  But in my case there's no such clear problem.  If I go to python2.7/site-packages/dolfin/cpp and import directly in order to diagnose, then numpy gets blamed:

>>> import importlib
>>> importlib.import_module('_common')
ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: numpy.core.multiarray failed to import


But if I import numpy directly (before importing dolfin or _common), then it loads fine.


So if I import numpy first, then _common via importlib (while starting in the dolfin/cpp directory), then _common does import.  If I try to load dolfin (while in dolfin/cpp), it fails, saying it needs ffc:
.../dolfin/cpp> python
Python 2.7.10 (default, Sep  6 2016, 10:44:23)
[GCC 4.3.4 [gcc-4_3-branch revision 152973]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import importlib
>>> importlib.import_module('_common')
<module '_common' (built-in)>
>>> import dolfin

---------------------------------------------------
DOLFIN runtime dependency is not met.
Install the following python module: 'ffc'
and make sure its location is listed in PYTHONPATH.
---------------------------------------------------
 

But if (in dolfin/cpp)  I try to import ffc before importing dolfin, it fails with a different error:

.../dolfin/cpp> python
Python 2.7.10 (default, Sep  6 2016, 10:44:23)
[GCC 4.3.4 [gcc-4_3-branch revision 152973]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import ffc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/ffc/2016.2.0/lib/python2.7/site-packages/FFC-2016.2.0-py2.7.egg/ffc/__init__.py", line 21, in <module>
    from ffc.compiler import compile_form, compile_element
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/ffc/2016.2.0/lib/python2.7/site-packages/FFC-2016.2.0-py2.7.egg/ffc/compiler.py", line 124, in <module>
    from ffc.analysis import analyze_ufl_objects
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/ffc/2016.2.0/lib/python2.7/site-packages/FFC-2016.2.0-py2.7.egg/ffc/analysis.py", line 45, in <module>
    from ffc.quadratureelement import default_quadrature_degree
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/ffc/2016.2.0/lib/python2.7/site-packages/FFC-2016.2.0-py2.7.egg/ffc/quadratureelement.py", line 26, in <module>
    from FIAT.functional import PointEvaluation
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/fiat/2016.2.0/lib/python2.7/site-packages/FIAT/__init__.py", line 8, in <module>
    from FIAT.finite_element import FiniteElement, CiarletElement  # noqa: F401
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/fiat/2016.2.0/lib/python2.7/site-packages/FIAT/finite_element.py", line 27, in <module>
    from FIAT.polynomial_set import PolynomialSet
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/fiat/2016.2.0/lib/python2.7/site-packages/FIAT/polynomial_set.py", line 32, in <module>
    from FIAT import expansions
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/fiat/2016.2.0/lib/python2.7/site-packages/FIAT/expansions.py", line 25, in <module>
    import sympy
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/__init__.py", line 40, in <module>
    from .polys import *
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/__init__.py", line 5, in <module>
    from . import polytools
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/polytools.py", line 53, in <module>
    from sympy.polys.domains import FF, QQ, ZZ
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/domains/__init__.py", line 9, in <module>
    from . import finitefield
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/domains/finitefield.py", line 7, in <module>
    from sympy.polys.domains.groundtypes import SymPyInteger
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/domains/groundtypes.py", line 13, in <module>
    from .pythonrational import PythonRational
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/polys/domains/pythonrational.py", line 12, in <module>
    from sympy.printing.defaults import DefaultPrinting
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/printing/__init__.py", line 14, in <module>
    from .preview import preview
  File "/group/director2026/dparsons/software/cle52up04/python/2.7.10/sympy/0.7.6.1/lib/python2.7/site-packages/sympy/printing/preview.py", line 6, in <module>
    from io import BytesIO
ImportError: cannot import name BytesIO


Outside dolfin/cpp, ffc imports fine (but dolfin and _common do not).

I'm out of clues.  Help welcome.

Drew

Johannes Ring

Feb 27, 2017, 9:19:38 AM
to Drew Parsons, fenics-support
Can you show us `ldd
/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/_common.so`?

> _common import problems like this have been reported on this mailing list
> previously, where they were diagnosed as misdirected libraries (Boost or
> libc) that might be fixed via LD_LIBRARY_PATH. But in my case there's no
> such clear problem. If I go to python2.7/site-packages/dolfin/cpp and
> import directly in order to diagnose, then numpy gets blamed:
>
>> >>> import importlib
>> >>> importlib.import_module('_common')
>> ImportError: numpy.core.multiarray failed to import
>> Traceback (most recent call last):
>> File "<stdin>", line 1, in <module>
>> File
>> "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py",
>> line 37, in import_module
>> __import__(name)
>> ImportError: numpy.core.multiarray failed to import

I get the same when doing `python -c "import _common"` while in
dolfin/cpp, so I don't think that is the problem.

As for the BytesIO error when importing ffc: this is reasonable since it
will try to load BytesIO from io.py in the dolfin/cpp directory instead
of from io in the standard library.
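
A throwaway way to reproduce the same effect, independent of dolfin (a sketch; the temporary directory and its empty io.py just stand in for dolfin/cpp and its generated io.py):

import os
import sys
import tempfile

workdir = tempfile.mkdtemp()
open(os.path.join(workdir, 'io.py'), 'w').close()   # an io.py that has no BytesIO
os.chdir(workdir)
sys.path.insert(0, '')            # interactive python has '' (the cwd) first on sys.path
sys.modules.pop('io', None)       # force the next import to be resolved afresh
from io import BytesIO            # ImportError: cannot import name BytesIO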

Johannes

Drew Parsons

Feb 27, 2017, 9:30:03 AM
to Johannes Ring, fenics-support
On Mon, 2017-02-27 at 15:19 +0100, Johannes Ring wrote:
> On Mon, Feb 27, 2017 at 2:51 PM, Drew Parsons <dpar...@emerall.com>
> wrote:
> > I'm trying to get dolfin 2016.2.0 running on Cray. 
...

> Can you show us `ldd
> /group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/_common.so`?


Here it is:

linux-vdso.so.1 => (0x00007fffe33de000)
libdolfin.so.2016.2 => /group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/libdolfin.so.2016.2 (0x00007f621ac84000)
libpython2.7.so.1.0 => /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/libpython2.7.so.1.0 (0x00007f621a8a3000)
libboost_filesystem.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_filesystem.so.1.57.0 (0x00007f621a68b000)
libboost_system.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_system.so.1.57.0 (0x00007f621a486000)
libpetsc.so.3.7 => /group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/petsc/3.7.5/magnus-gnu/lib/libpetsc.so.3.7 (0x00007f6218e1f000)
libmpich_gnu_49.so.3 => /opt/cray/lib64/libmpich_gnu_49.so.3 (0x00007f6218898000)
libnetcdf_parallel_gnu_49.so.mpich31.7 => /opt/cray/lib64/libnetcdf_parallel_gnu_49.so.mpich31.7 (0x00007f6218587000)
libhdf5_hl_parallel_gnu_49.so.8 => /opt/cray/lib64/libhdf5_hl_parallel_gnu_49.so.8 (0x00007f6218356000)
libhdf5_parallel_gnu_49.so.8 => /opt/cray/lib64/libhdf5_parallel_gnu_49.so.8 (0x00007f6217e62000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f6217c45000)
libstdc++.so.6 => /opt/gcc/4.9.2/snos/lib64/libstdc++.so.6 (0x00007f621792d000)
libm.so.6 => /lib64/libm.so.6 (0x00007f62176b3000)
libgomp.so.1 => /opt/gcc/4.9.2/snos/lib64/libgomp.so.1 (0x00007f621749c000)
libgcc_s.so.1 => /opt/gcc/4.9.2/snos/lib64/libgcc_s.so.1 (0x00007f6217285000)
libc.so.6 => /lib64/libc.so.6 (0x00007f6216f08000)
libboost_program_options.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_program_options.so.1.57.0 (0x00007f6216c91000)
libboost_iostreams.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_iostreams.so.1.57.0 (0x00007f6216a78000)
libboost_timer.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_timer.so.1.57.0 (0x00007f6216872000)
libparmetis_gnu_49.so.mpi31.4 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libparmetis_gnu_49.so.mpi31.4 (0x00007f621662f000)
libmetis_gnu_49.so.mpi31.4 => /opt/cray/lib64/libmetis_gnu_49.so.mpi31.4 (0x00007f62163c1000)
libz.so.1 => /lib64/libz.so.1 (0x00007f62161ab000)
libboost_regex.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_regex.so.1.57.0 (0x00007f6215ec0000)
libboost_chrono.so.1.57.0 => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/boost/1.57.0/lib/libboost_chrono.so.1.57.0 (0x00007f6215cb7000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f6215ab3000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f62158af000)
librt.so.1 => /lib64/librt.so.1 (0x00007f62156a6000)
libumfpack.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libumfpack.so (0x00007f62153c0000)
libklu.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libklu.so (0x00007f6215188000)
libcholmod.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libcholmod.so (0x00007f6214e68000)
libbtf.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libbtf.so (0x00007f6214c64000)
libccolamd.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libccolamd.so (0x00007f6214a56000)
libcolamd.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libcolamd.so (0x00007f621484d000)
libcamd.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libcamd.so (0x00007f6214641000)
libamd.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libamd.so (0x00007f6214435000)
libsuitesparseconfig.so => /pawsey/cle52up04/devel/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/suitesparse/4.4.6/lib/libsuitesparseconfig.so (0x00007f6214232000)
libX11.so.6 => /usr/lib64/libX11.so.6 (0x00007f6213ef5000)
libssl.so.0.9.8 => /usr/lib64/libssl.so.0.9.8 (0x00007f6213c9e000)
libcrypto.so.0.9.8 => /usr/lib64/libcrypto.so.0.9.8 (0x00007f62138fd000)
libgfortran.so.3 => /opt/gcc/4.9.2/snos/lib64/libgfortran.so.3 (0x00007f62135dc000)
libmpichf90_gnu_49.so.3 => /opt/cray/mpt/7.0.0/gni/mpich2-gnu/49/lib/libmpichf90_gnu_49.so.3 (0x00007f62133d9000)
libmpichcxx_gnu_49.so.3 => /opt/cray/mpt/7.0.0/gni/mpich2-gnu/49/lib/libmpichcxx_gnu_49.so.3 (0x00007f62131b4000)
libAtpSigHandler.so.0 => /opt/cray/atp/1.7.3/lib/libAtpSigHandler.so.0 (0x00007f6212fae000)
libHYPRE_gnu_49.so.mpi31.2 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libHYPRE_gnu_49.so.mpi31.2 (0x00007f6212a94000)
libcmumps_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libcmumps_gnu_49.so.mpi31.4 (0x00007f6212753000)
libdmumps_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libdmumps_gnu_49.so.mpi31.4 (0x00007f6212418000)
libesmumps_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libesmumps_gnu_49.so.mpi31.6 (0x00007f6212213000)
libsmumps_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libsmumps_gnu_49.so.mpi31.4 (0x00007f6211ecf000)
libsundials_cvode_gnu_49.so.mpi31.1 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_cvode_gnu_49.so.mpi31.1 (0x00007f6211cb0000)
libsundials_cvodes_gnu_49.so.mpi31.2 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_cvodes_gnu_49.so.mpi31.2 (0x00007f6211a79000)
libsundials_ida_gnu_49.so.mpi31.2 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_ida_gnu_49.so.mpi31.2 (0x00007f621185a000)
libsundials_idas_gnu_49.so.mpi31.0 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_idas_gnu_49.so.mpi31.0 (0x00007f6211624000)
libsundials_kinsol_gnu_49.so.mpi31.1 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_kinsol_gnu_49.so.mpi31.1 (0x00007f6211409000)
libsundials_nvecparallel_gnu_49.so.mpi31.0 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_nvecparallel_gnu_49.so.mpi31.0 (0x00007f6211204000)
libsundials_nvecserial_gnu_49.so.mpi31.0 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libsundials_nvecserial_gnu_49.so.mpi31.0 (0x00007f6211000000)
libzmumps_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libzmumps_gnu_49.so.mpi31.4 (0x00007f6210cc4000)
libsuperlu_dist_gnu_49.so.mpi31.3 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libsuperlu_dist_gnu_49.so.mpi31.3 (0x00007f6210a1f000)
libsuperlu_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libsuperlu_gnu_49.so.mpi31.4 (0x00007f6210788000)
libmumps_common_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libmumps_common_gnu_49.so.mpi31.4 (0x00007f621053d000)
libptesmumps_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libptesmumps_gnu_49.so.mpi31.6 (0x00007f6210338000)
libpord_gnu_49.so.mpi31.4 => /opt/cray/tpsl/1.4.1/GNU/49/sandybridge/lib/libpord_gnu_49.so.mpi31.4 (0x00007f621011f000)
libptscotch_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libptscotch_gnu_49.so.mpi31.6 (0x00007f620fed8000)
libscotch_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libscotch_gnu_49.so.mpi31.6 (0x00007f620fc5b000)
libptscotcherr_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libptscotcherr_gnu_49.so.mpi31.6 (0x00007f620fa59000)
libscotcherr_gnu_49.so.mpi31.6 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libscotcherr_gnu_49.so.mpi31.6 (0x00007f620f856000)
libsci_gnu_49_mpi.so.5 => /opt/cray/libsci/13.0.0/GNU/49/sandybridge/lib/libsci_gnu_49_mpi.so.5 (0x00007f620f07c000)
libsci_gnu_49.so.5 => /opt/cray/libsci/13.0.0/GNU/49/sandybridge/lib/libsci_gnu_49.so.5 (0x00007f620df13000)
libquadmath.so.0 => /opt/gcc/4.9.2/snos/lib64/libquadmath.so.0 (0x00007f620dcd4000)
libmpl.so.0 => /opt/cray/lib64/libmpl.so.0 (0x00007f620dacf000)
libxpmem.so.0 => /opt/cray/xpmem/default/lib64/libxpmem.so.0 (0x00007f620d8cb000)
libugni.so.0 => /opt/cray/ugni/default/lib64/libugni.so.0 (0x00007f620d658000)
libudreg.so.0 => /opt/cray/udreg/default/lib64/libudreg.so.0 (0x00007f620d44f000)
libpmi.so.0 => /opt/cray/lib64/libpmi.so.0 (0x00007f620d211000)
/lib64/ld-linux-x86-64.so.2 (0x00007f621b709000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x00007f620d002000)
libmetis_gnu_49.so.mpi31.5 => /opt/cray/tpsl/16.12.1/GNU/4.9/sandybridge/lib/libmetis_gnu_49.so.mpi31.5 (0x00007f620cd8e000)
librca.so.0 => /opt/cray/rca/default/lib64/librca.so.0 (0x00007f620cb8a000)
libxcb-xlib.so.0 => /usr/lib64/libxcb-xlib.so.0 (0x00007f620c987000)
libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00007f620c76b000)
libXau.so.6 => /usr/lib64/libXau.so.6 (0x00007f620c567000)
libsci_gnu_49_mp.so.5 => /opt/cray/lib64/libsci_gnu_49_mp.so.5 (0x00007f620b38f000)
libsci_gnu_49_mpi_mp.so.5 => /opt/cray/lib64/libsci_gnu_49_mpi_mp.so.5 (0x00007f620ac06000)


> > But if (in dolfin/cpp)  I try to import ffc before importing
> > dolfin, it
> > fails with a different error:
...
> > >   File
> > > "/group/fenicstest/software/cle52up04/python/2.7.10/sympy/0.7.6.1
> > > /lib/python2.7/site-packages/sympy/printing/preview.py",
> > > line 6, in <module>
> > >     from io import BytesIO
> > > ImportError: cannot import name BytesIO
>
> This is reasonable since it will try to load BytesIO from io.py in
> the
> dolfin/cpp directory instead of from io in the standard library.

OK, thanks :)


Drew

Jan Blechta

Feb 27, 2017, 9:36:38 AM
to Drew Parsons, fenics-support
This is expected to fail this way. There is a workaround for this in
importhandler/__init__.py.
It fails because there is an io.py in the working dir and sympy
apparently uses a relative import here.

You should be able to do this:

cd ~
python

import imp
# locate _common.so by its full install path, without cd'ing into dolfin/cpp
r = imp.find_module('_common', ['/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/'])
imp.load_module('_common', *r)  # loads it and registers it in sys.modules
import _common                  # now resolves to the already-loaded module


Jan

Drew Parsons

Feb 27, 2017, 1:29:42 PM
to Jan Blechta, fenics-support
On Mon, 2017-02-27 at 14:36 +0000, Jan Blechta wrote:
> On Mon, 27 Feb 2017 05:51:49 -0800 (PST)
> Drew Parsons <dpar...@emerall.com> wrote:
>
> > I'm trying to get dolfin 2016.2.0 running on Cray.  
...
>
> You should be able to do this:
>
> cd ~
> python
>
> import imp
> r = imp.find_module('_common', ['/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/'])
> imp.load_module('_common', *r)
> import _common
>


Thanks Jan, that gives me an extra clue.

>>> import imp
>>> r = imp.find_module('_common', ['/group/fenicstools/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/'])
>>> imp.load_module('_common', *r)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named petsc4py.PETSc


Looking closer, I see I've got discrepant petsc versions used by
petsc4py and dolfin. More to the point, my dolfin module refers to a
different version of petsc from the one referenced in the ldd for
dolfin/cpp/_common.so. That's the cause of the import failure. Just a
simple inconsistency: my dolfin module's dependencies don't match what
was actually used for the build.
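
For anyone hitting the same thing, the cross-check boils down to something like this (a sketch; petsc4py.get_config() is an assumption about the petsc4py API, and the _common.so path is the one from the ldd above):

from __future__ import print_function
import subprocess
import petsc4py

print(petsc4py.__version__)
print(petsc4py.get_config())    # the PETSC_DIR/PETSC_ARCH petsc4py was built against

so = ("/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/"
      "sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/_common.so")
print(subprocess.check_output("ldd %s | grep petsc" % so, shell=True))   # the libpetsc that _common.so links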

Thanks Johannes and Jan. dolfin imports now.

It fails to execute, but that's a different problem. Something to do
with the Cray compute node architecture. For the record, a gdb
backtrace shows

> aprun -n 1 -d 24 gdb python
GNU gdb (GDB) SUSE (7.5.1-0.7.29)
(gdb) run fenicstest.py
Starting program: /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/bin/python fenicstest.py
...
Program received signal SIGSEGV, Segmentation fault.
---Type <return> to continue, or q <return> to quit---
0x00002aaac37f6ae5 in _pmi_cpr_init_daemon ()
at /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c:461
461 /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c: No such file or directory.
(gdb) bt
#0 0x00002aaac37f6ae5 in _pmi_cpr_init_daemon ()
at /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c:461
#1 0x00002aaac37e86b4 in _pmi_constructor ()
at /.AUTO/cray/css.u18/jemison/pmi/pmi/src/pmi_core/_pmi_init.c:183
#2 0x00002aaac37fc056 in __do_global_ctors_aux ()
from /opt/cray/lib64/libpmi.so.0
#3 0x00002aaac37e523b in _init () from /opt/cray/lib64/libpmi.so.0
#4 0x00000005ab5738a8 in ?? ()
#5 0x00002aaaaaab91f8 in call_init () from /lib64/ld-linux-x86-64.so.2
#6 0x00002aaaaaab9327 in _dl_init_internal () from /lib64/ld-linux-x86-64.so.2
#7 0x00002aaaaaabd646 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
#8 0x00002aaaaaab8e86 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
#9 0x00002aaaaaabce3b in _dl_open () from /lib64/ld-linux-x86-64.so.2
#10 0x00002aaaaaeeaf9b in dlopen_doit () from /lib64/libdl.so.2
#11 0x00002aaaaaab8e86 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
#12 0x00002aaaaaeeb33c in _dlerror_run () from /lib64/libdl.so.2
#13 0x00002aaaaaeeaf01 in dlopen@@GLIBC_2.2.5 () from /lib64/libdl.so.2
#14 0x00000000004df206 in _PyImport_GetDynLoadFunc (fqname=<optimized out>,
shortname=0x147f3fb "_common",
pathname=0x14a3d50 "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages---Type <return> to continue, or q <return> to quit---
/dolfin/cpp/_common.so", fp=0x13ffd10) at Python/dynload_shlib.c:130
#15 0x00000000004c42a2 in _PyImport_LoadDynamicModule (
name=0x147f3f0 "dolfin.cpp._common",
pathname=0x14a3d50 "/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/_common.so", fp=0x13ffd10) at ./Python/importdl. :42

Drew

Jan Blechta

Feb 27, 2017, 2:00:43 PM
to Drew Parsons, fenics-support
That looks pretty much related to the import problem. What is the Python
stack trace? You can do 'py-bt' in gdb if you have Python debugging
symbols, or you can manually step using pdb until the segfault happens.

What is aprun -d 24? I can see that this is the number of threads. Why
would you request 24 threads? DOLFIN will not take any advantage of
them in a typical application.

Jan

>
> Drew
>

Drew Parsons

Feb 28, 2017, 5:03:27 AM
to Jan Blechta, fenics-support
On Mon, 2017-02-27 at 19:00 +0000, Jan Blechta wrote:
> On Tue, 28 Feb 2017 02:29:30 +0800
> Drew Parsons <dpar...@emerall.com> wrote:
>
> >
> > Thanks Johannes and Jan. dolfin imports now.
> >
> > It fails to execute, but that's a different problem. Something to
> > do
> > with the Cray compute node architecture. For the record, a gdb
> > backtrace shows
> >
> > > aprun -n 1 -d 24 gdb python  
...
> > Python/dynload_shlib.c:130 #15 0x00000000004c42a2 in
> > _PyImport_LoadDynamicModule ( name=0x147f3f0 "dolfin.cpp._common",
> > pathname=0x14a3d50
> > "/group/fenicstests/software/cle52up04/apps/PrgEnv-
> > gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site
> > -packages/dolfin/cpp/_common.so",
> > fp=0x13ffd10) at ./Python/importdl. :42
>
> That looks pretty much related to the import problem. What is Py
> stacktrace? You can do 'py-bt' in gdb if you have Python debugging
> symbols. Or you can manually step using pdb until a segfault happens.

We don't seem to have py-bt available.

Stepping through line by line (using `break _PyImport_GetDynLoadFunc`),
the segfault comes out of l.130 in Python/dynload_shlib.c:

130 in Python/dynload_shlib.c
(gdb) s
warning: File "/opt/gcc/4.9.2/snos/lib64/libstdc++.so.6.0.20-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
Program received signal SIGSEGV, Segmentation fault.
0x00002aaac343cae5 in _pmi_cpr_init_daemon ()
    at /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c:461
461     /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c: No such file or directory.

But that doesn't give me much insight beyond the standard backtrace.
l.130 just contains "handle = dlopen(pathname, dlopenflags);" 

pathname is ".../dolfin/cpp/_common.so", so we're trying to dlopen
_common.
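
As an aside, the failing dlopen can be provoked without the import machinery at all, which makes for a quicker reproducer (a sketch; ctypes.CDLL is just a thin wrapper around dlopen, and the path is the installed _common.so from the backtrace above):

import ctypes

so = ("/group/fenicstests/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/"
      "sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/_common.so")
ctypes.CDLL(so)   # under aprun this should reach the same libpmi constructor and segfault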

If I step into the machine code with si, I get to
183 in /.AUTO/cray/css.u18/jemison/pmi/pmi/src/pmi_core/_pmi_init.c
1: x/i $pc
=> 0x2aaac342e6af <_pmi_constructor+285>:    callq  0x2aaac342b3e0 <_pmi_cpr_init_daemon@plt>
(gdb) ni

Program received signal SIGSEGV, Segmentation fault.
0x00002aaac343cae5 in _pmi_cpr_init_daemon ()
at /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c:461
461 /.AUTO/cray/css.u18/jemison/pmi/pmi/src/cpr/pmi_cpr.c: No such file or directory.
1: x/i $pc
=> 0x2aaac343cae5 <_pmi_cpr_init_daemon+183>: mov 0x0(%rax),%ecx


It looks like the dereference at l.461 of pmi_cpr.c (the mov 0x0(%rax),%ecx
above, presumably with a null pointer in %rax) is triggering the
segfault. But I don't have access to the code for pmi_cpr.c to check.

> What is aprun -d 24? I can see that this is number of threads. Why
> would you request 24 threads? DOLFIN will not take any advantage of
> them in a typical application.

Our HPC has 24 processors per node. I was copying -d 24 based on what
we used for the build; it's supposed to set the "depth", i.e. the number
of cpus per pe (processing element). In any case, the behaviour is the
same without the flag (defaulting to -d 1, i.e. a single processor).

Drew

Jan Blechta

Feb 28, 2017, 6:08:12 AM
to Drew Parsons, fenics-support
I'd suggest running the code under pdb and stepping through it
incrementally to find the exact line where it happens.

Ok. Just note that DOLFIN by itself does not take any advantage of
threads and rather uses MPI parallelism, which seems to correspond to
the -n flag.

Jan

>
> Drew
>

Drew Parsons

Mar 3, 2017, 2:39:06 AM
to Jan Blechta, fenics-support
On Tue, 2017-02-28 at 11:08 +0000, Jan Blechta wrote:
> On Tue, 28 Feb 2017 18:03:18 +0800
> Drew Parsons <dpar...@emerall.com> wrote:
>
> >
> > > That looks pretty much related to the import problem. What is Py
> > > stacktrace? You can do 'py-bt' in gdb if you have Python
> > > debugging
> > > symbols. Or you can manually step using pdb until a segfault
> > > happens.  
> >
> > We don't seem to have py-bt available.
>
> I'd suggest to run the code using pdb and stepping through the code
> incrementally to find an exact line where it happens.


Ah right, the python module pdb. I took you to mean gdb with a typo.
Nevertheless, it doesn't seem to say more than gdb:

> aprun -n 1 python -i
Python 2.7.10 (default, Sep 6 2016, 10:44:23)
[GCC 4.3.4 [gcc-4_3-branch revision 152973]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pdb
>>> pdb.run('import dolfin')
> <string>(1)<module>()
(Pdb) s
--Call--
> /group/fenicstest/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/__init__.py(2)<module>()
-> """Main module for DOLFIN"""
(Pdb) break /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py:37
Breakpoint 1 at /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py:37
(Pdb) c 32
> /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py(37)import_module()
-> __import__(name)
(Pdb) p name
'dolfin.cpp._common'
(Pdb) s
--Call--
> /pawsey/cle52up04/python/2.7.10/six/1.10.0/lib/python2.7/site-packages/six.py(184)find_module()
-> def find_module(self, fullname, path=None):
(Pdb) s
> /pawsey/cle52up04/python/2.7.10/six/1.10.0/lib/python2.7/site-packages/six.py(185)find_module()
-> if fullname in self.known_modules:
(Pdb) s
> /pawsey/cle52up04/python/2.7.10/six/1.10.0/lib/python2.7/site-packages/six.py(187)find_module()
-> return None
(Pdb) p fullname
'dolfin.cpp._common'
(Pdb) s
--Return--
> /pawsey/cle52up04/python/2.7.10/six/1.10.0/lib/python2.7/site-packages/six.py(187)find_module()->None
-> return None
(Pdb) s
Application 7073241 exit signals: Segmentation fault


So l.37 of importlib/__init__.py is trying to
__import__('dolfin.cpp._common').  

It goes into six.py to see if _common is already known. It is not. Then
it crashes coming out of six.py at l.187.
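
(For context: six only shows up there because it installs its own finder on sys.meta_path, so its find_module() gets consulted for every import and simply returns None for names it doesn't know, like 'dolfin.cpp._common'. A quick way to see that, as a sketch:

import sys
import six
print(sys.meta_path)   # six's meta-path importer instance is listed here

so six itself is most likely just a bystander here.)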

Drew


Jan Blechta

Mar 3, 2017, 6:43:21 AM
to Drew Parsons, fenics-support
I'd try to compare the Python stack trace at the point where it fails
with the one from the approach without aprun, which works.

Jan



Drew Parsons

Mar 3, 2017, 10:37:09 AM
to Jan Blechta, fenics-support
On Fri, 2017-03-03 at 11:43 +0000, Jan Blechta wrote:
> I'd try to compare python stacktrace at the point where it fails with
> the approach without aprun which works.
>


I guess it's a clue: when I run in pdb without aprun, it passes l.187
in six.py with fullname='dolfin.cpp._common' (the point where it crashes
under aprun), steps on to the same line with fullname='swig_runtime_data4',
and only then hits the "normal" non-aprun MPID_Init error. That
points the finger at swig.

swig is supposed to be using pcre/8.37, but there's a stray version of
pcre/8.10 in the dolfin build. I'll try rebuilding dolfin.

Drew


Drew Parsons

Mar 3, 2017, 12:10:19 PM
to Jan Blechta, fenics-support
The rebuild is inconclusive. Under aprun it still segfaults after it
gets to six.py(187) while loading dolfin.cpp._common.

Without aprun, while importing common, six.py(187) passes _common and
passes swig_runtime_data4, then starts processing petsc4py. It gets to
PETSc._initialize()
at petsc4py/PETSc.py(4). From here (without aprun) it goes to the
normal MPI error:

> /group/fenictools/software/cle52up04/python/2.7.10/petsc4py/3.7.0/lib/python2.7/site-packages/petsc4py/PETSc.py(4)<module>()
-> PETSc._initialize()
(Pdb) d
*** Newest frame
(Pdb) s
[Sat Mar 4 01:06:49 2017] [c4-0c0s0n1] Fatal error in PMPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(483):
MPID_Init(192).......: channel initialization failed
MPID_Init(569).......: PMI2 init failed: 1


I'll try rebuilding dolfin without petsc4py support.

Drew

Drew Parsons

Mar 4, 2017, 6:18:54 AM
to Jan Blechta, fenics-support
On Sat, 2017-03-04 at 01:09 +0800, Drew Parsons wrote:
>

> Without aprun, while importing common, six.py(187) passes _common and
> passes swig_runtime_data4, then starts processing petsc4py. It gets
> to
>   PETSc._initialize()
> at petsc4py/PETSc.py(4). From here (without aprun) it goes to the
> normal MPI error:
...
> I'll try rebuilding dolfin without petsc4py support.
>


With petsc4py support deactivated, dolfin imports cleanly under single
process (without aprun). The MPI error kicks in later, e.g. when
building a mesh.

But under MPI usage (aprun), the segfault still triggers when loading
_common.

To avoid any other interference from extra packages, I next tried
disabling all the optional features which I was previously switching on
(OpenMP, PETSc, PETSc4py, ParMETIS, HDF5), i.e. specifying
"-DDOLFIN_ENABLE_OPENMP=OFF -DDOLFIN_ENABLE_PARMETIS=OFF" etc in the
cmake configuration line.

In this case, the segfault in MPI (aprun) still occurs. But also under
single process python, dolfin no longer imports cleanly:
  File "/group/director2026/dparsons/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site-packages/dolfin/cpp/common.py", line 24, in swig_import_helper
    return importlib.import_module('_common')
  File "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named _common


It looks like it's now trying to load "_common" rather than
'dolfin.cpp._common', and isn't finding the path to _common.
(_common.so is still there in the same subdir). Why would deactivating
optional features cause this change in behaviour? PYTHONPATH is the
same, including .../dolfin/2016.2.0/lib/python2.7/site-packages.

Drew

Jan Blechta

Mar 4, 2017, 11:02:07 AM
to Drew Parsons, fenics-support
I'd check what's happening a few lines above in the try clause of
common.py. It's line 22 in my common.py. The swig import helper
obfuscates the error with its try-except clause.

>     return importlib.import_module('_common')
>   File "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py", line 37, in import_module
>     __import__(name)
> ImportError: No module named _common
>
>
> It looks like it's now trying to load "_common" rather than
> 'dolfin.cpp._common', and isn't finding the path to _common.
> (_common.so is still there in the same subdir). Why would
> deactivating optional features cause this change in behaviour?

I don't think this is suspicious. That's just how swig import helpers
deal with the import. Check the generated
<prefix>/lib/python2.7/site-packages/dolfin/cpp/common.py.
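
It is roughly of this shape (a sketch reconstructed from the line numbers in your tracebacks, not a copy of your file):

def swig_import_helper():
    import importlib
    pkg = __name__.rpartition('.')[0]
    mname = '.'.join((pkg, '_common')).lstrip('.')
    try:
        # l.22: the real import; whatever actually goes wrong here
        # (missing petsc4py, an undefined symbol, ...) is swallowed
        return importlib.import_module(mname)
    except ImportError:
        # l.24: the fallback whose failure becomes the misleading
        # "No module named _common" seen at the top level
        return importlib.import_module('_common')
_common = swig_import_helper()
del swig_import_helper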

Jan

Drew Parsons

Mar 5, 2017, 10:22:31 PM
to Jan Blechta, fenics-support
On Sat, 2017-03-04 at 16:02 +0000, Jan Blechta wrote:
> On Sat, 04 Mar 2017 19:18:43 +0800
> Drew Parsons <dpar...@emerall.com> wrote:
...
> > In this case, the segfault in MPI (aprun) still occurs. But
> > also under
> > single process python, dolfin no longer imports cleanly: 
> >   File
> > "/group/director2026/dparsons/software/cle52up04/apps/PrgEnv-
> > gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/python2.7/site
> > -packages/dolfin/cpp/common.py",
> > line 24, in swig_import_helper return
>
> I'd check what's happening few lines above in the try clause of
> common.py. It's line 22 in my common.py. The swig import helper
> obfuscates the error by try-except clause.
>
> > importlib.import_module('_common') File
> > "/pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/impor
> > tlib/__init__.py",
> > line 37, in import_module __import__(name) ImportError: No module
> > named _common
> >

I think you're right with regards to "_common". When I step into the
try block in dolfin/cpp/common.py, l.22 tries
importlib.import_module(mname)
with mname='dolfin.cpp._common'

That fails in single processor python, with
ImportError: '/group/director2026/dparsons/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/libdolfin.so.2016.2: undefined symbol: MPI_Allgather'
> /pawsey/cle52up04/apps/gcc/4.3.4/python/2.7.10/lib/python2.7/importlib/__init__.py(37)import_module()
-> __import__(name)

i.e. it's the MPI error (it needs to be run as an MPI process). So importing 'dolfin.cpp._common' fails, and common.py goes on to attempt to load
'_common' in l.24.

So the single-processor behaviour makes sense. But I'm still stuck with
the segfault when running as an MPI process.

That undefined symbol wasn't showing up before, so that's progress.
Running under MPI gives the same error, so I'll follow this clue now.

Drew

Drew Parsons

Mar 5, 2017, 11:58:16 PM
to Jan Blechta, fenics-support
On Mon, 2017-03-06 at 11:22 +0800, Drew Parsons wrote:
> On Sat, 2017-03-04 at 16:02 +0000, Jan Blechta wrote:
> >
> I think you're right with regards to "_common".  When I step into the
> try block in dolfin/cpp/common.py, l.22 tries
>     importlib.import_module(mname)
> with mname='dolfin.cpp._common'
>
> That fails in single processor python, with
>     ImportError:
> '/group/director2026/dparsons/software/cle52up04/apps/PrgEnv-
> gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/libdolfin.so.201
> 6.2: undefined symbol: MPI_Allgather'
...
> That undefined symbol wasn't showing up before, so that's progress. 
> Running under MPI gives the same error, so I'll follow this clue now.

It's a good clue. Inspecting libdolfin,
objdump -p /group/fenicstest/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/lib/libdolfin.so.2016.2
I find that the Dynamic Section has no NEEDED entry for any MPI library.
That explains why it wasn't finding MPI_Allgather.
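
In script form, the same check is something like this (a sketch; it assumes objdump is on PATH and uses the library path from the objdump command above):

import subprocess

lib = ("/group/fenicstest/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/"
       "sandybridge/dolfin/2016.2.0/lib/libdolfin.so.2016.2")
out = subprocess.check_output(["objdump", "-p", lib]).decode()
print([line for line in out.splitlines() if "NEEDED" in line])   # no libmpich entry before the fix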

I understand that the MPI configuration on Cray is a little special, in
the sense that the compilers and environment are supposed to support
MPI by default without needing to declare it explicitly. I think
that's why Chris Richardson configured dolfin with 
-DDOLFIN_AUTO_DETECT_MPI=false
at http://fenics-hpc.blogspot.com.au/2016/09/latest-stable-dolfin-on-cray-system.html

In my login environment, cray-mpich is loaded by default.  But the MPI
module appears to have gone missing in my build environment (our
support team have set up a customised system for building apps).

I had been using Chris's example with -DDOLFIN_AUTO_DETECT_MPI=false. 
If instead I switch MPI detection on, and also explicitly load the
cray-mpich module, then libmpich_gnu_49.so.3 becomes NEEDED by
libdolfin.so.2016.2. Consequently dolfin imports successfully in both
single-process and MPI python. (The next problem is with fcntl.flock
during runtime, which must be a different problem).

So it looks like the workaround for me is to explicitly load cray-mpich
and let dolfin's cmake detect it.

That leaves dangling the question of how it was supposed to be working
with -DDOLFIN_AUTO_DETECT_MPI=false. But the important thing is that
it's now working.

Thanks for your help Jan and all.

Drew

Jan Blechta

Mar 6, 2017, 4:41:53 AM
to Drew Parsons, fenics-support
On POSIX and NFS it is recommended to install flufl.lock, see
http://flufllock.readthedocs.io/en/latest/. I'm not sure if it's
relevant for your OS and FS.

Jan

Drew Parsons

Mar 8, 2017, 2:34:31 AM
to Jan Blechta, fenics-support
On Mon, 2017-03-06 at 09:41 +0000, Jan Blechta wrote:
> On Mon, 06 Mar 2017 12:58:05 +0800
> Drew Parsons <dpar...@emerall.com> wrote:
> >
> > I had been using Chris's example with
> > -DDOLFIN_AUTO_DETECT_MPI=false. 
> > If instead I switch MPI detection on, and also explicitly load the
> > cray-mpich module, then
...
> > dolfin imports successfuly in both
> > single-process and MPI python. (The next problem is with
> > fcntl.flock
> > during runtime, which must be a different problem).
>
> On POSIX and NFS it is recommended to install flufl.lock, see
> http://flufllock.readthedocs.io/en/latest/. I'm not sure if it's
> relevant for your OS and FS.

Thanks Jan. flufl.lock gets me further. I can now import dolfin on the
Cray and create a mesh.

My next error occurs when creating an Expression:

>>> from dolfin import *
>>> g = Expression("sin(5*x[0])")
Calling DOLFIN just-in-time (JIT) compiler, this may take some time.
--- Instant: compiling ---
In instant.import_module_directly: Failed to import module 'dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8' from '/home/fenicstest/.cache/instant/python2.7/cache/dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8';
ImportError:No module named _dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8;
Failed to import module found in cache. Modulename: 'dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8';
Path: '/home/fenicstest/.cache/instant/python2.7/cache/dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8';
ImportError:No module named _dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8;


Is this a clue?: compile.log in the cache subdir for the instant module
says
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    CMake Warning (dev) at /group/fenicstest/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/share/dolfin/cmake/DOLFINTargets.cmake:54 (add_library):
      ADD_LIBRARY called with SHARED option but the target platform does not
      support dynamic linking.  Building a STATIC library instead.  This may lead
      to problems.
    Call Stack (most recent call first):
      /group/fenicstest/software/cle52up04/apps/PrgEnv-gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/share/dolfin/cmake/DOLFINConfig.cmake:16 (include)
      CMakeLists.txt:10 (FIND_PACKAGE)
    This warning is for project developers. Use -Wno-dev to suppress it.

i.e. instant claims it can't build a shared library and builds static
instead. Sure enough, the cache subdir does contain
_dolfin_03bc991d4d2a290fd3e1e5b67296be96e5edd2a8.a
instead of a .so.

Is this generation of .a in place of .so the reason why Expression()
fails?

I notice if I run the instant tests in instant-2016.2.0/test/run_tests.py,
they run successfully, leaving .so shared libraries in the cache
subdirs. ffc tests also succeed (mostly). Why would instant be unable
to generate .so files when loaded under dolfin?

Drew

Johannes Ring

Mar 8, 2017, 4:01:46 AM
to Drew Parsons, Jan Blechta, fenics-support
Try to set

export CRAYPE_LINK_TYPE=dynamic

Johannes

> Is this generation of .a in place of .so the reason why Expression()
> fails?
>
> I notice if I run the instant tests in instant-2016.2.0/test/run_tests.py,
> they run successfully, leaving .so shared libraries in the cache
> subdirs. ffc tests also succeed (mostly). Why would instant be unable
> to generate .so files when loaded under dolfin?
>
> Drew
>

Drew Parsons

Mar 8, 2017, 9:31:15 PM
to Johannes Ring, Jan Blechta, fenics-support
On Wed, 2017-03-08 at 10:01 +0100, Johannes Ring wrote:
> On Wed, Mar 8, 2017 at 8:34 AM, Drew Parsons <dpar...@emerall.com>
> wrote:
> >
> > Is this a clue?: compile.log in the cache subdir for the instant
> > module
> > says
> >     -- Detecting CXX compile features
> >     -- Detecting CXX compile features - done
> >     CMake Warning (dev) at
> > /group/fenicstest/software/cle52up04/apps/PrgEnv-
> > gnu/5.2.82/gcc/4.9.2/sandybridge/dolfin/2016.2.0/share/dolfin/cmake
> > /DOLFINTargets.cmake:54 (add_librar
> >     y):
> >       ADD_LIBRARY called with SHARED option but the target platform
> > does not
> >       support dynamic linking.  Building a STATIC library
> > instead.  This may lead
> >       to problems.
> >
...
> Try to set
>
>   export CRAYPE_LINK_TYPE=dynamic

Perfect, thanks Johannes. With this variable, instant generates a
shared .so instead of a static .a, and dolfin runs successfully. I've
now got fenics working on Cray (for a minimal build; my final step will
be to build PETSc and the other add-ons).
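
For the record, the same fix can also be applied from inside Python, provided the variable is set before the first JIT compile, since the compiler processes that instant launches inherit the environment (a sketch, equivalent to the export above rather than anything new):

import os
os.environ['CRAYPE_LINK_TYPE'] = 'dynamic'   # must happen before the first JIT build

from dolfin import *
g = Expression("sin(5*x[0])")   # the JIT now produces a shared .so in the instant cache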

Drew
