"No module named 'gsd.hoomd'"


Gerardo Cisneros

May 17, 2023, 4:03:15 PM
to hoomd-users
Hi.

I followed the documentation's directions for installing and building hoomd-blue from sources, but when I try to run any tutorial example that includes 'import gsd.hoomd' I get the following:

[gerardo@login01 hoomd-blue]$ python3
Python 3.7.4 (default, Aug 13 2019, 20:35:49)  
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import hoomd
>>> import gsd.hoomd
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'gsd.hoomd'

For reference, these are the steps I followed for building hoomd-blue (after having installed gsd):

git clone --recursive https://github.com/glotzerlab/hoomd-blue
module load python/3.7 intel/2022.3.1 compiler hpcx/2.14.0 cmake/3.21.4
python hoomd-blue/install-prereq-headers.py
export OMPI_CC=icx
export OMPI_CXX=icpx
cmake -DENABLE_MPI=on -DENABLE_TBB=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_CXX_FLAGS="-std=c++17" -DCMAKE_C_FLAGS="-march=native" -B build/hoomd -S hoomd-blue
cmake --build build/hoomd
cmake --install build/hoomd

The build and installation steps completed without errors.

Additional information:

[gerardo@login01 hoomd-blue]$ cd gsd
[gerardo@login01 gsd]$ git log
commit e45351cbff2d447a9fe576439abc98242589a0e9 (HEAD -> trunk-patch, origin/trunk-patch, origin/HEAD)
. . .
[gerardo@login01 gsd]$ cd ../hoomd-blue
[gerardo@login01 hoomd-blue]$ git log
commit ee8897adbd362c6f638b24bd7008657e8e3e84ec (HEAD -> trunk-patch, tag: v3.11.0, origin/trunk-patch, origin/trunk-minor, origin/HEAD)
. . .
[gerardo@login01 hoomd-blue]$ uname -a
Linux login01.hpcadvisorycouncil.com 4.18.0-425.10.1.el8_7.x86_64 #1 SMP Thu Jan 12 16:32:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux



What am I missing?

Saludos,

Gerardo
--
Gerardo Cisneros-Stoianowski, Ph.D.
HPC Applications Performance Specialist
HPC-AI Advisory Council
www.hpcadvisorycouncil.com

Joshua Anderson

May 18, 2023, 8:39:08 AM
to hoomd...@googlegroups.com
Gerardo,

Thanks for the detailed description of the steps you took and the output you get. You are missing "conda install -c conda-forge gsd".

`gsd.hoomd` is provided by the `gsd` Python package, not HOOMD itself. See https://gsd.readthedocs.io/en/v2.8.1/installation.html for more details.
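
As a quick sanity check (a rough sketch, adjust as needed), you can ask Python where it is finding gsd:

python3 -c "import gsd; print(gsd.__file__)"

If that prints a path but `import gsd.hoomd` still fails, the gsd package being picked up is incomplete or is shadowed by another directory named gsd on the path.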
------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan


Gerardo Cisneros

May 18, 2023, 1:03:11 PM
to hoomd-users
Joshua,

Thanks for the prompt reply.  I mentioned before that I had installed gsd, but didn't say how.  I installed it from sources as follows:

git clone https://github.com/glotzerlab/gsd
cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_CXX_FLAGS="-std=c++17" -DCMAKE_C_FLAGS="-march=native" -B build/gsd -S gsd
cmake --build build/gsd
cmake --install build/gsd

but the last step didn't appear to do much (in the sense that gsd didn't get installed into the Python site-packages directory the way hoomd was); its only output was "-- Install configuration: "Release"".

I tried "conda install -c conda-forge gsd" as you suggested, but ran into various compatibility issues with python 3.7 as installed on our system.

Saludos,

Gerardo
--
Gerardo Cisneros-Stoianowski, Ph.D.
HPC Applications Performance Specialist
HPC-AI Advisory Council
www.hpcadvisorycouncil.com

Joshua Anderson

May 18, 2023, 2:58:34 PM
to hoomd...@googlegroups.com
Gerardo,

My apologies. I missed the brief note that you had installed gsd.

To install gsd from source into the currently active Python environment, cd into the gsd source directory and run `pip install .`. Set the compilers and compilation flags via the normal mechanisms for Python setuptools builds.
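
For example, something along these lines (just a sketch; the exact flags are up to you, and CC/CFLAGS are the usual environment variables setuptools honors when building C extensions):

cd gsd
CC=icx CFLAGS="-march=native" python3 -m pip install .
python3 -c "import gsd.hoomd"

If that last import succeeds without error, the install worked.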

The CMake scripts you are using are intended for development only and do not define any install rules. They create a working gsd module in the build directory but do not provide any facility to detect the current python environment or copy files into that environment.
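
(If you do want to run against the CMake build tree directly, e.g. while developing, one workaround is to put it on PYTHONPATH, roughly export PYTHONPATH=$PWD/build/gsd:$PYTHONPATH, but the exact path depends on where the built gsd package ends up in your build directory.)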
------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Gerardo Cisneros

May 18, 2023, 3:06:25 PM
to hoomd-users
Joshua,

Thank you very much.  The suggestion to run "pip install ." in the gsd source directory worked fine.  My next question is whether there is a largish, CPU-only HOOMD benchmark case that will scale to a couple of thousand cores and exercises fairly extensive inter-node communication via MPI.  Is there such a thing?

Saludos,

Gerardo

Joshua Anderson

May 18, 2023, 3:12:37 PM
to hoomd...@googlegroups.com
hoomd-benchmarks (https://github.com/glotzerlab/hoomd-benchmarks) provides a number of HOOMD-blue benchmarks that can be run in serial or parallel and have command-line options to set parameters. You could run one or more of the simulation benchmarks with a system size of N=2000000 or more (note that the default N is only 64000).

For example:
mpirun -n 1024 python3 -m hoomd_benchmarks.md_pair_wca --device CPU -N 2000000 -v
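
Each benchmark script also accepts --help, e.g.:

python3 -m hoomd_benchmarks.md_pair_wca --help

which should list the available options (device, system size, verbosity, and so on).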

------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Gerardo Cisneros

May 18, 2023, 4:20:33 PM
to hoomd-users
Joshua,

Thanks again.  Unfortunately, I get hangs or crashes in MPI_Allreduce, depending on whether I run with HCOLL (in HPC-X's Open MPI) enabled or disabled, respectively.  I tried reducing the number of particles, nodes, and tasks per node, but I still got the same misbehavior, e.g.:

mpirun -np 128 -x UCX_NET_DEVICES=mlx5_0:1 --map-by ppr:16:socket --bind-to core --report-bindings -mca coll_hcoll_enable 0  python3 -m hoomd_benchmarks.md_pair_wca --device CPU -v
. . .
Generating initial_configuration_cache/hard_sphere_64000_1.0_3.gsd
notice(2): Using domain decomposition: n_x = 4 n_y = 5 n_z = 8.
.. randomizing positions
.. step 100 at 212.9 TPS
[iris001:4071413] *** An error occurred in MPI_Allreduce
[iris001:4071413] *** reported by process [2144403457,23433341566978]
[iris001:4071413] *** on communicator MPI_COMM_WORLD
[iris001:4071413] *** MPI_ERR_TRUNCATE: message truncated
[iris001:4071413] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[iris001:4071413] ***    and potentially your MPI job)

Saludos,

Gerardo

Gerardo Cisneros

May 18, 2023, 4:46:24 PM
to hoomd-users
Joshua,

A correction: the following is the output that actually corresponds to the mpirun command in my previous reply:


notice(2): Using domain decomposition: n_x = 4 n_y = 4 n_z = 8.
.. randomizing positions
.. step 100 at 232.2 TPS
[iris001:4071746] *** An error occurred in MPI_Allreduce
[iris001:4071746] *** reported by process [1090715649,23283017711617]
[iris001:4071746] *** on communicator MPI_COMM_WORLD
[iris001:4071746] *** MPI_ERR_TRUNCATE: message truncated
[iris001:4071746] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[iris001:4071746] ***    and potentially your MPI job)

Saludos,

Gerardo

Joshua Anderson

May 19, 2023, 9:21:04 AM
to hoomd...@googlegroups.com
Gerardo,

Pull the latest changes to the hoomd-benchmarks `trunk` branch with git and try running it again.
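
Roughly, from inside your hoomd-benchmarks checkout:

git checkout trunk
git pull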

Specifically, I think this commit may fix the issue:
commit 25160b852398a5df5f0bc966709eb6011bfd11fa (HEAD -> trunk, origin/trunk, origin/HEAD)
Author: Joshua A. Anderson <joaa...@umich.edu>
Date:   Fri May 19 09:17:45 2023 -0400

    Fix MPI make_hard_sphere_configuration.
------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Gerardo Cisneros

May 19, 2023, 11:09:22 AM
to hoomd-users
Joshua,

Thank you very much.  That worked; the 2-million-particle simulation, with the command-line arguments you provided, ran in just under two minutes using 1792 MPI ranks (32 nodes with 56 Cascade Lake cores per node).  Is there a way to disable the following warning (other than editing the script that produces it), which occurred once per rank?

/global/software/centos-8.x86_64/modules/langs/python/3.7/lib/python3.7/site-packages/hoomd/md/methods/methods.py:105: FutureWarning: NVT is deprecated and will be removed in hoomd 4.0. In version 4.0, use the ConstantVolume method with the desired thermostat from hoomd.md.methods.thermostats.

Saludos,

Gerardo

Brandon Butler

May 19, 2023, 11:22:08 AM
to hoomd...@googlegroups.com

Hey Gerardo,

You can eliminate the warning using Python's warnings module: https://docs.python.org/3/library/warnings.html
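
For example, a minimal sketch placed before the code that triggers the warning:

import warnings
warnings.filterwarnings("ignore", category=FutureWarning)

Since you'd rather not edit the benchmark script, the PYTHONWARNINGS environment variable installs the same filter without touching the code, e.g. export PYTHONWARNINGS="ignore::FutureWarning".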

Best,

Brandon

--
Brandon Butler
MolSSI Fellow
PhD Candidate, Chemical Engineering and Scientific Computing | Glotzer Lab, University of Michigan
Email: butl...@umich.edu

Joshua Anderson

May 19, 2023, 11:34:40 AM
to hoomd...@googlegroups.com
You can also pass `-W ignore` to python on the command line. See https://stackoverflow.com/questions/14463277/how-to-disable-python-warnings for many discussions on this topic.
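
For example, with the same benchmark invocation as before:

mpirun -np 1792 python3 -W ignore::FutureWarning -m hoomd_benchmarks.md_pair_wca --device CPU -N 2000000 -v

Plain -W ignore silences all warnings; ignore::FutureWarning restricts the filter to that category.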

------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Gerardo Cisneros

May 19, 2023, 11:56:31 AM
to hoomd-users
Brandon and Josh,

Thank you very much!  (I'm more of a Fortran and C person, so anything that helps me improve my knowledge of Python is welcome.)

Saludos,

Gerardo