Issues running HOOMD-Blue on GPU with WSL2


Sam L

Oct 6, 2022, 10:41:57 AM
to hoomd-users
Hi all,

I am a new HOOMD user coming over from LAMMPS, and I want to see whether I can improve performance with GPU-based MD simulations. The computer I want to build HOOMD on has an NVIDIA GeForce GTX 1650 Ti with Max-Q Design GPU and runs Windows 11 with Windows Subsystem for Linux 2 (WSL2). Following NVIDIA's detailed instructions for setting up CUDA on WSL2 and then the build-and-install instructions in the HOOMD documentation, I built HOOMD with cmake, configured using the following command:

cmake -B build/hoomd -S hoomd-blue -GNinja -DCMAKE_CXX_FLAGS=-march=native -DCMAKE_C_FLAGS=-march=native -DENABLE_GPU=ON -DENABLE_MPI=ON -DSINGLE_PRECISION=ON -DHOOMD_GPU_PLATFORM=CUDA

Everything built without error. There were some warnings related to typing with single precision enabled, but the build completed and I was able to install the package. After successfully running a test simulation of a Kob-Andersen glass on the CPU, I switched the device to the GPU and am now hitting an error when the simulation reaches the run step.
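The only change from the working CPU run is the device selection, which follows the usual HOOMD 3.x pattern, roughly like this (paraphrased, not my exact script):

import hoomd

# device = hoomd.device.CPU()   # the run with the CPU device works fine
device = hoomd.device.GPU()     # switching to the GPU device triggers the error below
sim = hoomd.Simulation(device=device, seed=1)
# ... set up the Kob-Andersen system, pair potential, and integrator, then:
sim.run(10_000)

The error I get at the run step is: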

**ERROR**: invalid device ordinal before /hoomd/Autotuner.h:496
Traceback (most recent call last):
  File "/home/slayding/test-lj.py", line 186, in <module>
    sim.run(10_000)
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/simulation.py", line 455, in run
    self.operations._schedule()
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/operations.py", line 186, in _schedule
    self.integrator._attach()
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/md/integrate.py", line 310, in _attach
    super()._attach()
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/md/integrate.py", line 48, in _attach
    self._forces._sync(self._simulation, self._cpp_obj.forces)
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/data/syncedlist.py", line 244, in _sync
    raise err
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/data/syncedlist.py", line 240, in _sync
    self._attach_value(item, False)
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/data/syncedlist.py", line 202, in _attach_value
    value._attach()
  File "/home/slayding/miniconda3/envs/hoomd-gpu/lib/python3.9/site-packages/hoomd/md/pair/pair.py", line 170, in _attach
    self._cpp_obj = cls(self._simulation.state._cpp_sys_def,
RuntimeError: HIP Error

I really don't know what to make of it. I have run other CUDA-based scientific computing packages on this same machine without issue, so I know the GPU works. Unfortunately, I'm not much of a computer expert, so I've been having trouble diagnosing the error.

Has anyone else run into trouble setting up HOOMD on a WSL2 system? I'm hoping that there's some workaround for this so that I can start doing some benchmarking for the systems I typically study.
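In case it helps with diagnosis, is there a recommended way to get more detail out of the device object? What I had in mind is something like the following (the attribute names and the notice_level argument are taken from the 3.x docs as best I understand them, so treat them as guesses on my part):

import hoomd

# Check that this build actually has GPU support compiled in.
print("HOOMD version:", hoomd.version.version)
print("GPU-enabled build:", hoomd.version.gpu_enabled)

# A higher notice level should make the device print more detail about what it
# finds (or fails to find) when it is created.
device = hoomd.device.GPU(notice_level=4)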

Thank you so much,

Sam

Joshua Anderson

Oct 6, 2022, 2:52:55 PM
to hoomd...@googlegroups.com
Sam,

It is my understanding that the Windows NVIDIA drivers do not support managed memory. HOOMD requires managed memory to provide a coherent view of parameter state across the CPU and GPU, from both Python and C++. I'm surprised that you didn't get this error instead: "The device [name] does not support managed memory."
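If you want to confirm this on your machine, you can ask the CUDA runtime whether the device reports managed memory support. A rough sketch using ctypes (the library name and the attribute value 83 for cudaDevAttrManagedMemory are assumptions about your CUDA install; check driver_types.h for your toolkit version):

import ctypes

# Load the CUDA runtime; you may need the full path, e.g. under /usr/local/cuda/lib64.
libcudart = ctypes.CDLL("libcudart.so")

CUDA_DEV_ATTR_MANAGED_MEMORY = 83  # assumed value of cudaDevAttrManagedMemory

supported = ctypes.c_int(0)
# Query the attribute on device 0; the call returns a cudaError_t (0 on success).
status = libcudart.cudaDeviceGetAttribute(
    ctypes.byref(supported), CUDA_DEV_ATTR_MANAGED_MEMORY, 0)

if status != 0:
    print("cudaDeviceGetAttribute failed with error code", status)
else:
    print("Managed memory supported:", bool(supported.value))

If that reports False, it would line up with the HIP Error you are seeing.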

------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

