Running a custom anisotropic potential - best choices?

Rachael Skye

May 5, 2023, 10:01:33 AM
to hoomd-users
Hi all,
I'd like to run some simulations using a custom anisotropic potential. Essentially, I'd like to use the md.pair.aniso.ALJ setup, but ideally with an inverse power law potential instead of the WCA option that ALJ uses. I'm adding a repulsive potential to a polygon.
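
For reference, here is roughly the swap I mean, written as plain NumPy rather than HOOMD code (the exponent n and the cutoff handling below are placeholder choices on my part, not anything taken from ALJ):

import numpy as np

def wca(r, epsilon, sigma):
    """Purely repulsive WCA form (LJ shifted up by epsilon, cut at
    2^(1/6) sigma), roughly the contact shape that ALJ uses."""
    rcut = 2.0 ** (1.0 / 6.0) * sigma
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    return np.where(r < rcut, 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon, 0.0)

def inverse_power(r, epsilon, sigma, n=12, rcut=None):
    """Inverse power law repulsion U(r) = epsilon * (sigma / r)**n,
    optionally shifted to zero at rcut (n and rcut are placeholders)."""
    r = np.asarray(r, dtype=float)
    u = epsilon * (sigma / r) ** n
    if rcut is not None:
        u = np.where(r < rcut, u - epsilon * (sigma / rcut) ** n, 0.0)
    return u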

I've found a few ways I could do this, but I'd like to ask for guidance on the best methods.

1.) I could implement the anisotropic potential as a user-defined HPMC potential, but I'm concerned that the calculation would be highly unoptimized, and that I'd have to write a lot of my own math to define the side-side interactions that pair.aniso already handles.

2.) The other option I've found is to pull the HOOMD source code and modify the ALJ function manually, simply changing the calculation from LJ/WCA to an inverse power law. The hitch with this one is that, in order to run efficiently, I would need to re-make a Singularity image that is compatible with my cluster's GPUs. (I've already discussed this with our cluster maintainers; we can't get HOOMD 3 to install with GPU compatibility on our cluster, and Singularity is the easiest way for us.)


Do you have any advice or recommendations for me? What is the best way to modify a function like this?

Thank you!
Rachael

Joshua Anderson

May 8, 2023, 7:39:12 AM
to hoomd...@googlegroups.com
Rachael,

The choice of MD vs. MC should be driven by research needs, not implementation specifics. MC handles very sharp corners much better, where MD would require prohibitively small step sizes dt. MD provides momentum-conserving dynamics, while MC provides correct equilibrium statistics and approximate Brownian dynamics.

1) You may find that 2D-specific polygon-polygon math runs faster, and you won't know until you try. Assuming the number of edges in your polygons is small, a direct algorithm may be faster than the complex GJK implementation in ALJ (a brute-force sketch of that 2D math follows after point 2 below). Also, depending on the rounding radius of your corners, MC may require evaluating the potential fewer times than MD to explore the same phase space volume.

2) Yes, we have had internal discussions on renaming ALJ and providing other contact potentials.
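
To illustrate point 1, the core of a direct 2D calculation is an edge-edge minimum distance, which you can then feed into whatever contact potential you like. The brute-force NumPy sketch below is only illustrative (the function names are made up, it ignores collinear edge cases, and it is not HOOMD code):

import numpy as np

def point_segment_dist(p, a, b):
    """Minimum distance from point p to segment ab; inputs are 2D arrays."""
    ab = b - a
    denom = np.dot(ab, ab)
    t = 0.0 if denom == 0.0 else np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def segments_intersect(a, b, c, d):
    """True if segments ab and cd properly cross (ignores collinear cases)."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    return ((cross(c, d, a) > 0) != (cross(c, d, b) > 0)) and \
           ((cross(a, b, c) > 0) != (cross(a, b, d) > 0))

def segment_segment_dist(a, b, c, d):
    """Minimum distance between 2D segments ab and cd."""
    if segments_intersect(a, b, c, d):
        return 0.0
    return min(point_segment_dist(a, c, d), point_segment_dist(b, c, d),
               point_segment_dist(c, a, b), point_segment_dist(d, a, b))

def polygon_min_dist(verts_i, verts_j):
    """Brute-force minimum edge-edge distance between two polygons given as
    (N, 2) vertex arrays in order; O(N*M), which is fine for small N and M.
    Assumes neither polygon is entirely contained inside the other."""
    best = np.inf
    n, m = len(verts_i), len(verts_j)
    for i in range(n):
        a, b = verts_i[i], verts_i[(i + 1) % n]
        for j in range(m):
            c, d = verts_j[j], verts_j[(j + 1) % m]
            best = min(best, segment_segment_dist(a, b, c, d))
    return best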

What specifically about your cluster prevents building HOOMD-blue from source? If the nodes have the drivers necessary to run the Singularity image, then I can't think of any possible missing runtime components that you could not install in your home directory.
------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Rachael Skye

May 10, 2023, 10:23:23 AM
to hoomd-users
Hi Josh,
Thanks for the advice. Based on the calculations I need, MD may be the better solution. I'll pursue that option; a 2D CPU implementation may not be too slow.

I've been in contact with the IT group that runs our cluster, trying to understand why we can't install HOOMD 3; they can't manage it either. I can install HOOMD < 3.0 with GPU compatibility, or HOOMD >= 3.0 without GPU compatibility, but I can't install >= 3.0 with GPU compatibility. We are planning to upgrade our OS this summer, so potentially that will help.

The message that I get when trying to install is the following:

conda install -c conda-forge "hoomd=3.0=*gpu*"
...

PackagesNotFoundError: The following packages are not available from current channels:

  - hoomd==3.0[build=*gpu*] -> __cuda[version='>=11.2']
  - hoomd==3.0[build=*gpu*] -> __glibc[version='>=2.17']

Joshua Anderson

May 10, 2023, 10:48:30 AM
to hoomd...@googlegroups.com
Rachael,

When you are developing changes to HOOMD you will be building from source, so the conda build requirements for the binaries will no longer be a challenge :)
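
As a rough outline, a from-source build with GPU support looks something like the following (the clone location, install prefix, and job count are placeholders; follow the HOOMD-blue installation documentation for the exact options for your compiler and CUDA versions):

git clone --recursive https://github.com/glotzerlab/hoomd-blue
# Configure with GPU support and install into the active Python environment.
cmake -B build/hoomd -S hoomd-blue -DENABLE_GPU=on \
    -DCMAKE_INSTALL_PREFIX=$(python3 -c "import site; print(site.getsitepackages()[0])")
cmake --build build/hoomd -j 8
cmake --install build/hoomd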

As for the conda package resolver issues: 0) Ensure you are using miniforge, mambaforge, or miniconda, not Anaconda. 1) Try mamba. 2) Try installing the appropriate specific build using the full identifier (not * wildcards); find a list at https://anaconda.org/conda-forge/hoomd/files.

That being said, the conda builds do require CUDA 11.2 or greater and a glibc version set by the conda-forge team (this requirement is the same for all packages compiled on conda-forge). If your system has an older version of one or more of these libraries, then your admins will need to upgrade before you can use the conda-forge packages. CUDA upgrades require both a runtime and a driver component (which requires a reboot) and can be done non-disruptively with rolling reservations. glibc upgrades require installing a newer OS.

------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan

Rachael Skye

May 12, 2023, 12:29:26 PM
to hoomd-users
Great, thank you for the advice, Josh. I'll bring it to our cluster admins and we'll try again.

Best,
Rachael