Screw OpenMM, use LAMMPS instead?

Vasilii Artyukhov

Dec 2, 2010, 1:55:02 PM
to nanoengi...@googlegroups.com
I was reading about some of the capabilities of the famous MD code from Sandia (it's one of the main workhorses of our group here at Rice), and many of the things I saw turned out to be surprisingly relevant for us.

In summary, LAMMPS is a very flexible, versatile, and serious code: it implements many different atomistic force fields as well as coarse-grained and even FEM descriptions (along with some nice constraint and rigid-body algorithms), and it can apparently interface with QM/MM codes. It supports parallel computation via MPI, and some force-field models are already implemented for the GPU; it can also run across multiple GPUs, apparently unlike OpenMM (though I haven't verified this). It is written in C++ and is very extensible. I suggest we seriously consider adopting this code for NE1, or at least stealing the GPU ReaxFF routines from it.

Check it out:
http://lammps.sandia.gov/features.html

--
  • build as library, invoke LAMMPS thru library interface or provided Python wrapper
  • couple with other codes: LAMMPS calls other code, other code calls LAMMPS, umbrella code calls both
...
  • handful of GPU-enabled pair styles [it can also run in parallel over multiple GPUs, I think - VA]
...
...
--
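To give a feel for how little changes on the user side, here is a minimal sketch of a GPU-accelerated run: the classic Lennard-Jones melt input, with the pair style swapped for its GPU variant. The exact GPU pair-style arguments are an assumption on my part (I'm going from the docs, not from a machine I've run this on), so treat it as illustrative rather than a tested input deck.

```
# Illustrative sketch: LJ melt with a GPU-enabled pair style (syntax assumed from the LAMMPS docs)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287
pair_style      lj/cut/gpu 2.5        # GPU variant of plain lj/cut
pair_coeff      1 1 1.0 1.0 2.5
fix             1 all nve
run             100
```

Per the library-interface bullet above, the same input could presumably also be driven from Python through the shipped wrapper (construct a lammps object, then feed it the script file), which is exactly the kind of embedding we'd want for NE1 - though I haven't tried that path myself.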


Tom

Dec 2, 2010, 3:40:51 PM
to nanoengi...@googlegroups.com
LAMMPS sounds perfect.

Best,

Tom