Amber 18 with PLUMED 2


Qinghua Liao

Mar 23, 2020, 6:32:32 PM
to PLUMED users
Hello PLUMEDers,

I successfully compiled Amber 18 together with PLUMED 2 (github) on one cluster,
and I performed metadynamics simulations using pmemd.cuda (GPU, CUDA-10.1).

I then tried to install them on another cluster using the same steps (CUDA-9.0).
The installation was successful, but I could not run MD simulations with PLUMED:
the simulation with pmemd.cuda failed within seconds of reading the PLUMED input file.
The problem is:

PLUMED:   on file colvar.dat
PLUMED:   with format  %f
PLUMED: END FILE: plumed.dat
pmemd.cuda: Plumed.h:2605: plumed_gcreate: Assertion `plumed_gmain.p==((void *)0)' failed.

The PLUMED input is very simple, just writing out a distance:
dist: DISTANCE ATOMS=1,3817
PRINT STRIDE=500 ARG=dist FILE=colvar.dat

The simulation was OK if I turned PLUMED off (plumed=0).

Does anyone have the same problem? Thanks a lot!

All the best,
Qinghua

viktor drobot

Mar 23, 2020, 6:40:23 PM
to PLUMED users
Hello! Could you please provide us with the full specs of these clusters? I mean compiler versions, the detailed configuration and build protocol for Amber and PLUMED, kernel versions, hardware specs...

Qinghua Liao

Mar 23, 2020, 7:02:00 PM
to PLUMED users
Hello Viktor,

Thanks a lot for your prompt response!

The cluster that works runs Ubuntu 16.04 (kernel 4.15.0-91-generic, built with gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12)).
Toolchain: GCC/8.3.0 (with gfortran), CUDA/10.1.243, OpenMPI/3.1.4. GPUs: K80, V100.

The cluster that does not work runs openSUSE Leap 42.2 (kernel 4.4.27-2-default, built with gcc 4.8.5 (SUSE Linux)).
Toolchain: GCC/4.8.5 (SUSE Linux), OpenMPI/1.10.3, CUDA/9.0. GPU: GeForce 1080 Ti.

The installation protocol is the same on both clusters:

cd plumed2
./configure --prefix=/path/
source sourceme.sh

cd amber18
export AMBERHOME=/path/amber18
plumed-patch -p (choose amber 18)
./configure -cuda gnu
make install -j 8
source amber.sh

I tried sander on the non-working cluster, and it worked.
I guess there might be something wrong with the pmemd.cuda compilation.
Thanks for looking into it.

All the best,
Qinghua

viktor drobot

Mar 23, 2020, 7:22:14 PM
to PLUMED users
If I remember correctly, there was a thread here some months ago dealing with a similar problem, and if I remember right the cause was an old GCC compiler from the 4.x branch. There were a lot of improvements regarding C++11 support in versions 5.x and higher, so it may be worth trying a newer compiler, if possible. Do you have access to other compilers on the problematic cluster?

viktor drobot

Mar 23, 2020, 7:30:52 PM
to PLUMED users
Yes, I've found it. The OP didn't give his compiler info, but the error is the same. I assume it's all about C++ feature support in older GCCs. Maybe the PLUMED developers could say whether there are any special requirements for the C++11 standard...

Qinghua Liao

Mar 23, 2020, 7:30:57 PM
to PLUMED users


cd plumed2
./configure --prefix=/path/
source sourceme.sh

 
Here I forgot to mention "make -j 8" and "make install -j 8" after the configure step.
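Putting the corrected sequence together, the full protocol would look roughly like this (a sketch; /path/ is a placeholder for your install prefix, and the plumed-patch step is interactive, as noted above):

```shell
# Build and install PLUMED first, with the make steps included
cd plumed2
./configure --prefix=/path/       # /path/ is a placeholder
make -j 8
make install -j 8
source sourceme.sh

# Then patch and build Amber 18 against the installed PLUMED
cd ../amber18
export AMBERHOME=/path/amber18    # placeholder
plumed-patch -p                   # choose "amber18" when prompted
./configure -cuda gnu
make install -j 8
source amber.sh
```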

Qinghua Liao

Mar 23, 2020, 7:34:29 PM
to PLUMED users
Thanks a lot, Viktor!

It's our own cluster, we will try to install new compilers, and then try it again!


All the best,
Qinghua

Qinghua Liao

Mar 24, 2020, 3:54:19 PM
to PLUMED users
Hello Viktor,

I tried GCC-6.5.0 and GCC-8.4.0, using the new compilers to build both PLUMED and OpenMPI.
The problem is still the same! Any clue? Thanks!


All the best,
Qinghua


viktor drobot

Mar 24, 2020, 6:31:16 PM
to PLUMED users
The only thing that remains is the Linux kernel version. Is there any possibility of updating it?

Qinghua Liao

Mar 26, 2020, 7:33:08 PM
to PLUMED users
Thanks Viktor!

I am not going to do that, as I worry that I might mess up the cluster.
I will just run the simulations on the other cluster.

All the best,
Qinghua