Plumed installation help


srinivas penumutchu

Aug 1, 2017, 7:02:55 PM
to PLUMED users

Hello Plumed users !

I installed PLUMED without MPI and patched it into GROMACS (also without MPI) as shown below, using the Intel compilers and GCC 4.9.3, but I end up with an internal PLUMED error. Could you please let me know whether any external dependencies or libraries need to be installed for PLUMED to install properly, i.e. without errors? More specifically, do I need to pre-install libibverbs-dev / libibverbs-devel?

Installation of PLUMED and GROMACS:

module load base gcc libmatheval gsl xdrfile boost fftw/3.3.6-pl2 lapack/3.7.0

./configure --disable-mpi CXX=icpc CC=icc --prefix=/home/srp106/software/plumed2
make -j 16
make install
cd gromacs-5.1.4
plumed patch -p --runtime -e
CXX=icpc CC=icc LDFLAGS=-lmpi_cxx cmake -DCMAKE_BUILD_TYPE=RELEASE -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_THREAD_MPI=OFF -DGMX_MPI=OFF -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/home/srp106/software/gromacs-5.1.4/gmxbuild -DFFTWF_INCLUDE_DIR=/usr/local/fftw/3.3.6-pl2/include -DBoost_INCLUDE_DIR=/usr/local/boost/1_58_0/include -DBoost_DIR=/usr/local/boost/1_58_0 -DZLIB_INCLUDE_DIR=/usr/local/base/8.0/include -DZLIB_LIBRARY_RELEASE=/usr/local/base/8.0/lib/libz.so -DFFTWF_LIBRARY=/usr/local/fftw/3.3.6-pl2/lib/libfftw3f.so
make -j 16
make install


I also cross-checked the PLUMED installation; the output of the installation check is below. It looks like the GROMACS installation is fine and the error is coming from the PLUMED installation.

+++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Final report:
+ 200 tests performed, 84 tests not appliable
+ 84 errors found
+ Find the bug!
+ To replace references, go to the test directory and
+ type 'make reset'
+++++++++++++++++++++++++++++++++++++++++++++++++++++
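
For reference, the same check can be re-run from the PLUMED source tree roughly as follows (a sketch; the source path is a placeholder, and it assumes the freshly installed plumed executable is on the PATH):

cd /path/to/plumed2-source/regtest   # placeholder path to the PLUMED source tree
make                                 # runs the regression tests and prints a final report like the one above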


After compiling GROMACS, I get internal PLUMED errors:

Unknown exception:
(exception type: N4PLMD9ExceptionE)

+++ Internal PLUMED error
+++ file IFile.cpp, line 211
+++ message: assertion failed tmp=='\n', plumed only accepts \n (unix) or \r\n
(dos) new lines






Test run output
[srp106@gpu013t setup]$ gmx mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat
                   :-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar   
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian Fritsch 
  Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent Hindriksen
 Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten Kutzner  
    Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff 
   Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk   
   Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons Sijbers  
   Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf   
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.1.4
Executable:   /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx
Data prefix:  /home/srp106/software/gromacs-5.1.4/gmxbuild
Command line:
  gmx mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat


Back Off! I just backed up md.log to ./#md.log.2#
+++ Loading the PLUMED kernel runtime +++
+++ PLUMED_KERNEL="/home/srp106/software/plumed2/lib/libplumedKernel.so" +++
+++ PLUMED kernel successfully loaded +++

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs
Hardware detected:
  CPU info:
    Vendor: GenuineIntel
    Brand:  Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz
    SIMD instructions most likely to fit this hardware: SSE4.1
    SIMD instructions selected at GROMACS compile time: SSE4.1
  GPU info:
    Number of GPUs detected: 2
    #0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible
    #1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible

Reading file topolA.tpr, VERSION 4.6.7 (single precision)
Note: file tpx version 83, software tpx version 103

NOTE: GPU(s) found, but the current simulation can not use GPUs
      To use a GPU, set the mdp option: cutoff-scheme = Verlet


Overriding nsteps with value passed on the command line: 10000 steps, 20 ps


2 compatible GPUs detected in the system, but none will be used.
Consider trying GPU acceleration with the Verlet scheme!


NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.


Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.2#

Back Off! I just backed up ener.edr to ./#ener.edr.2#
starting mdrun 'alanine dipeptide in vacuum'
10000 steps,     20.0 ps.

-------------------------------------------------------
Program:     gmx mdrun, VERSION 5.1.4

Unknown exception:
(exception type: N4PLMD9ExceptionE)

+++ Internal PLUMED error
+++ file IFile.cpp, line 211
+++ message: assertion failed tmp=='\n', plumed only accepts \n (unix) or \r\n
(dos) new lines


For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

João Henriques

Aug 2, 2017, 3:14:52 AM
to plumed...@googlegroups.com
Hello,

Looks like PLUMED is installed properly. The error seems to be with your plumed.dat file format. I don't know what OS you used to create the file but it seems to have inserted a weird character for the end-of-line.

Here is the relevant snippet from ./src/tools/IFile.cpp:

  if(tmp=='\r') {
    llread(&tmp,1);
    plumed_massert(tmp=='\n',"plumed only accepts \\n (unix) or \\r\\n (dos) new lines");
  }

Apparently in old Mac systems (pre-OS X), '\r' was the code for end-of-line instead. I'd go to your plumed.dat file and check it thoroughly. There's gotta be something funky in there.
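
For instance, the line endings can be inspected and converted straight from the command line (a sketch; plumed_unix.dat is just a placeholder name for the cleaned copy):

# show non-printing characters: DOS endings appear as ^M$ at the end of each
# line, old-Mac endings as ^M with no newline at all
cat -A plumed.dat | head

# DOS (\r\n) -> Unix (\n): drop the carriage returns
tr -d '\r' < plumed.dat > plumed_unix.dat

# old Mac (\r only) -> Unix (\n): turn carriage returns into newlines
tr '\r' '\n' < plumed.dat > plumed_unix.dat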

This is just my interpretation, hopefully someone that actually knows the code and has experience with PLUMED will be able to confirm or disprove me.

Best regards,
João



srinivas penumutchu

Aug 2, 2017, 5:51:59 PM
to PLUMED users

Dear João !

Thanks for your reply.

I changed the plumed.dat file manually and it works now, but I am getting a new error. Does anyone know how to fix this bug?



Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.11#

Back Off! I just backed up ener.edr to ./#ener.edr.11#

starting mdrun 'alanine dipeptide in vacuum'
1000 steps,      2.0 ps.
[gpu013t:02677] *** Process received signal ***
[gpu013t:02677] Signal: Segmentation fault (11)
[gpu013t:02677] Signal code: Address not mapped (1)
[gpu013t:02677] Failing at address: (nil)
[gpu013t:02677] [ 0] /lib64/libpthread.so.0[0x372d40f710]
[gpu013t:02677] *** End of error message ***
Segmentation fault (core dumped)




[srp106@gpu013t setup]$ gmx_mpi mdrun -s topolA.tpr -plumed plumed.dat -nsteps 1000

                   :-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar  
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent Hindriksen
 Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten Kutzner 
    Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk  
   Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons Sijbers 
   Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf  
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.1.4
Executable:   /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx_mpi

Data prefix:  /home/srp106/software/gromacs-5.1.4/gmxbuild
Command line:
  gmx_mpi mdrun -s topolA.tpr -plumed plumed.dat -nsteps 1000


Back Off! I just backed up md.log to ./#md.log.12#
+++ Loading the PLUMED kernel runtime +++

+++ PLUMED_KERNEL="/home/srp106/software/plumed2/lib/libplumedKernel.so" +++
+++ PLUMED kernel successfully loaded +++

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs
Hardware detected on host gpu013t (the node of MPI rank 0):

  CPU info:
    Vendor: GenuineIntel
    Brand:  Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz
    SIMD instructions most likely to fit this hardware: SSE4.1
    SIMD instructions selected at GROMACS compile time: SSE4.1
  GPU info:
    Number of GPUs detected: 2
    #0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible
    #1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible

Reading file topolA.tpr, VERSION 4.6.7 (single precision)
Note: file tpx version 83, software tpx version 103

NOTE: GPU(s) found, but the current simulation can not use GPUs
      To use a GPU, set the mdp option: cutoff-scheme = Verlet


Overriding nsteps with value passed on the command line: 1000 steps, 2 ps

Using 1 MPI process


2 compatible GPUs detected in the system, but none will be used.
Consider trying GPU acceleration with the Verlet scheme!


NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.


Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.11#

Back Off! I just backed up ener.edr to ./#ener.edr.11#

starting mdrun 'alanine dipeptide in vacuum'
1000 steps,      2.0 ps.
[gpu013t:02677] *** Process received signal ***
[gpu013t:02677] Signal: Segmentation fault (11)
[gpu013t:02677] Signal code: Address not mapped (1)
[gpu013t:02677] Failing at address: (nil)
[gpu013t:02677] [ 0] /lib64/libpthread.so.0[0x372d40f710]
[gpu013t:02677] *** End of error message ***
Segmentation fault (core dumped)
[srp106@gpu013t setup]$

João Henriques

Aug 3, 2017, 3:21:00 AM
to plumed...@googlegroups.com
Hi,

This one looks more serious and it's not PLUMED related; it is a problem with Gromacs itself. I'd suggest looking through the gmx-users (or even the gmx-developers) mailing list archives for something similar.
To me, it looks like something is incoherent or wrong about the libs/options you passed to cmake when compiling Gromacs. Without more information about the compilation it's impossible to say much more than this. Take a good look at how you compiled it and see if everything checks out. You could also try a different simulation as a test, or submit to a different node. Try running the sim on the front end of the cluster; the front-end and back-end nodes are often different, and a binary compiled on the front end may break when run on the back-end nodes. You could also retry without PLUMED, just to make extra sure it isn't PLUMED that is breaking something. I suggest making a list of tests to rule out causes one by one, because this error doesn't look trivial (to me).
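
For instance (a sketch reusing the same inputs as above), simply dropping the -plumed flag shows whether plain Gromacs runs cleanly on that node:

gmx_mpi mdrun -s topolA.tpr -nsteps 1000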

Hopefully someone will be able to help you more.

Best regards,
João


João Henriques

Aug 3, 2017, 3:41:21 AM
to plumed...@googlegroups.com
Ok, I just noticed something. Your command is 'gmx_mpi mdrun -s topolA.tpr -plumed plumed.dat -nsteps 1000'. I suppose your gmx_mpi executable was built for OpenMPI, so that may be the problem, i.e. you're not launching it through mpirun. The /lib64/libpthread.so.0 mentioned in the error is clearly related to parallel stuff, so I'd start by investigating MPI-related things. As a general rule, if you don't plan on running the simulation in parallel, use a gmx build without MPI support; that would eliminate all these MPI-related issues.
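
For example (just a sketch, assuming an OpenMPI build; adjust the number of ranks to your allocation), an MPI-enabled binary is normally launched through mpirun:

mpirun -np 1 gmx_mpi mdrun -s topolA.tpr -plumed plumed.dat -nsteps 1000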

Plus, there's other questionable stuff in there. Your tpr file and the Gromacs version you're using don't match, and that's usually asking for trouble.

To end, this is probably unrelated to the issue at hand, but why not use the GPUs? You have two on that node and Gromacs can't use them because you are using the Group cut-off scheme instead of Verlet.
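
For what it's worth, that only means setting cutoff-scheme = Verlet in the mdp file and regenerating the tpr with grompp (a sketch; md.mdp, conf.gro and topol.top are placeholder file names):

gmx grompp -f md.mdp -c conf.gro -p topol.top -o topolA.tpr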

Best regards,
João    

srinivas penumutchu

Aug 3, 2017, 11:27:30 PM
to PLUMED users
Thanks for your reply ! 

The first example (plumed.dat) from this tutorial (https://plumed.github.io/doc-v2.3/user-doc/html/munster.html) works fine on the GPU node; it did not show any error message with the same tpr file. I then tried example 2 from the same tutorial: I prepared the plumed.dat file for example 2 and it is the one causing problems. It looks like a compilation issue, but I have checked the compilation very carefully and I am not quite sure where I messed up; the installation commands and md log file are attached below. Does anyone have a clue about what is going on here?

First example plumed.dat file:
phi: TORSION ATOMS=5,7,9,15
psi: TORSION ATOMS=7,9,15,17
METAD ARG=phi,psi HEIGHT=1.0 BIASFACTOR=10 SIGMA=0.35,0.35 PACE=100 GRID_MIN=-pi,-pi GRID_MAX=pi,pi
Second example plumed.dat file:
# set up two variables for Phi and Psi dihedral angles 
phi: TORSION ATOMS=5,7,9,15
psi: TORSION ATOMS=7,9,15,17
#
# Activate well-tempered metadynamics in phi depositing 
# a Gaussian every 500 time steps, with initial height equal 
# to 1.2 kJoule/mol, biasfactor equal to 10.0, and width to 0.35 rad

METAD ...
LABEL=metad
ARG=phi
PACE=500
HEIGHT=1.2
SIGMA=0.35
FILE=HILLS
BIASFACTOR=10.0
TEMP=300.0
GRID_MIN=-pi
GRID_MAX=pi
GRID_SPACING=0.1
... METAD

# monitor the two variables and the metadynamics bias potential
PRINT STRIDE=10 ARG=phi,psi,metad.bias FILE=COLVAR
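
As a cross-check (a sketch; traj_comp.xtc is simply the trajectory written by one of the earlier runs), a plumed.dat file can also be exercised outside GROMACS with the plumed driver utility:

plumed driver --plumed plumed.dat --ixtc traj_comp.xtc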



md log output file from running example 2 on the GPU node:

Log file opened on Thu Aug 3 16:30:51 2017
Host: gpu013t pid: 29484 rank ID: 0 number of ranks: 1


:-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

GROMACS is written by:
Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar 
Aldert van Buuren Rudi van Drunen Anton Feenstra Sebastian Fritsch 
Gerrit Groenhof Christoph Junghans Anca Hamuraru Vincent Hindriksen
Dimitrios Karkoulis Peter Kasson Jiri Kraus Carsten Kutzner 
Per Larsson Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff 
Erik Marklund Teemu Murtola Szilard Pall Sander Pronk 

Roland Schulz Alexey Shvetsov Michael Shirts Alfons Sijbers
Peter Tieleman Teemu Virolainen Christian Wennberg Maarten Wolf 
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, VERSION 5.1.4
Executable: /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/gmx_mpi
Data prefix: /home/srp106/software/gromacs-5.1.4/gmxbuild
Command line:

gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

GROMACS version: VERSION 5.1.4
Precision: single
Memory model: 64 bit
MPI library: MPI
OpenMP support: disabled
GPU support: enabled
OpenCL support: disabled
invsqrt routine: gmx_software_invsqrt(x)
SIMD instructions: SSE4.1
FFT library: fftw-3.3.6-pl2-fma-sse2-avx-avx2-avx2_128
RDTSCP usage: enabled
C++11 compilation: disabled
TNG support: enabled
Tracing support: disabled
Built on: Thu Jul 27 14:09:08 EDT 2017
Built by: srp106@hpc1 [CMAKE]
Build OS/arch: Linux 2.6.32-696.3.1.el6.x86_64 x86_64
Build CPU vendor: GenuineIntel
Build CPU brand: Intel(R) Xeon(R) CPU X5660 @ 2.80GHz
Build CPU family: 6 Model: 44 Stepping: 2
Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
C compiler: /usr/local/openmpi/1.8.8/bin/mpicc Intel 15.0.3.20150407
C compiler flags: -msse4.1 -std=gnu99 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias 
C++ compiler: /usr/local/openmpi/1.8.8/bin/mpic++ Intel 15.0.3.20150407
C++ compiler flags: -msse4.1 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd1782 -wd2282 -wd3180 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias 
Boost version: 1.58.0 (external)
CUDA compiler: /usr/local/cuda-7.5/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17
CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-Xcompiler;-gcc-version=450; ;-msse4.1;-w3;-wd177;-wd271;-wd304;-wd383;-wd424;-wd444;-wd522;-wd593;-wd869;-wd981;-wd1418;-wd1419;-wd1572;-wd1599;-wd2259;-wd2415;-wd2547;-wd2557;-wd3280;-wd3346;-wd11074;-wd11076;-wd1782;-wd2282;-wd3180;-O3;-DNDEBUG;-ip;-funroll-all-loops;-alias-const;-ansi-alias;
CUDA driver: 7.50
CUDA runtime: 7.50


Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs
Hardware detected on host gpu013t (the node of MPI rank 0):
CPU info:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz

Family: 6 model: 44 stepping: 2
CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3


SIMD instructions most likely to fit this hardware: SSE4.1
SIMD instructions selected at GROMACS compile time: SSE4.1
GPU info:
Number of GPUs detected: 2
#0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible
#1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC: yes, stat: compatible


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------


NOTE: GPU(s) found, but the current simulation can not use GPUs
To use a GPU, set the mdp option: cutoff-scheme = Verlet

Input Parameters:
integrator = md
tinit = 0
dt = 0.002
nsteps = 1000
init-step = 0
simulation-part = 1
comm-mode = Angular
nstcomm = 100
bd-fric = 0
ld-seed = 1993
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 100
nstcalcenergy = 100
nstenergy = 100
nstxout-compressed = 100
compressed-x-precision = 1000
cutoff-scheme = Group
nstlist = 10
ns-type = Grid
pbc = no
periodic-molecules = FALSE
verlet-buffer-tolerance = 0.005
rlist = 1.2
rlistlong = 1.2
nstcalclr = 0
coulombtype = Cut-off
coulomb-modifier = None
rcoulomb-switch = 0
rcoulomb = 1.2
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = None
rvdw-switch = 0
rvdw = 1.2
DispCorr = No
table-extension = 1
fourierspacing = 0.12
fourier-nx = 0
fourier-ny = 0
fourier-nz = 0
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 1e-05
lj-pme-comb-rule = Geometric
ewald-geometry = 0
epsilon-surface = 0
implicit-solvent = No
gb-algorithm = Still
nstgbradii = 1
rgbradii = 1
gb-epsilon-solvent = 80
gb-saltconc = 0
gb-obc-alpha = 1
gb-obc-beta = 0.8
gb-obc-gamma = 4.85
gb-dielectric-offset = 0.009
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05016
tcoupl = V-rescale
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = FALSE
pcoupl = No
pcoupltype = Isotropic
nstpcouple = -1
tau-p = 1
compressibility (3x3):
compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p (3x3):
ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
MMChargeScaleFactor = 1
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = FALSE
Shake-SOR = FALSE
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = FALSE
rotation = FALSE
interactiveMD = FALSE
disre = No
disre-weighting = Conservative
disre-mixed = FALSE
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = FALSE
E-x:
n = 0
E-xt:
n = 0
E-y:
n = 0
E-yt:
n = 0
E-z:
n = 0
E-zt:
n = 0
swapcoords = no
adress = FALSE
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 39
ref-t: 300
tau-t: 0.1
annealing: No
annealing-npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0


Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process


2 compatible GPUs detected in the system, but none will be used.
Consider trying GPU acceleration with the Verlet scheme!


NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.

Table routines are used for coulomb: FALSE
Table routines are used for vdw: FALSE
Cut-off's: NS: 1.2 Coulomb: 1.2 LJ: 1.2
System total charge: -0.000
Generated table with 1100 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1100 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1100 data points for 1-4 LJ12.
Tabscale = 500 points/nm
Potential shift: LJ r^-12: 0.000e+00 r^-6: 0.000e+00, Coulomb -0e+00

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- --- Thank You --- -------- --------

The number of constraints is 21
Center of mass motion removal mode is Angular
We have the following groups for center of mass motion removal:
0: rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------


Gromacs-plumed installation

[srp106@hpc1 TOPO]$

./configure --prefix=/home/srp106/software/plumed2


make -j 16
make install

cd gromacs-5.1.4
plumed patch -p --runtime -e

module load base gcc libmatheval gsl xdrfile boost fftw/3.3.6-pl2 lapack/3.7.0

module load cuda/7.5

CXX=mpic++ CC=mpicc FC=mpifort LDFLAGS=-lmpi_cxx cmake -DCMAKE_BUILD_TYPE=RELEASE -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_THREAD_MPI=OFF -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/srp106/software/gromacs-5.1.4/gmxbuild -DFFTWF_INCLUDE_DIR=/usr/local/fftw/3.3.6-pl2/include -DBoost_INCLUDE_DIR=/usr/local/boost/1_58_0/include -DBoost_DIR=/usr/local/boost/1_58_0 -DZLIB_INCLUDE_DIR=/usr/local/base/8.0/include -DZLIB_LIBRARY_RELEASE=/usr/local/base/8.0/lib/libz.so -DFFTWF_LIBRARY=/usr/local/fftw/3.3.6-pl2/lib/libfftw3f.so

make -j 16
make install

Massimiliano Bonomi

Aug 4, 2017, 2:38:12 AM
to plumed...@googlegroups.com
Hello,

Can you point to where in the log file you see the error or "problems"?
I don’t see anything weird.

Max

srinivas penumutchu

Aug 4, 2017, 11:23:45 AM
to PLUMED users
Thanks  Max for your reply !

The md log file is not showing any errors, but the md.out file shows the following errors. We also ran on the GPU node and it ends up with the same error. The output below is from the HPC node.

[comp202t:184633] *** Process received signal ***
[comp202t:184633] Signal: Segmentation fault (11)
[comp202t:184633] Signal code: Address not mapped (1)
[comp202t:184633] Failing at address: (nil)
[comp202t:184633] [ 0] /lib64/libpthread.so.0[0x3d0100f7e0]
[comp202t:184633] *** End of error message ***

/tmp/slurm/job2790546/slurm_script: line 13: 184633 Segmentation fault      (core dumped) gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat


slurm-2790546.out file

The following have been reloaded with a version change:
  1) openmpi/1.8.8 => openmpi/1.8.5


                   :-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar  
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent Hindriksen
 Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten Kutzner 
    Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk  
   Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons Sijbers 
   Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf  
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.1.4
Executable:   /usr/local/gromacs/5.1.4-plumed2/bin/gmx_mpi
Data prefix:  /usr/local/gromacs/5.1.4-plumed2

Command line:
  gmx_mpi mdrun -s topolA.tpr -nsteps 10000


Number of logical cores detected (24) does not match the number reported by OpenMP (12).
Consider setting the launch configuration manually!

NOTE: Error occurred during GPU detection:
      CUDA driver version is insufficient for CUDA runtime version
      Can not use GPU acceleration, will fall back to CPU kernels.


Running on 1 node with total 24 cores, 24 logical cores, 0 compatible GPUs
Hardware detected on host comp202t (the node of MPI rank 0):
  CPU info:
    Vendor: GenuineIntel
    Brand:  Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
    SIMD instructions most likely to fit this hardware: AVX2_256

    SIMD instructions selected at GROMACS compile time: SSE4.1

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this machine, which is better


Reading file topolA.tpr, VERSION 4.6.7 (single precision)
Note: file tpx version 83, software tpx version 103

Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process


NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.


Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity

starting mdrun 'alanine dipeptide in vacuum'
10000 steps,     20.0 ps.

Writing final coordinates.

               Core t (s)   Wall t (s)        (%)
       Time:        0.167        0.184       90.5
                 (ns/day)    (hour/ns)
Performance:     9376.388        0.003

gcq#69: "Here's the Way It Might End" (G. Michael)


                   :-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar  
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent Hindriksen
 Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten Kutzner 
    Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk  
   Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons Sijbers 
   Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf  
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.1.4
Executable:   /usr/local/gromacs/5.1.4-plumed2/bin/gmx_mpi
Data prefix:  /usr/local/gromacs/5.1.4-plumed2

Command line:
  gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat


Back Off! I just backed up md.log to ./#md.log.1#

+++ Loading the PLUMED kernel runtime +++
+++ PLUMED_KERNEL="/usr/local/plumed2/2.3.2/lib/libplumedKernel.so" +++

+++ PLUMED kernel successfully loaded +++

Number of logical cores detected (24) does not match the number reported by OpenMP (12).
Consider setting the launch configuration manually!

NOTE: Error occurred during GPU detection:
      CUDA driver version is insufficient for CUDA runtime version
      Can not use GPU acceleration, will fall back to CPU kernels.


Running on 1 node with total 24 cores, 24 logical cores, 0 compatible GPUs
Hardware detected on host comp202t (the node of MPI rank 0):
  CPU info:
    Vendor: GenuineIntel
    Brand:  Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
    SIMD instructions most likely to fit this hardware: AVX2_256

    SIMD instructions selected at GROMACS compile time: SSE4.1

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this machine, which is better


Reading file topolA.tpr, VERSION 4.6.7 (single precision)
Note: file tpx version 83, software tpx version 103

Overriding nsteps with value passed on the command line: 10000 steps, 20 ps

Using 1 MPI process


NOTE: This file uses the deprecated 'group' cutoff_scheme. This will be
removed in a future release when 'verlet' supports all interaction forms.


Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity

Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.1#

Back Off! I just backed up ener.edr to ./#ener.edr.1#

starting mdrun 'alanine dipeptide in vacuum'
10000 steps,     20.0 ps.
[comp202t:184633] *** Process received signal ***
[comp202t:184633] Signal: Segmentation fault (11)
[comp202t:184633] Signal code: Address not mapped (1)
[comp202t:184633] Failing at address: (nil)
[comp202t:184633] [ 0] /lib64/libpthread.so.0[0x3d0100f7e0]
[comp202t:184633] *** End of error message ***

/tmp/slurm/job2790546/slurm_script: line 13: 184633 Segmentation fault      (core dumped) gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

md.log file

Log file opened on Fri Aug  4 10:39:42 2017
Host: comp130t  pid: 28761  rank ID: 0  number of ranks:  1

                   :-) GROMACS - gmx mdrun, VERSION 5.1.4 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar  
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian Fritsch
  Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent Hindriksen
 Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten Kutzner 
    Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter Meulenhoff
   Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk  
   Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons Sijbers 
   Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf  
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2015, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:      gmx mdrun, VERSION 5.1.4
Executable:   /usr/local/gromacs/5.1.4-plumed2/bin/gmx_mpi
Data prefix:  /usr/local/gromacs/5.1.4-plumed2

Command line:
  gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat

GROMACS version:    VERSION 5.1.4
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)

GPU support:        enabled
OpenCL support:     disabled
invsqrt routine:    gmx_software_invsqrt(x)
SIMD instructions:  SSE4.1
FFT library:        fftw-3.3.6-pl2-fma-sse2-avx-avx2-avx2_128
RDTSCP usage:       enabled
C++11 compilation:  disabled
TNG support:        enabled
Tracing support:    disabled
Built on:           Tue Jul 18 11:50:04 EDT 2017
Built by:           dxb507@gpu013t [CMAKE]
Build OS/arch:      Linux 2.6.32-504.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz

Build CPU family:   6   Model: 44   Stepping: 2
Build CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3

C compiler:         /usr/local/openmpi/1.8.8/bin/mpicc Intel 15.0.3.20150407
C compiler flags:    -msse4.1    -std=gnu99 -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076  -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias 
C++ compiler:       /usr/local/openmpi/1.8.8/bin/mpic++ Intel 15.0.3.20150407
C++ compiler flags:  -msse4.1    -w3 -wd177 -wd271 -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -wd1782 -wd2282  -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias 
Boost version:      1.58.0 (external)
CUDA compiler:      /usr/local/cuda-7.5/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17
CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-Xcompiler;-gcc-version=450; ;-msse4.1;-w3;-wd177;-wd271;-wd304;-wd383;-wd424;-wd444;-wd522;-wd593;-wd869;-wd981;-wd1418;-wd1419;-wd1572;-wd1599;-wd2259;-wd2415;-wd2547;-wd2557;-wd3280;-wd3346;-wd11074;-wd11076;-wd1782;-wd2282;-O3;-DNDEBUG;-ip;-funroll-all-loops;-alias-const;-ansi-alias;
CUDA driver:        0.0
CUDA runtime:       0.0


NOTE: Error occurred during GPU detection:
      CUDA driver version is insufficient for CUDA runtime version
      Can not use GPU acceleration, will fall back to CPU kernels.


Running on 1 node with total 12 cores, 12 logical cores, 0 compatible GPUs
Hardware detected on host comp130t (the node of MPI rank 0):

  CPU info:
    Vendor: GenuineIntel
    Brand:  Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz
    Family:  6  model: 44  stepping:  2
    CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
    SIMD instructions most likely to fit this hardware: SSE4.1
    SIMD instructions selected at GROMACS compile time: SSE4.1


khala...@gmail.com

Aug 4, 2017, 1:31:15 PM
to PLUMED users
Hello.

That libpthread.so is used at all indicates that threading is enabled in GROMACS. However, you disabled thread-MPI during compilation, and I can't think of anything else in it that would use pthreads directly. Since OpenMP support is already on, can you try running the simulation while explicitly telling it how many OpenMP threads to use per MPI task, i.e. mdrun -ntomp 24? Replace 24 with however many cores you requested from Slurm.
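
For example (a sketch based on the command in your job script; adjust -ntomp to the number of cores in your Slurm allocation):

gmx_mpi mdrun -ntomp 24 -s topolA.tpr -nsteps 10000 -plumed plumed.dat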

Before doing anything serious with GROMACS, you should also make sure that it passes the regression tests (i.e. that it can actually run simulations without errors). For GROMACS 5.1.* these can be found at: https://github.com/gromacs/regressiontests/tree/release-5-1
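
Roughly, that means checking out the matching branch and running the gmxtest.pl driver shipped in that repository against your installation (a sketch; paths are placeholders):

git clone -b release-5-1 https://github.com/gromacs/regressiontests.git
cd regressiontests
source /home/srp106/software/gromacs-5.1.4/gmxbuild/bin/GMXRC   # put gmx/gmx_mpi on the PATH
perl gmxtest.pl all                                             # an MPI-only build may need extra options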

Also, if you want GROMACS to use GPUs, you will have to tell Slurm to schedule the job only on nodes that actually have them. Check your cluster's documentation on how to do that, if CPU-only speed becomes an issue for you.
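
On many Slurm clusters that means requesting GPUs explicitly in the job script, along these lines (a sketch only; partition and gres names are site-specific, so check your local documentation):

#!/bin/bash
#SBATCH --partition=gpu        # placeholder partition name
#SBATCH --gres=gpu:2           # request two GPUs on the node
#SBATCH --ntasks=1

gmx_mpi mdrun -s topolA.tpr -nsteps 10000 -plumed plumed.dat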

Regards,
Yuriy.

mao...@gmail.com

Dec 13, 2018, 7:52:24 PM
to PLUMED users
How did you solve the error "Unknown exception: (exception type: N4PLMD9ExceptionE)"?