Gromacs compilation:
gromacs@2018.4%gcc@6.5.0 build_type=RelWithDebInfo +cuda~double+mpi+plumed+rdtscp+shared simd=auto arch=linux-rhel7-x86_64
    ^cuda@9.2.88%gcc@6.5.0 arch=linux-rhel7-x86_64
    ^fftw@3.3.8%gcc@6.5.0+double+float~fma+long_double+mpi~openmp~pfft_patches~quad simd=avx,avx2,sse2 arch=linux-rhel7-x86_64
    ^openmpi@3.1.3%gcc@6.5.0~cuda+cxx_exceptions fabrics=verbs ~java~legacylaunchers~memchecker~pmi schedulers=auto ~sqlite3~thread_multiple+vt arch=linux-rhel7-x86_64
    ^plumed@2.5.0%gcc@6.5.0+gsl+mpi optional_modules=all +shared arch=linux-rhel7-x86_64
    ^gsl@2.5%gcc@6.5.0 arch=linux-rhel7-x86_64
    ^libmatheval@1.1.11%gcc@6.5.0 patches=0465844d690e3ff4d022f0c2bab76f636d78e4c6012a7a6d42b6c99e307fb671 arch=linux-rhel7-x86_64
    ^flex@2.6.3%gcc@6.5.0+lex arch=linux-rhel7-x86_64
    ^openblas@0.3.5%gcc@6.5.0 cpu_target=auto ~ilp64+pic+shared threads=none ~virtual_machine arch=linux-rhel7-x86_64
    ^zlib@1.2.11%gcc@6.5.0+optimize+pic+shared arch=linux-rhel7-x86_64
plumed.dat:
q3: Q3 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN
q4: Q4 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN
###
METAD ...
LABEL=metad
ARG=q3.mean,q4.mean
PACE=500
HEIGHT=0.01
SIGMA=0.35,0.35
FILE=HILLS
GRID_MIN=0.1,0.02
GRID_MAX=0.27,0.27
GRID_SPACING=0.01,0.01
... METAD
PRINT STRIDE=10 ARG=q3.mean,q4.mean,metad.bias FILE=COLVAR
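For what it's worth, the bias grid requested above is tiny, so the grid itself cannot explain runaway memory. A quick back-of-the-envelope estimate (a sketch only; PLUMED's real allocation adds per-CV and per-hill bookkeeping on top of this):

```python
# Rough size of the METAD bias grid from the input above.
def n_bins(lo, hi, spacing):
    """Number of grid points covering [lo, hi] at the given spacing."""
    return round((hi - lo) / spacing) + 1

n_q3 = n_bins(0.10, 0.27, 0.01)   # GRID_MIN/MAX/SPACING for q3.mean -> 18 points
n_q4 = n_bins(0.02, 0.27, 0.01)   # GRID_MIN/MAX/SPACING for q4.mean -> 26 points
points = n_q3 * n_q4              # 468 points in total
approx_bytes = points * 8         # ~3.7 kB at 8 bytes per double

print(points, approx_bytes)
```

So a grid this small rules out the bias grid as the source of the memory growth, which points at a leak elsewhere.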
GROMACS md.mdp:
title = my simulation
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 5000000 ; 5,000,000 * 2 fs = 10 ns
dt = 0.002 ; 2 fs
; Output control
nstxout = 500 ; save coordinates every 1.0 ps
nstvout = 500 ; save velocities every 1.0 ps
nstenergy = 500 ; save energies every 1.0 ps
nstlog = 500 ; update log file every 1.0 ps
; Bond parameters
constraint_algorithm = lincs ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
cutoff-scheme = Verlet
ns_type = Grid
nstlist = 10 ; 20 fs, largely irrelevant with Verlet
rcoulomb = 2.49 ; short-range electrostatic cutoff (in nm)
rvdw = 2.49 ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = Cut-off
pme_order = 4 ; cubic interpolation (unused with coulombtype = Cut-off)
fourierspacing = 0.16 ; grid spacing for FFT (unused with coulombtype = Cut-off)
; Temperature coupling is on
tcoupl = v-rescale
tc-grps = system
tau_t = 0.01
ref_t = 2
; Pressure coupling is off
pcoupl = no ; no pressure coupling in NVT
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction
DispCorr = EnerPres ; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes ; assign velocities from Maxwell distribution
gen_temp = 1 ; temperature for Maxwell distribution
gen_seed = -1 ; generate a random seed
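A quick arithmetic check of the timing comments in the .mdp above (plain Python, nothing GROMACS-specific):

```python
# nsteps * dt gives the total simulated time; the output intervals
# follow from nstxout (save coordinates every nstxout steps).
nsteps = 5_000_000
dt_ps = 0.002                 # 2 fs per step
total_ps = nsteps * dt_ps     # 10000 ps = 10 ns
frames = nsteps // 500        # nstxout = 500 -> one frame per 1 ps

print(total_ps, frames)
```

At 10,000 saved frames for coordinates, velocities, and energies each, the trajectory files grow steadily, but that is disk usage, not resident memory, so it does not explain the crash either.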
--
You received this message because you are subscribed to the Google Groups "PLUMED users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to plumed-users...@googlegroups.com.
To post to this group, send email to plumed...@googlegroups.com.
Visit this group at https://groups.google.com/group/plumed-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/plumed-users/c6532672-9bcb-4dbb-a397-923509c04124%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi. This should not happen. Can you check whether it also happens with GROMACS without PLUMED? Otherwise this is a bug we should fix. Thanks!
On Sun, 14 Apr 2019 at 19:17, <ge....@gmail.com> wrote:
Hello, I'm trying some simple, standard metadynamics on LJ clusters. The simulations run until all memory on the cluster is consumed, and then they crash. My specs are below; has anyone else encountered this issue?
Yes, that was what I wanted to know. We will try to reproduce it and fix it (https://github.com/plumed/plumed2/issues/461). Meanwhile, I suggest you restart the simulation every now and then. Thanks! Giovanni
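One way to automate the "restart every now and then" workaround is to watch the resident memory of the mdrun process and trigger a checkpoint restart (gmx mdrun -cpi state.cpt) once it grows past a limit. A minimal Linux-only sketch, assuming /proc is available; the limit and the polling policy are placeholders, not anything from GROMACS or PLUMED:

```python
import os

def rss_kb(pid):
    """Resident set size of a process in kB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

# Hypothetical policy: once mdrun exceeds ~8 GB resident, stop it and
# relaunch with -cpi so it resumes from the last checkpoint.
LIMIT_KB = 8 * 1024 * 1024

if __name__ == "__main__":
    # Demo on our own PID; in practice you would pass mdrun's PID.
    print(rss_kb(os.getpid()))
```

A cron job or wrapper script polling this every few minutes keeps the leak from exhausting the node until the underlying bug is fixed.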