Memory leak in metadynamics?


ge.f...@gmail.com

Apr 14, 2019, 1:17:30 PM
to PLUMED users
Hello,

I'm trying some simple, standard metadynamics on LJ clusters. The simulations run until all memory on the cluster is consumed and then crash. My specs are below; has anyone else encountered this issue?


Gromacs compilation:

 

gromacs@2018.4%gcc@6.5.0 build_type=RelWithDebInfo +cuda~double+mpi+plumed+rdtscp+shared simd=auto arch=linux-rhel7-x86_64

    ^cuda@9.2.88%gcc@6.5.0 arch=linux-rhel7-x86_64

    ^fftw@3.3.8%gcc@6.5.0+double+float~fma+long_double+mpi~openmp~pfft_patches~quad simd=avx,avx2,sse2 arch=linux-rhel7-x86_64

        ^openmpi@3.1.3%gcc@6.5.0~cuda+cxx_exceptions fabrics=verbs ~java~legacylaunchers~memchecker~pmi schedulers=auto ~sqlite3~thread_multiple+vt arch=linux-rhel7-x86_64

    ^plumed@2.5.0%gcc@6.5.0+gsl+mpi optional_modules=all +shared arch=linux-rhel7-x86_64

        ^gsl@2.5%gcc@6.5.0 arch=linux-rhel7-x86_64

        ^libmatheval@1.1.11%gcc@6.5.0 patches=0465844d690e3ff4d022f0c2bab76f636d78e4c6012a7a6d42b6c99e307fb671 arch=linux-rhel7-x86_64

            ^flex@2.6.3%gcc@6.5.0+lex arch=linux-rhel7-x86_64

        ^openblas@0.3.5%gcc@6.5.0 cpu_target=auto ~ilp64+pic+shared threads=none ~virtual_machine arch=linux-rhel7-x86_64

        ^zlib@1.2.11%gcc@6.5.0+optimize+pic+shared arch=linux-rhel7-x86_64
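
For reference, a spec like the one above would typically come from a Spack command along these lines. This is only a sketch: the variants are copied from the spec shown here and the dependency names are as reconstructed above, so the exact command may differ in your environment.

# hypothetical command matching the concretized spec shown above
spack install gromacs@2018.4 %gcc@6.5.0 +cuda+mpi+plumed ^plumed@2.5.0+gsl+mpi ^openmpi@3.1.3 fabrics=verbs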







Plumed.dat:



q3: Q3 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN

q4: Q4 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN

###

METAD ...

LABEL=metad

ARG=q3.mean,q4.mean

PACE=500

HEIGHT=0.01

SIGMA=0.35,0.35

FILE=HILLS

GRID_MIN=0.1,0.02

GRID_MAX=0.27,0.27

GRID_SPACING=0.01,0.01 

... METAD

PRINT STRIDE=10 ARG=q3.mean,q4.mean,metad.bias FILE=COLVAR



gromacs md.mdp



title   = my simulation

; Run parameters

integrator  = md    ; leap-frog integrator

nsteps    = 5000000   ; 5,000,000 * 2 fs = 10 ns

dt        = 0.002   ; 2 fs

; Output control

nstxout   = 500   ; save coordinates every 1.0 ps

nstvout   = 500   ; save velocities every 1.0 ps

nstenergy = 500   ; save energies every 1.0 ps

nstlog    = 500   ; update log file every 1.0 ps

; Bond parameters

constraint_algorithm    = lincs     ; holonomic constraints 

constraints             = all-bonds ; all bonds (even heavy atom-H bonds) constrained

lincs_iter              = 1       ; accuracy of LINCS

lincs_order             = 4       ; also related to accuracy

; Neighborsearching

cutoff-scheme   = Verlet

ns_type       = Grid

nstlist       = 10    ; 20 fs, largely irrelevant with Verlet

rcoulomb      = 2.49  ; short-range electrostatic cutoff (in nm)

rvdw        = 2.49  ; short-range van der Waals cutoff (in nm)

; Electrostatics

coulombtype     = Cut-off

pme_order     = 4   ; cubic interpolation

fourierspacing  = 0.16  ; grid spacing for FFT

; Temperature coupling is on

tcoupl    = v-rescale

tc-grps   = system

tau_t   = 0.01

ref_t   = 2

; Pressure coupling is off

pcoupl    = no    ; no pressure coupling in NVT

; Periodic boundary conditions

pbc   = xyz       ; 3-D PBC

; Dispersion correction

DispCorr  = EnerPres  ; account for cut-off vdW scheme

; Velocity generation

gen_vel   = yes   ; assign velocities from Maxwell distribution

gen_temp  = 1   ; temperature for Maxwell distribution

gen_seed  = -1    ; generate a random seed

Giovanni Bussi

Apr 14, 2019, 3:45:17 PM
to plumed...@googlegroups.com
Hi.

This should not happen. Can you check whether it also happens with GROMACS without PLUMED? Otherwise, this is a bug we should fix.
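
One way to check that (just a sketch; the file names and the way you launch mdrun are placeholders for your actual setup) is to run the same tpr with and without the -plumed flag and watch the memory of the process:

# same tpr, with and without PLUMED (placeholder file names)
gmx mdrun -deffnm md -plumed plumed.dat   # PLUMED-patched run
gmx mdrun -deffnm md                      # plain GROMACS run
# monitor the resident memory of the running process, e.g.:
top -p $(pgrep -f "gmx mdrun")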

Thanks!


ge.f...@gmail.com

Apr 14, 2019, 6:24:00 PM
to PLUMED users
Hello,

I've run many (very long) simulations using this GROMACS build without issue; it has only done this when a PLUMED input file is supplied with -plumed. Is this what you mean?

Thanks, Geoff.




Giovanni Bussi

Apr 14, 2019, 7:17:41 PM
to plumed...@googlegroups.com
Yes, that was what I wanted to know.

We will try to reproduce it and fix it (https://github.com/plumed/plumed2/issues/461).

In the meantime, I suggest restarting the simulation every now and then.
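
A possible workaround sketch (file names and the exact restart setup are placeholders, not something prescribed in this thread): run in time-limited chunks, continue from the GROMACS checkpoint, and let PLUMED read the existing HILLS back in on restart.

# first chunk, stop after roughly 12 wall-clock hours
gmx mdrun -deffnm md -plumed plumed.dat -maxh 12
# before continuing, enable restart on the PLUMED side (e.g. a RESTART line at
# the top of plumed.dat, or RESTART=YES inside the METAD block) so the old
# HILLS file is read back in, then continue from the checkpoint:
gmx mdrun -deffnm md -plumed plumed.dat -cpi md.cpt -maxh 12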

Thanks!

Giovanni



ge.f...@gmail.com

Apr 14, 2019, 8:25:14 PM
to PLUMED users
OK, thank you. Let me know if you need any more information from me.

Geoff



Giovanni Bussi

Apr 16, 2019, 2:20:22 AM
to plumed...@googlegroups.com
Hi again. Here are some additional questions:

How long is your simulation?

Does the leak depend on the value of PACE (that is, does the crash point move if you double PACE)? What happens if you replace METAD with RESTRAINT (with some meaningful parameters)?
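
For reference, a minimal sketch of that RESTRAINT test, reusing the collective variables from the input at the top of the thread; the AT and KAPPA values below are placeholders, not meaningful parameters:

# stand-in for the METAD block, to see whether the memory growth follows METAD
# or the Q3/Q4 calculation (AT/KAPPA values are placeholders)
restraint: RESTRAINT ARG=q3.mean,q4.mean AT=0.15,0.15 KAPPA=500.0,500.0
PRINT STRIDE=10 ARG=q3.mean,q4.mean,restraint.bias FILE=COLVAR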

If you share a tpr file, I can try to answer these myself (if I can reproduce it).

Thanks again!

Giovanni


Carlo Camilloni

Apr 16, 2019, 2:22:56 AM
to plumed...@googlegroups.com
Hi

Couldn't the leak depend on the q3 SPECIES calculation?

Carlo




Giovanni Bussi

Apr 16, 2019, 2:53:06 AM
to plumed...@googlegroups.com
Yes! Indeed, if that's the case, the leak will also remain with RESTRAINT.



Willmor Pena Ccoa

Feb 16, 2023, 12:13:09 PM
to PLUMED users
Hi, 
Was this issue resolved? I'm seeing similar problems when running funnel metadynamics.

Best,
Will

Giovanni Bussi

Feb 17, 2023, 7:13:42 AM
to plumed...@googlegroups.com
Hi,

I don't think we were able to reproduce this. Can you share your exact input file?

Giovanni



Willmor Pena Ccoa

Feb 28, 2023, 3:34:34 PM
to PLUMED users
Hello, 
In my case, it seems that the build from source did not work well with the compilers I used (gcc 8.4.1 and openmpi/gcc 4.0.5, with an external FFTW). I was trying to use GROMACS 2022.3 patched with PLUMED 2.8.1. The HPC team at my institution has since built and installed it, and it is working well.

All the best,

Will

Chintu Das

Oct 25, 2023, 6:56:17 AM
to PLUMED users
Hello,

I am also getting the same memory leak issue. Has anyone found a solution for it? Thank you.

Best regards
Chintu

Gareth Tribello

Oct 25, 2023, 6:58:13 AM
to plumed...@googlegroups.com
Hello

If you try adding the keyword LOWMEM to all the lines calculating Steinhardt parameters, that might fix the problem.
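
Applied to the input posted at the top of this thread, that would look something like the following (assuming the Q3/Q4 actions in your PLUMED version accept the LOWMEM flag):

q3: Q3 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN LOWMEM
q4: Q4 SPECIES=1-8 SWITCH={RATIONAL D_0=1.3 R_0=0.2 D_MAX=3.0} MEAN LOWMEM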

Gareth

Chintu Das

Oct 25, 2023, 7:39:16 AM
to PLUMED users
Hi

I am not using any Steinhardt parameters; I am only using the distance between two atoms as a CV.

best 
Chintu
