HREX Incorrect Exchange Rates


seanmich...@gmail.com

unread,
Dec 8, 2014, 11:40:35 AM
to plumed...@googlegroups.com
Hello All,

I have recently patched GROMACS 4.6.5 with plumed2.0-hrex and I am going through the suggested tutorials given here (https://github.com/GiovanniBussi/plumed2/blob/v2.0-hrex/user-doc/tutorials/hrex.txt). When I perform the test for partial_tempering everything is fine (i.e. lambda=1 is the same as an unmodified simulation and lambda=0.5 has an energy file exactly half of lambda=1). However, when I perform a test to check the exchange rates I am getting odd rates.

I have set up a system with 5 replicas. Each replica has the same lambda value (lambda=1) and the plumed.dat file is empty. I would expect the exchange rate to be exactly 1 for each exchange; however, I am not getting this value.

I have attached all the files necessary to run my exact test, as well as my submission command, below; any help would be greatly appreciated.

mpirun -np 5 mdrun_plumed-hrex -v -plumed plumed.dat -multi 5 -replex 100 -s start.tpr -deffnm prod -hrex


Thanks,
Sean
topol_3.top
topol_4.top
v2_prod_npt.mdp
start0.tpr
start1.tpr
start2.tpr
start3.tpr
start4.tpr
topol_0.top
topol_1.top
topol_2.top

Giovanni Bussi

unread,
Dec 8, 2014, 12:18:17 PM
to plumed...@googlegroups.com
Can you check whether the stride for the neighbor-list update exactly divides the stride for replica exchange (100)?

Notice that with GPUs GROMACS changes it to 40, so if you are running on GPUs you should use e.g. -replex 120.
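
A quick way to see which value is actually being used is to grep the log (a sketch, assuming the log file is named prod0.log, as with your -deffnm prod):

grep -i nstlist prod0.log

and then pick -replex as a multiple of the reported nstlist.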

Giovanni


seanmich...@gmail.com

unread,
Dec 8, 2014, 12:25:20 PM
to plumed...@googlegroups.com

I am not using GPUs, and the stride for the neighbor-list update (10) exactly divides the stride for replica exchange (100).

I have also attached a log file in case that helps.

Thanks,
Sean
prod0.log

Giovanni Bussi

unread,
Dec 8, 2014, 12:34:13 PM
to plumed...@googlegroups.com
Another possible issue could be the barostat. Notice that the Berendsen barostat is not a good idea with replica exchange. Moreover, I never managed to get proper replica exchange with Parrinello-Rahman (it could be a problem in some old GROMACS implementation, and it may have been solved by now). Anyway, I suggest NVT for these calculations. I guess that when the volume is constant you can also remove "DispCorr = EnerPres" (I am not sure it is added in the proper way so as to be taken into account in the acceptance calculation).
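
For the test, something along these lines in the mdp should be enough (just a sketch; only the pressure-coupling and dispersion-correction settings change, everything else stays as in your current input):

pcoupl   = no   ; NVT, no barostat
DispCorr = no   ; drop the dispersion correction for this test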

Let us know if with this setup you can obtain an acceptance equal to 1.

Giovanni


seanmich...@gmail.com

unread,
Dec 8, 2014, 1:50:10 PM
to plumed...@googlegroups.com
Hello,

After changing the ensemble to NVT I am now getting the correct acceptance rate of 1.0 for every attempt. Thank you for all of your help!

Thanks,
Sean

seanmich...@gmail.com

unread,
Dec 12, 2014, 10:43:36 AM
to plumed...@googlegroups.com
Hi Giovanni,

I am curious as to why running in the NPT ensemble would disrupt the exchange rates for this implementation. I have been looking through the literature and cannot seem to find a solid explanation. Does this mean that with this implementation you can never use the NPT ensemble?

Thanks,
Sean

Giovanni Bussi

unread,
Dec 12, 2014, 10:47:13 AM
to plumed...@googlegroups.com
Hi,

The Berendsen barostat could introduce artifacts (this is well known), so that could be an issue.

However, I also tried with Parrinello-Rahman and never managed to get it working. It could be a bug in the GROMACS implementation of NPT replica exchange (small chance) or a bug in my implementation of Hamiltonian replica exchange (high chance).

Anyway, if you are interested in making it work with NPT and want to run some tests (with Parrinello-Rahman), that would be very useful.

Thanks!

Giovanni


Chris Neale

unread,
Dec 12, 2014, 11:50:01 AM
to plumed...@googlegroups.com
There was a problem with exchanges in the 4.6 series up to, but not including, 4.6.4 (so likely not an issue for this post; just mentioning it in case you find it useful):

http://redmine.gromacs.org/issues/1362

Also, if you look at the GROMACS release notes for 4.6.6 (http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x), you can see that they "Fixed a replica-exchange output problem when running NPT simulations". I am not sure what that was, and since it is only an output issue it might not affect the run, but perhaps it is affecting the reported probabilities? Did you check whether, even though probabilities are sometimes reported to be less than 1.0, there are actually cases in which the exchange does not occur?

Sean McHugh

unread,
Dec 12, 2014, 11:54:41 AM
to plumed...@googlegroups.com
Hi, 

When testing the Parrinello-Rahman barostat with GROMACS 4.6.5 I do get exchanges (the range is 9-20%). So it seems as though exchanges are occurring; however, the rates are far from the expected 1.0.

Thanks


Giovanni Bussi

unread,
Dec 12, 2014, 12:02:19 PM
to plumed...@googlegroups.com
Hi, this is just to stress that very likely the bug is in hrex only! A good test would be to run without the -hrex flag and do parallel tempering with slightly different temperatures (300, 300.001, ...) to be sure that GROMACS alone works; see the command sketched below. Giovanni
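
For example (a sketch; the slightly different ref_t values, 300, 300.001, ..., go into the per-replica mdp files used to build the tpr files):

mpirun -np 5 mdrun_plumed-hrex -v -plumed plumed.dat -multi 5 -replex 100 -s start.tpr -deffnm prod

i.e. the same command line as at the start of the thread, just without -hrex.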

Sent from my iPhone

Sean McHugh

unread,
Dec 12, 2014, 12:19:48 PM
to plumed...@googlegroups.com
Hi,

I have also performed a test using 5 replicas with temperatures ranging from 300.000 K to 300.004 K without the -hrex flag and obtained the correct exchange rate of 1.0. Therefore, as you suggested, Giovanni, it seems the bug is in hrex and not in GROMACS in general. I will look into this further.

Thank you for your help.

Sean

Chris Neale

unread,
Apr 6, 2016, 6:38:25 PM
to PLUMED users, seanmich...@gmail.com
Indeed (for plumed2.2.2-hrex and gromacs 5.1.2): turn pressure coupling off, use the same lambda for each replica, and then use different structures with different box dimensions, and the probabilities are not 1.0. Enlarge the boxes of the same initial conformations so that the boxes are all the same size and there are no clashes, and now the exchange probabilities are all 1.0. I'll look deeper into the code, but hopefully the issue is simply that the box size is not being updated for the potential exchange.

Chris Neale

unread,
Apr 6, 2016, 7:03:56 PM
to PLUMED users, seanmich...@gmail.com
If I change the temperatures slightly and omit the -hrex flag then I also find the exchange acceptance rates to be 1.0, so it looks like regular plumed is updating the box size when evaluating the exchanged coordinates but the hrex code is not.

Giovanni Bussi

unread,
Apr 7, 2016, 10:34:43 AM
to plumed...@googlegroups.com, seanmich...@gmail.com
Hi Chris,

Thanks for investigating; trying NVT with different volumes was an excellent idea.

I think I found the bug. Precisely, there was some "non-atom" information (e.g. the volume) that was not propagated properly. Notice that the bug did not appear when using domain decomposition.

I should now have fixed the problem in the patches for both gmx 4.6.7 and gmx 5.1.2. Can you try it and confirm that it works for you?

Notice that I did it only for plumed 2.2 (2.1 is unmaintained), so you should download branch v2.2-hrex, whatever version of gmx you are using.

To reassure other plumed users reading this mailing list: the bug only concerns the hrex-modified patch (and only for NPT simulations). "Official" plumed+gromacs is expected to work correctly.

Ciao and thanks a lot for the help!

Giovanni


Chris Neale

unread,
Apr 7, 2016, 1:09:17 PM
to PLUMED users, seanmich...@gmail.com
Dear Giovanni:

Thank you for the fix. I tested with different boxes in both NVT and NPT and the exchanges are 1.0 across the board. I'll update later if I find any other issues with pressure coupling, but for now I think the hrex pressure-coupling issue is fixed.

Chris.

Chris Neale

unread,
Apr 7, 2016, 6:59:54 PM
to PLUMED users, seanmich...@gmail.com
Slight clarification... the code now seems to be correct in the following cases:

(1) anything that I have tested not using the -hrex option
(2) replicas all have the same box sizes
(3) replicas have different box sizes and pressure coupling is ON -- this is the recent fix

However, exchange probabilities are **incorrect** for:

(4) replicas have different box sizes and pressure coupling is OFF -- this is broken now and was also broken in the code I was using yesterday

I presume the issue is that you have used an update function that (inside GROMACS) knows that pressure coupling is off, so it tries to be smart and does not update the box. Unless I am mistaken here, this could be a very big problem for those not using pressure coupling who have started their replicas with different structures from a preliminary NPT equilibration.

Just to clarify, everything I am reporting here relates only to the -hrex fork while using the -hrex option.

Thank you,
Chris.

Giovanni Bussi

unread,
Apr 8, 2016, 2:43:28 AM
to plumed...@googlegroups.com, seanmich...@gmail.com
Hi Chris,

About point (4): I think the update of the box information in GROMACS is hidden somewhere else, so it would be difficult to fix.

Anyway, I don't think this is a problem. Doing replica exchange when replicas have different volumes and the volume is a constant of motion does not make sense (actually, it is a typical error). If you want to do this for equilibration purposes (e.g., you want the simulation to be stable in the initial part, and so you run it at constant volume), you should not use replex. Alternatively, if you *really* want to do it (but I don't think it is a good idea) you can try with a barostat and the coupling time set to a huge number.

Thanks again for testing!

Giovanni



Chris Neale

unread,
Apr 8, 2016, 1:53:22 PM
to PLUMED users, seanmich...@gmail.com
Thank you, Giovanni. I don't want to do it; I'm just reporting.

ruchi lohia

unread,
Apr 11, 2016, 11:40:17 AM
to PLUMED users, seanmich...@gmail.com
Dear All

I am testing hrex-v2.1 with gromacs 4.6.7 for a system consisting of 70,000 atoms. Single-precision vs. double-precision GROMACS gives different acceptance ratios for 24 replicas with the same Hamiltonian.



For 30 ps, double precision (nstlist = 20, exchange frequency 500):
the dplumed term is 0.00 (kT) or 4.667e-11 (kT)
and 
 Repl  average number of exchanges:


Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0

For 30 ps, single precision (nstlist = 20, exchange frequency 500):
the dplumed term is 0.00 (kT), 3.132e-03 (kT), or 6.256e-03 (kT)
and 
Repl  average number of exchanges:

Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      1.0  1.0  1.0  1.0  1.0  1.0  .99  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0


For 30 ps, single precision with GPU (nstlist = 20, exchange frequency 500):


several different values for the dplumed term, including 0.00 (kT)


Repl  average number of exchanges:

Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      .98  .98  .98  .98  .98  .97  .95  .98  .98  .99  .97  .99  .98  .96  .96  .99  .96  .99  .99  .96  .97  .98  .96



Single precision is not giving the expected exchange rate of 1.0 for all replicas. I am wondering if GROMACS is calculating the energy term in single precision and PLUMED in double precision (PLUMED 2.0 always uses double precision; https://groups.google.com/forum/#!topic/plumed-users/l11d83yvXUs, https://groups.google.com/forum/#!topic/plumed-users/MxCY_-tFLS4, https://groups.google.com/forum/#!msg/plumed-users/ey9eAWE3-tw/Xl7t5R860tMJ); this would give a slight deviation in the dplumed term, which is affecting the exchange probability.


Does that mean that with large systems we can only use plumed patched with double-precision GROMACS? Is there a way to use plumed in single precision as well? If not, would you consider this deviation significant?

 

Gromacs double precision is 50% slower than single precision for my system. 


Thanks 

Ruchi

Giovanni Bussi

unread,
Apr 11, 2016, 11:53:11 AM
to plumed...@googlegroups.com, Sean McHugh
Hi,

In the last case (GPUs), I think that GROMACS is changing nstlist to 40, which is not a divisor of 500.

Can you retry with replex 1000?
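
Alternatively, you can pin the neighbor-list stride to a divisor of the exchange stride through the GMX_NSTLIST environment variable (a sketch), e.g.

export GMX_NSTLIST=20

set before launching mdrun.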

Giovanni


ruchi lohia

unread,
Apr 11, 2016, 12:33:35 PM
to PLUMED users, seanmich...@gmail.com
Dear Dr. Giovanni

Below are the average numbers of exchanges with an exchange frequency of 1000. I used export GMX_NSTLIST=20 for the test.

replex 1000
Repl  average number of exchanges:

Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      1.0  1.0  .90  .71  .62  .90  1.0  .62  .90  .71  1.0  1.0  1.0  .95  1.0  .95  .95  1.0  1.0  .86  .95  .71  .95


Thanks 

Ruchi

Giovanni Bussi

unread,
Apr 11, 2016, 1:04:29 PM
to plumed...@googlegroups.com, seanmich...@gmail.com
Can you share your mdp file?

Sent from my iPhone

ruchi lohia

unread,
Apr 11, 2016, 1:15:13 PM
to PLUMED users, seanmich...@gmail.com
Dear Dr. Giovanni


mdp file:

========

;title          = Replica exchange simulation from appropriate starting structures (all 24 replicas have different starting structures with the same box size)

; Run parameters
integrator      = sd            ; stochastic-dynamics integrator
ld-seed         = -1
nsteps          = 500000000     ; amount of time for the system to be simulated, 1 us
dt              = 0.002         ; 2 fs
nstcomm         = 100

; Output control
nstxout         = 0             ; no full-precision coordinate output
nstvout         = 0             ; no velocity output
nstcalcenergy   = 100           ; should be a multiple of nstlist; every 0.2 ps
nstenergy       = 25000         ; save energies every 50.0 ps
nstlog          = 25000         ; update log file every 50.0 ps
nstxtcout       = 25000         ; save xtc coordinates every 50.0 ps
xtc-grps        = System        ; save the coordinates for the entire system

; Bond parameters
continuation            = yes      ; continuing from a previous run
constraint_algorithm    = lincs    ; holonomic constraints
constraints             = h-bonds  ; bonds involving H constrained
lincs_iter              = 1        ; accuracy of LINCS
lincs_order             = 4        ; also related to accuracy

; Neighborsearching
cutoff-scheme       = verlet            ; Verlet cutoff scheme
ns_type             = grid              ; search neighboring grid cells
nstlist             = 20                ; 40 fs
rlist               = 1.0               ; short-range neighbor-list cutoff (in nm)

; Electrostatics
rcoulomb            = 1.0               ; short-range electrostatic cutoff (in nm)
rvdw                = 1.0               ; short-range van der Waals cutoff (in nm)
coulombtype         = PME       ; Particle Mesh Ewald for long-range electrostatics
pme_order           = 4         ; cubic interpolation
fourierspacing      = 0.12      ; grid spacing for FFT
optimize_fft        = yes

; Temperature coupling is on with sd
tc-grps         = Protein Non-Protein   ; two coupling groups - more accurate
tau_t           = 1.5     1.5           ; time constant, in ps
ref_t           = 300     300           ; reference temperature, one for each group, in K

; Pressure coupling is off
pcoupl          = no            ; no pressure coupling in NVT

; Periodic boundary conditions
pbc             = xyz           ; 3-D PBC

; Dispersion correction
DispCorr        = EnerPres      ; account for cut-off vdW scheme

; Velocity generation
gen_vel         = no            ; do not generate new velocities

Giovanni Bussi

unread,
Apr 11, 2016, 1:24:55 PM
to plumed...@googlegroups.com, Sean McHugh
Looks correct...

Suggestions:
- try md instead of sd (I actually never used sd, so I am not sure if it works)
- on the command line, don't use "-nex" (I guess you are not using it)
- make sure all the simulations are initialized with identical volumes (I guess you wrote that in the comment, so it should be OK); a quick way to check is sketched below
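
For instance (a sketch; the tpr name is just a placeholder, and the tool is gmxdump in gromacs 4.6.x or "gmx dump" in 5.x):

gmxdump -s replica0.tpr | grep -A 3 "box ("

run on each tpr prints the box vectors, so the volumes can be compared directly.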

If this does not fix your issue perhaps you can send me privately your tpr files.

Thanks!

Giovanni



Chris Neale

unread,
Apr 11, 2016, 4:49:39 PM
to PLUMED users, seanmich...@gmail.com
The numbers look impossible... you're attempting an exchange every 1 ps and you run for 30 ps. Each pair only gets an exchange attempt every 2nd exchange round, so you're testing with 15 attempts per pair. That should give results in increments of 1/15 ≈ 0.07 (so the closest possible value below 1.0 would be 14/15 ≈ 0.93). Therefore, I don't see how it's possible to get a 0.99 acceptance rate. You report actual non-zero dplumed values, so there's clearly something going on, but I think it's worth a check that you're actually doing what you think you are doing.

ruchi lohia

unread,
Apr 11, 2016, 6:18:00 PM
to PLUMED users, seanmich...@gmail.com
Hi Chris

Thanks for looking into it. Yes, you are right. I am sorry for the confusion; I had been using replex 100 and not 500 for the tests reported in the first post. However, with either replex value the dplumed term includes non-zero values.

For gromacs single precision without gpus 

replex 100
nstlist 20
total time 32.84000 ps
dplumed term  includes non zero values
Repl  164 attempts, 82 odd, 82 even
Repl  average number of exchanges:

Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      1.0  1.0  1.0  1.0  1.0  1.0  .99  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0



replex 500 

nstlist 20

total time 34.32000 ps

dplumed term includes non zero values

Repl  34 attempts, 17 odd, 17 even

Repl  average number of exchanges:

Repl     0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23

Repl      1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0


Thanks 

Ruchi

...

Giovanni Bussi

unread,
Apr 14, 2016, 1:12:39 PM
to plumed...@googlegroups.com, Sean McHugh
Hi all,

we made some tests here on a setup for a medium-size system that we have (RNA in water, ~50k atoms).

It looks like if we process the same calculation twice with mdrun -rerun we get slightly different energies. Notice that this is done with a plumed-hrex-patched version but without any replex or hrex flag, so it should be identical to official GROMACS.

The differences are in the last digit of single-precision numbers, which means a relative error of ~1e-7, but since the energy is large this leads to absolute discrepancies in the calculated energies of neighboring replicas of a fraction of a kJ/mol. This can explain acceptances on the order of ~95%.
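
As a rough order-of-magnitude check (the numbers are only indicative): the total potential energy of such a system is of order 1e6 kJ/mol, so a relative error of 1e-7 corresponds to roughly 0.1 kJ/mol; with kT ≈ 2.5 kJ/mol at 300 K, exp(-0.1/2.5) ≈ 0.96, i.e. an acceptance of roughly 95% even with identical Hamiltonians.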

I think this is because of some internal gromacs optimization that makes the calculation not exactly reproducible.

As a consequence, one cannot expect the acceptance test to succeed with single precision gromacs.

With GPUs it might get even worse.

Given this, I think the error introduced in the simulation by this miscalculated acceptance should be expected to be negligible, and comparable to that expected from a plain MD (not replex) simulation in single precision. I would thus use the hrex flag without special worries also with single precision.

Best,

Giovanni

Chris Neale

unread,
Apr 26, 2016, 2:43:34 PM
to PLUMED users, seanmich...@gmail.com
Thanks for pointing this out, Giovanni. There are also some other relevant issues, which I have posted to the gromacs list here: https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2016-April/105364.html

Chris Neale

unread,
Apr 26, 2016, 6:26:23 PM
to PLUMED users, seanmich...@gmail.com
========

laurenr...@hotmail.com

unread,
Dec 6, 2017, 5:13:50 AM
to PLUMED users
Hi All,

I am using plumed2-2.2-hrex with gromacs 5.1.4 to perform REST2 simulations of a peptide on a membrane under NPT conditions. I have been reading through this discussion and would just like to check that this implementation is definitely suitable for use with pressure coupling. From what I gather from the literature, the pressure must be included in the acceptance criterion for Monte Carlo moves in the isothermal-isobaric ensemble; could you confirm whether this is implemented in the plumed code?
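
(For reference, the criterion I have in mind is the standard isothermal-isobaric one for exchanging configurations i and j at the same temperature, something along the lines of

acc = min{ 1, exp( -beta [ U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j) ] - beta (P_i - P_j) (V_j - V_i) ) }

which reduces to the potential-energy-only test when all replicas share the same pressure, although the energies still need to be evaluated in the correct boxes.)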

Thanks in advance for your help!

Lauren

Carlo Camilloni

unread,
Dec 6, 2017, 1:05:49 PM
to plumed...@googlegroups.com
Hi

As far as I know, yes, it works. I haven't yet had the time to test everything with GPUs, but on CPU it works.

Carlo

