high energy gap in HREX simulation, no exchange


Qinghua Liao

Apr 15, 2026, 1:21:27 PM
to PLUMED users
Hello, 

I am trying to run HREX simulations of a protein with a ligand (59 atoms). 
Only the ligand was selected as hot region. 

Initially, I set the "temperature" range to 300-600 K with 8 replicas, but no exchanges happened and the energy gap was high. I then tried to narrow the "temperature" range, but the gap stayed high. I first tried Gromacs 2023.5/Plumed 2.9.2, and later Gromacs 2022.5/Plumed 2.9.0. 

Now I am using Gromacs 2022.5/plumed 2.8.3, my latest test is: 
I set the "temperature" range to 300-330 K, with 8 replicas, all starting from the same configuration. Here is my command: 

srun -N 8 -n 128 -c 7 gmx_mpi_d mdrun -deffnm md -plumed plumed.dat -multidir w0 w1 w2 w3 w4 w5 w6 w7 -nsteps 2500000 -replex 500 -hrex -dlb no -pin on

But the energy gap is still very high; here is the first exchange attempt: 
#
Replica exchange at step 500 time 1.00000
Repl 0 <-> 1  dE_term = -0.000e+00 (kT)
  dpV =  0.000e+00  d =  0.000e+00
dplumed =  4.072e+03  dE_Term =  4.072e+03 (kT)
Repl ex  0    1    2    3    4    5    6    7
Repl pr   .00       .00       .00       .00
#
I cannot understand why the energy gap is so big after narrowing down the "temperature" range and starting all replicas from the same configuration. 
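For reference, in replica exchange a swap is (roughly speaking) accepted with probability min(1, exp(-ΔE)), where ΔE is the quantity the log prints in kT. A minimal sketch shows why a gap of ~4×10³ kT makes exchanges impossible, and why ΔE of order 1 kT is what you want:

```python
import math

def metropolis_acceptance(dE_kT: float) -> float:
    """Replica-exchange acceptance for an energy difference given in kT."""
    return min(1.0, math.exp(-dE_kT))

# A gap of ~4000 kT, as in the log above, underflows to an acceptance of 0.0:
print(metropolis_acceptance(4.072e3))   # 0.0
# Gaps of order 1 kT give a reasonable acceptance:
print(metropolis_acceptance(1.0))       # ~0.37
```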

Attached are the differences between the topology files for replica 0 and replica 1, obtained by running: diff mol_0.top mol_1.top > top_diff_0-1.dat 

I would really appreciate it if someone could help me figure out this issue. 
Thanks very much! 

All the best,
Qinghua


top_diff_0-1.dat

Qinghua Liao

Apr 16, 2026, 12:32:38 PM
to PLUMED users
Hello, 

I have been trying to figure it out. My system was built with Amber and converted into Gromacs format with acpype.
With gmx grompp -pp, I got the post-processed topology file containing all the parameters. 
All the topology files, unscaled and scaled, look OK: I can see the vdW, charge, and dihedral parameters being scaled, and only for the ligand. 

I used to build systems with pdb2gmx when running HREX simulations with Gromacs/Plumed, so I suspected that there might be some differences.
So I built a system of ACE-Ala-NME in water with pdb2gmx, equilibrated it, and set up HREX the way I used to. 
(Gromacs 2022.5 / PLUMED 2.8.3)
8 replicas; temperature range: 300-330 K; hot region: ACE-Ala-NME (22 atoms); all replicas start from the same configuration.
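For context, the usual HREX recipe (e.g. via PLUMED's partial_tempering script) scales the hot-region Hamiltonian by λ_i = Tmin/T_i, with T_i on a geometric ladder. A sketch of the factors this range would give (assuming that recipe; your exact setup may differ):

```python
# Geometric "temperature" ladder and Hamiltonian scaling factors,
# assuming the common choice lambda_i = Tmin / T_i.
tmin, tmax, nrep = 300.0, 330.0, 8

temps = [tmin * (tmax / tmin) ** (i / (nrep - 1)) for i in range(nrep)]
lambdas = [tmin / t for t in temps]

for i, (t, lam) in enumerate(zip(temps, lambdas)):
    print(f"replica {i}: T_eff = {t:6.2f} K, lambda = {lam:.4f}")
# Neighbouring scaling factors differ by only ~1.4%, so for a 22-atom
# hot region one would expect an energy gap of just a few kT.
```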
From the log file, I found the first exchange attempt is like this: 

Replica exchange at step 500 time 1.00000
Repl 0 <-> 1  dE_term =  0.000e+00 (kT)

  dpV =  0.000e+00  d =  0.000e+00
dplumed =  1.687e+02  dE_Term =  1.687e+02 (kT)

Repl ex  0    1    2    3    4    5    6    7
Repl pr   .00       .00       .00       .00

In the first 4 ns, the exchange rate between replicas 0 and 1 is 35 / (4447 × 0.5) ≈ 1.6%. 
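The factor 0.5 is there because mdrun alternates between even and odd replica pairs, so a specific pair is only attempted in every other exchange attempt. The arithmetic, spelled out:

```python
# Exchange-rate arithmetic from the numbers above: 35 accepted swaps
# between replicas 0 and 1, out of 4447 total exchange attempts; each
# specific pair is attempted only every other time (even/odd alternation).
accepted = 35
total_attempts = 4447
pair_attempts = total_attempts * 0.5
rate = accepted / pair_attempts
print(f"{rate:.1%}")   # 1.6%
```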

Now I am not sure what's going wrong. I would appreciate any comments. 

All the best,
Qinghua

Giovanni Bussi

Apr 16, 2026, 1:51:31 PM
to plumed...@googlegroups.com
Hello,

This is somewhat surprising.

Can you check what the acceptance is if the range is 300-300? I mean: all scaling factors identical.

Giovanni



Qinghua Liao

Apr 16, 2026, 4:38:13 PM
to PLUMED users
Thanks Prof. Bussi! 

I just tried as you suggested with my system. 
I tried two installations: Gromacs 2022.5/Plumed 2.8.3 (installed by me) and Gromacs 2023.5/Plumed 2.9.2 (installed by the cluster administrator).
The temperature range was set to 300-300 K. I checked the newly generated topology files: they are all identical (so all the tpr files should also be identical). 
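A quick way to verify that the per-replica topologies really are byte-for-byte identical is to hash them (a small sketch; the file names below are hypothetical and should match your -multidir layout):

```python
# Check that all per-replica topology files are byte-for-byte identical
# by comparing their SHA-256 digests.
import hashlib
from pathlib import Path

def file_digest(path) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def identical_topologies(paths) -> bool:
    """True if every file in `paths` has the same content."""
    return len({file_digest(p) for p in paths}) == 1

# Hypothetical usage, matching a w0..w7 multidir layout:
# identical_topologies([f"w{i}/mol.top" for i in range(8)])
```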

However, with both installations, a big energy gap was reported: 

Replica exchange at step 500 time 1.00000
Repl 0 <-> 1  dE_term = -0.000e+00 (kT)
  dpV =  0.000e+00  d =  0.000e+00
dplumed =  2.296e+03  dE_Term =  2.296e+03 (kT)

Repl ex  0    1    2    3    4    5    6    7
Repl pr   .00       .00       .00       .00
####

I then also tried using one tpr file (generated with a topology file from acpype, not from processed.top) for all 8 replicas. 
Then the exchange rate looks OK:

Replica exchange at step 500 time 1.00000
Repl 0 <-> 1  dE_term = -0.000e+00 (kT)
  dpV =  0.000e+00  d =  0.000e+00
dplumed =  0.000e+00  dE_Term =  0.000e+00 (kT)
Repl ex  0 x  1    2 x  3    4 x  5    6 x  7
Repl pr   1.0       1.0       1.0       1.0

But the simulation crashed at 3 ps (step 1500), due to:
Internal error (bug):
Step 1500: The total potential energy is inf, which is not finite. The LJ and
electrostatic contributions to the energy are 5349.26 and -42718,
respectively. A non-finite potential energy can be caused by overlapping
interactions in bonded interactions or very large or Nan coordinate values.
Usually this is caused by a badly- or non-equilibrated initial configuration,
incorrect interactions or parameters in the topology.

A normal MD simulation with this same tpr file ran without problems. 

It seems that something is wrong somewhere, but I am not sure whether it comes from Gromacs or Plumed. 


All the best,
Qinghua

Qinghua Liao

Apr 17, 2026, 3:26:14 AM
to PLUMED users
Hello, 

I put some of my tests of the Ala dipeptide in the Dropbox: 

The folder contains a normal HREX simulation; I also tried to use ONE tpr file for all 8 replicas, with different installations.
In all cases, the energy gap is big. 

When I removed the -hrex flag from the command, the replicas were exchanging
(subfolder: hrex_oneTPR_gmx2023.5-plumed2.9.2_nohrex).
But strangely, plumed was still used to calculate the Hamiltonian (dplumed): 

Replica exchange at step 500 time 1.00000
Repl 0 <-> 1  dE_term = -0.000e+00 (kT)
  dpV =  0.000e+00  d =  0.000e+00
dplumed =  0.000e+00  dE_Term =  0.000e+00 (kT)
Repl ex  0 x  1    2 x  3    4 x  5    6 x  7
Repl pr   1.0       1.0       1.0       1.0

The dplumed term disappeared after I also removed the flag "-plumed plumed.dat".

Thanks!

All the best,
Qinghua

Qinghua Liao

Apr 17, 2026, 9:02:36 AM
to PLUMED users
Hello Prof. Bussi, 

A colleague of mine suspected that it might be a synchronization issue, though I don't fully understand it.
He suggested running the simulations on GPUs; all the previous runs were on CPUs. 

I ran the simulations with Gromacs 2022.5 and Plumed 2.9.0 on GPUs. 
For my protein/ligand system:

When I used ONE tpr file (300-300) for all 8 replicas, the exchange rate is ~0.9; should it be 1.0 for all? 

When I ran a normal HREX simulation of the protein-ligand system (hot region: the ligand, 59 atoms; 
temperature range 300-400 K), the energy gap ranged from 0 to ~20 kT (running on CPUs, the energy gap was 4000-6000 kT).
In 3 ns, with an exchange attempt every 1 ps, the exchange rate is about 2-3%.
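A back-of-the-envelope estimate suggests a 2-3% rate is roughly what a 0-20 kT gap should give. Assuming (a simplification) that the gap is spread uniformly over that range, the mean Metropolis acceptance is:

```python
import math

# Mean of min(1, exp(-dE)) for dE uniform on [0, 20] kT:
#   (1/20) * integral_0^20 exp(-x) dx = (1 - exp(-20)) / 20
mean_acceptance = (1.0 - math.exp(-20.0)) / 20.0
print(f"{mean_acceptance:.1%}")   # 5.0%, the same order as the observed 2-3%
```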

Even though the issue seems partly fixed, the exchange rate is still far too low compared with my previous experience. 

I will try to install the very old version that I used to use and compare. Thanks a lot! 


All the best,
Qinghua




Qinghua Liao

Apr 17, 2026, 7:22:22 PM
to PLUMED users
Hello, 

I managed to install Gromacs 2021.5 patched with Plumed 2.7.0. 

I tried to run the simulation of my system (protein and ligand) with the new installation, using only CPUs.

I set the effective temperature range to 300-400 K, and the energy gap was ~0-20 kT, similar to the previous test with Gromacs 2022.5 and Plumed 2.9.0 on GPUs. 
The exchange rate is about 1-2% in the first 1.7 ns. 

All the best,
Qinghua