Doing a hoomd run without integrating the system


DP

Dec 21, 2019, 9:07:27 PM
to hoomd-users
Hi,
I have been trying to determine the chemical potential of a salt pair using Widom's insertion method. As you may already know, the method consists of periodically inserting the species of interest at random positions within the simulation cell, computing the energy change resulting from this move, and then deleting the inserted particles. Since one has to run the simulation in order for the potential energy computation to happen in hoomd, I have implemented the algorithm in my hoomd code by calling "hoomd.run(1)" with the integrator off. This, in theory, should enable the computation of the energy change associated with the insertion. A good confirmation test of whether the system remains static during this run is to compute the potential energy just after deletion: it should equal the potential energy before insertion. However, that does not happen here. Is there any way I can compute the potential energy just after the insertion without having to let the system evolve?
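Independent of hoomd, the Widom estimator itself reduces to an exponential average over insertion energy changes, mu_ex = -kT ln<exp(-dU/kT)>. A minimal sketch in plain Python, assuming reduced units (kT = 1); `widom_excess_mu` and the sample energies here are illustrative, not part of my actual script:

```python
import math

def widom_excess_mu(delta_energies):
    """Widom estimator: mu_ex = -ln(<exp(-dU)>) in reduced units (kT = 1),
    where dU = U_after_insertion - U_before_insertion for each trial."""
    boltzmann = [math.exp(-du) for du in delta_energies]
    return -math.log(sum(boltzmann) / len(boltzmann))

# Made-up insertion energy changes, purely for illustration:
mu_ex = widom_excess_mu([0.5, 1.2, 0.1, 3.0, 0.8])
```

If every trial insertion cost zero energy, the estimator returns zero excess chemical potential, as expected.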

My algorithm in short:

# Equilibrate the system first, then:
import random
import math

lange.disable()
energybefore = log.query('potential_energy')
totinser = 50000
wtest = 0.0
for i in range(totinser):
    tneg = system.particles.add('A')
    tpos = system.particles.add('B')
    posrandx = random.uniform(-Le/2, Le/2)
    posrandy = random.uniform(-Le/2, Le/2)
    posrandz = random.uniform(-Le/2, Le/2)
    negrandx = random.uniform(-Le/2, Le/2)
    negrandy = random.uniform(-Le/2, Le/2)
    negrandz = random.uniform(-Le/2, Le/2)
    pos = system.particles.get(tpos)
    neg = system.particles.get(tneg)
    pos.position = [posrandx, posrandy, posrandz]  # insert a particle pair at random locations
    neg.position = [negrandx, negrandy, negrandz]

    hoomd.run(1)
    energyafter = log.query('potential_energy')  # energy after insertion
    print(energyafter, energybefore)
    wtest = wtest + math.exp(-(energyafter - energybefore))

    system.particles.remove(tneg)
    system.particles.remove(tpos)
    hoomd.run(1)
    energyremoval = log.query('potential_energy')  # energy after the removal of the test particle pair
    # When I compute the system potential energy here, it turns out to be
    # different from the energy before insertion
    lange.enable()  # enable the integrator to let the particles move
    hoomd.run(10)
    energybefore = log.query('potential_energy')  # energy before the next insertion
    lange.disable()
chempot = -math.log(wtest/totinser)

Thanks,
Debadutta Prusty

Michael Howard

Dec 22, 2019, 5:36:50 PM
to hoomd-users
First of all, you should probably save the configurations to a file and then post-process them rather than analyzing them while the simulation is running. In my experience, you need a reasonable number of configurations and then many insertions per configuration to get good convergence with Widom's method, but this will depend on the system. Then, in your post-processing script, set dt=0 and use an NVE integrator to take a step without moving the particles.
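The structure I'm describing might look roughly like this sketch; `energy_of`, the frame data, and all numbers here are hypothetical stand-ins for the trajectory reader and the dt=0/NVE energy re-evaluation in hoomd (reduced units, kT = 1):

```python
import math
import random

def energy_of(frame, trial_position=None):
    """Stand-in for re-evaluating the potential energy of a saved frame
    (with a trial pair inserted when trial_position is given)."""
    return frame["energy"] + (0.3 if trial_position is not None else 0.0)

frames = [{"energy": -10.0}, {"energy": -9.5}]  # stand-in for a saved trajectory
insertions_per_frame = 4  # use many more in practice
box = 5.0  # cubic box edge length

wtest, count = 0.0, 0
for frame in frames:
    u_before = energy_of(frame)
    for _ in range(insertions_per_frame):
        # pick a random trial position inside the box
        trial = [random.uniform(-box / 2, box / 2) for _ in range(3)]
        u_after = energy_of(frame, trial)
        wtest += math.exp(-(u_after - u_before))
        count += 1

mu_ex = -math.log(wtest / count)  # excess chemical potential (kT = 1)
```

The point is the loop structure: all insertions happen offline against frozen frames, so the system never evolves between trials.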

Regards,
Mike

DP

Dec 22, 2019, 6:11:18 PM
to hoomd-users
Thanks Mike, I will give "dt=0" a try. However, a problem remains. The force on some particles can be very large after an insertion due to particle overlap. Hence, when I let the system evolve after the deletion move, particles sometimes fly out of the box due to the force from the previous step. I tried setting the force on each particle to zero right after the deletion, but it turned out that forces cannot be set by the user, since hoomd computes them from the configuration data. Is there any way around this?

DP

Dec 22, 2019, 6:49:50 PM
to hoomd-users
One more question: in your reply, are you suggesting that I use NVE to evolve the system instead of the Langevin integrator I am currently using?



Michael Howard

Dec 22, 2019, 7:44:14 PM
to hoomd-users
No, as I said, you should run the simulation first without any insertions/deletions, and then post-process it to compute the chemical potential using Widom’s method:

“you should probably save the configurations to a file and then post-process them rather than analyzing them while the simulation is running”

You should use the NVE integrator during post-processing because (1) it doesn’t really matter much which integrator you use (nothing moves), except that (2) the random force applied by the Langevin thermostat diverges when dt = 0. Use whatever integrator makes sense for your ensemble while you are sampling configurations.
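For intuition on point (2): in a standard Langevin discretization, the random kick magnitude scales like sqrt(2·gamma·kT/dt), so it blows up as dt shrinks. A quick sketch of that scaling (the generic scheme, not necessarily HOOMD's exact implementation):

```python
import math

def random_force_scale(gamma, kT, dt):
    """Typical magnitude of the Langevin random force in a standard
    discretization: sigma = sqrt(2 * gamma * kT / dt)."""
    return math.sqrt(2.0 * gamma * kT / dt)

# The kick grows without bound as dt -> 0 (and dt = 0 is undefined):
scales = [random_force_scale(1.0, 1.0, dt) for dt in (1e-2, 1e-4, 1e-6)]
```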

Regards,
Mike

DP

Dec 22, 2019, 7:44:43 PM
to hoomd-users
The problem does not go away. Now it throws an error, which is described in detail below. This happens when I try to run NVE just after the insertion of a pair.

"    context.current.system.run(int(tsteps), callback_period, callback, limit_hours, int(limit_multiple));
RuntimeError: CUFFT returned error 1 in file /home/dpo854/mycode/hoomd-blue/hoomd/md/PPPMForceComputeGPU.cc line 281
"


DP

Dec 22, 2019, 7:47:49 PM
to hoomd-users
Thanks. I will adopt this method then.

DP

Dec 24, 2019, 12:29:21 AM
to hoomd-users
Just an update. As you advised, I wrote a separate script for the insertion moves. However, I now run into a different problem, which I had already sought a solution to in a different post. It comes up during one of the insertion moves, though not on the first move, which points all my suspicion at the electrostatic force computation part of the code. Since the inserted particles here are charged, do I have to define a new charged group and explicitly call "pppm = md.charge.pppm(group=charged,nlist=nl)" after the insertion has been made? In its current form, the code defines the group "charged" only once for each saved configuration, since my assumption is that this group evolves during the simulation, keeping track of whatever comes into or goes out of the system. For your reference, I have attached the saved trajectory file as well as my hoomd post-processing script to this email. Any suggestion on how to fix this error would be extremely helpful.

Thanks for your patience.

    hoomd.run(1)
  File "/home/dpo854/mycode/hoomd-blue/install/hoomd/__init__.py", line 201, in run

    context.current.system.run(int(tsteps), callback_period, callback, limit_hours, int(limit_multiple));
RuntimeError: CUFFT returned error 1 in file /home/dpo854/mycode/hoomd-blue/hoomd/md/PPPMForceComputeGPU.cc line 281
postrunana.py
trajectory.gsd

DP

Dec 24, 2019, 1:47:18 AM
to hoomd-users
Sorry, I had neglected to go through the section on group.charged() in detail. What I saw there is that there is currently no provision in hoomd to update the charged group when the number of particles in the system changes, which essentially requires defining the charged group at every insertion step. Now, my final question is: does the neighbour list support being updated when the system's total particle number changes?

DP

Dec 24, 2019, 3:50:01 PM
to hoomd-users
Unfortunately, there is something wrong with the pppm part, since the error goes away when I take electrostatic interactions out of play by removing the lines with pppm. To see whether the error was solely due to the charged group not getting updated during the run, I redefined group.charged() after every instance of particle insertion and deletion. To my dismay, the problem still remains. The trajectory file containing the various configurations, on which the insertion and deletion calculations will be done, and the post-processing python script are attached here. With my limited grasp of the algorithms behind the electrostatic force computation, I am struggling to wrap my head around this.


DP

Dec 24, 2019, 3:50:56 PM
to hoomd-users
postrunana.py
trajectory.gsd

Jens Glaser

Dec 27, 2019, 10:02:15 AM
to hoomd...@googlegroups.com
Hi Debadutta,

I examined your script. When I run it with HOOMD-blue 2.8.2, I get a lot of warning messages like these:

*Warning*: analyze.log: The log quantity pair_ewald_energy has been registered more than once. Only the most recent registration takes effect
*Warning*: analyze.log: The log quantity pppm_energy has been registered more than once. Only the most recent registration takes effect

This indicates that the charge.pppm compute has been registered multiple times. This is probably because the Python garbage collector doesn’t properly release the old force compute when the local variable pppm is reassigned. While this may be considered a bug in current HOOMD-blue, it is unlikely we’ll fix it in HOOMD 2.x, since a rewrite of the Python API for 3.0 is in progress. I’ve created a reminder here:

https://github.com/glotzerlab/hoomd-blue/issues/571
There is an easy workaround, though: explicitly disable the pppm object before overwriting the variable. Like this:

--- postrunana_orig.py 2019-12-27 09:46:00.230948416 -0500
+++ postrunana.py 2019-12-27 09:46:18.393225815 -0500
@@ -58,6 +58,7 @@
         pos.charge=1.0
         neg.charge=-1.0
         charged=group.charged()
+        pppm.disable()
         pppm = md.charge.pppm(group=charged,nlist=nl)
         pppm.set_params(Nx=16, Ny=16, Nz=16, order=6, rcut=2.0)
         nvesim.enable()
@@ -74,6 +75,7 @@
         system.particles.remove(tneg)
         system.particles.remove(tpos)
         charged=group.charged()
+        pppm.disable()
         pppm = md.charge.pppm(group=charged,nlist=nl)
         pppm.set_params(Nx=16, Ny=16, Nz=16, order=6, rcut=2.0)
         nvesim.enable()

Jens


-- 
You received this message because you are subscribed to the Google Groups "hoomd-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hoomd-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hoomd-users/41c0c049-2e0f-44d5-a779-3f18218787ca%40googlegroups.com.
<postrunana.py><trajectory.gsd>

DP

Dec 27, 2019, 3:20:41 PM
to hoomd-users
Thanks a lot Jens for looking into my script in detail. Disabling pppm did the trick.

DP

Feb 10, 2020, 9:38:56 PM
to hoomd-users
Hi Joshua,
Since using group.all() in pppm (as advised by you in another post) eliminates the need to update the charged group, am I correct in saying that one does not need to redefine pppm each time a particle is inserted into or removed from the system? I had actually been redefining it in order to minimize memory consumption (**ERROR**: invalid argument before /hoomd/GPUArray.h:926 comes up after a few rounds of insertion and deletion in the grand canonical MD code). However, now the same error "RuntimeError: CUFFT returned error 1 in file /home/dpo854/mycode/hoomd-blue/hoomd/md/PPPMForceComputeGPU.cc line 281" comes up.


DP

Feb 11, 2020, 6:12:59 PM
to hoomd-users
The problem goes away when I run in CPU mode instead of GPU mode. However, that is not at all efficient: the speed is considerably reduced (on the order of 2 TPS).

Joshua A. Anderson

Feb 17, 2020, 10:35:44 AM
to hoomd...@googlegroups.com
Debadutta,

You do not need to redefine any potentials after changing the number of
particles in the system. However, there may be a bug in the PPPM GPU
implementation that is triggered when the number of particles changes.

Unfortunately, the PPPM code was contributed many years ago and none of
the HOOMD core developers actively use the module. We might be able to
find the cause of the bug, but to do so it is absolutely essential that
we have a minimal script that reproduces the problem. See Sean's recent
post (https://groups.google.com/forum/#!topic/hoomd-users/Jv6Yrh6i3L8)
for an excellent example of such a script.

---
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan
http://www-personal.umich.edu/~joaander/
