Creation of Fixed Particle walls


Russell Holt

Nov 28, 2021, 6:30:36 AM
to hoomd-users

Hi all

I am new to HOOMD-blue; however, I have some experience with other MD packages (LIGGGHTS/LAMMPS). I have recently installed the Fast Stokesian Dynamics (FSD) package ‘PSEv3’ and am interested in applying it to a pressure-driven channel flow.

I have run the test simulation included with the package, which uses shearing walls to induce a fluid flow. However, what I would like to do is a little different, and I'm not sure whether it is something HOOMD can do.

I would like to have fixed particles that make up the walls of the channel, and then a separate flowing group. I believe I should be able to apply the ‘rigid’ constraint to the wall particles and then use PSEv3 to resolve the hydrodynamic interactions. (With Stokesian Dynamics, the fixed wall particles must be included in a mobility matrix that describes the flow of the particles moving through the channel, so the walls need to be made of particles.) My problem at the moment is that I cannot see how to create these separate groups for the wall particles. Can the init.read_getar command be used to read in a set of particles, fix them, and then create a lattice of particles for the flowing group?

A simple 2D example of this system run in LAMMPS is attached: the white particles on the top and bottom are effectively ‘frozen’, so interactions with them are still calculated, but they themselves do not move. This produces the desired parabolic Poiseuille velocity profile when a pressure gradient is applied across all particles.

[Attachment: ChannelFlow.png]

Regards,

Russell Holt
Email:
S352...@student.rmit.edu.au

Erik Navarro

Nov 28, 2021, 6:02:31 PM
to hoomd...@googlegroups.com
Hi Russ, 

While I'm not familiar with FSD, you can create groups in HOOMD (see hoomd.group in the HOOMD-blue 2.9.7 documentation). Perhaps you can use hoomd.group.type and make your wall particles a different type from the flowing particles.
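
For example, here is a minimal sketch against the HOOMD 2.x API (the file name "channel.gsd" and the type names 'W' for walls and 'A' for flowing particles are just placeholders):

    import hoomd

    hoomd.context.initialize()
    # hypothetical input file containing both particle types
    system = hoomd.init.read_gsd("channel.gsd")

    # select particles by type to form the two groups
    walls = hoomd.group.type('W', name='walls')
    flowing = hoomd.group.type('A', name='flowing')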

Hope that helps,

-Erik N

Erik Navarro

Nov 28, 2021, 6:13:29 PM
to hoomd...@googlegroups.com
Also, in the case of Brownian dynamics (which is what I know best), simply omitting a group from an integrator will hold those particles fixed. Interparticle interactions (such as Lennard-Jones) are still calculated. Perhaps you can do something similar in your simulations. I think hoomd.rigid is really intended to hold particles together relative to one another while allowing the entire rigid body to move.
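
As a sketch of what I mean (HOOMD 2.x, reusing the hypothetical "channel.gsd" file with wall type 'W' and flowing type 'A'):

    import hoomd
    from hoomd import md

    hoomd.context.initialize()
    system = hoomd.init.read_gsd("channel.gsd")  # hypothetical input file

    nl = md.nlist.cell()
    lj = md.pair.lj(r_cut=2.5, nlist=nl)
    lj.pair_coeff.set(['A', 'W'], ['A', 'W'], epsilon=1.0, sigma=1.0)

    md.integrate.mode_standard(dt=1e-4)
    # only the flowing particles get equations of motion; the wall
    # particles still exert LJ forces but never move
    md.integrate.brownian(group=hoomd.group.type('A'), kT=1.0, seed=7)
    hoomd.run(1e4)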

Russell Holt

Nov 29, 2021, 6:37:40 AM
to hoomd-users
Hi Erik

Thank you for the suggestion of using the group command. Unfortunately, I will need to include the wall particles in the integrator in order to correctly build the hydrodynamic coupling tensor. Including the stationary particles is what allows the resistance tensor to be built, which determines the forces/velocities of the flowing particles, so I need some way to hold these particles in place regardless of the forces applied to them. I was hoping there might be functionality to zero out forces, or perhaps to set an infinite mass to prevent movement.

Regards,
Russell

Joshua Anderson

Nov 29, 2021, 12:31:48 PM
to hoomd...@googlegroups.com
Russell,

In HOOMD-blue, the integration method is responsible for applying the equations of motion to all particles in the given group. Since the integration method you are using is provided by a plugin, you need to ask the plugin authors for support with this feature request.
------
Joshua A. Anderson, Ph.D.
Research Area Specialist, Chemical Engineering, University of Michigan


Michael Howard

Nov 29, 2021, 1:51:42 PM
to hoomd-users
Hi Russell,

To add onto the discussion, the approaches you are suggesting in your last post will probably not be suitable because (1) zeroing out the forces on the fixed particles does not guarantee they will not move (they still acquire an induced velocity from the forces on the other particles) and (2) mass is not relevant in the overdamped limit used for Stokesian dynamics.

To completely freeze some particles, you would need to solve a constrained set of equations where the forces on the unfrozen particles are fixed by the interaction potentials (and their velocity is computed), while the velocities of the frozen particles are fixed to zero (and the effective forces on them are computed). This sounds pretty difficult to me.
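
To make that concrete, schematically: if $\mathcal{M}$ is the grand mobility matrix and you partition the particles into flowing (f) and wall (w) blocks, the constrained problem reads

    \begin{pmatrix} U_f \\ 0 \end{pmatrix} =
    \begin{pmatrix} \mathcal{M}_{ff} & \mathcal{M}_{fw} \\ \mathcal{M}_{wf} & \mathcal{M}_{ww} \end{pmatrix}
    \begin{pmatrix} F_f \\ F_w \end{pmatrix}

with $F_f$ known and $U_f$, $F_w$ unknown, giving $F_w = -\mathcal{M}_{ww}^{-1}\mathcal{M}_{wf}F_f$ and $U_f = (\mathcal{M}_{ff} - \mathcal{M}_{fw}\mathcal{M}_{ww}^{-1}\mathcal{M}_{wf})F_f$. The Schur complement in that last expression is why this is hard: it requires a solve involving the wall-wall block at every step.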

As an alternative, you could tether the frozen particles to their initial positions using harmonic springs. That way, there is nothing "special" about the frozen particles other than that they have extra forces on them. We did that in this paper:


using HOOMD with plugin code that is now available in azplugins:


You would need to tune the spring constant so that the particles stay close enough to their initial position without restricting your timestep too much. I would do that exploration using Brownian dynamics simulations.
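
As a rough sketch of what that setup might look like in HOOMD 2.x (the position-restraint interface shown here, azplugins.restrain.position with a spring constant k, is my assumption of the API; check the azplugins documentation for the exact module path and signature, and "channel.gsd" is again a hypothetical input file):

    import hoomd
    import azplugins

    hoomd.context.initialize()
    system = hoomd.init.read_gsd("channel.gsd")

    walls = hoomd.group.type('W', name='walls')
    # tether each wall particle to its current position with a harmonic
    # spring of strength k (assumed interface; see the azplugins docs)
    azplugins.restrain.position(group=walls, k=100.0)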

Regards,
Mike

Russell Holt

Dec 8, 2021, 9:06:01 AM
to hoomd-users
Hi Mike & Joshua

Thank you both for your help. I have successfully implemented both azplugins and PSEv3 and am getting some promising results!

I do have one question that I can't seem to find the answer to, and I was hoping to clarify whether this is likely an issue with the plugin, the hardware, or something else.

Currently, when I run the simulation I am getting inconsistent segmentation faults (see the attached image). This occurs without any change to the inputs, and the fault happens at a different step each time.

This may be due to the stochastic nature of the integration method, but I thought it best to ask whether this is an error anyone has experience with and can point me in the right direction to solve.

I am not sure if this would have an impact, but I am running HOOMD on WSL2 currently.

[Attachment: HOOMD error.PNG]

Michael Howard

Dec 8, 2021, 10:54:52 AM
to hoomd-users
That looks like an uncaught segfault, which is not good. I would not trust the results coming from these simulations: even in the runs that don't segfault, you have no way of knowing they are correct.

Do you get these errors if you use a BD integrator (from HOOMD) instead of the PSE integrator? If you do not, that would help narrow down the issue to the PSE plugin. Otherwise, it could be a bug in azplugins or a weird issue with your setup.

If your seeds stay the same and you are getting segfaults at different times, that could mean some sort of illegal memory access. What is your build configuration for HOOMD? I know that PSEv1 only compiled under single precision and segfaulted in double precision due to the authors hardcoding some of the data types. I don't know if it was resolved in PSEv3.

Regards,
Mike

tom...@umich.edu

Dec 9, 2021, 10:19:18 AM
to hoomd-users
Hi All,

CUDA has limited support for some features on WSL2; if your simulations run on the GPU, this may be the reason you see inconsistent seg faults. Here is a list of the limitations of CUDA on WSL2: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#known-limitations-for-linux-cuda-apps .

I will also add that I've had similar issues with HOOMD on WSL2: everything compiles fine, and then there can be weird errors at runtime.

Russell Holt

Dec 10, 2021, 8:07:53 AM
to hoomd-users
Hi Mike

It appears that I do not get these errors when using the BD integrator, with the simulation setup otherwise identical.

Upon further investigation, it seems that this error only occurs when the flowing particles get too close to the restrained wall particles. This leads me to suspect that the PSE integrator may not be able to correctly integrate the restrained particles, but I will need to confirm with the plugin authors.

Hi Tom

Thank you for that link and the feedback. I guess I will need to run some tests on a Linux system in the future, as my simulations will be using the GPU. In your experience, is there anything that can be done to mitigate these issues? And do you find these errors are consistent across specific integrators, or completely random?

tom...@umich.edu

Dec 10, 2021, 11:10:38 AM
to hoomd-users
Hi Russ,

Before jumping to any conclusions about WSL2, you should investigate the PSE integrator to make sure the seg faults you are finding are not due to a problem in the plugin.

If the segfault cannot be traced back to the plugin, I would look into running your simulations on another platform if you can. All CUDA features are supported on Linux, so I would start by trying to get access to a Linux machine with a GPU.

HOOMD makes extensive use of pinned (mapped) memory on the host to improve the performance of host-device data transfers, and this is one of the features with limited support on WSL2. The errors I have seen are illegal memory access errors; they are likely due to issues with pinned memory, and they always cause crashes when they occur. If your simulations aren't crashing, I would guess pinned memory is functioning properly on your system.

Whether you continue to run your simulations on WSL2 is up to you: it's possible there are silent issues, but if your simulation results look reasonable, then I would guess that everything is okay.

Michael Howard

Dec 10, 2021, 11:44:58 AM
to hoomd-users
Hi Russell,

It is possible that you are hitting some numerically ill-conditioned part of the plugin code near the walls, and that this causes the crash. It could also be a coding error; those are difficult to track down. For debugging, perhaps you could dump out a configuration each timestep up until the segfault, then see if the simulation still crashes when you reinitialize it from that configuration. That would make it a lot easier for you to then use a debugger, etc.
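
For example (HOOMD 2.x; "trajectory.gsd" is a hypothetical output name, and the pair/integrator setup is elided):

    import hoomd

    hoomd.context.initialize()
    system = hoomd.init.read_gsd("channel.gsd")  # hypothetical input

    # ... pair potentials, restraints, and the PSE integrator go here ...

    # write every step so the last frame on disk is the one just
    # before the crash
    hoomd.dump.gsd("trajectory.gsd", period=1, group=hoomd.group.all(),
                   overwrite=True)
    hoomd.run(1e5)

Afterwards, reinitializing from the last recorded frame with hoomd.init.read_gsd("trajectory.gsd", frame=-1) should reproduce the crash in a single step if it is configuration-dependent.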

One thing you didn't answer: are you compiling HOOMD in single or double precision? If you are using double, try switching to single.

Regards,
Mike


Russell Holt

Dec 11, 2021, 2:38:59 AM
to hoomd-users
Hi all

I compiled HOOMD in single precision, as specified in the PSEv3 plugin instructions. I can try recompiling in double precision if there is any chance that would help?

May I ask if there is any way to dump additional logs from HOOMD? I am currently dumping the particle configuration as a GSD file, which is giving some insight (perhaps alluding to an incorrect setup), but I cannot use a debugger (I believe? I am new to this!) since I am running the simulation on WSL2.

Regarding a possible incorrect setup, I believe the restraints on the particles may unfortunately be the wrong approach. In previous Stokesian dynamics papers, the wall particles were held at a constant zero velocity (ref. page 162: https://doi.org/10.1017/S0022112094002326), whereas the restrained wall particles appear to impose an incorrect force back onto the flowing particles.

Without knowing the ramifications of doing this within HOOMD, is it possible to write a function that holds the velocity of a group of particles at a constant zero, something similar to "hoomd.md.force.constant"? If this is possible, can anyone point me to the best example function/plugin to use as a basis?

Thank you all for all your help!

Michael Howard

Dec 12, 2021, 2:28:09 PM
to hoomd-users
If you are already configured in single precision, keep using that. Double will only make things worse. :-)

In HOOMD 2.x, you can increase the level of detail printed with --notice-level=X, setting X up to 10 for the most detailed output. However, it is up to the PSEv3 plugin developers what level of notices (if any) they log. You could also use the --gpu_error_checking flag to synchronize kernels and check for GPU errors after each call. A detailed list of options is here: https://hoomd-blue.readthedocs.io/en/v2.9.7/command-line-options.html.
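
For what it's worth, those options can also be passed as a string when initializing the context, so you don't have to change how you invoke the script (a minimal sketch):

    import hoomd

    # equivalent to passing the flags on the command line
    hoomd.context.initialize("--mode=gpu --notice-level=10 --gpu_error_checking")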

As I mentioned before, I don't think you can simply hold the velocity of the wall particles constant with this plugin out of the box. Since the plugin operates on all the particles, you would need to solve for the forces that keep the wall-particle velocities at zero [Eq. (8) of the paper you linked]. If you really want exactly the model that is in the paper, you will likely need to write the code yourself, based off the PSEv3 plugin.

If you want to stick with the restraints and check whether they are the problem, find a particle configuration that consistently causes the segfault. Then reinitialize the simulation from that configuration and try taking a simulation step with different values of the restraint spring constant. If it still crashes when k = 0, the problem is with the configuration and the integrator, not the restraints. If it only crashes at some values of k, it could be a timestep or numerics issue. If it crashes sporadically, there is likely a programming issue somewhere.

Regards,
Mike
