> (1) I have 500 bodies with 44 particles each, when I set exclusions =
> 'body' I don't think zero exclusions is correct.
>
> multi-dbg.py:112 | nlist.reset_exclusions(exclusions=['body'])
> -- Neighborlist exclusion statistics:
> Max. number of exclusions: 0
> Particles with 0 exclusions: 22000
The rigid body particle exclusions don't show up in this list. I've got a ticket open reminding myself to fix this.
> The bodies do rotate and translate rigidly, as expected, so this is
> not a defect in behavior but it's possible that hoomd is wasting time
> computing intra-body forces that will subsequently cancel.
The issue is not just wasted time (you can verify this by comparing the average number of neighbors printed at the end of the run with exclusions on and then off). The body exclusion is absolutely necessary: if your particles are close together, the intra-body forces will be huge and suffer from cancellation error, resulting in a net force/torque on a body from the intra-body forces alone!
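The cancellation problem can be demonstrated outside of HOOMD entirely. Below is a minimal NumPy sketch (hypothetical force magnitudes, not values from any real run): the intra-body forces sum to essentially zero in double precision, but accumulating them in single precision, as GPU kernels typically do, leaves a spurious net force on the body.

```python
import numpy as np

# Hypothetical intra-body force components of order 1e8 on the 44
# particles of one body, constructed so that the exact net force is ~0.
rng = np.random.default_rng(0)
forces = rng.uniform(-1.0, 1.0, size=44) * 1e8
forces -= forces.mean()                      # exact net force is now ~0

net_f64 = forces.sum()                       # double precision: essentially zero

net_f32 = np.float32(0.0)
for f in forces.astype(np.float32):          # single-precision accumulation
    net_f32 += f                             # each add can round by several units
                                             # (the float32 ulp near 1e8 is 8)

print("net force, double precision:", net_f64)
print("net force, single precision:", float(net_f32))
```

With the body exclusion in place, these huge intra-body contributions never enter the sum, so the net force on the body comes only from the genuine external interactions.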
> (2) What is the preferred way to disable certain pair potentials
> between different types? Currently, I set r_max = 0.0 but is there a
> more efficient way? As it is now, I have 7 types and 2 pair potentials
> such that the only interactions are pair1 = {A,A} and pair2 = { BD,
> CC, EG, FF } out of all 98 possible pair potentials.
Setting r_cut = 0.0 for the non-interacting pairs is the most surefire way. Set the energy scale coefficients to zero as well.
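The bookkeeping for this can be sketched in plain Python (type names and active-pair sets taken from the question; the `pot.pair_coeff.set` call in the comment is shown with hypothetical placeholder coefficients): enumerate the 28 unordered type pairs per potential and flag every pair not in an allow-list for switching off.

```python
from itertools import combinations_with_replacement

types = ["A", "B", "C", "D", "E", "F", "G"]          # 7 types from the question
active = {
    "pair1": {frozenset("AA")},                       # only {A,A} interacts
    "pair2": {frozenset(p) for p in ("BD", "CC", "EG", "FF")},
}

def inactive_pairs(active_set):
    """All unordered type pairs NOT in the active set for one potential."""
    return [(a, b)
            for a, b in combinations_with_replacement(types, 2)
            if frozenset((a, b)) not in active_set]

# For each inactive pair, one would disable the interaction with, e.g.:
#   pot.pair_coeff.set(a, b, epsilon=0.0, sigma=1.0, r_cut=0.0)
off1 = inactive_pairs(active["pair1"])
off2 = inactive_pairs(active["pair2"])
print(len(off1), "of 28 pairs disabled for pair1")   # 27
print(len(off2), "of 28 pairs disabled for pair2")   # 24
```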
> (2B) What is the preferred way of disabling a tabulated potential
> between two types, since setting r_min,r_max = 0.0,0.0 results in an
> error (since it attempts to populate the table but has no reasonable
> 'width'). I set r_max to some very small value and populated the table
> with zeroes but this seems like a large waste of memory.
This isn't a use case that I considered when writing the table potential. I can implement a better syntax for this on the Python side (basically, I would allow r_max == r_min and have that zero out the table for that row, as you are doing now). But I can't reduce the memory usage: due to the way that type pairs are indexed, that array of zeros must be present. This is only a few kilobytes, so it's nothing to worry about.
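To put the memory cost in perspective, here is a quick sketch with hypothetical numbers (7 types, a 1000-point table, one (V, F) float32 pair per point; the real layout in HOOMD may differ):

```python
import numpy as np

ntypes, width = 7, 1000
npairs = ntypes * (ntypes + 1) // 2          # 28 unordered type pairs

# One row of (V, F) float32 entries per type pair; a row of zeros
# effectively disables that pair, but the row must still be stored.
table = np.zeros((npairs, width, 2), dtype=np.float32)

print(f"{npairs} type pairs x {width} points -> {table.nbytes / 1024:.0f} KiB,"
      f" {width * 2 * 4 / 1024:.0f} KiB per zeroed row")
```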
--------
Joshua A. Anderson, Ph.D.
Chemical Engineering Department, University of Michigan
> Ah, sorry I didn't find the ticket when I searched for exclusions.
I was mistaken: it was on my local todo list, not on a ticket. It is now: https://codeblue.umich.edu/hoomd-blue/trac/ticket/447
> Setting r_cut = 0.0 (or, for the table, zeroing out the array
> manually) is no problem, I just wanted to know if it was possible to
> be more explicit about disabling the pair forces for the (vast
> majority of) type-pairs, especially as I challenge the system with
> an ever-larger menagerie of types.
It would be nice if there were a more explicit (and efficient!) way of culling certain type pairs from the list, but I cannot think of one. The neighbor list includes all particles, so each pair potential must loop through them all and filter out the pairs it doesn't need (once per pair potential!). Storing a per-type-pair neighbor list is not an option, because you would then have O(Ntypes^2) neighbor lists, which are much more expensive to compute than the pair potentials.
If you can accept the loss of accuracy from using one pair.table for all of your various mixed potentials, it may be a performance win over having multiple pair.* commands, as there will then be only one loop through the neighbor list per time step.
There is currently an upper limit of ~40 types for the traditional pair potentials. I can fairly easily remove this limit on Fermi cards if you need more. pair.table's table will grow as O(Ntypes^2), and its performance will degrade gracefully as there are more and more cache misses.
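The indexing constraint above can be illustrated with a generic symmetric type-pair index (a common dense upper-triangular scheme, not necessarily the exact formula HOOMD uses). Every unordered type pair maps to a dense coefficient slot, which is why every slot must be stored whether the pair interacts or not:

```python
def pair_index(ti: int, tj: int, ntypes: int) -> int:
    """Dense upper-triangular index for the unordered type pair (ti, tj)."""
    if ti > tj:
        ti, tj = tj, ti
    return ti * ntypes - ti * (ti - 1) // 2 + (tj - ti)

ntypes = 7
slots = {pair_index(i, j, ntypes) for i in range(ntypes) for j in range(ntypes)}
print(len(slots), "coefficient slots for", ntypes, "types")  # 28 slots
```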
>
> ~Oren
>
> PS. As a curiosity, when I implemented a tabulated potential (for,
> e.g., Yukawa, which was about 4x faster than computing it using exp on
> x86-64), I wrote the tabulation for F/R as a function of |R|**2 to
> avoid having to take the sqrt of the distance vector. The RMSD from
> the exact values was ~1e-5 using 1e-7 steps in the r^2 domain and no
> interpolation at all. This experience likely has no bearing on the GPU
> compute world, but it occurred to me when porting over.
Indeed. On the GPU, one can compute dozens of exp or sqrt operations (they take just a few clock cycles) in place of one memory read and still come out ahead. Also, the longer the table in memory, the worse the performance (GPU caches are tiny).
The current GPU code stores tables linear in r and linearly interpolates between points. If you need higher accuracy, let me know what improvements would be needed (or better yet, submit a patch). In our group, we mainly use pair.table for exploratory runs where we don't care much about accuracy. Once we've narrowed down the functional form of the potential we need for a project, we write a pair potential plugin that evaluates that potential directly for faster production runs.
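The r^2-domain trick described above is easy to prototype in NumPy. The sketch below uses illustrative parameters (kappa, the table range, and the step size are made up, not the 1e-7 steps from the original experiment): F/r for a Yukawa potential is tabulated against r^2, so evaluating a force needs no sqrt, and even nearest-entry lookup with no interpolation is quite accurate for a fine enough table.

```python
import numpy as np

kappa, eps = 1.0, 1.0
r2_min, r2_max, n = 0.25, 9.0, 200_000       # illustrative table range / size
r2_grid = np.linspace(r2_min, r2_max, n)

def f_over_r(r):
    """Yukawa F/r, so the force vector is (F/r) * r_vec (no sqrt at evaluation)."""
    return eps * np.exp(-kappa * r) * (kappa * r + 1.0) / r**3

table = f_over_r(np.sqrt(r2_grid))           # sqrt only at table-build time

def lookup(r2):
    """Nearest-entry lookup keyed on r^2: no sqrt, no interpolation."""
    idx = ((r2 - r2_min) / (r2_max - r2_min) * (n - 1)).astype(int)
    return table[np.clip(idx, 0, n - 1)]

r = np.linspace(0.6, 2.9, 10_000)
rmsd = np.sqrt(np.mean((lookup(r * r) - f_over_r(r)) ** 2))
print("RMSD of tabulated F/r vs direct evaluation:", rmsd)
```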
This will scale well up to a few tens of types stored in shared memory, which is where one gets to the real heart of the matter: it is challenging to efficiently exclude particles by type from the nlist.
Furthermore, adding the ability to use a separate neighbor list for each individual pair potential may (or may not) yield added performance in your simulations. Since another user has already needed this feature, I'll add it to the todo list to tackle for 0.10.1 or later.
--------
Joshua A. Anderson, Ph.D.
Chemical Engineering Department, University of Michigan