On Nov 25, 2009, at 1:01 PM, Axel wrote:
> hi joshua,
>
> On Nov 25, 4:02 pm, Joshua <joshua.adel...@gmail.com> wrote:
>> I'm about to embark on doing a bunch of simulations on a Go-like
>> polymer model and am trying to pick a simulation package and was
>> considering using HOOMD. Go-like models are essentially a bead-spring
>> polymer model except that non-bonded terms between two beads are
>> either (1) LJ-like if the two beads make contact in a reference
>> conformation, or (2) purely repulsive (e.g. eps*(sig/r)^12 - I think
>> you can get this from the current potentials by shifting the standard
>> LJ potential and applying the appropriate cutoff) otherwise. I think
>
> in HOOMD the latter is even easier. the 12-6 LJ potential has
> an "alpha" parameter; setting it to 0.0 retains only the repulsive
> part. HOOMD in its current form only supports one global cutoff.
If a true WCA potential is required, you will not need to wait long. I
have that functionality working in the template-pair-potentials branch
now.
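To make the alpha trick and the WCA form concrete, here is a plain-Python check (not HOOMD code; it assumes the usual parameterization V(r) = 4*eps*((sig/r)^12 - alpha*(sig/r)^6) that the "alpha" parameter above implies):

```python
def lj(r, eps=1.0, sig=1.0, alpha=1.0):
    """12-6 LJ with an alpha prefactor on the attractive term.

    alpha=1.0 is standard LJ; alpha=0.0 keeps only the r^-12 repulsion.
    """
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - alpha * sr6)

def wca(r, eps=1.0, sig=1.0):
    """WCA: full LJ shifted up by eps and cut off at its minimum, r = 2^(1/6)*sig.

    The shift makes the potential go to zero continuously at the cutoff.
    """
    rcut = 2.0 ** (1.0 / 6.0) * sig
    return lj(r, eps, sig) + eps if r < rcut else 0.0
```

With alpha=0.0 the potential is purely repulsive everywhere, while WCA is repulsive but retains the LJ curvature inside the cutoff; the two are not identical, which is why a true WCA mode is worth having.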
>> this can be pulled off if every bead has a unique identifier/type.
>> These types of models are generally used to look at protein folding
>> to
>> some native state (which, by definition in this model has the lowest
>> energy).
There is a limitation of around 40 particle types in hoomd. It stores
all N^2 different type parameters and the limit is imposed by the
amount of fast cache memory. Surely you should be able to implement
the LJ/WCA combination with only 2 particle types?
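The origin of the ~40-type limit is easy to estimate with back-of-envelope numbers (the per-pair parameter count and the fast-memory budget below are my assumptions, not figures from the HOOMD source):

```python
# Rough arithmetic: why an all-pairs parameter table caps the type count.
n_types = 40
params_per_pair = 2      # e.g. eps and sig per pair (assumption)
bytes_per_param = 4      # single-precision float
table_bytes = n_types ** 2 * params_per_pair * bytes_per_param
print(table_bytes)       # 12800 bytes, approaching a ~16 KB fast-memory budget
```

Because the table grows as N^2 in the number of types, doubling the type count quadruples the storage, so the ceiling arrives quickly.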
>>
>> We are trying to gather extensive sampling/statistics for a folding
>> event in a polymer that is on the order of a couple of hundred beads.
>> My understanding from looking at the mailing list and looking at the
>> benchmarks, is that the GPU performance can be negatively impacted by
>> using a small number of particles (correct me if I'm mistaken about
>
> yes. if i recall correctly, a high-end nvidia GPU has 30 multi-
> processors, each with 8 cores, and each core is scheduled to execute
> 4 threads simultaneously (similar to hyperthreading), which means that
> about a thousand threads need to be executed at the same time to
> keep it busy. if you also consider the overhead of launching a kernel,
> you need about 10x as many threads to make that overhead
> less important. thus for decent performance HOOMD requires
> on the order of 10,000 particles to reach its full speed.
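Axel's occupancy arithmetic, spelled out (the hardware numbers are the ones he quotes, not an authoritative spec):

```python
# Threads needed just to keep every execution slot on the GPU occupied.
multiprocessors = 30
cores_per_mp = 8
threads_per_core = 4     # interleaved scheduling, hyperthreading-like
busy_threads = multiprocessors * cores_per_mp * threads_per_core
print(busy_threads)      # 960 threads to saturate the hardware

# Amortizing kernel-launch overhead needs roughly 10x more work.
overhead_factor = 10
needed = busy_threads * overhead_factor
print(needed)            # ~10,000 particles for full speed
```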
Axel's quick analysis is indeed correct. I've done some speedup
benchmarks vs. system size: look at slide 24 here
http://gladiator.ncsa.uiuc.edu/PDFs/accelerators/day3/breakouts/chem/anderson.pdf
The speedup is about 20x at 5k particles, 40x at 10k and 50x at 20k.
So while hoomd doesn't run at optimal efficiency with small numbers of
particles, 5k is still large enough to get decent speedups.
>> this). I was wondering if it was possible to play a little trick to
>> effectively increase the system size in order to boost performance.
>> Since there are no long range interactions in this system, I was
>> thinking you could build a lattice of the polymers, harmonically
>> restraining a single atom in each replica to a point in space such
>> that each realization of the polymer was separated from the others so
>> that they don't interact. You could then scale the number of
>> particles in the system up to the sweet spot for the GPU.
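Joshua's tiling idea above is straightforward to sketch in plain Python (a hypothetical helper, not a HOOMD feature; the spacing must exceed the chain's maximum extent plus the interaction cutoff so replicas can never see each other):

```python
# Tile copies of one polymer on a cubic lattice of non-interacting replicas.
def replicate(chain, n_side, spacing):
    """chain: list of (x, y, z) bead positions for one polymer.

    Returns n_side**3 translated copies as a flat position list.
    """
    out = []
    for i in range(n_side):
        for j in range(n_side):
            for k in range(n_side):
                for (x, y, z) in chain:
                    out.append((x + i * spacing,
                                y + j * spacing,
                                z + k * spacing))
    return out

# e.g. a 200-bead chain tiled 4x4x4 gives 12,800 particles,
# near the GPU sweet spot discussed below.
chain = [(0.0, 0.0, 0.0)] * 200
system = replicate(chain, 4, spacing=50.0)
print(len(system))  # 12800
```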
>
> in principle it should be possible to do something like that, but
> there are currently no position restraints. it would be better to
> have one of those on the center of mass of a group of particles to
> avoid artifacts.
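The center-of-mass restraint Axel suggests would amount to something like the following sketch (illustrative only; this is not an existing HOOMD feature, and the names are mine). Distributing the total force in proportion to mass leaves the internal dynamics of the chain untouched:

```python
# Harmonic restraint on a group's center of mass: F_total = -k * (r_com - anchor).
def com_restraint_forces(positions, masses, anchor, k):
    """Return per-particle forces pulling the group's COM toward 'anchor'.

    The total force is split among particles in proportion to mass, so it
    acts as a rigid pull on the COM and does not distort internal motion.
    """
    total_m = sum(masses)
    com = [sum(m * p[d] for m, p in zip(masses, positions)) / total_m
           for d in range(3)]
    ftot = [-k * (com[d] - anchor[d]) for d in range(3)]
    return [[ftot[d] * m / total_m for d in range(3)] for m in masses]
```

For two unit-mass beads at x = 0 and x = 2 anchored to the origin with k = 1, the COM sits at x = 1 and each bead feels a force of -0.5 in x.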
>
> HTH,
> axel.
>
>> Any suggestions, feedback, advice pertaining to simulating this sort
>> of system with HOOMD would be appreciated. It looks like a great
>> package and it would be great to be able to use it for our work.
>
> --
>
> You received this message because you are subscribed to the Google
> Groups "hoomd-users" group.
> To post to this group, send email to hoomd...@googlegroups.com.
> To unsubscribe from this group, send email to
> hoomd-users...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/hoomd-users?hl=en.