Thanks.
A further question:
Is there a way to run gmx+plumed on a single node and use several GPUs?
I tried to compile gmx 5.1.4 + plumed 2.3 for single nodes with 16 CPUs and 4 GPUs. It seems to me that I can use only one GPU with gmx patched with plumed.
I think I have to compile both gmx and plumed with MPI and run gmx with 1 MPI rank and 16 threads.
If I compile gmx without MPI and try to use more than one GPU, gmx will start thread-MPI ranks, which are not compatible with plumed whether plumed is compiled with or without MPI.
Most likely I can start 1 thread-MPI rank with 16 OpenMP threads, but in that case I can again use only one GPU.
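For what it's worth, with both gmx and plumed built against real MPI, one launch pattern that should reach all four GPUs is one MPI rank per GPU. This is only a sketch; the rank/thread/GPU split is an assumption for a 16-core, 4-GPU node, and the flag syntax is the GROMACS 5.x one:

```shell
# Sketch (assumed layout): 4 MPI ranks, 4 OpenMP threads each,
# one GPU per rank. -gpu_id "0123" maps GPUs 0..3 to the 4 ranks.
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -gpu_id 0123 -plumed plumed.dat
```
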
Thanks,
Tamas
On Wednesday, September 13, 2017 at 9:48:02 AM UTC+2, Giovanni Bussi wrote:
PLUMED_NUM_THREADS is only available starting with v2.3, so you should upgrade it.
Then you have to experiment with settings until you find the best choice. Keep in mind that plumed only uses the CPU, and that the gromacs load-balancing algorithm will typically shift more non-bonded interactions to the GPU when plumed is used.
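As a concrete starting point for that experimentation, the environment variable mentioned above is set before launching mdrun. The thread counts below are assumptions to be tuned, not a recommendation:

```shell
# Assumed example (requires PLUMED >= 2.3): let PLUMED use 4 OpenMP
# threads per rank, alongside 4 gromacs OpenMP threads per MPI rank.
export PLUMED_NUM_THREADS=4
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -plumed plumed.dat
```
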
Hi,
I do not understand what a good run configuration would be for gmx patched with plumed on a single node with 16 CPUs and 3 GPUs.
There is plumed 2.2 on our HPC.
Does PLUMED_NUM_THREADS exist in this version? It is only mentioned in the 2.3 documentation.
How should I distribute the tasks on CPU/GPU?
1 task for the node and 15 CPUs for the task? With 1 or 3 GPUs?
3 MPI tasks for the node and 5 CPUs/task with 3 GPUs?
How do these relate to PLUMED? Should I reserve CPUs for PLUMED?
E.g.
3 MPI tasks for the node and 4 CPUs/task for mdrun and 3-4 CPUs for PLUMED?
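As a sketch, the 3-task layout above would translate into something like the following launch line (flag syntax assumed for an MPI-enabled GROMACS 5.x build; thread counts are guesses to be tuned):

```shell
# Hypothetical: 3 MPI ranks, 5 OpenMP threads each, GPUs 0-2 mapped
# one per rank via -gpu_id. One core is left free per the layout above.
mpirun -np 3 gmx_mpi mdrun -ntomp 5 -gpu_id 012 -plumed plumed.dat
```
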
Thanks for your suggestions,
Tamas
--
You received this message because you are subscribed to the Google Groups "PLUMED users" group.