Slow computing speed with i-PI + LAMMPS on an HPC system


Carl “Carlos” MAX

Nov 25, 2024, 10:36:15 AM
to ipi-users
Hi All,
I am having an efficiency issue with my PIMD calculations on an HPC system using i-PI and LAMMPS. I have set up 8 beads on 9 nodes, with one node running i-PI and each of the remaining 8 nodes computing one bead. However, I don't understand why this 9-node setup is only about 50% faster than a single node computing all 8 beads. My input file and submission script are attached, and I would appreciate your response.
input.xml
job_mult.sh
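[For context, a multi-node i-PI/LAMMPS job of this shape is usually driven by a script roughly like the following. This is a minimal sketch assuming SLURM; the executable names, the in.water input, and the sleep time are illustrative placeholders, not the poster's actual job_mult.sh.]

```bash
#!/bin/bash
#SBATCH --job-name=pimd-water
#SBATCH --nodes=9

# Start the i-PI server on one node. input.xml defines the socket
# (address/port or UNIX socket) that the LAMMPS clients must match.
srun --nodes=1 --ntasks=1 i-pi input.xml > log.ipi 2>&1 &

# Give the server time to open its socket before clients connect.
sleep 10

# One LAMMPS client per remaining node, one bead each. The LAMMPS
# input (here a placeholder, in.water) needs a "fix ipi" line that
# points at the same address as input.xml.
for i in $(seq 1 8); do
    srun --nodes=1 --ntasks=1 lmp -in in.water > log.lmp.$i 2>&1 &
done

wait
```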

Michele Ceriotti

Nov 25, 2024, 10:42:21 AM
to ipi-users
Hm. What version of i-PI are you using? And how large is the system you're running?
We sped up version 3.0 a lot, so I'd start with that. Then there are many little things
you can change to reduce the overhead. Reduce the <latency> setting to 1e-3 or 1e-4;
outputting a checkpoint at every step is also very slow.
If your system is reasonably stable (meaning you don't expect crashes) you can also
reduce the flushing stride by using the "safe_stride" option in <simulation>.
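[Putting those suggestions together, the relevant fragments of an input.xml would look roughly like this. An illustrative sketch, not the poster's file: the strides, socket address, and output prefix are made-up values.]

```xml
<simulation verbosity='low' safe_stride='20'>
  <!-- a small latency makes the server poll the clients more often -->
  <ffsocket name='lammps' mode='unix'>
    <address> pimd-water </address>
    <latency> 1e-4 </latency>
  </ffsocket>
  <output prefix='sim'>
    <properties stride='10'> [ step, time{picosecond}, potential ] </properties>
    <!-- checkpoint rarely instead of every step -->
    <checkpoint stride='1000'/>
  </output>
  <!-- system, motion, and ensemble blocks as usual -->
</simulation>
```

[The point of these settings is that a latency of 1e-3 to 1e-4 reduces the time the server waits between polling the clients, while larger checkpoint and flushing strides cut the per-step I/O overhead.]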

If you look in the ipi_tests/profiling folder you'll see some input files that have been
optimized to minimize overhead.

Let me know if any of this helps.
M

Carl “Carlos” MAX

Nov 25, 2024, 10:51:50 AM
to ipi-users
Thank you for your timely reply. I use i-PI 3.0, and my simulation system is 64 water molecules; I would expect such a small system to be faster. The speed I measured is 18 ps/h (with 9 nodes). I hope you can check whether there are any errors in my script when you have time.

Michele Ceriotti

Nov 25, 2024, 11:20:51 AM
to ipi-users
I already pointed out two issues in your input, and at a place where you can start looking for more tuning.
Test, see how far you can get, and report back if you still have a problem.
