i-PI + JAX/Torch models: efficiency

Sander Vandenhaute

Oct 7, 2023, 6:25:18 AM
to ipi-users
Hi all,

How efficient is i-PI + PyTorch force evaluation compared to, say, running simulations directly with LAMMPS/OpenMM + PyTorch? Particle positions and box vectors need to be communicated at each step; could that take almost as much time as a force evaluation itself (which typically takes about 10 to 100 ms for a few hundred atoms)?

Thanks,
Sander

Michele Ceriotti

Oct 7, 2023, 9:18:58 PM
to ipi-users
The overhead from communication should not be too high: on the order of 10 ms if you set everything up nicely and use a Unix domain socket. This is mostly down to latency, so if you go to a larger system the communication overhead shouldn't grow substantially.
We are currently doing some profiling (https://github.com/i-pi/i-pi/pull/277), so we should also be able to reduce a few bottlenecks and give better indications of how to set up a "fast" calculation.
Michele
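[Editor's note: the latency argument above can be checked with a quick experiment. The sketch below is not the i-PI socket protocol itself, just a stdlib Python timing of a round trip over a local socket pair carrying a payload sized like positions plus cell vectors for a few hundred atoms; N_ATOMS, PAYLOAD, and the echo thread are illustrative assumptions.]

```python
import socket
import threading
import time

N_ATOMS = 300                        # "a few hundred atoms", as in the question
PAYLOAD = (3 * N_ATOMS + 9) * 8      # float64 positions + 3x3 cell, in bytes
STEPS = 1000

def echo(conn, nbytes, steps):
    # Stand-in for the force engine: read "positions", send back "forces".
    for _ in range(steps):
        buf = b""
        while len(buf) < nbytes:
            buf += conn.recv(nbytes - len(buf))
        conn.sendall(buf)

a, b = socket.socketpair()           # AF_UNIX on POSIX systems
worker = threading.Thread(target=echo, args=(b, PAYLOAD, STEPS))
worker.start()

data = bytes(PAYLOAD)
t0 = time.perf_counter()
for _ in range(STEPS):
    a.sendall(data)                  # send positions + cell
    buf = b""
    while len(buf) < PAYLOAD:        # receive forces back
        buf += a.recv(PAYLOAD - len(buf))
elapsed = time.perf_counter() - t0
worker.join()

print(f"mean round trip: {1e6 * elapsed / STEPS:.1f} us for {PAYLOAD} bytes")
```

On typical hardware the round trip comes out in the tens of microseconds, i.e. the cost is dominated by per-message latency rather than payload size, which is why the overhead stays small next to a 10-100 ms force evaluation and doesn't grow much with system size.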

Michele Ceriotti

Jun 10, 2024, 4:36:34 AM
to ipi-users
Just reviving this thread to point to the i-PI 3.0 preprint (https://arxiv.org/abs/2405.15224), which quantifies the (negligible) overhead of using i-PI in precisely this scenario.