Dear CP2K Community,
I am facing issues with my AIMD calculations. My system has 162 atoms, and I run it on an HPC cluster with 500 CPU cores. Each MD step takes ~15 minutes, and I also encounter out-of-memory errors.
I suspect the issues might be due to input settings or parallelization inefficiencies. Could you please advise how to optimize performance and memory usage for a system of this size?
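For reference, one direction I am considering is a hybrid MPI/OpenMP launch with fewer MPI ranks, since one rank per core can be wasteful for a system this small. The sketch below is illustrative only; the rank/thread counts, binary name (`cp2k.psmp`), and input file name are placeholders, not my actual settings:

```shell
# Illustrative hybrid MPI/OpenMP launch: fewer MPI ranks, several
# OpenMP threads per rank, instead of one rank per core.
# 64 ranks x 8 threads = 512 cores; adjust to the real node layout.
export OMP_NUM_THREADS=8
mpirun -np 64 --map-by socket:PE=8 cp2k.psmp -i aimd.inp -o aimd.out
```

The idea is that reducing the MPI rank count also reduces per-rank replicated data, which may help with the out-of-memory errors. I would welcome advice on whether this is the right approach.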
I appreciate your help!
Best regards,
Jawad
Dear Frederick,
Thank you for your prompt response and helpful suggestions.
I haven't yet checked whether the numerical stress tensor is faster than the analytical one. I initially chose the numerical option thinking it might be more accurate, but I will test the analytical stress tensor as well and compare performance.
As for CP2K 8.2, there is no specific reason for using this version; it is simply the version currently installed on our HPC.
Regarding the MPI error, I am using mpirun (Open MPI) 4.1.1.
Thank you again for your time and assistance. I will implement your suggestions and let you know the outcomes.
Best regards,
Jawad