Dear Huan,
Like other parallel programs which make use of MPI, CP2K does not know or care exactly where each MPI process is running. Allocation and placement of MPI processes onto physical nodes is up to the MPI library you have installed on your cluster. We regularly run CP2K on several thousand CPU cores on large-scale HPC machines, so depending on the size of the system you wish to study you should find CP2K scales well. Without knowing more, I would suggest that you benchmark CP2K on your existing hardware to better understand its performance and scalability.
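A rough scaling test is just the same input run at a few core counts, comparing the total run time reported at the end of each output. Something like the following sketch (assuming an OpenMPI-style mpirun and a parallel build named cp2k.popt - adjust both for your setup):

  # run the same job at increasing core counts
  for n in 16 32 64; do
    mpirun -np $n cp2k.popt benchmark.inp > benchmark_${n}cores.out
  done

'benchmark.inp' is just a placeholder here; use an input representative of the systems you actually want to study.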
How to run a parallel job across multiple compute nodes depends on exactly which MPI library (e.g. OpenMPI, MPICH2, Intel MPI...) you use and whether you use a batch job submission system (e.g. SLURM, PBS, Sun Grid Engine...). Typically, one needs the MPI daemon running on each node and all the nodes listed in a hostfile or similar; the 'mpirun' command will then launch MPI ranks across your two nodes. The best thing is to speak to your local systems administrators, who may be able to help more specifically, but the sketches below show the general idea.
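For example, with OpenMPI you could list your two nodes in a hostfile and pass it to mpirun (the node names, slot counts and executable name below are placeholders - check with your administrators for the correct values):

  # hostfile: one line per node, 'slots' = cores available on that node
  node01 slots=16
  node02 slots=16

  mpirun -np 32 --hostfile hostfile cp2k.popt water.inp > water.out

If your cluster runs SLURM, a minimal job script along the same lines might look like this (partition names and task counts are site-specific):

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=16
  # srun takes the place of mpirun and launches one MPI rank per task
  srun cp2k.popt water.inp > water.out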
Also, you might consider upgrading to a more recent release of CP2K, as 2.3 is over a year old and there have been many new features, bug fixes and performance improvements since.
Cheers
- Iain
--
Iain Bethune
Project Manager, EPCC
Email: ibet...@epcc.ed.ac.uk
Twitter: @IainBethune
Web: http://www2.epcc.ed.ac.uk/~ibethune
Tel/Fax: +44 (0)131 650 5201/6555
Mob: +44 (0)7598317015
Addr: 2404 JCMB, The King's Buildings, Mayfield Road, Edinburgh, EH9 3JZ