The Memory Usage is Increasing


Ke Zhou

Oct 9, 2019, 2:37:38 AM
to cp2k
Dear Developers,

I am running CP2K 6.1 on our cluster.
The memory usage of cp2k increases steadily while the job runs, and the job finally stops when the usage reaches the upper bound.

I suspect the problem comes from the compilation. Can you give me some suggestions?

Best,
Ke   

Nikhil Maroli

Oct 9, 2019, 2:58:42 AM
to cp...@googlegroups.com
I had the same problem; it was caused by running the wrong build of the program.
You may try another build type, such as popt or psmp. Please refer to the previous threads.

--
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/0b40e271-0ebb-46c9-9c58-5a3bdd401da7%40googlegroups.com.


--
Regards,
Nikhil Maroli

Ke Zhou

Oct 9, 2019, 3:21:54 AM
to cp...@googlegroups.com
Dear Nikhil,

Thanks for your suggestion.
I used popt. Now I am trying psmp.

Best,
Ke


Ke Zhou

Oct 9, 2019, 7:25:41 AM
to cp...@googlegroups.com
Dear Nikhil,

I have also tested psmp. The memory still increases.

Best,
Ke  


Nikhil Maroli

Oct 9, 2019, 7:44:44 AM
to cp...@googlegroups.com
Please share your run command and the details of your system here.
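In the meantime, one way to document the growth is to sample the resident set size (RSS) of the running process at fixed intervals. A minimal sketch (it uses this shell's own PID, $$, as a stand-in so it can run anywhere; substitute the PID of your cp2k process, e.g. from `pgrep -f cp2k.popt`, and a longer sleep interval):

```shell
#!/bin/sh
# Sample a process's resident set size (RSS, in kB) a few times.
# pid=$$ is a stand-in so the sketch is runnable as-is; replace it
# with the PID of the running cp2k process to track the leak.
pid=$$
for i in 1 2 3; do
    rss=$(ps -o rss= -p "$pid")
    echo "sample $i: RSS = ${rss} kB"
    sleep 1
done
```

If the printed RSS values climb monotonically with each MD step, that confirms a leak rather than a one-time allocation spike.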

Ke Zhou

Oct 9, 2019, 7:57:24 AM
to cp...@googlegroups.com
I tested the same input files on our old cluster (CP2K 2.6) and they work well.
I think the problem could come from the compilation.
I compiled the package with Intel MPI. The following is the "Linux-x86-64-intel.popt" arch file.
Can you give me some suggestions on it?


LIBXSMM  = /apps/lib/libxsmm/1.9.0/e/gnu
LIBXC    = /apps/lib/libxc/4.0.4/e5/gnu
LIBINT   = /apps/lib/libint/1.1.4/gnu
LIBELPA  = /apps/lib/elpa/2017.05.002/e5/gnu

CC       = cc
CPP      =
FC       = mpif90
LD       = mpif90
AR       = ar -r
CPPFLAGS =
DFLAGS   = -D__MKL -D__FFTW3 -D__HAS_NO_SHARED_GLIBC -D__LIBXSMM \
           -D__parallel -D__SCALAPACK \
           -D__ELPA=201705 \
           -D__LIBXC \
           -D__LIBINT -D__LIBINT_MAX_AM=7 -D__LIBDERIV_MAX_AM1=6 \
           -D__MAX_CONTR=4
CFLAGS   = $(DFLAGS) -O2
FCFLAGS  = $(DFLAGS) -O2 -ffast-math -ffree-form -ffree-line-length-none -ftree-vectorize -funroll-loops -mtune=native -std=f2008
#FCFLAGS  = $(DFLAGS) -O2 -xW -heap-arrays 64 -funroll-loops -fpp -free
#FCFLAGS += -fp-model precise
#FCFLAGS += -g -traceback
FCFLAGS += -I${MKLROOT}/include -I${MKLROOT}/include/fftw
FCFLAGS += -I$(LIBXSMM)/include
FCFLAGS += -I$(LIBXC)/include
FCFLAGS += -I$(LIBELPA)/include/elpa-2017.05.002/modules -I$(LIBELPA)/include/elpa-2017.05.002/elpa
LDFLAGS  = $(FCFLAGS)
LDFLAGS_C = $(FCFLAGS)
LIBS     = -L$(LIBELPA)/lib -lelpa
MKL_LIB  = ${MKLROOT}/lib/intel64
LIBS    += $(MKL_LIB)/libmkl_scalapack_lp64.a -Wl,--start-group \
           $(MKL_LIB)/libmkl_intel_lp64.a $(MKL_LIB)/libmkl_sequential.a \
           $(MKL_LIB)/libmkl_core.a \
           $(MKL_LIB)/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group \
           -lpthread -lm
LIBS    += -L$(LIBXSMM)/lib -lxsmmf -lxsmm -ldl
LIBS    += -L$(LIBXC)/lib -lxcf03 -lxc
LIBS    += -L$(LIBINT)/lib -lderiv -lint -lstdc++

# Required due to a memory leak that occurs if high optimisations are used
mp2_optimize_ri_basis.o: mp2_optimize_ri_basis.F
	$(FC) -c $(subst O2,O0,$(FCFLAGS)) $<




Phillip Seeber

Oct 10, 2019, 4:21:44 AM
to cp2k
Dear Ke,
I had the same problem on our cluster when running MD simulations: the memory usage increased slowly with every MD step. In our case, OpenMPI was causing the problem. Switching to MVAPICH2 solved it and also increased performance quite noticeably on our Omni-Path network.
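If you are unsure which MPI implementation your binary actually pulls in at run time, a quick sketch is to inspect its dynamic dependencies with `ldd`. The default of `/bin/ls` below is only so the sketch runs anywhere; point `binary` at your own executable (e.g. `./cp2k.popt`):

```shell
#!/bin/sh
# Sketch: list the MPI-related shared libraries a binary links against.
# /bin/ls is a placeholder default; pass your cp2k executable as $1.
binary="${1:-/bin/ls}"
ldd "$binary" | grep -iE 'mpi|mvapich' || echo "no MPI libraries linked"
```

For a statically linked binary `ldd` will not help; in that case the library paths in the arch file (and `mpif90 -show`, if your wrapper supports it) tell you which MPI was used at build time.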

Best wishes
Phillip