Problem about PNO-LCCSD(T)-F12 calculation.


azuma

May 19, 2021, 2:48:18 AM5/19/21
to molpro-user
Dear MOLPRO experts,

I'm trying to perform a single-point PNO-LCCSD(T)-F12/VTZ-F12 calculation for a molecule (C10H18O3) using MOLPRO 2020.1 and have run into a problem.

The problem is that the output of the program stops suddenly, without any error message, while CPU (wall) time continues to be consumed.

When specifying "gprint,cpu=3,io", the output stopped after printing the message "Global array created for S-matrices, size: 762.96 MW, Maximum size per node: 762.96 MW, 224765 blocks".  An outline of the output is as follows:

-- from here
  (snip)
Memory per process:       128 MW
 Total memory per node:  89048 MW
 Total GA space:         87000 MW

 GA preallocation enabled
 GA check enabled

 Variable memory set to 128.0 MW
  (snip)
Estimated GA usage of PNO-LCCSD-F12 integrals:               1656.73 MW
  (snip)
Maximum GA usage after allocating GAs for CCSD integrals:    1658.01 MW
  (snip)

     Elapsed time for 2-External K integrals (CCSD): 77.00 sec
       Step                                  Current   Average   Maximum
       2-External K in: 3-idx ints             60.09     63.19     70.68
       2-External K in: assembly                6.72      5.53      6.88
       2-External K in: transformation          8.42      7.01      9.22
       2-External K in: Other steps             1.78      1.27      2.03

     Size of S-matrices (max per node): [D, D]: 497.47 MW, [D, S]: 239.76 MW, [S, S]: 25.73 MW
     Global array created for S-matrices, size: 762.96 MW, Maximum size per node: 762.96 MW, 224765 blocks
-- to here

My input file is listed below:

-- from here
memory,stack=128m,ga=87000m
!
gthresh,energy=1.d-8,throvl=0.5d-9,twoint=1.d-12
gprint,cpu=3,io
!
angstrom
symmetry,nosym
geometry={
  (snip)
}
explicit,gem_beta=1.0
!
basis=vtz-f12
df-hf,accu=16
pno-lccsd(t)-f12,cabs_singles=1
-- to here

The system used is a single node with 40 cores and 768 GB of memory, but I ran the calculation with only 16 MPI processes (-n 16) in order to allocate a larger GA memory.

My understanding is that the total memory is estimated as (128 + 300)*16 + 87000 = 93848 MW = 750784 MB < 768 GB (1 MW = 8 MB), according to the manual.

In contrast to PNO-LCCSD(T)-F12, CCSD(T)/AVTZ and CCSD(T)-F12/AVTZ single-point calculations worked well (-n 20, memory,3800,m) on the same system.

Could anyone help me, or give me any suggestions or comments?

Best regards,
Azuma

qia...@theochem.uni-stuttgart.de

May 19, 2021, 3:38:21 PM5/19/21
to molpro-user
This is not a very big calculation and it should run with ~16 GB of GA. I suspect that this is a technical problem related to the GA library (the build may not support that much shared memory in a node). You can probably try one of the following:

  1. Run the calculations with a moderate GA memory specification like 2000 MW or 10000 MW.
  2. Use the disk option instead of GA (by adding "implementation=disk" option to the pno-lccsd line of input). Since you have abundant memory, you may consider passing "-D /dev/shm" in the Molpro command line so that the files replacing GAs will stay in tmpfs.
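For concreteness, the two suggestions might look like the following (a sketch based on the input above; the GA figure and the input file name are only examples):

```
! Suggestion 1: a moderate GA specification on the memory card
memory,stack=128m,ga=2000m

! Suggestion 2: the disk implementation, requested on the PNO-LCCSD card itself
pno-lccsd(t)-f12,cabs_singles=1,implementation=disk
```

With suggestion 2, starting the job as "molpro -n 16 -D /dev/shm input.inp" keeps the files that replace the GAs in tmpfs.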

If the problem persists, please provide the geometry of the molecules and we will try to reproduce the problem.

azuma

May 20, 2021, 3:27:10 AM5/20/21
to molpro-user
Thank you very much for your assistance!

I have already tested a smaller GA memory and the "implementation=disk" option for a smaller molecule on a node with fewer resources, but I have not yet tested "-D /dev/shm".

Because the total wall time (per CPU) I can use is limited, I'll run several test calculations on the smaller molecule according to your suggestions.

Best regards,
Azuma

On Thursday, May 20, 2021 at 4:38:21 UTC+9, qia...@theochem.uni-stuttgart.de wrote:

azuma

May 26, 2021, 3:38:28 AM5/26/21
to molpro-user
Following your suggestions, I ran the tests below (input files are listed at the bottom):
  test1: insert "implementation=disk" after the memory line.
  test2: add "ga=1024m" option to the memory line of input.
  test3: add "ga=8000m" option to the memory line of input.
  test4: add "implementation=disk" option to the pno-lccsd line of input.
  test5: as test4, but pass "-D /dev/shm" in the script to submit the Molpro job.
  test6: as test4, but add "setenv MOLPRO_GLOBAL_SCRATCH /dev/shm" in the script to submit the Molpro job.

As a result, I got the following:
  test4 and test6: normally terminated.
  test1 and test2: the output stopped after the line, "Global array created for S-matrices, ... blocks"
  test3:           surprisingly, worked well in the first trial, but the output stopped as listed above in the second trial.
  test5:           my script did not work with the message "unknown option -D".
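For reference, the relevant lines of the test5 and test6 submission scripts were roughly as follows (a sketch reconstructed from the descriptions above; the process count and input file name are illustrative):

```
# test5: -D passed on the Molpro command line
# (rejected by my Molpro 2020.1 with "unknown option -D")
molpro -n 16 -D /dev/shm test1.inp

# test6: global scratch set via an environment variable (csh syntax)
setenv MOLPRO_GLOBAL_SCRATCH /dev/shm
molpro -n 16 test1.inp
```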

From these results, I also suspect something is wrong in the GA library. I'd like to know what the difference is between test1 and test4.

Anyway, using the test6 approach, the original calculation also terminated normally (I tried twice, and it terminated normally both times).

Thank you very much again for your assistance!

Best regards,
Azuma
  
  input:
      memory,stack=64m
      gthresh,energy=1.d-8,throvl=0.5d-9
      gprint,cpu=3,io
      !
      include,test1.geom
      !
      explicit,gem_beta=1.0
      !
      basis=vtz-f12
      df-hf,accu=16
      pno-lccsd(t)-f12,cabs_singles=1
  test1.geom:
      symmetry,nosym
      angstrom
      geometry={
      H
      C  1  R2
      N  2  R3   1  A3
      C  2  R4   1  A4    3  D4
      H  2  R5   1  A5    3  D5
      C  3  R6   2  A6    4  D6
      H  3  R7   2  A7    6  D7
      N  4  R8   2  A8    3  D8
      O  4  R9   2  A9    8  D9
      H  6  R10  3  A10   2  D10
      O  6  R11  3  A11  10  D11
      H  8  R12  4  A12   2  D12
      H  8  R13  4  A13  12  D13
      }
      R2    =     1.08602884
      R3    =     1.45564678
      R4    =     1.53143259
      R5    =     1.09071890
      R6    =     1.35215064
      R7    =     1.00662061
      R8    =     1.35599176
      R9    =     1.22498976
      R10   =     1.09844148
      R12   =     1.00629301
      R13   =     1.01043119
      A3    =   108.94016984
      A4    =   107.54644607
      A5    =   109.75822308
      A6    =   122.23567623
      A7    =   119.29932221
      A8    =   114.02641624
      A9    =   121.69375068
      A10   =   113.14813474
      A11   =   124.19084912
      A12   =   118.29975416
      A13   =   120.30320985
      D4    =  -122.34662645
      D5    =   117.99105244
      D6    =   -82.80058217
      D7    =   175.93629688
      D8    =    68.48094214
      D9    =   179.77094030
      D10   =  -178.54655412
      R11   =     1.22779479
      D11   =   179.68875127
      D12   =  -175.48072782
      D13   =   170.66633672

On Thursday, May 20, 2021 at 16:27:10 UTC+9, azuma wrote:

qia...@theochem.uni-stuttgart.de

May 26, 2021, 5:25:16 AM5/26/21
to molpro-user
"implementation=disk" is an option of the PNO-LCCSD program, not of Molpro in general. Adding it after the "memory" card in the input merely defines a variable in that input file and does not affect the program flow of PNO-LCCSD. Since Molpro 2021.1, a new command-line option "--ga-impl disk" has been added to use disk in all job steps.
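To illustrate the difference (a sketch based on the test inputs above):

```
! test1 style: has no effect on PNO-LCCSD; it only defines an input variable
memory,stack=64m
implementation=disk
pno-lccsd(t)-f12,cabs_singles=1

! test4 style: the option is parsed by the PNO-LCCSD card itself
pno-lccsd(t)-f12,cabs_singles=1,implementation=disk
```

With Molpro 2021.1 or later, "molpro --ga-impl disk input.inp" achieves the same for all job steps.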

Judging from test5, you might be running a rather old version of Molpro. If that is the case, the MOLPRO_GLOBAL_SCRATCH variable has no effect. You should see something like "Global scratch directory       : /dev/shm/molpro.jWAgsufgWD/" at the top of the output if it is working. Please upgrade Molpro if possible.

azuma

May 27, 2021, 3:57:15 AM5/27/21
to molpro-user
I've just heard that the latest Molpro, version 2021.1, is already installed at the computer center of IMS in Okazaki, Japan, where I use MOLPRO.  Using the latest version, I confirmed that both the test2 and test5 inputs worked fine.

The header of the output from the older MOLPRO 2020.1 that I used is as follows:

-- from here

 Primary working directories    : /work/users/cto/molpro.uuoVqJyPx5
 Secondary working directories  : /work/users/cto/molpro.uuoVqJyPx5
 Wavefunction directory         : /work/users/cto/wfu/
 Main file repository           : /work/users/cto/
(snip)
1


                                         ***  PROGRAM SYSTEM MOLPRO  ***
                                       Copyright, TTI GmbH Stuttgart, 2015
                                    Version 2020.1 linked Oct 19 2020 17:02:16


 **********************************************************************************************************************************
 LABEL *   test1                                                                         
  64 bit mpp version                                                                     DATE: 25-May-21          TIME: 15:16:12  
 **********************************************************************************************************************************
(snip)
-- to here

As you suggested, I could not find the line "Global scratch directory       : /dev/shm/molpro.XXX..." in the test6 output from the older version.  I understand that test6 was substantially identical to test4 when using the older version.

In contrast, I found the line in the output from the latest version, as listed below, and could confirm that the -D option worked fine, although I could not ls /dev/shm on the node where the program was executed:
      1 
      2  Working directory              : /work/users/cto/molpro.NSmwSh3eKr/
      3  Global scratch directory       : /dev/shm/molpro.I8a8XgrNwy/
      4  Wavefunction directory         : /work/users/cto/wfu/
      5  Main file repository           : /work/users/cto/

Best regards,
Azuma

On Wednesday, May 26, 2021 at 18:25:16 UTC+9, qia...@theochem.uni-stuttgart.de wrote: