KRCI convergence


Peterson, Kirk

Aug 12, 2021, 4:48:05 PM
to dirac...@googlegroups.com

Dear Dirac experts,

 

Are there any secrets to improving the convergence rate of KRCI calculations? My KRCI calculations, whether on atoms or molecules, routinely take more than 1000 iterations to converge the energy to 1e-6. I haven't tried anything special outside of the defaults.

 

best regards,

 

-Kirk

Ayaki Sunaga

Aug 12, 2021, 6:28:12 PM
to dirac...@googlegroups.com
Dear Kirk,

In my experience, convergence is improved by increasing the number of roots.

For example, when you need five roots in irrep X = 1,

.CIROOTS
1 5

you should run the calculation with one extra root:

.CIROOTS
1 6

The time per iteration increases, as expected, but the convergence of the target states (roots 1-5 in this case) is improved.

I do not know the mechanism behind this.

Best regards,
Ayaki


Peterson, Kirk

Aug 12, 2021, 7:28:51 PM
to dirac...@googlegroups.com

Thanks, Ayaki. Alternatively, I wonder whether an equivalent effect is obtained by increasing the size of the Davidson subspace (.MXCIVE).

 

-Kirk

Hans Jørgen Aagaard Jensen

Aug 13, 2021, 3:32:28 AM
to dirac...@googlegroups.com

Dear Kirk,

 

Strange. If you e-mail me one of the outputs with ~1000 iterations, I will try to figure out what is going on.

My e-mail: hjj@sdu.dk

 

Hans Jørgen.

Stephan P. A. Sauer

Aug 13, 2021, 5:59:59 AM
to dirac...@googlegroups.com
Dear all,

I am trying to run a CCSD(T) calculation, which is maybe a bit big, and the job just keeps dying on me. I cannot figure out why (not enough memory, disk full, or whatever), because I don't get a real error message.

Maybe there is not enough memory; I am not sure whether I really understand the use of the -ag and -gb options to pam.

I attach the output file and the error file from the queue system. Any suggestions will be very welcome.

Mvh / Best wishes / Mit freundlichen Grüssen
Stephan

----------------------------------------------------------------------------------------------------
Stephan P. A. Sauer
Professor
Head of Section Physical Chemistry

Department of Chemistry, University of Copenhagen
Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark
Tel : +45-35320268
E-mail : sa...@kiku.dk (research related issues)
sa...@chem.ku.dk (administration related issues)
http://www.ki.ku.dk/ansatte/alle/profil/?id=110408
https://sites.google.com/site/spasauer
http://www.researcherid.com/rid/B-4966-2008

Author of Molecular Electromagnetism: A Computational Chemistry Approach
(http://ukcatalogue.oup.com/product/9780199575398.do)
----------------------------------------------------------------------------------------------------
zero_SnCl2-exp-c1-c3-c3.out
zero-25392476.err

Johann Pototschnig

Aug 14, 2021, 8:51:38 AM
to dirac-users
As far as I can tell you are running out of memory, but on the hard drive, not in RAM.
If you look at your error file you can see that the integral file for sorting (MDCINT) is about 209 GB,
and at least twice that should be available in the directory defined with --tmp='/tmp' in the pam line:
pam --inp=... --mol=... --tmp=...

One way to get it working would be to restrict the orbitals with a threshold:
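
A minimal sketch of such a restriction (assuming the .ACTIVE keyword of the **MOLTRA section with an energy window; the numbers are placeholders, so check the DIRAC manual of your version for the exact syntax):

**MOLTRA
.ACTIVE
energy -10.0 20.0 1.0

Only spinors with orbital energies inside the window then enter the integral transformation, which shrinks MDCINT accordingly.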

best,
Johann Pototschnig

Johann Pototschnig

Aug 14, 2021, 8:54:47 AM
to dirac-users
Another possibility would be to change something in the generation of the reference orbitals and check whether there are problems there.
(You probably use SCF orbitals; you could try AOC-SCF. MCSCF will probably show similar issues.)

best,
Johann

Ilias Miroslav, doc. RNDr., PhD.

Aug 14, 2021, 9:10:30 AM
to dirac-users
Hi,

RelCC calculations are demanding with respect to disk space and memory. If MDCINT is 209 GB and you have several threads, you may consume more than 209 GB of disk space for the integral transformation. A local /tmp (check with df -h /tmp) is usually not as big as the shared disk areas.

Next, if you are lucky and obtain the MDCINT file, save it for subsequent RelCC runs. For memory consumption consult http://diracprogram.org/doc/master/tutorials/cc_memory_count/count_cc_memory.html
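
If you do save it, a sketch of the reuse with pam's file staging (assuming the --get/--put options that take a quoted file list; the input file names here are placeholders):

# first run: retrieve MDCINT from the scratch directory when the job finishes
pam --inp=cc.inp --mol=mol.mol --get "MDCINT"
# later run: stage the saved MDCINT back into scratch for the RelCC step
pam --inp=cc.inp --mol=mol.mol --put "MDCINT"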

Miro



Peterson, Kirk

Aug 14, 2021, 11:10:32 AM
to dirac...@googlegroups.com

Thanks, Johann. I'm always using AOC-DHF orbitals, and my GAS setup in the KRCI typically puts all doubly occupied orbitals in one shell and the open-shell space in another (usually something like a min/max of 0/2 for each). MCSCF would of course be better, but I'm mostly using MRCI and KRCI to determine spin-orbit effects (MRCI with Dyall's spin-free Hamiltonian), and there is no spin-free MCSCF in DIRAC with which to make an equivalent calculation.

 

best,  -Kirk

 


Hans Jørgen Aagaard Jensen

Aug 17, 2021, 8:45:42 AM
to dirac...@googlegroups.com, Jeppe Olsen

Dear Kirk,

Cc Jeppe

 

I have now looked at the output file you sent me, and I see that the CI converges excruciatingly slowly; it has not converged at all in 1000 CI iterations (norm of the CI residual ca. 1e-2, threshold 1e-6!). The first ca. 10 iterations are OK, then it gets stuck: although the energy slowly decreases, the norm stays more or less the same. That is, each iteration does find a correction lowering the energy, but not at all a big component of the CI error vector.

 

I think indeed that increasing .MXCIVE from the default of 3 to e.g. 10 or more would help the convergence, and much better than increasing the number of roots by 1, because 1) with 2 roots it computes 2 sigma vectors per CI microiteration, i.e. ca. double the CPU time per CI iteration, and 2) the bigger subspace will probably give faster convergence.
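
In input terms, mirroring Ayaki's fragment above (a sketch only; keyword placement follows whatever your current KRCI input looks like):

.CIROOTS
1 1
.MXCIVE
10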

 

NOTE: although I write the following as if it were fact, it is my best guess at what happens, and I might be (partly) wrong:

The problem is that there are many CI roots of the same symmetry as the ground state (gs) which are very close in energy to the gs energy. Jeppe's default corresponding to .MXCIVE 3 works excellently when the gs is well separated from the lowest excited states; I believe this is what he tested it for, and he found that it converged (almost) as fast as if keeping a reduced space of, say, 10 vectors, thus saving memory and being able to handle bigger CI spaces. I have Cc'ed Jeppe so he can correct me if needed.

Because of the many very close-lying states in your case, cf. the .RESOLVE output, the reduced space needs to be bigger to cover the near-degeneracy properly. With .MXCIVE 3 the Davidson procedure "forgets" information about the other low-lying roots and will reconstruct, again and again, trial vectors with much more overlap with the other low-lying CI roots than with the desired gs root. This means that if we had asked for a print of the reduced eigenvector, it would have a very small coefficient on the latest trial vector, much smaller than the residual norm, and a coefficient very close to one on the best vector from the previous CI iteration. That is, very little changes.

Formulated a little differently, the Davidson algorithm is a PT2 algorithm, and it is well known that for near-degenerate states single-state PT2 must be replaced with quasi-degenerate PT. This is what I think can be achieved by increasing .MXCIVE.
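
For reference, the standard Davidson update for a root with Ritz value $\lambda$ and residual vector $r$ is the diagonally preconditioned correction
$$ t_i = \frac{r_i}{H_{ii} - \lambda}, $$
i.e. exactly a first-order perturbation expression with the diagonal of $H$ as the zeroth-order Hamiltonian. When other states lie close to $\lambda$, some denominators become small and this single-vector update is ill-conditioned; this is the quasi-degenerate situation described above.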

 

Please write back and tell us if .MXCIVE works as I hope it does!

 

Hans Jørgen.

 

PS. With Ayaki's suggestion to increase the number of roots from 1 to 2, .MXCIVE will increase to 6; furthermore, by asking explicitly for root no. 2, the algorithm ensures that the current approximate gs is orthogonal to the current approximate root 2. I think these two points explain why it converges better, but with a considerably greater number of sigma-vector constructions before it converges, and thus more computer time.

Peterson, Kirk

Aug 17, 2021, 10:21:48 AM
to dirac...@googlegroups.com, Jeppe Olsen

Dear Hans Jørgen,

 

thanks very much for taking a look at this and replying with such a nice, detailed explanation. The other day I started a case similar to the one I sent you (the neutral rather than the cation, but otherwise identical) with .MXCIVE set to 9 instead of the default 3. It immediately helped get the energy down, but it still took about 50 iterations to bring the norm into the 1e-2 range. It is now at 120 iterations, and the norm is wandering around 7e-3 with the energy decreasing by about 6e-6 every iteration. This case is particularly painful since each iteration requires 55 minutes of CPU time. Perhaps this means that even 9 is not nearly enough? I assume making this really large will significantly increase the memory requirement (not disk)?

 

best wishes,

 

-Kirk

Hans Jørgen Aagaard Jensen

Aug 17, 2021, 5:41:37 PM
to dirac...@googlegroups.com, Jeppe Olsen, Andreas Nyvang

 

Hi All, 

 

Thanks for the comments and questions concerning the convergence of the KRCI calculations using the LUCIA/LUCITA path. Andreas Nyvang and I have looked into the problems with slow convergence, and Andreas has written new routines in our local DIRAC code with much improved convergence.

 

There are several problems with the general codes currently in DIRAC. As correctly pointed out in the previous mails, the number of vectors in the subspace is important. However, another and more subtle point is how you reset: when you reach the maximal dimension of the subspace, how do you then reduce the size of the subspace, and which vectors do you choose to define the new subspace? Obviously, the current approximations to the eigenvectors should be included. If one includes only these vectors in the reset space, then the reset space contains a number of vectors equal to the number of roots. This is what is done in the current LUCIA/LUCITA iterative diagonalization codes in DIRAC. However, it is generally known that such a drastic reduction leads to very slow convergence. This is the major cause of the observed slow convergence.

 

In standard non-relativistic CI calculations, it is well known that much improved convergence can be obtained by resetting to a space containing twice as many vectors as roots, or even larger spaces. I will not go into the details here of how these additional vectors are chosen, but it is important to choose them in the right manner. Also, one needs to pay additional attention to ensure that linear dependencies do not show up.
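
To make the reset point concrete, here is a minimal dense-matrix toy in Python (not the LUCIA/LUCITA code; the matrix, dimensions, and thresholds are invented for illustration) contrasting a collapse to only the current eigenvector approximation with keeping a few extra Ritz vectors:

import numpy as np

def davidson(H, nroots=1, max_dim=6, keep=1, tol=1e-6, max_iter=500):
    """Lowest-roots Davidson with explicit subspace collapse.
    `keep` = number of Ritz vectors retained at each reset (keep >= nroots)."""
    n = H.shape[0]
    diag = np.diag(H).copy()
    V = np.eye(n)[:, np.argsort(diag)[:nroots]]   # seed on lowest diagonals
    for it in range(1, max_iter + 1):
        V, _ = np.linalg.qr(V)                    # orthonormalize the subspace
        w, s = np.linalg.eigh(V.T @ H @ V)        # reduced eigenproblem
        X = V @ s[:, :nroots]                     # current Ritz vectors
        R = H @ X - X * w[:nroots]                # residual vectors
        if np.linalg.norm(R, axis=0).max() < tol:
            return w[:nroots], it
        if V.shape[1] >= max_dim:                 # reset: collapse the subspace
            V = V @ s[:, :keep]                   # retain only `keep` vectors
        for k in range(nroots):                   # Davidson preconditioner
            denom = np.where(np.abs(diag - w[k]) > 1e-8, diag - w[k], 1e-8)
            t = R[:, k] / denom
            t -= V @ (V.T @ t)                    # project out current subspace
            if np.linalg.norm(t) > 1e-10:
                V = np.column_stack([V, t / np.linalg.norm(t)])
    return w[:nroots], max_iter

# Toy Hamiltonian with a cluster of near-degenerate low-lying states.
rng = np.random.default_rng(0)
n = 400
H = np.diag(np.concatenate([np.linspace(0.0, 0.05, 8),
                            np.linspace(1.0, 10.0, n - 8)]))
B = rng.standard_normal((n, n))
H += 1e-3 * (B + B.T) / 2                         # weak symmetric coupling
for keep in (1, 4):  # collapse to the root only vs. keep extra Ritz vectors
    E, iters = davidson(H, nroots=1, keep=keep)
    print(f"keep={keep}: E0 = {E[0]:.8f}, iterations = {iters}")

On such a toy problem with clustered low-lying states, keeping extra vectors at each reset typically cuts the iteration count substantially, which is the behavior described above.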

 

To make a long story short, Andreas has developed a much improved code for the iterative diagonalization and is planning to add it to DIRAC.

 

Best regards, 

 

Jeppe


Peterson, Kirk

Aug 18, 2021, 7:56:33 PM
to dirac...@googlegroups.com, Jeppe Olsen, Andreas Nyvang

Dear Hans Jørgen,

 

thanks very much for taking the time to look so closely into this issue, and thanks, Andreas, for writing new code! I'm looking forward to using it once it makes it into DIRAC.

 

regards, -Kirk
