Calculation Hangs After Renormalizing Density


Robert Stanton

Dec 29, 2020, 6:02:09 PM
to NWChem Forum
Dear all,

    I am new to NWChem, and I am trying to figure out whether something is wrong with the input file I have here: the calculation keeps hanging at "Renormalizing density" and then crashes. I've attached the input and output files. I initially thought it might be a memory issue, but changing the memory to 8000 MB gives the same result. This is my first time using this basis set; I have not had issues with similarly sized systems in the past, so I am wondering whether the basis set might be the problem. I also noticed the large number of linear dependencies. I have also tried an HF calculation to generate vectors to use as an input guess, but it hangs in the same way. Any help would be greatly appreciated; thanks in advance!
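
For reference, the memory setting I tried looks like this (a sketch only; the full deck is in the attached in.txt, and my understanding is that NWChem's memory directive is a per-process limit):

memory total 8000 mb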

Regards,
Robert Stanton
in.txt
out.txt

Edoardo Aprà

Dec 29, 2020, 6:17:44 PM
to NWChem Forum
Please try this modified input:

...
O     -4.76809373    -0.54584685     0.22875880
H     -4.14105898     0.87265390    -2.83584627
end
basis spherical
  * library def2-tzvpd
end
dft
  direct
  mult 1
  maxiter 250
  xc b3lyp
end
set quickguess t
task dft optimize
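
Briefly, on the changes: "spherical" stores the basis in spherical-harmonic form, which is how the def2 sets are defined and which also helps with linear dependence, while "direct" recomputes the two-electron integrals on the fly instead of caching them on disk. A launch line would look something like this (a sketch only; the binary path, deck name, and core count are placeholders for your setup):

mpirun -np 160 nwchem modified.nw > modified.out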

Robert Stanton

Dec 29, 2020, 7:19:56 PM
to NWChem Forum
Dear Dr. Aprà,

    Thank you very much for the input file. Unfortunately, this calculation ends up hanging for me as well; the output (attached) is similar to last time up to just after the linear dependencies. In addition to the crash, a set of .gridpts files is now generated, each roughly 1 MB. Some additional computational details in case they help: the calculation is running on 160 cores (4 nodes) with 128 GB of RAM per node. Thanks again for the help.
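
In case it is relevant, my understanding is that the .gridpts files are the DFT quadrature grid being cached on disk by each process, and that the grid directive can keep the grid in memory instead (a sketch, assuming the dft block from the input above):

dft
  grid nodisk    # do not store grid points on disk
  ...
end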

Regards,
Robert Stanton
outUpdated.txt

Edoardo Aprà

Dec 29, 2020, 7:40:41 PM
to NWChem Forum

Robert Stanton

Dec 29, 2020, 10:30:45 PM
to nwchem...@googlegroups.com
Dear Dr. Aprà,

     I've decreased the memory to 2000 MB per the previous suggestion, which, in conjunction with reducing the core count to 80, does work (it still hangs at 4 nodes/160 cores). The GitHub discussion linked in the post above suggests there can sometimes be issues with more than ~150 cores. SCF iterations are currently taking ~18 minutes each, so ideally I'd like to bring the number of cores back up. At this point, would it be right to think this is something to raise with those who installed NWChem on the cluster, rather than something that can be controlled through user input? Again, thank you so much for the help; I really appreciate it.
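
For what it's worth, the numbers seem consistent with a per-process memory limit: 160 cores over 4 nodes is 40 processes per node, and 128 GB / 40 is about 3.2 GB per process, so the earlier 8000 MB setting would over-commit node RAM while 2000 MB leaves headroom:

memory total 2000 mb   # 40 procs/node x 2000 MB = 80 GB, under 128 GB/node (sketch)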

Regards,
Robert Stanton
