the wfu file is not created when a task fails

Anatoliy

Mar 9, 2024, 3:41:09 AM
to molpro-user
Dear Molpro users!

I am having trouble getting a wfu file in Molpro 2015. I have read the manual carefully, but I don't understand where I went wrong and haven't managed to figure it out.

The wfu file for a job is created only when the job finishes normally/successfully; when a task fails, no wfu file is left behind.
I would much appreciate it if somebody could help me with this. I don't think it is tied to a particular input file, but I'm not sure.

Best regards, Anatoliy

The attached input file is here:

***, MSX_W5_W6_v4
memory,3000,MW
file,2,MSX_W5_W6_v4.wfu,new
basis={
...                  ! def2-TZVPPD, taken from https://www.basissetexchange.org
}
geometry={angstrom;
O;
Sn, 1, r1;
O,  2, r2, 1, a1
}
r1 = 2.020073594
r2 = 2.019721422
a1 = 53.11833641

{multi;occ,17,6;closed,8,3;wf,38,1,2;canonical,2140.2}
{rhf;start,2140.2;orbital,2100.2}
{multi;start,2100.2;occ,17,6;closed,8,3;wf,38,1,2;state,2 !triplet state
maxiter,40;
CPMCSCF,NACM,1.1,2.1,accu=1.0d-7,record=5100.1
CPMCSCF,GRAD,1.1,spin=1,accu=1.0d-7,record=5101.1 !cpmcscf for gradient of triplet state 1
CPMCSCF,GRAD,2.1,spin=1,accu=1.0d-7,record=5102.1 !cpmcscf for gradient of triplet state 2
}

{Force
SAMC,5100.1     !compute coupling matrix element
CONICAL,6100.1} !save information for optimization of conical intersection
{Force
SAMC,5101.1          !state averaged gradient for triplet state 1
CONICAL,6100.1} !save information for OPTCONICAL
{Force
SAMC,5102.1          !state averaged gradient for triplet state 2
CONICAL,6100.1} !save information for OPTCONICAL

optg,startcmd=multi  !find triplet 1 - triplet 2 crossing point
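
A note on the FILE directive, in case it matters for restarts: the `new` status creates the dump file afresh, so rerunning this input would overwrite an existing MSX_W5_W6_v4.wfu rather than restart from it. To restart, one would typically drop `new` so the existing records are read instead; a sketch, assuming standard FILE-directive behavior in Molpro 2015:

file,2,MSX_W5_W6_v4.wfu   ! no "new" status: reopen the existing dump file and restart from its records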

The PBS file is here:
#!/bin/bash -l
#
#SBATCH --job-name=MSX_W5_W6_v4
#SBATCH --time=150:00:00
#SBATCH --nodes=1 --ntasks-per-node=12 --constraint=mem512
#SBATCH --partition batch

cd $SLURM_SUBMIT_DIR

module load molpro/2015

molpro -n 12 MSX_W5_W6_v4.molpro

tibo...@gmail.com

Mar 20, 2024, 6:42:37 PM
to molpro-user
Dear Anatoliy,

You mention a PBS file, but your batch script uses SLURM keywords. I have not worked on any cluster that used PBS, but that was a bit surprising.
Anyway, I think it might be related to how your cluster handles files written to temporary directories. If they end up in a node-local directory, they may be deleted automatically when your job terminates abnormally.
It is hard to say for sure, since job epilogues and similar mechanisms are often customized by cluster admins, for example to prevent misbehaving jobs from filling up the disks of compute nodes.
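
One way to guard against this is to keep Molpro's permanent files on the shared filesystem rather than in node-local scratch. A sketch of how the job script might do that, assuming the molpro wrapper on your cluster accepts the usual -d (scratch directory) and -W (wavefunction-file directory) options and that $SLURM_SUBMIT_DIR sits on shared storage:

#!/bin/bash -l
#SBATCH --job-name=MSX_W5_W6_v4
#SBATCH --time=150:00:00
#SBATCH --nodes=1 --ntasks-per-node=12 --constraint=mem512
#SBATCH --partition batch

cd $SLURM_SUBMIT_DIR

module load molpro/2015

# -d points the large scratch files at node-local space (TMPDIR is an assumption;
# adjust to your cluster), while -W keeps the .wfu in the submit directory on
# shared storage, so it survives even if node-local scratch is purged on failure
molpro -n 12 -d ${TMPDIR:-/tmp} -W $SLURM_SUBMIT_DIR MSX_W5_W6_v4.molpro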

Best wishes,
Tibor