Calculation of the electronic spectrum with PYXAID cannot run


misa...@gmail.com

Sep 9, 2017, 3:03:18 AM
to Quantum-Dynamics-Hub
Dear Alexey,


Error in /wd/job0/CRASH,


 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #        14
     from pw_readfile : error #         1
     error opening xml data file
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #        10
     from pw_readfile : error #         1
     error opening xml data file
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #        13
     from pw_readfile : error #         1
     error opening xml data file
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


py-scr2.py,

from PYXAID import *
import os

nsteps_per_job = 2
tot_nsteps = 4

os.system("rm -rf wd")
os.system("rm -rf res")
os.system("mkdir res")

# Step 1 - split the MD trajectory into time steps
# Provide files listed below: "x.md.out" and "x.scf.in"
# IMPORTANT: 
# 1) use only ABSOLUTE path for PP in x.scf.in file
# 2) provided input file is just a template, so do not include coordinates
#out2inp.out2inp("x.md.out","x0.scf.in","wd","x0.scf",0,tot_nsteps,1)  # neutral setup
#out2inp.out2inp("x.md.out","x1.scf.in","wd","x1.scf",0,tot_nsteps,1)  # charged setup
xdatcar2inp.xdatcar2inp("XDATCAR","x0.scf.in","wd","x0.scf",0,4,1)

# Step 2 - distribute all time steps into groups (jobs),
# several time steps per group - this is to accelerate calculations.
# Creates a "customized" submit file for each job and submits it - runs
# a swarm of independent calculations (trajectory pieces)
# (HTC paradigm)
# Provide the files below: 
# submit_templ.pbs - template for submit files - manually edit the variables
# x.exp.in - file for export of the wavefunction

os.system("cp submit_templ.pbs wd")
os.system("cp x0.exp.in wd")
#os.system("cp x1.exp.in wd")
os.chdir("wd")
distribute.distribute(0,4,nsteps_per_job,"submit_templ.pbs",["x0.exp.in"],["x0.scf"],1) # 1 = PBS, 2 = SLURM, 0 = no run
#distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.pbs",["x0.exp.in","x1.exp.in"],["x0.scf","x1.scf"],1)
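# Note: with tot_nsteps = 4 and nsteps_per_job = 2 this should produce wd/job0 and
# wd/job1, each holding its slice of the x0.scf.*.in files plus a customized copy
# of submit_templ.pbs (consistent with the tree listings further down in the thread).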





submit_templ.pbs,

#!/bin/sh

#PBS -l nodes=1:ppn=24
#PBS -N qe_test
#PBS -q q_zhq_bnulongr

 cd $PBS_O_WORKDIR

 NPROCS=$PBS_NP

 source /home/export/parastor/clussoft/profile.d/intelmpi.sh 
 # mpirun -machinefile $PBS_NODEFILE -np $NPROCS  pw.x <x.in> x.out


exe_qespresso=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw.x
exe_export=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw_export.x
exe_convert=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk

NP=$(wc -l $PBS_NODEFILE | awk '{print $1}')
res="/home/export/parastor/keyuser/bnulongr/apps/test/test2/res"

# These will be assigned automatically, leave them as they are
param1=
param2=


minband=176




# This invokes the scripts that will further handle the NA-MD calculations
# at the NAC calculation step
python -c "from PYXAID import *
params = { }
params[\"BATCH_SYSTEM\"]=\"mpirun\"
params[\"NP\"]=$NP
params[\"EXE\"]=\"$exe_qespresso\"
params[\"EXE_EXPORT\"]=\"$exe_export\"
params[\"EXE_CONVERT\"] =\"$exe_convert\"
params[\"start_indx\"]=$param1
params[\"stop_indx\"]=$param2
params[\"wd\"]=\"wd\"
params[\"rd\"]=\"$res\"
params[\"minband\"]=$minband
params[\"nocc\"]=176
params[\"maxband\"]=177
params[\"nac_method\"]=0
params[\"wfc_preprocess\"]=\"complete\"
params[\"do_complete\"]=1
params[\"prefix0\"]=\"x0.scf\"
params[\"compute_Hprime\"]=1
params[\"pptype\"]=\"US\"
print params
runMD1.runMD(params)
"



Thank you. Looking forward to your reply.

Best.


Misaraty

Wei Li

Sep 9, 2017, 3:52:42 AM
to Quantum-Dynamics-Hub
Hi Misaraty,

There is not enough information to say what is going wrong.
It seems your QE did not execute normally. Can you please post your QE input file, or the tail of the x0.scf.x.out file?

Wei

misa...@gmail.com

Sep 9, 2017, 6:26:37 AM
to Quantum-Dynamics-Hub
Hi Wei,

Thanks.

I re-tested PYXAID; QE itself seems to run without problems. The related content is as follows:

[bnulongr@sn21 test0]$ tree
.
├── py-scr2.py
├── submit_templ.pbs
├── wd
│   ├── job0
│   │   ├── CRASH
│   │   ├── qe_test.e38391
│   │   ├── qe_test.o38391
│   │   ├── scf.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── I.pbe-n-rrkjus_psl.0.2.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval.xml
│   │   │   │   ├── evc.dat
│   │   │   │   └── gkvectors.dat
│   │   │   ├── N.pbe-rrkjus.UPF
│   │   │   └── Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
│   │   ├── submit_templ.pbs
│   │   ├── wd
│   │   │   ├── curr0
│   │   │   │   ├── x0.export
│   │   │   │   └── x0.scf.0.out
│   │   │   └── next0
│   │   │       ├── x0.export
│   │   │       └── x0.scf.1.out
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.scf.0.in
│   │   ├── x0.scf.1.in
│   │   └── x0.scf.2.in
│   ├── job1
│   │   ├── CRASH
│   │   ├── qe_test.e38392
│   │   ├── qe_test.o38392
│   │   ├── scf.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── I.pbe-n-rrkjus_psl.0.2.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval.xml
│   │   │   │   ├── evc.dat
│   │   │   │   └── gkvectors.dat
│   │   │   ├── N.pbe-rrkjus.UPF
│   │   │   └── Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
│   │   ├── submit_templ.pbs
│   │   ├── wd
│   │   │   ├── curr0
│   │   │   │   ├── x0.export
│   │   │   │   └── x0.scf.2.out
│   │   │   └── next0
│   │   │       ├── x0.export
│   │   │       └── x0.scf.3.out
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.scf.2.in
│   │   └── x0.scf.3.in
│   ├── submit_templ.pbs
│   ├── tmp
│   ├── x0.exp.in
│   ├── x0.scf.0.in
│   ├── x0.scf.1.in
│   ├── x0.scf.2.in
│   ├── x0.scf.3.in
│   └── x0.scf.4.in
├── x0.exp.in
├── x0.scf.in
└── XDATCAR

17 directories, 56 files

~/test0/wd/job0/wd/curr0/x0.scf.0.out


     Program PWSCF v.5.3.0 (svn rev. 11974) starts on  9Sep2017 at 18: 0:10 

     This program is part of the open-source Quantum ESPRESSO suite
     for quantum simulation of materials; please cite
         "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
          URL http://www.quantum-espresso.org", 
     in publications or presentations arising from this work. More details at

     Parallel version (MPI), running on    12 processors
     R & G space division:  proc/nbgrp/npool/nimage =      12
     Waiting for input...
     Reading input from standard input
Warning: card &IONS ignored
Warning: card   ION_DYNAMICS = 'VERLET', ignored
Warning: card   ION_TEMPERATURE = 'ANDERSEN', ignored
Warning: card   TEMPW = 300.00 , ignored
Warning: card   NRAISE = 1, ignored
Warning: card / ignored
     Message from routine read_cards :
     DEPRECATED: no units specified in CELL_PARAMETERS card

     Current dimensions of program PWSCF are:
     Max number of different atomic species (ntypx) = 10
     Max number of k-points (npk) =  40000
     Max angular momentum in pseudopotentials (lmaxx) =  3
               file C.pbe-rrkjus.UPF: wavefunction(s)  2S 2P renormalized
               file N.pbe-rrkjus.UPF: wavefunction(s)  2S renormalized
               file H.pbe-rrkjus.UPF: wavefunction(s)  1S renormalized
               file Pb.pbe-dn-rrkjus_psl.0.2.2.UPF: wavefunction(s)  6S renormalized

     gamma-point specific algorithms are used

     Subspace diagonalization in iterative solution of the eigenvalue problem:
     a serial algorithm will be used


     Parallelization info
     --------------------
     sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
     Min        1149     614    153               101593    39566    4945
     Max        1152     616    156               101604    39644    4950
     Sum       13809    7377   1853              1219157   475273   59373
     Tot        6905    3689    927



     bravais-lattice index     =            0
     lattice parameter (alat)  =       1.8897  a.u.
     unit-cell volume          =   13896.6342 (a.u.)^3
     number of atoms/cell      =           96
     number of atomic types    =            5
     number of electrons       =       432.00
     number of Kohn-Sham states=          259
     kinetic-energy cutoff     =      40.0000  Ry
     charge density cutoff     =     300.0000  Ry
     convergence threshold     =      1.0E-05
     mixing beta               =       0.4500
     number of iterations used =            8  plain     mixing
     Exchange-correlation      = PBE ( 1  4  3  4 0 0)

     celldm(1)=   1.889700  celldm(2)=   0.000000  celldm(3)=   0.000000
     celldm(4)=   0.000000  celldm(5)=   0.000000  celldm(6)=   0.000000

     crystal axes: (cart. coord. in units of alat)
               a(1) = (  12.722600   0.000000   0.000000 )  
               a(2) = (   0.000000  12.722600   0.000000 )  
               a(3) = (   0.000000   0.000000  12.722700 )  

     reciprocal axes: (cart. coord. in units 2 pi/alat)
               b(1) = (  0.078600  0.000000  0.000000 )  
               b(2) = (  0.000000  0.078600  0.000000 )  
               b(3) = (  0.000000  0.000000  0.078600 )  


     PseudoPot. # 1 for C  read from file:
     /home/export/parastor/keyuser/bnulongr/soft/qe/upf_files/C.pbe-rrkjus.UPF
     MD5 check sum: 00fb224312de0c5b6853bd333518df6f
     Pseudo is Ultrasoft, Zval =  4.0
     Generated by new atomic code, or converted to UPF format
     Using radial grid of  627 points,  4 beta functions with: 
                l(1) =   0
                l(2) =   0
                l(3) =   1
                l(4) =   1
     Q(r) pseudized with 0 coefficients 


     PseudoPot. # 2 for N  read from file:
     /home/export/parastor/keyuser/bnulongr/soft/qe/upf_files/N.pbe-rrkjus.UPF
     MD5 check sum: 0c3fbe5807a93f9ba59d5a7019aa238b
     Pseudo is Ultrasoft, Zval =  5.0
     Generated by new atomic code, or converted to UPF format
     Using radial grid of 1257 points,  4 beta functions with: 
                l(1) =   0
                l(2) =   0
                l(3) =   1
                l(4) =   1
     Q(r) pseudized with 0 coefficients 


     PseudoPot. # 3 for H  read from file:
     /home/export/parastor/keyuser/bnulongr/soft/qe/upf_files/H.pbe-rrkjus.UPF
     MD5 check sum: 7cc9d459525c9a0585f487a71c3c9563
     Pseudo is Ultrasoft, Zval =  1.0
     Generated by new atomic code, or converted to UPF format
     Using radial grid of 1061 points,  2 beta functions with: 
                l(1) =   0
                l(2) =   0
     Q(r) pseudized with 0 coefficients 


     PseudoPot. # 4 for Pb read from file:
     /home/export/parastor/keyuser/bnulongr/soft/qe/upf_files/Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
     MD5 check sum: 7afc420059c37d6888fc09c0f121e83f
     Pseudo is Ultrasoft + core correction, Zval = 14.0
     Generated using "atomic" code by A. Dal Corso  v.5.0.2 svn rev. 9415
     Using radial grid of 1281 points,  6 beta functions with: 
                l(1) =   0
                l(2) =   0
                l(3) =   1
                l(4) =   1
                l(5) =   2
                l(6) =   2
     Q(r) pseudized with 0 coefficients 


     PseudoPot. # 5 for  I read from file:
     /home/export/parastor/keyuser/bnulongr/soft/qe/upf_files/I.pbe-n-rrkjus_psl.0.2.UPF
     MD5 check sum: f9a5bb98cc7d7d8a1a5c1f9867e5ab3f
     Pseudo is Ultrasoft + core correction, Zval =  7.0
     Generated using "atomic" code by A. Dal Corso  v.5.0.2 svn rev. 9415
     Using radial grid of 1247 points,  4 beta functions with: 
                l(1) =   0
                l(2) =   0
                l(3) =   1
                l(4) =   1
     Q(r) pseudized with 0 coefficients 


     atomic species   valence    mass     pseudopotential
        C              4.00    12.01000     C ( 1.00)
        N              5.00    14.00700     N ( 1.00)
        H              1.00     1.00800     H ( 1.00)
        Pb            14.00   207.20000     Pb( 1.00)
        I              7.00   126.90400      I( 1.00)

     No symmetry found



   Cartesian axes

     site n.     atom                  positions (alat units)
         1           C   tau(   1) = (  12.4919900   0.0400400   1.0058100  )
         2           C   tau(   2) = (   6.8445800  12.4877400  12.5842900  )
         3           C   tau(   3) = (  12.4829700   0.2653700   7.0510000  )
         4           C   tau(   4) = (   5.9233800   0.3575200   7.2043300  )
         5           C   tau(   5) = (   0.1250700   6.5634400   0.8483900  )
         6           C   tau(   6) = (   6.2863700   6.5229000  12.4244900  )
         7           C   tau(   7) = (   0.4100500   6.2821900   6.8020300  )
         8           C   tau(   8) = (   6.4432300   7.1875000   6.0918700  )
         9           N   tau(   9) = (   0.8430100   0.6802900   0.5738100  )
        10           N   tau(  10) = (  11.5096300  12.3539500   0.2314700  )
        11           N   tau(  11) = (   6.7262100   0.0723700  11.3086400  )
        12           N   tau(  12) = (   5.8857600  12.5626800   0.7883300  )
        13           N   tau(  13) = (   0.6778100  12.1259500   7.4620200  )
        14           N   tau(  14) = (  11.9832300   0.3638500   5.8549500  )
        15           N   tau(  15) = (   7.0612600   0.1295700   6.5855000  )
        16           N   tau(  16) = (   4.7662900   0.1612000   6.6382000  )
        17           N   tau(  17) = (   1.1951700   6.6842000   0.0913500  )
        18           N   tau(  18) = (  11.6594000   6.4806600   0.3205100  )
        19           N   tau(  19) = (   6.4000600   5.3831700   0.4042000  )
        20           N   tau(  20) = (   6.3446300   6.6490700  11.1241500  )
        21           N   tau(  21) = (  12.4083300   6.7449400   5.7790400  )
        22           N   tau(  22) = (   0.2324200   5.1396800   7.4611100  )
        23           N   tau(  23) = (   7.4716900   6.3766100   6.3276800  )
        24           N   tau(  24) = (   5.1948900   6.8346400   6.3042500  )
        25           H   tau(  25) = (  12.4100900  12.4852800   2.1023200  )
        26           H   tau(  26) = (   1.5554400   0.9869600   1.2502900  )
        27           H   tau(  27) = (   0.9541800   1.0052400  12.3382000  )
        28           H   tau(  28) = (  11.5191400  12.5474300  11.9365700  )
        29           H   tau(  29) = (  10.6787900  11.8952600   0.6746000  )
        30           H   tau(  30) = (   7.7872900  12.1539800   0.1774700  )
        31           H   tau(  31) = (   7.4926200  12.5420300  10.6460100  )
        32           H   tau(  32) = (   5.8187800   0.2817400  10.9233000  )
        33           H   tau(  33) = (   5.0280700   0.3262800   0.5673300  )
        34           H   tau(  34) = (   6.1082700  12.3477300   1.7655300  )
        35           H   tau(  35) = (  12.0403300   0.8925300   7.7828200  )
        36           H   tau(  36) = (   1.1000700  12.1904200   8.4263000  )
        37           H   tau(  37) = (   1.1950400  11.5481600   6.7764400  )
        38           H   tau(  38) = (  12.2673900  12.4806200   5.0699900  )
        39           H   tau(  39) = (  11.2814700   1.1212900   5.6690400  )
        40           H   tau(  40) = (   5.9952700   0.7481000   8.2662700  )
        41           H   tau(  41) = (   7.9357100   0.0972200   7.1242400  )
        42           H   tau(  42) = (   7.0549700  12.3478700   5.6921700  )
        43           H   tau(  43) = (   4.6324600  12.5303300   5.6896700  )
        44           H   tau(  44) = (   3.9060800   0.4232400   7.1092200  )
        45           H   tau(  45) = (   0.2367600   6.5795200   2.0155700  )
        46           H   tau(  46) = (   2.1361900   6.6418900   0.5132700  )
        47           H   tau(  47) = (   1.1429500   6.5800800  11.8158400  )
        48           H   tau(  48) = (  11.4945200   6.4741600  12.0050800  )
        49           H   tau(  49) = (  10.8291400   6.4629900   0.9355400  )
        50           H   tau(  50) = (   6.2125500   7.4204200   0.2415600  )
        51           H   tau(  51) = (   6.2397100   5.4677200   1.4095900  )
        52           H   tau(  52) = (   6.3722200   4.4447400  12.6953900  )
        53           H   tau(  53) = (   6.3630500   5.7988000  10.5171500  )
        54           H   tau(  54) = (   6.2603300   7.5964600  10.7267700  )
        55           H   tau(  55) = (   1.1620900   6.9168700   7.1774400  )
        56           H   tau(  56) = (  12.6795300   7.5598700   5.2132000  )
        57           H   tau(  57) = (  11.7710300   6.0875000   5.2293900  )
        58           H   tau(  58) = (  12.2900800   4.4318200   7.0855300  )
        59           H   tau(  59) = (   0.8537100   4.8448800   8.2313000  )
        60           H   tau(  60) = (   6.7080500   8.2799200   5.7069400  )
        61           H   tau(  61) = (   8.4394700   6.6725100   6.1645600  )
        62           H   tau(  62) = (   7.2923000   5.4071700   6.6036600  )
        63           H   tau(  63) = (   4.9657300   5.8822300   6.6250100  )
        64           H   tau(  64) = (   4.3937100   7.4687800   6.1901900  )
        65           Pb  tau(  65) = (   3.3223100   3.0330200   3.1343400  )
        66           Pb  tau(  66) = (   9.4883700   2.8055200   2.8622300  )
        67           Pb  tau(  67) = (   2.8652400   3.0038800   9.4851000  )
        68           Pb  tau(  68) = (   9.4333700   2.8068100   9.2205000  )
        69           Pb  tau(  69) = (   3.0607500   9.2176100   2.6108500  )
        70           Pb  tau(  70) = (   9.3284100   9.3833700   2.9170100  )
        71           Pb  tau(  71) = (   3.1973400   9.6238800   9.0181600  )
        72           Pb  tau(  72) = (  10.1182900   8.7738400   9.1584900  )
        73           I   tau(  73) = (   0.0736100   3.9516400   3.1730600  )
        74           I   tau(  74) = (   2.5400000   0.1736600   3.9222500  )
        75           I   tau(  75) = (   3.9132200   2.5425700   0.1036600  )
        76           I   tau(  76) = (   6.5502800   3.2752500   3.6722500  )
        77           I   tau(  77) = (   9.6405100  12.5064500   3.4155500  )
        78           I   tau(  78) = (   9.1756100   2.3590800  12.4907900  )
        79           I   tau(  79) = (  12.7091900   3.4476900  10.5446600  )
        80           I   tau(  80) = (   3.2117300   0.1387300  10.2090300  )
        81           I   tau(  81) = (   2.9787200   3.7375100   6.3680700  )
        82           I   tau(  82) = (   6.4285700   3.4771200   9.0660700  )
        83           I   tau(  83) = (   9.5584600  12.4561500   9.5372700  )
        84           I   tau(  84) = (   9.9466000   3.3302100   5.9067700  )
        85           I   tau(  85) = (  12.6327800   9.6750400   3.5477800  )
        86           I   tau(  86) = (   3.7782600   6.3580300   3.2901300  )
        87           I   tau(  87) = (   4.1346700   9.6924300  12.5078800  )
        88           I   tau(  88) = (   6.2929300  10.2429500   3.7867000  )
        89           I   tau(  89) = (   9.5942700   6.4656000   3.6355600  )
        90           I   tau(  90) = (   8.2797700   9.6649500  12.6198300  )
        91           I   tau(  91) = (  12.7128000   9.7931200  10.3346000  )
        92           I   tau(  92) = (   2.9643000   6.5013900   9.4788700  )
        93           I   tau(  93) = (   3.1393000   9.6313300   5.9390600  )
        94           I   tau(  94) = (   6.1648500  10.0217400   9.0935100  )
        95           I   tau(  95) = (   9.9499700   5.9194300  10.2464200  )
        96           I   tau(  96) = (   9.3139500   9.7968000   6.1221900  )

     number of k points=     1  gaussian smearing, width (Ry)=  0.0050
                       cart. coord. in units 2pi/alat
        k(    1) = (   0.0000000   0.0000000   0.0000000), wk =   2.0000000

     Dense  grid:   609579 G-vectors     FFT dimensions: ( 135, 135, 135)

     Smooth grid:   237637 G-vectors     FFT dimensions: ( 100, 100, 100)

     Largest allocated arrays     est. size (Mb)     dimensions
        Kohn-Sham Wavefunctions         9.78 Mb     (    2475,  259)
        NL pseudopotentials            22.96 Mb     (    2475,  608)
        Each V/rho on FFT grid          3.34 Mb     (  218700)
        Each G-vector array             0.39 Mb     (   50799)
        G-vector shells                 0.16 Mb     (   21160)
     Largest temporary arrays     est. size (Mb)     dimensions
        Auxiliary wavefunctions        19.56 Mb     (    2475, 1036)
        Each subspace H/S matrix        8.19 Mb     (    1036, 1036)
        Each <psi_i|beta_j> matrix      1.20 Mb     (     608,  259)
        Arrays for rho mixing          26.70 Mb     (  218700,    8)

     Initial potential from superposition of free atoms

     starting charge  428.13062, renormalised to  432.00000
     Starting wfc are  304 randomized atomic wfcs

     total cpu time spent up to now is       13.8 secs

     per-process dynamical memory:   119.8 Mb

     Self-consistent Calculation

     iteration #  1     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  1.00E-02,  avg # of iterations =  5.0

     total cpu time spent up to now is       42.1 secs

     total energy              =   -3025.42578700 Ry
     Harris-Foulkes estimate   =   -3059.45176516 Ry
     estimated scf accuracy    <      53.47385667 Ry

     iteration #  2     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  1.00E-02,  avg # of iterations =  5.0

     total cpu time spent up to now is       70.9 secs

     total energy              =   -3036.96864618 Ry
     Harris-Foulkes estimate   =   -3054.27057340 Ry
     estimated scf accuracy    <      46.11759532 Ry

     iteration #  3     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  1.00E-02,  avg # of iterations =  2.0

     total cpu time spent up to now is       89.6 secs

     total energy              =   -3046.35043786 Ry
     Harris-Foulkes estimate   =   -3047.28252385 Ry
     estimated scf accuracy    <       3.04620659 Ry

     iteration #  4     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  7.05E-04,  avg # of iterations =  5.0

     total cpu time spent up to now is      115.2 secs

     total energy              =   -3046.54314178 Ry
     Harris-Foulkes estimate   =   -3046.75265758 Ry
     estimated scf accuracy    <       0.61165556 Ry

     iteration #  5     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  1.42E-04,  avg # of iterations =  3.0

     total cpu time spent up to now is      136.7 secs

     total energy              =   -3046.60397934 Ry
     Harris-Foulkes estimate   =   -3046.63668807 Ry
     estimated scf accuracy    <       0.10598924 Ry

     iteration #  6     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  2.45E-05,  avg # of iterations =  7.0

     total cpu time spent up to now is      161.7 secs

     total energy              =   -3046.61692321 Ry
     Harris-Foulkes estimate   =   -3046.62183164 Ry
     estimated scf accuracy    <       0.02018780 Ry

     iteration #  7     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  4.67E-06,  avg # of iterations = 11.0

     total cpu time spent up to now is      189.1 secs

     total energy              =   -3046.61962115 Ry
     Harris-Foulkes estimate   =   -3046.62001181 Ry
     estimated scf accuracy    <       0.00178714 Ry

     iteration #  8     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  4.14E-07,  avg # of iterations =  3.0

     total cpu time spent up to now is      213.1 secs

     total energy              =   -3046.62001751 Ry
     Harris-Foulkes estimate   =   -3046.62010963 Ry
     estimated scf accuracy    <       0.00050524 Ry

     iteration #  9     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  1.17E-07,  avg # of iterations =  2.0

     total cpu time spent up to now is      233.3 secs

     total energy              =   -3046.62008352 Ry
     Harris-Foulkes estimate   =   -3046.62009707 Ry
     estimated scf accuracy    <       0.00014802 Ry

     iteration # 10     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  3.43E-08,  avg # of iterations =  2.0

     total cpu time spent up to now is      254.2 secs

     total energy              =   -3046.62010510 Ry
     Harris-Foulkes estimate   =   -3046.62010449 Ry
     estimated scf accuracy    <       0.00002471 Ry

     iteration # 11     ecut=    40.00 Ry     beta=0.45
     Davidson diagonalization with overlap
     ethr =  5.72E-09,  avg # of iterations =  2.0

     total cpu time spent up to now is      277.1 secs

     End of self-consistent calculation

          k = 0.0000 0.0000 0.0000 ( 29687 PWs)   bands (ev):

   -18.0521 -18.0342 -17.9534 -17.8608 -17.6900 -17.6488 -17.5557 -17.2922
   -15.8147 -15.6955 -15.6220 -15.5481 -15.5402 -15.4460 -15.3875 -15.1642
   -14.6231 -14.5936 -14.5750 -14.5328 -14.5030 -14.4460 -14.4392 -14.4264
   -14.4146 -14.4120 -14.4067 -14.3966 -14.3703 -14.3571 -14.3272 -14.2205
   -14.1947 -14.1753 -14.1744 -14.1580 -14.1560 -14.1358 -14.1283 -14.1023
   -14.0959 -14.0847 -14.0712 -14.0518 -14.0437 -14.0323 -13.9780 -13.9713
   -13.9522 -13.9413 -13.9241 -13.8336 -13.8107 -13.7891 -13.7819 -13.7632
    -9.9307  -9.8420  -9.7400  -9.7243  -9.6716  -9.5629  -9.5510  -9.4889
    -9.4665  -9.3916  -9.3479  -9.3411  -9.2823  -9.2677  -9.2212  -9.1849
    -9.1701  -9.1596  -9.0397  -9.0072  -8.9887  -8.9138  -8.8789  -8.8628
    -8.8318  -8.7661  -8.7317  -8.6846  -8.6788  -8.5838  -8.4552  -8.3821
    -7.8747  -7.8005  -7.6743  -7.6613  -7.6406  -7.5340  -7.4401  -7.2534
    -7.2387  -7.1005  -7.0694  -6.9964  -6.8699  -6.7726  -6.7397  -6.5595
    -5.7529  -5.5396  -5.4560  -5.3951  -5.3841  -5.3419  -5.3271  -5.3028
    -5.2928  -5.2390  -5.2112  -5.0880  -5.0158  -4.9080  -4.8042  -4.5879
    -4.0928  -3.9205  -3.8523  -3.7788  -3.7439  -3.7008  -3.5687  -3.5359
    -3.3982  -3.3739  -3.3207  -3.3169  -3.1652  -3.1315  -3.0836  -2.8016
    -0.9536  -0.7925  -0.7561  -0.6331  -0.5817  -0.5111  -0.4818  -0.4595
    -0.4206  -0.3107  -0.2932  -0.2223  -0.2196  -0.1772  -0.1241  -0.1172
    -0.0503  -0.0156   0.0533   0.1332   0.2054   0.2313   0.3285   0.4526
     0.4783   0.5197   0.5371   0.5832   0.5910   0.6685   0.6913   0.7505
     0.8137   0.8296   0.8498   0.8716   0.9087   0.9280   0.9516   0.9786
     1.0149   1.0524   1.0734   1.1206   1.1464   1.1857   1.2238   1.2402
     1.2750   1.2894   1.3189   1.3341   1.3685   1.3886   1.4104   1.4648
     1.5120   1.5188   1.5346   1.5982   1.6212   1.6384   1.6598   1.6844
     1.6900   1.7464   1.7557   1.7723   1.8115   1.8527   1.8830   1.9464
     1.9717   1.9988   2.0258   2.0473   2.0971   2.1412   2.2443   2.2874
     4.1588   4.3119   4.3533   4.4116   4.4309   4.6307   4.7539   4.7964
     4.8986   4.9889   5.0700   5.1133   5.2301   5.2728   5.4010   5.4413
     5.4626   5.5621   5.5800   5.6566   5.8258   5.8745   5.9663   6.1086
     6.2079   6.3749   6.4530   6.5068   6.6329   6.7235   6.8843   6.9062
     7.0223   7.1603   7.2420   7.2896   7.3807   7.4608   7.5092   7.5675
     7.6429   7.7140   7.7466

     the Fermi energy is     2.9944 ev

!    total energy              =   -3046.62010819 Ry
     Harris-Foulkes estimate   =   -3046.62011082 Ry
     estimated scf accuracy    <       0.00000440 Ry

     The total energy is the sum of the following terms:

     one-electron contribution =   -1006.61785163 Ry
     hartree contribution      =     677.20464798 Ry
     xc contribution           =   -1496.32419759 Ry
     ewald contribution        =   -1220.88270695 Ry
     smearing contrib. (-TS)   =      -0.00000000 Ry

     convergence has been achieved in  11 iterations

     Writing output data file scf.save

     init_run     :     12.74s CPU     12.93s WALL (       1 calls)
     electrons    :    257.54s CPU    263.42s WALL (       1 calls)

     Called by init_run:
     wfcinit      :      5.25s CPU      5.32s WALL (       1 calls)
     potinit      :      2.78s CPU      2.82s WALL (       1 calls)

     Called by electrons:
     c_bands      :    216.69s CPU    220.67s WALL (      11 calls)
     sum_band     :     28.36s CPU     29.37s WALL (      11 calls)
     v_of_rho     :      3.32s CPU      3.47s WALL (      12 calls)
     newd         :      9.20s CPU      9.85s WALL (      12 calls)
     mix_rho      :      0.47s CPU      0.48s WALL (      11 calls)

     Called by c_bands:
     init_us_2    :      0.22s CPU      0.24s WALL (      23 calls)
     regterg      :    215.75s CPU    219.70s WALL (      11 calls)

     Called by sum_band:
     sum_band:bec :      0.02s CPU      0.02s WALL (      11 calls)
     addusdens    :      9.99s CPU     10.65s WALL (      11 calls)

     Called by *egterg:
     h_psi        :     80.92s CPU     82.18s WALL (      59 calls)
     s_psi        :     32.03s CPU     32.68s WALL (      59 calls)
     g_psi        :      0.07s CPU      0.07s WALL (      47 calls)
     rdiaghg      :     24.47s CPU     24.74s WALL (      58 calls)

     Called by h_psi:
     add_vuspsi   :     31.76s CPU     32.29s WALL (      59 calls)

     General routines
     calbec       :     45.45s CPU     46.12s WALL (      70 calls)
     fft          :      3.19s CPU      3.28s WALL (     188 calls)
     ffts         :      0.10s CPU      0.11s WALL (      23 calls)
     fftw         :     17.26s CPU     17.53s WALL (    8878 calls)
     interpolate  :      0.51s CPU      0.54s WALL (      23 calls)

     Parallel routines
     fft_scatter  :      5.74s CPU      5.60s WALL (    9089 calls)

     PWSCF        :  4m31.26s CPU     4m38.14s WALL


   This run was terminated on:  18: 4:48   9Sep2017            

=------------------------------------------------------------------------------=
   JOB DONE.
=------------------------------------------------------------------------------=


~/py-scr2.py,

from PYXAID import *
import os

nsteps_per_job = 2
tot_nsteps = 4

# os.system("rm -rf wd")

# Step 1 - split the MD trajectory into time steps
# Provide files listed below: "x.md.out" and "x.scf.in"
# IMPORTANT: 
# 1) use only ABSOLUTE path for PP in x.scf.in file
# 2) provided input file is just a template, so do not include coordinates
#out2inp.out2inp("x.md.out","x0.scf.in","wd","x0.scf",0,tot_nsteps,1)  # neutral setup
#out2inp.out2inp("x.md.out","x1.scf.in","wd","x1.scf",0,tot_nsteps,1)  # charged setup
xdatcar2inp.xdatcar2inp("XDATCAR","x0.scf.in","wd","x0.scf",0,4,1)

# Step 2 - distribute all time steps into groups (jobs),
# several time steps per group - this is to accelerate calculations.
# Creates a "customized" submit file for each job and submits it - runs
# a swarm of independent calculations (trajectory pieces)
# (HTC paradigm)
# Provide the files below: 
# submit_templ.pbs - template for submit files - manually edit the variables
# x.exp.in - file for export of the wavefunction

os.system("cp submit_templ.pbs wd")
os.system("cp x0.exp.in wd")
#os.system("cp x1.exp.in wd")
os.chdir("wd")
distribute.distribute(0,4,nsteps_per_job,"submit_templ.pbs",["x0.exp.in"],["x0.scf"],1) # 1 = PBS, 2 = SLURM, 0 = no run
#distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.pbs",["x0.exp.in","x1.exp.in"],["x0.scf","x1.scf"],1)


~/submit_templ.pbs,

#!/bin/sh

#PBS -l nodes=1:ppn=12
#PBS -N qe_test
#PBS -q q_zhq_bnulongr

 cd $PBS_O_WORKDIR

 NPROCS=$PBS_NP

 source /home/export/parastor/clussoft/profile.d/intelmpi.sh 
 # mpirun -machinefile $PBS_NODEFILE -np $NPROCS  pw.x <x.in> x.out


exe_qespresso=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw.x
exe_export=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw_export.x
exe_convert=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk

NP=$(wc -l $PBS_NODEFILE | awk '{print $1}')
res=/home/export/parastor/keyuser/bnulongr/works/test/test0/res

&inputpp
  prefix = 'x0',
  outdir = './',
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/soft/qe/upf_files',
  psfile(1) = 'C.pbe-rrkjus.UPF',
  psfile(2) = 'N.pbe-rrkjus.UPF',
    psfile(3) = 'H.pbe-rrkjus.UPF',
  psfile(4) = 'Pb.pbe-dn-kjpaw_psl.0.2.2.UPF'
     psfile(5) = 'I.pbe-n-kjpaw_psl.0.2.UPF',
  single_file = .FALSE.,
  ascii = .TRUE.,
  uspp_spsi = .FALSE.,
/
&CONTROL
  calculation = 'scf',
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/soft/qe/upf_files',
  outdir = './',
  prefix = 'scf',
  disk_io = 'low',
  wf_collect = .true.
/

&SYSTEM
  ibrav = 0,
  celldm(1) = 1.8897,
  nat = 96,
  ntyp = 5,
!  nspin = 2,
!  nbnd = 20,
  ecutwfc = 40.D0,
  ecutrho   = 300.D0,
!  tot_charge = 0.0,
!  starting_magnetization(1) = 0.01  !do the spin polarized calculation to get the wfc1,2,etc files
  occupations = 'smearing',
  smearing = 'gaussian',
  degauss = 0.005,
  nosym = .true.,
/

&ELECTRONS
  electron_maxstep = 300,
  conv_thr = 1.D-5,
  mixing_beta = 0.45,
/

&IONS
  ion_dynamics = 'verlet',
  ion_temperature = 'andersen',
  tempw = 300.00 ,
  nraise = 1,
/


ATOMIC_SPECIES
 C  12.01  C.pbe-rrkjus.UPF
 N     14.0069999695    N.pbe-rrkjus.UPF
   H  1.008  H.pbe-rrkjus.UPF
Pb    207.1999969482    Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
I    126.9039993286    I.pbe-n-rrkjus_psl.0.2.UPF


K_POINTS gamma                  
                               
CELL_PARAMETERS
    12.7225999832000003    0.0000000000000000    0.0000000000000000
     0.0000000000000000   12.7225999832000003    0.0000000000000000
     0.0000000000000000    0.0000000000000000   12.7227001190000006


Best.

Misaraty

Wei Li

Sep 9, 2017, 7:16:59 AM
to Quantum-Dynamics-Hub
Are you sure the x0.exp.out file was generated correctly?

Please ensure that the prefix tag in the x0.exp.in and x0.scf.in files is set consistently. I think this is related to the error when converting the wfc file or reading the xml file.
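For example, a quick consistency check (just a minimal sketch, assuming both files are in the job directory):

# check_prefix.py - hypothetical helper, not part of PYXAID
import re

def get_prefix(fname):
    # find a line like:  prefix = 'x0',
    for line in open(fname):
        m = re.search(r"prefix\s*=\s*'([^']+)'", line)
        if m:
            return m.group(1)
    return None

# both calls should print the same value, e.g. x0
print get_prefix("x0.exp.in"), get_prefix("x0.scf.in")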

Wei


misa...@gmail.com

Sep 9, 2017, 8:00:58 AM
to Quantum-Dynamics-Hub
Hi Wei,

Thanks.

Following your suggestion, I set prefix = 'x0' in both x0.exp.in and x0.scf.in, but the _re and _im files were not generated in the res folder.
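For reference, this is how I check for them (a simple sketch, run from the project root; I mean the _re/_im files PYXAID is supposed to write into res):

import glob
# list everything with _re / _im in its name that has appeared in res so far
print glob.glob("res/*_re*") + glob.glob("res/*_im*")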

[bnulongr@sn21 test0]$ tree
.
├── py-scr2.py
├── submit_templ.pbs
├── wd
│   ├── job0
│   │   ├── qe_test.e38395
│   │   ├── qe_test.o38395
│   │   ├── submit_templ.pbs
│   │   ├── wd
│   │   │   └── curr0
│   │   │       ├── x0.export
│   │   │       │   ├── grid.1
│   │   │       │   ├── index.xml
│   │   │       │   ├── mgrid
│   │   │       │   ├── wfc.1
│   │   │       │   └── wfc.1.xml
│   │   │       ├── x0.scf.2.out
│   │   │       ├── x0.wfc1
│   │   │       ├── x0.wfc10
│   │   │       ├── x0.wfc11
│   │   │       ├── x0.wfc12
│   │   │       ├── x0.wfc2
│   │   │       ├── x0.wfc3
│   │   │       ├── x0.wfc4
│   │   │       ├── x0.wfc5
│   │   │       ├── x0.wfc6
│   │   │       ├── x0.wfc7
│   │   │       ├── x0.wfc8
│   │   │       └── x0.wfc9
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── I.pbe-n-rrkjus_psl.0.2.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval.xml
│   │   │   │   ├── evc.dat
│   │   │   │   └── gkvectors.dat
│   │   │   ├── N.pbe-rrkjus.UPF
│   │   │   └── Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
│   │   ├── x0.scf.0.in
│   │   ├── x0.scf.1.in
│   │   └── x0.scf.2.in
│   ├── job1
│   │   ├── qe_test.e38396
│   │   ├── qe_test.o38396
│   │   ├── submit_templ.pbs
│   │   ├── wd
│   │   │   └── curr0
│   │   │       ├── x0.export
│   │   │       │   ├── grid.1
│   │   │       │   ├── index.xml
│   │   │       │   ├── mgrid
│   │   │       │   ├── wfc.1
│   │   │       │   └── wfc.1.xml
│   │   │       ├── x0.scf.3.out
│   │   │       ├── x0.wfc1
│   │   │       ├── x0.wfc10
│   │   │       ├── x0.wfc11
│   │   │       ├── x0.wfc12
│   │   │       ├── x0.wfc2
│   │   │       ├── x0.wfc3
│   │   │       ├── x0.wfc4
│   │   │       ├── x0.wfc5
│   │   │       ├── x0.wfc6
│   │   │       ├── x0.wfc7
│   │   │       ├── x0.wfc8
│   │   │       └── x0.wfc9
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── I.pbe-n-rrkjus_psl.0.2.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval.xml
│   │   │   │   ├── evc.dat
│   │   │   │   └── gkvectors.dat
│   │   │   ├── N.pbe-rrkjus.UPF
│   │   │   └── Pb.pbe-dn-rrkjus_psl.0.2.2.UPF
│   │   ├── x0.scf.2.in
│   │   └── x0.scf.3.in
│   ├── submit_templ.pbs
│   ├── tmp
│   ├── x0.exp.in
│   ├── x0.scf.0.in
│   ├── x0.scf.1.in
│   ├── x0.scf.2.in
│   ├── x0.scf.3.in
│   └── x0.scf.4.in
├── x0.exp.in
├── x0.scf.in
└── XDATCAR

13 directories, 86 files


At the same time, I tested the official example again, and it showed the same problem:

[bnulongr@sn21 test3]$ tree
.
├── py-scr2.py
├── py-scr6.py
├── py-scr7.py
├── res.tar.bz2
├── spectr.tar.bz2
├── submit.pbs
├── submit_templ.pbs
├── submit_templ.slm
├── wd
│   ├── job0
│   │   ├── qe_test.e38388
│   │   ├── qe_test.o38388
│   │   ├── submit_templ.pbs
│   │   ├── wd_test
│   │   │   └── curr0
│   │   │       ├── x0.export
│   │   │       │   ├── grid.1
│   │   │       │   ├── grid.2
│   │   │       │   ├── index.xml
│   │   │       │   ├── mgrid
│   │   │       │   ├── wfc.1
│   │   │       │   ├── wfc.1.xml
│   │   │       │   └── wfc.2
│   │   │       ├── x0.scf.2.out
│   │   │       └── x0.wfc1
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval1.xml
│   │   │   │   ├── eigenval2.xml
│   │   │   │   ├── evc1.dat
│   │   │   │   ├── evc2.dat
│   │   │   │   └── gkvectors.dat
│   │   │   └── spin-polarization.dat
│   │   ├── x0.scf.0.in
│   │   ├── x0.scf.1.in
│   │   └── x0.scf.2.in
│   ├── job1
│   │   ├── qe_test.e38389
│   │   ├── qe_test.o38389
│   │   ├── submit_templ.pbs
│   │   ├── wd_test
│   │   │   └── curr0
│   │   │       ├── x0.export
│   │   │       │   ├── grid.1
│   │   │       │   ├── grid.2
│   │   │       │   ├── index.xml
│   │   │       │   ├── mgrid
│   │   │       │   ├── wfc.1
│   │   │       │   ├── wfc.1.xml
│   │   │       │   └── wfc.2
│   │   │       ├── x0.scf.3.out
│   │   │       └── x0.wfc1
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.save
│   │   │   ├── charge-density.dat
│   │   │   ├── C.pbe-rrkjus.UPF
│   │   │   ├── data-file.xml
│   │   │   ├── gvectors.dat
│   │   │   ├── H.pbe-rrkjus.UPF
│   │   │   ├── K00001
│   │   │   │   ├── eigenval1.xml
│   │   │   │   ├── eigenval2.xml
│   │   │   │   ├── evc1.dat
│   │   │   │   ├── evc2.dat
│   │   │   │   └── gkvectors.dat
│   │   │   └── spin-polarization.dat
│   │   ├── x0.scf.2.in
│   │   └── x0.scf.3.in
│   ├── submit_templ.pbs
│   ├── tmp
│   ├── x0.exp.in
│   ├── x0.scf.0.in
│   ├── x0.scf.1.in
│   ├── x0.scf.2.in
│   ├── x0.scf.3.in
│   └── x0.scf.4.in
├── x0.exp.in
├── x0.scf.in
└── x.md.out

13 directories, 74 files

submit.pbs,

#!/bin/sh

#PBS -l nodes=1:ppn=2
#PBS -N qe_test
#PBS -q q_zhq_bnulongr

 # cd $PBS_O_WORKDIR

 # NPROCS=$PBS_NP

 source /home/export/parastor/clussoft/profile.d/intelmpi.sh 
 # mpirun -machinefile $PBS_NODEFILE -np $NPROCS  pw.x <x.in> x.out


exe_qespresso=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw.x
exe_export=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw_export.x
exe_convert=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk


res=/home/export/parastor/keyuser/bnulongr/apps/test/test3/res


# These will be assigned automatically, leave them as they are
param1=
param2=


# This invokes the scripts that will further handle the NA-MD calculations
# at the NAC calculation step
python -c "from PYXAID import *
params = { }
params[\"NP\"]=$NP
params[\"EXE\"]=\"$exe_qespresso\"
params[\"EXE_EXPORT\"]=\"$exe_export\"
params[\"EXE_CONVERT\"] = \"$exe_convert\"
params[\"start_indx\"]=$param1
params[\"stop_indx\"]=$param2
params[\"wd\"]=\"wd_test\"
params[\"rd\"]=\"$res\"
params[\"minband\"]=1
params[\"nocc\"]=6
params[\"maxband\"]=20
params[\"nac_method\"]=0
params[\"wfc_preprocess\"]=\"complete\"
params[\"do_complete\"]=1
params[\"prefix0\"]=\"x0.scf\"
params[\"prefix1\"]=\"x1.scf\"
params[\"compute_Hprime\"]=0
print params
runMD1.runMD(params)
"


cd $PBS_O_WORKDIR
echo $PBS_O_WORKDIR

# Don't forget to source right MPI library:
# source /usr/usc/openmpi/1.8.1/gnu/setup.sh
source /home/export/parastor/clussoft/profile.d/intelmpi.sh 

NP=$(wc -l $PBS_NODEFILE | awk '{print $1}')
echo $NP

mpirun -n $NP $exe_qespresso < x.md.in > x.md.out

x0.exp.in,

&inputpp
  prefix = 'x0',
  outdir = './',
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/soft/qe/upf_files',
  psfile(1) = 'C.pbe-rrkjus.UPF',
  psfile(2) = 'H.pbe-rrkjus.UPF',
  single_file = .FALSE.,
  ascii = .TRUE.,
  uspp_spsi = .FALSE.,
/
&CONTROL
  calculation = 'scf',
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/soft/qe/upf_files',
  outdir = './',
  prefix = 'x0',
  disk_io = 'low',
  wf_collect = .true.
/

&SYSTEM
  ibrav = 0,
  celldm(1) = 24.08,
  nat = 6,
  ntyp = 2,
  nspin = 2,
  nbnd = 20,
  ecutwfc = 30,
  tot_charge = 0.0,
  starting_magnetization(1) = 0.01
  occupations = 'smearing',
  smearing = 'gaussian',
  degauss = 0.005,
  nosym = .true.,

/

&ELECTRONS
  electron_maxstep = 300,
  conv_thr = 1.D-5,
  mixing_beta = 0.45,
/

&IONS
  ion_dynamics = 'verlet',
  ion_temperature = 'andersen',
  tempw = 300.00 ,
  nraise = 1,
/


ATOMIC_SPECIES
 C  12.01  C.pbe-rrkjus.UPF
 H  1.008  H.pbe-rrkjus.UPF


K_POINTS gamma                  
                               
CELL_PARAMETERS
     1.0000000    0.0000000    0.0000000
     0.0000000    1.0000000    0.0000000
     0.0000000    0.0000000    1.0000000

Best.

Misaraty

Alexey Akimov

Sep 9, 2017, 3:36:34 PM
to Quantum-Dynamics-Hub
Hi Misaraty,

Would you give Pyxaid2 a try? I haven't dealt with Pyxaid for a while, but I remember that at some point QE printed the wfc.1 files in a binary format, in which case I had an instruction to run a format conversion (via the iotk module of QE). Later this conversion was no longer needed, so I removed the instruction. So I'd recommend looking inside the files (with vi) to determine whether they are in binary or plain-text format. In the latter case, it is likely that you have some other problems, more likely related to QE or the parameters' setup. Just triple-check everything.
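A quick way to check is to peek at the first bytes, e.g. (a minimal sketch; the path is just an example taken from your tree):

# binary-vs-text check - hypothetical helper, not part of PYXAID
f = open("wd/curr0/x0.export/wfc.1", "rb")
head = f.read(64)
f.close()
print repr(head)
# readable xml starts with something like '<?xml ...' or a tag;
# mostly non-printable bytes mean the file is binary and needs conversion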

Alexey

brendanqhd

Sep 9, 2017, 4:21:35 PM
to Quantum-Dynamics-Hub
Hi Misaraty,

Is your problem that the files you need are not going into the res directory? If so, you may want to make sure that the path to your res directory is written correctly. Also, I see you have two different systems: one with just C and H, and the other with Pb and I in addition. Which one is it exactly? The reason I am asking is that you may have forgotten a pseudopotential for one of the atoms.
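A quick sanity check for the path would be something like (a sketch; substitute the res path from your own submit template):

import os
res = "/home/export/parastor/keyuser/bnulongr/apps/test/test2/res"  # path from the first post
print os.path.isdir(res), os.access(res, os.W_OK)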

Best,
Brendan

misa...@gmail.com

Sep 9, 2017, 10:19:02 PM
to Quantum-Dynamics-Hub
Hi Brendan,

Thanks.

I used the two calculation examples shown in my replies: one provided by myself, the other the official one. The pseudopotentials were suitable.

Best.

Misaraty

misa...@gmail.com

Sep 9, 2017, 10:25:33 PM
to Quantum-Dynamics-Hub
Hi Alexey,

Thanks.

I checked the wfc.1 file; it is in binary format.

At present, I am testing pyxaid2 and the QE parameters.

Best.

Misaraty

Alexey Akimov

Sep 10, 2017, 5:55:09 AM
to Quantum-Dynamics-Hub
Oh, ok. Then you may want to try a different version of QE. Most QE versions produce these files in xml format.

Alternatively, you would need a conversion like:

os.system("%s convert %s/curr0/x0.export/wfc.1 %s/curr0/x0.export/wfc.1.xml" % (EXE_CONVERT,wd,wd))

and then use the wfc.1.xml file instead of wfc.1. It would be good to add this kind of conversion to the computational workflow, since the current assumption is that the produced files are already in a human-readable xml format.
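A sketch of how this could be wired in for both time-step directories (EXE_CONVERT and the directory layout follow the names already used in the thread; adjust paths to your installation):

import os
EXE_CONVERT = "/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk"
wd = "wd"  # the working directory used by runMD
for d in ["curr0", "next0"]:
    src = "%s/%s/x0.export/wfc.1" % (wd, d)
    os.system("%s convert %s %s.xml" % (EXE_CONVERT, src, src))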

Alexey


misa...@gmail.com

Sep 10, 2017, 5:58:52 AM
to Quantum-Dynamics-Hub
Hi Alexey,

Thanks.

Following your instructions, I will give it a try.

Best.

Misaraty

misa...@gmail.com

Sep 11, 2017, 10:25:11 AM
to Quantum-Dynamics-Hub
Hi Alexey,

I am testing pyxaid2 and have a question: why does the ~/example/0-non-rel-non-sp/x0.scf.in file in the official pyxaid/pyxaid2 example lack the ATOMIC_POSITIONS parameter?

Thanks.

Best.

Misaraty


misa...@gmail.com

Sep 11, 2017, 10:56:49 AM
to Quantum-Dynamics-Hub
Hi Alexey,

I am adding the details of my test of the official pyxaid2 example, and I wonder whether the missing ATOMIC_POSITIONS parameter is the problem.

path,

[bnulongr@sn21 0-non-rel-non-sp]$ pwd
/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp

run,

[bnulongr@sn21 0-non-rel-non-sp]$ python py-scr2.py 
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::Atom_Record already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for std::vector<liblibra::libforcefield::Atom_Record, std::allocator<liblibra::libforcefield::Atom_Record> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<liblibra::libforcefield::Atom_Record, std::allocator<liblibra::libforcefield::Atom_Record> >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<liblibra::libforcefield::Atom_Record, std::allocator<liblibra::libforcefield::Atom_Record> >, false> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::Bond_Record already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for std::vector<liblibra::libforcefield::Bond_Record, std::allocator<liblibra::libforcefield::Bond_Record> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<liblibra::libforcefield::Bond_Record, std::allocator<liblibra::libforcefield::Bond_Record> >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<liblibra::libforcefield::Bond_Record, std::allocator<liblibra::libforcefield::Bond_Record> >, false> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::Angle_Record already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for std::vector<liblibra::libforcefield::Angle_Record, std::allocator<liblibra::libforcefield::Angle_Record> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<liblibra::libforcefield::Angle_Record, std::allocator<liblibra::libforcefield::Angle_Record> >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<liblibra::libforcefield::Angle_Record, std::allocator<liblibra::libforcefield::Angle_Record> >, false> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::Dihedral_Record already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for std::vector<liblibra::libforcefield::Dihedral_Record, std::allocator<liblibra::libforcefield::Dihedral_Record> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for boost::python::detail::container_element<std::vector<liblibra::libforcefield::Dihedral_Record, std::allocator<liblibra::libforcefield::Dihedral_Record> >, unsigned long, boost::python::detail::final_vector_derived_policies<std::vector<liblibra::libforcefield::Dihedral_Record, std::allocator<liblibra::libforcefield::Dihedral_Record> >, false> > already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::Fragment_Record already registered; second conversion method ignored.
  from liblibra_core import *
/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD.py:18: RuntimeWarning: to-Python converter for liblibra::libforcefield::ForceField already registered; second conversion method ignored.
  from liblibra_core import *
39345.mgmt2.nsccjn.con
39346.mgmt2.nsccjn.con


files distribution,

[bnulongr@sn21 0-non-rel-non-sp]$ tree
.
├── py-scr2.py
├── py-scr3.py
├── submit.pbs
├── wd
│   ├── job0
│   │   ├── CRASH
│   │   ├── qe_nac.o39345
│   │   ├── submit.pbs
│   │   ├── wd_test
│   │   │   ├── curr0
│   │   │   │   └── x0.scf.0.out
│   │   │   └── next0
│   │   │       └── x0.scf.1.out
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.scf.0.in
│   │   ├── x0.scf.1.in
│   │   └── x0.scf.2.in
│   ├── job1
│   │   ├── CRASH
│   │   ├── qe_nac.o39346
│   │   ├── submit.pbs
│   │   ├── wd_test
│   │   │   ├── curr0
│   │   │   │   └── x0.scf.2.out
│   │   │   └── next0
│   │   │       └── x0.scf.3.out
│   │   ├── x0.exp.in
│   │   ├── x0.exp.out
│   │   ├── x0.scf.2.in
│   │   └── x0.scf.3.in
│   ├── submit.pbs
│   ├── tmp
│   ├── x0.exp.in
│   ├── x0.scf.0.in
│   ├── x0.scf.1.in
│   ├── x0.scf.2.in
│   ├── x0.scf.3.in
│   └── x0.scf.4.in
├── x0.exp.in
└── x0.scf.in

9 directories, 32 files


errors,

[bnulongr@sn21 0-non-rel-non-sp]$ nano /home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0/CRASH 



 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     task #         0
     from pw_export : error #         1
     reading inputpp namelist
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0/wd_test/curr0/x0.scf.0.out,

/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0/qe_nac.o39345,
/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0
(... the same RuntimeWarning messages from PYXAID2/runMD.py:18 as shown above ...)
/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
mv: cannot stat `*.wfc*': No such file or directory
mv: cannot stat `x0.export': No such file or directory
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
mv: cannot stat `*.wfc*': No such file or directory
mv: cannot stat `x0.export': No such file or directory
{'maxband_soc': 37, 'maxband': 37, 'EXE': '/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw.x', 'wd': 'wd_test', 'compute_Hprime': 0, 'BATCH_SYSTEM': '/home/export/parastor/clussoft/mpi/openmpi/1.8.7/intel/bin/mpirun', 'prefix1': 'x1.scf', 'prefix0': 'x0.scf', 'minband': 36, 'stop_indx': 2, 'rd': '/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/res', 'start_indx': 0, 'pptype': 'US', 'EXE_EXPORT': '/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw_export.x', 'nac_method': 0, 'wfc_preprocess': 'complete', 'do_complete': 1, 'NP': 1, 'EXE_CONVERT': '/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk', 'minband_soc': 36}
Starting runMD
Warning: Parameter with key = dt does not exist in dictionary
Using the default value of 1.0
Warning: Parameter with key = pp_type does not exist in dictionary
Using the default value of NC
non-relativistic, non spin-polarized calculation for NAC  

In runMD: current working directory for python:  /home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/wd/job0
In runMD: current working directory for sh: 0
>>>>>>>>>>>>>>>>>>>>  t=  0  <<<<<<<<<<<<<<<<<<<<<
Starting first point in this batch
Time to run first calculations =  0.01
End of step t= 0
>>>>>>>>>>>>>>>>>>>>  t=  1  <<<<<<<<<<<<<<<<<<<<<
Continuing with other points in this batch
Time to run first calculations =  0.01
Generate NAC from WFCs at two adjacent points
Traceback (most recent call last):
  File "<string>", line 24, in <module>
  File "/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master/PYXAID2/runMD2.py", line 562, in runMD
    info0, all_e_dum0 = QE_methods.read_qe_index("%s/curr0/x0.export/index.xml" % wd, [], 0)
  File "/home/export/parastor/keyuser/bnulongr/apps/libra/bin/src/libra_py/QE_methods.py", line 41, in read_qe_index
    ctx = Context(filename)  #("x.export/index.xml")
RuntimeError: wd_test/curr0/x0.export/index.xml: cannot open file
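
The traceback says index.xml under wd_test/curr0/x0.export could not be opened; together with the earlier mv: cannot stat `x0.export' messages, this suggests the pw_export.x step produced no output for runMD to read. A minimal sanity check along these lines (a hypothetical helper, not part of PYXAID2) would make that failure explicit before the NAC step:

check_export.py (hypothetical),

import os
import sys

def check_export(wd, prefix="x0"):
    # wd must match the params["wd"] value passed to runMD2.runMD
    path = os.path.join(wd, "curr0", "%s.export" % prefix, "index.xml")
    if not os.path.isfile(path):
        sys.exit("Missing %s - pw_export.x most likely failed; check the "
                 "pw.x and pw_export.x logs in the job directory" % path)
    print("Found %s" % path)

check_export("wd_test")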


-----------------------------------------------------------------------------------------


input files,


py-scr2.py,

from PYXAID2 import *
import os

user = 1 #weili or eric
#user = 2 #alexey

nsteps_per_job = 2
tot_nsteps = 4

# Step 1 - split the MD trajectory into time steps
# Provide the files listed below: "GaAs-md.out", "x0.scf.in", "x1.scf.in"
# IMPORTANT:
# 1) use only an ABSOLUTE path for the PP in the x.scf.in file
# 2) the provided input file is just a template, so do not include coordinates
rt="/home/export/parastor/keyuser/bnulongr/apps/test/test4/"
out2inp.out2inp(rt+"GaAs-md.out","x0.scf.in","wd","x0.scf",0,tot_nsteps,1)  # non-relativistic setup
#out2inp.out2inp(rt+"GaAs-md.out","x1.scf.in","wd","x1.scf",0,tot_nsteps,1)  # relativistic setup


# Step 2 - distribute all time steps into groups (jobs),
# several time steps per group - this accelerates the calculations.
# Creates a "customized" submit file for each job and submits it - runs
# a swarm of independent calculations (trajectory pieces)
# (HTC paradigm)
# Provide the files below: 
# submit_templ.pbs - template for submit files - manually edit the variables
# x.exp.in - file for export of the wavefunction

if user==1: 
   os.system("cp submit.pbs wd")
elif user==2:
   os.system("cp submit_templ.slm wd")

os.system("cp x0.exp.in wd")
#os.system("cp x1.exp.in wd")
os.chdir("wd")

if user==1:
   #distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit.pbs",["x0.exp.in","x1.exp.in"],["x0.scf","x1.scf"],1)
   distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit.pbs",["x0.exp.in"],["x0.scf"],1)
   #distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit.pbs",["x1.exp.in"],["x1.scf"],1)
# elif user==2:
   #distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.slm",["x0.exp.in"],["x0.scf"],2)
   #distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.slm",["x1.exp.in"],["x1.scf"],2)
   #distribute.distribute(0,tot_nsteps,nsteps_per_job,"submit_templ.slm",["x0.exp.in","x1.exp.in"],["x0.scf","x1.scf"],2)
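
For reference, the core idea behind out2inp in step 1 is roughly the following. This is a simplified sketch, not the actual PYXAID2 code; the output file naming and nat=8 (matching the GaAs cell below) are assumptions for this example:

sketch_out2inp.py (hypothetical),

def split_md_output(md_out, template, wd, prefix, t0, tmax, nat=8):
    # Pull the ATOMIC_POSITIONS block that QE prints for every MD step
    # out of the MD output and append it to the SCF template, writing
    # one input file per time step.
    lines = open(md_out).readlines()
    templ = open(template).read()
    blocks, i = [], 0
    while i < len(lines):
        if lines[i].lstrip().startswith("ATOMIC_POSITIONS"):
            blocks.append("".join(lines[i:i + nat + 1]))  # card + nat atoms
            i += nat + 1
        else:
            i += 1
    for t in range(t0, min(tmax, len(blocks))):
        out = open("%s/%s.%d.in" % (wd, prefix, t), "w")  # assumed naming
        out.write(templ + "\n" + blocks[t])
        out.close()

# e.g. split_md_output("GaAs-md.out", "x0.scf.in", "wd", "x0.scf", 0, 4)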


submit.pbs,

#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -N qe_nac
#PBS -j oe
#PBS -q q_zhq_bnulongr
#PBS -l walltime=30:00:00

cd $PBS_O_WORKDIR
echo $PBS_O_WORKDIR

# The settings below work on my laptop; please customize your own environment variables!
###### some conventional settings ###########
# export PYTHONPATH=/home/eric/src/Libra/libra-code/_build/src:$PYTHONPATH
# export LD_LIBRARY_PATH=/home/eric/src/Libra/libra-code/_build/src:$LD_LIBRARY_PATH
# export PYTHONPATH=/home/eric/src/Libra/pyxaid2:$PYTHONPATH
# export LD_LIBRARY_PATH=/home/eric/src/Libra/pyxaid2:/opt/boost/1.55.0/lib:$LD_LIBRARY_PATH
# export LD_LIBRARY_PATH=/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64:$LD_LIBRARY_PATH
export PYTHONPATH=/home/export/parastor/keyuser/bnulongr/apps/libra/bin/src:$PYTHONPATH
export LD_LIBRARY_PATH=/home/export/parastor/keyuser/bnulongr/apps/libra/bin/src:$LD_LIBRARY_PATH
export PYTHONPATH=/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master:$PYTHONPATH
export LD_LIBRARY_PATH=/home/export/parastor/keyuser/bnulongr/apps/pyxaid2-master:/home/export/parastor/keyuser/bnulongr/apps/boost1.6/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/home/export/parastor/clussoft/compiler/intel/composer_xe_2015.2.164/mkl/lib/intel64:$LD_LIBRARY_PATH
##### path for the QE module #####
exe_qespresso=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw.x
exe_export=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/pw_export.x
exe_convert=/home/export/parastor/keyuser/bnulongr/apps/qe/5.3.0/espresso-5.3.0/bin/iotk
##### MPI #####
MPIRUN=/home/export/parastor/clussoft/mpi/openmpi/1.8.7/intel/bin/mpirun
NP=$(wc -l $PBS_NODEFILE | awk '{print $1}')

# These will be assigned automatically, leave them as they are
param1=
param2=

res=/home/export/parastor/keyuser/bnulongr/apps/test/test4/0-non-rel-non-sp/res

# This invokes the script that handles the NA-MD calculations
# at the NAC calculation step
python -c "from PYXAID2 import *
params = { }
params[\"BATCH_SYSTEM\"]=\"$MPIRUN\"
params[\"NP\"]=$NP
params[\"EXE\"]=\"$exe_qespresso\"
params[\"EXE_EXPORT\"]=\"$exe_export\"
params[\"EXE_CONVERT\"] =\"$exe_convert\"
params[\"start_indx\"]=$param1
params[\"stop_indx\"]=$param2
params[\"wd\"]=\"wd_test\"
params[\"rd\"]=\"$res\"
params[\"minband\"]=36
params[\"maxband\"]=37
params[\"minband_soc\"]=36
params[\"maxband_soc\"]=37
params[\"nac_method\"]=0
params[\"wfc_preprocess\"]=\"complete\"
params[\"do_complete\"]=1
params[\"prefix0\"]=\"x0.scf\"
params[\"prefix1\"]=\"x1.scf\"
params[\"compute_Hprime\"]=0
params[\"pptype\"]=\"US\"
print params
runMD2.runMD(params)
"



x0.exp.in,

&inputpp
  prefix = 'x0',
  outdir = './',
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/apps/test/test4/pp',
  psfile(1) = 'Ga.pbe-dn-rrkjus_psl.0.2.UPF',
  psfile(2) = 'As.pbe-n-rrkjus_psl.0.2.UPF',
  single_file = .FALSE.,
  ascii = .TRUE.,
  uspp_spsi = .FALSE.,
/
x0.scf.in,

&CONTROL
  calculation = 'scf',
  dt = 20.67055,
!  nstep = 50,
  pseudo_dir = '/home/export/parastor/keyuser/bnulongr/apps/test/test4/pp',
  outdir = './',
  prefix = 'x0',
  disk_io = 'low',
  wf_collect = .true.
/
&SYSTEM
  ibrav = 0,
  celldm(1) = 1.89,
  nat = 8,
  ntyp = 2,
!  nspin = 2,
  nbnd = 50,
  ecutwfc = 50,
  tot_charge = 0.0,
  occupations = 'smearing',
!  starting_magnetization(1) = 0.01,
  smearing = 'gaussian',
  degauss = 0.005,
  nosym = .true.,
  !lspinorb=.true.,
  !noncolin = .true.,
/
&ELECTRONS
  electron_maxstep = 300,
  conv_thr = 1.D-5,
  mixing_beta = 0.45,
/
&IONS
  ion_dynamics = 'verlet',
  ion_temperature = 'andersen',
  tempw = 300.00 ,
  nraise = 1,
/
ATOMIC_SPECIES
 Ga   69.723   Ga.pbe-dn-rrkjus_psl.0.2.UPF
 As   74.921   As.pbe-n-rrkjus_psl.0.2.UPF
K_POINTS automatic
1 1 1 0 0 0
     
CELL_PARAMETERS
   5.745328496   0.000000000   0.000000000
   0.000000000   5.745329803   0.000000000
   0.000000000   0.000000000   5.745329473 

Best.

Misaraty

Wei Li

Sep 11, 2017, 11:40:17 AM
to Quantum-Dynamics-Hub
Hi Misaraty,

The error message mentions "reading inputpp namelist", so it may come from a faulty namelist in the QE input file; you should check it carefully.

Wei

misa...@gmail.com

Sep 11, 2017, 9:12:11 PM
to Quantum-Dynamics-Hub
Hi Wei,

Thanks.

I used the official pyxaid2 example (pyxaid behaved similarly) and only modified the pseudo_dir path.

But I do not understand why the *.in file, i.e. the QE input file, does not contain the ATOMIC_POSITIONS card.

Best.

Misaraty

Alexey Akimov

Sep 12, 2017, 1:15:13 AM
to Quantum-Dynamics-Hub
Hi Misaraty,

Have you looked at the Pyxaid (1!) tutorials? As far as I remember, there is an explanation of why there is no ATOMIC_POSITIONS card in that file.

Alexey


brendanqhd

Sep 13, 2017, 1:29:31 PM
to Quantum-Dynamics-Hub
Hi Misaraty,

The atomic positions are not included in step 2 of PYXAID because you already computed the atomic positions at each time step during step 1. You have to make sure that the x.md.out file from step 1 is in the step 2 directory. Have you done this already? I am trying to gauge where you are.
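
A quick check along these lines (hypothetical file names — adjust to your setup) would confirm the MD output is in place and that the step-1 inputs actually carry coordinates:

import glob
import os

# the MD output must sit in the directory where step 2 is launched
assert os.path.isfile("GaAs-md.out"), "MD output missing from this directory"
for f in glob.glob("wd/x0.scf.*"):
    if "ATOMIC_POSITIONS" not in open(f).read():
        print("no coordinates in %s - step 1 did not complete" % f)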

Best,
Brendan

misa...@gmail.com

Sep 13, 2017, 8:32:56 PM
to Quantum-Dynamics-Hub
Hi Brendan,

Thanks.

Yes. The x.md.out file from step 1 is indeed in the step 2 directory.

Best.

Misaraty

misa...@gmail.com

Sep 17, 2017, 10:15:34 AM
to Quantum-Dynamics-Hub
Hi Alexey,

The pyxaid2+QE workflow is now running. Thanks to you, Wei, and Brendan.

Best.

Misaraty

brendanqhd

Sep 17, 2017, 4:34:14 PM
to Quantum-Dynamics-Hub
Hi Misaraty,

So is everything working fine? If not, what specifically is not working? I am still not completely following what specific problem you are having. Could you try to describe it in 2-3 sentences?

Best,
Brendan

misa...@gmail.com

Sep 17, 2017, 11:10:13 PM
to Quantum-Dynamics-Hub
Hi Brendan,

Thanks.

Yes, at present everything works normally.

The main problem was that the MPI settings of the QE installation needed to be modified. I installed the software on multiple clusters; since the cluster environments differ, the same installation steps do not guarantee the same result, even after installing all the dependencies yourself.

Best.

Misaraty




brendanqhd

Sep 18, 2017, 11:05:45 AM
to Quantum-Dynamics-Hub
Great, glad to hear it is working.

Best,
Brendan