Dear Franck,

I managed to compile everything using openmpi + intel compilers on our cluster. However, I ran into memory-related issues during runtime (intel mpirun + intel compilers never gave such issues), so out of curiosity I tried a full SCF calculation with a smaller material, NiO, and it was successful.

I'm attaching the makefile.include (for vasp compilation), make.inc (internal library compilation) and Makefile.in (external library compilation) for compilation with openmpi + intel compilers, along with the calculation directory.

The loaded modules on our cluster were:
- intel 2018
- openmpi 3.1.4_intel18

I compiled scalapack-2.1.0 using openmpi (mpif90) and linked it to vasp. Wannier90 v1.2 was compiled similarly and its libwannier.a was linked to vasp. wannier90.x was retrieved from Wannier90 v3.1.0 and copied to the /bin directory of DMFTwDFT.

Please note that in the DMFTwDFT_eb version, w90chk2chk.x is no longer needed as it is embedded within dmft0.x.

Give it a try to see if it works on your end.

Best,
Uthpala

On Fri, Feb 19, 2021 at 11:12 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Dear Uthpala,
Thanks for your message.
resp1. Indeed, when we "deparallelized" the RUNDMFT.py script by removing para_com in front of all the executables, the computation ran to the end and reported that it was finished. But by analyzing the OUTCAR file (in the root/DMFT calculation) we realized that the Fermi energy is around 21 eV, which cannot be the case for the SrVO3 system. So even though it ran to the end in the "deparallelized" case, the vaspDMFT calculation remains erroneous. We therefore don't know if the problem is really (or only) the parallelization; maybe it's a code bug (and/or something in our way of compiling or executing it).
resp2. I have always used versions 2 and 3 of wannier90 in combination with vasp5.4.4 and it has always worked well, with maybe an exception for the code at https://rehnd.github.io/tutorials/vasp/vasp-wannier90 or https://ntq1982.github.io/files/20200624.html. We have vasp version 6; if the code is implemented there, that would be good:
https://www.vasp.at/wiki/index.php/Precompiler_flags#wannier90
resp3./resp4. Yes, you are right, it may work if the compiler version you used provides the static library ($(MKL_PATH)/libmkl_scalapack_lp64.a). In our case (the most recent version of the compiler) we only have the dynamic library. By loading the scalapack module (which also depends on lapack) we directly get the static library.
resp5. Which modules have you loaded for the installation? I think installing openmpi with the intel compiler is pretty simple and fast, and the same goes for scalapack, lapack and blas.
Here is the procedure to install scalapack. To compile it you must have the lapack and blas libraries, and those compile fairly quickly.

1-LAPACK AND BLAS
i) wget http://www.netlib.org/lapack/lapack.tgz
ii) tar zxvf lapack.tgz
iii) cd lapack-3.9.0
iv) cp INSTALL/make.inc.ifort make.inc
v) make blaslib (this creates librefblas.a, which you can also rename to libblas.a)
vi) make lapacklib (this creates liblapack.a)
Keep libblas.a and liblapack.a for the next step.

2-SCALAPACK
i) wget http://www.netlib.org/scalapack/scalapack-2.1.0.tgz
ii) tar zxvf scalapack-2.1.0.tgz
iii) cd scalapack-2.1.0
iv) cp SLmake.inc.example SLmake.inc, then link libblas.a and liblapack.a in SLmake.inc
v) gedit SLmake.inc
   BLASLIB = ………./libblas.a
   LAPACKLIB = …………/liblapack.a
vi) make lib (produces libscalapack.a)
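For convenience, the steps above can be consolidated into one shell script. This is a sketch, not a tested recipe: the make.inc.ifort template matches the intel compilers used here, and the SLmake.inc library paths are assumptions you must adapt to your tree.

```shell
# Consolidated sketch of the LAPACK/BLAS + ScaLAPACK build steps above.
# Versions and file names follow the email; adjust for your compilers.

build_lapack_blas() {
    wget http://www.netlib.org/lapack/lapack.tgz
    tar zxvf lapack.tgz
    cd lapack-3.9.0
    cp INSTALL/make.inc.ifort make.inc   # ifort template; edit if using gfortran
    make blaslib                         # produces librefblas.a
    mv librefblas.a libblas.a            # optional rename, as noted above
    make lapacklib                       # produces liblapack.a
    cd ..
}

build_scalapack() {
    wget http://www.netlib.org/scalapack/scalapack-2.1.0.tgz
    tar zxvf scalapack-2.1.0.tgz
    cd scalapack-2.1.0
    cp SLmake.inc.example SLmake.inc
    # Edit SLmake.inc to point at the libraries built above, e.g.:
    #   BLASLIB   = /path/to/lapack-3.9.0/libblas.a
    #   LAPACKLIB = /path/to/lapack-3.9.0/liblapack.a
    make lib                             # produces libscalapack.a
    cd ..
}

# Uncomment to run both stages:
# build_lapack_blas && build_scalapack
```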
Bests
Franck O.
On 2021-02-20 00:58, Uthpala Herath wrote:
Hello Franck,

Thanks for the summary.

1. To get an idea of what we're dealing with, can you tell me if it works when you run vaspDMFT in serial, i.e. using one core?
2. Also, w90chk2chk.x is indeed taken from v2. However, the wannier library mode that links to vasp (libwannier90.a) is the one from v1.2, as recommended by VASP here: https://www.vasp.at/wiki/index.php/LWANNIER90
3. I notice that you are not using lapack even though you load the module. Since you're using scalapack, I don't think you have to use it in vasp. Correct me if I'm wrong, please.
4. Is there a reason you use the generic scalapack (libscalapack) instead of the intel one ($(MKL_PATH)/libmkl_scalapack_lp64.a)? Since you already have the intel mkl loaded, why not just use that? In your Makefile.in you use -mkl.

The tests I sent you earlier were from my desktop computer, which has intel mpi + intel compilers. As I've mentioned some time back, we have tested the DMFTwDFT code with gnu compilers and it works fine, but we have not tested it along with vasp compiled with gnu, or with openmpi compiled with intel, which is what we are testing now. Unfortunately my cluster doesn't have scalapack as a loadable module, so I will recompile it myself using openmpi (compiled with intel) and test vasp using that.

I'll get back to you when I am done compiling and running a test calculation.

Best,
Uthpala
On Fri, Feb 19, 2021 at 4:31 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Hi,
I understand what you mean.
Here is a summary of what we were saying. There are two combinations for the intel compiler: intel + intelmpi (whose wrappers are mpiifort, mpiicc, mpiicpc) and intel + openmpi (whose wrappers are mpif90, mpicc, mpicxx). We have compiled the DMFTwDFT code, wannier90v2 and vasp5.4.4 with intel + openmpi (mpif90, mpicc, mpicxx) and tested, but with this combination the code fails during the self-consistent test. I didn't use intelmpi for the support and complexity reasons that David explained in his last post.

I-) We would like to know if your openmpi was compiled with intel, not gnu. If yes, did you test the code with intel + openmpi (Intel C wrapper: mpicc; Intel C++ wrapper: mpicxx; Intel fortran wrapper: mpif90)?

II-) I am attaching the Makefile.in file that we used for the compilation of DMFTwDFT with intel + openmpi, the makefile.include file used to compile vasp with intel (which has also worked), and the make.inc files in DMFTwDFT/source. Before compiling vasp you will need to load the scalapack module (here are the loaded modules), or just link the scalapack library in makefile.include: {SCALAPACK = -L$(SCALAPACKLIB) -lscalapack $(BLACS)}
1) intel/intel20 (latest intel)
2) openmpi/openmpi-4.0.i20 (latest intel)
3) lapack/lapack-3.9.0.i20 (latest intel)
4) scalapack/scalapack-2.1.0.i20 (latest intel)
III-) If this intel + openmpi compilation works for you, we will know that it is our environment that is causing the problem. If it does not work, we will know that the code only works with intelmpi. David can correct me if this is not clear.
Remark: you used wannier90 version 1; how did you manage with w90chk2chk.x, which is only available from version 2 of wannier90? Thank you for your support.
Bests,
Franck O.
On 2021-02-18 23:06, Uthpala Herath wrote:
Dear Franck,

Please find the files attached.

makefile.include - My current makefile for the vasp compilation with intel compilers.
makefile.include.gnu - Latest attempt to compile vasp without intel compilers.

In the SrVO3_scf folder you can see that I am running vasp with 4 cores. You can set the number of cores for the DFT code inside para_com_dft.dat. If this file is not present, it will use the number of cores in para_com.dat for the DFT calculation.

I would say give it a try with intel compilers, because most codes recommend them and they are much easier to compile with. I have always had trouble compiling programs with non-intel compilers. Another thing I noticed is that I am using wannier90 v1.2's library to compile vasp.

Best,
Uthpala
On Thu, Feb 18, 2021 at 10:46 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
Please could you send me the directory where you successfully compiled (DMFTwDFT, the makefile.include of vasp) (mpiifort), and also tell me what you have tried so far?
On 2021-02-18 16:29, Uthpala Herath wrote:
Dear Franck,

I have tried multiple times but haven't had any luck compiling vasp with the non-intel compilers. For us everything works well, even with parallelization.

Did you say your calculation works without an issue when vasp is run in serial?

Best,
Uthpala
On Thu, Feb 18, 2021 at 9:10 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Dear Uthpala,
Just checking whether you have made any progress with the code.
Bests,
Franck O.
On 2021-02-17 04:07, Uthpala Herath wrote:
Thanks for the tips, David. I totally understand. I'll give your suggestions a try.

Best,
Uthpala
On Tue, Feb 16, 2021 at 6:42 AM David Brusson <bru...@unistra.fr> wrote:

Hi,
IntelMPI is installed on our cluster, but we prefer openmpi because it's open source, and our users can use it with Intel, GCC, PGI or whatever compiler they want, so that makes only one implementation for which we have to provide support. Besides, with the way IntelMPI works, users can't test their programs before submitting them to our job scheduler, which is not optimal. It is something that we may reconsider in the future, but as long as there are only two people providing support for hundreds of users, that won't change any time soon.
Concerning your issue, I think there is a mismatch between the intel compiler and gfortran. Is your openmpi compiled with intel? I'm sorry, I won't have the time to help you more than that.
Greetings,
David
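One quick way to check this is to ask the wrapper itself which backend compiler it invokes. A hedged sketch: --showme is Open MPI's spelling of the flag, while MPICH and Intel MPI use -show instead.

```shell
# Report which backend compiler the mpif90 wrapper calls.
# --showme:command is Open MPI's flag; MPICH / Intel MPI use -show.
mpi_backend() {
    if command -v mpif90 >/dev/null 2>&1; then
        mpif90 --showme:command 2>/dev/null || mpif90 -show 2>/dev/null
    else
        echo "mpif90 not in PATH - load your MPI module first"
    fi
}
mpi_backend
```

If this prints gfortran rather than ifort, the openmpi module was built with GCC, which would explain why gfortran rejects ifort-only flags such as -names lowercase in the build log.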
--
---------------------------------------------------------
David Brusson - bru...@unistra.fr
Ingénieur calcul du meso-centre
Université de Strasbourg / Direction du Numérique / Pôle CESAR
Tel. : 03 68 85 61 42
---------------------------------------------------------

On 16/02/2021 at 04:11, Uthpala Herath wrote:

Hello,

I set up all the non-intel libraries and tried to compile vasp, but was met with the following error:

ukh0001 at trcis001 in ~/local/VASP/vasp.5.4.4_dmft
$ make all
mkdir build/std ; \
cp src/makefile src/.objects makefile.include build/std ; \
make -C build/std VERSION=std all
make[1]: Entering directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std'
rsync -ru ../../src/lib .
cp makefile.include lib
make -C lib -j1
make[2]: Entering directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std/lib'
make libdmy.a
make[3]: Entering directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std/lib'
fpp -f_com=no -free -w0 preclib.F preclib.f90
mpif90 -O1 -free -names lowercase -c -o preclib.o preclib.f90
gfortran: error: lowercase: No such file or directory
gfortran: error: unrecognized command line option ‘-names’; did you mean ‘-maes’?
make[3]: *** [makefile:28: preclib.o] Error 1
make[3]: Leaving directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std/lib'
make[2]: *** [makefile:18: all] Error 2
make[2]: Leaving directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std/lib'
make[1]: *** [makefile:156: lib] Error 2
make[1]: Leaving directory '/gpfs20/users/ukh0001/local/VASP/vasp.5.4.4_dmft/build/std'
make: *** [makefile:10: std] Error 2

Attached is the makefile.include used. Any idea what's going on? This is at the very start of the compilation.

Best,
Uthpala
On Mon, Feb 15, 2021 at 12:21 PM Uthpala Herath <ukh...@mix.wvu.edu> wrote:

Hello David,

Thank you for the clarification. I'll keep that in mind when I try to recompile it again. However, I am curious as to why you use openmpi when you already are using the Intel compilers suite 2020 update 4, which has its own mpi compilers (mpiifort etc.). Is VASP more efficient with that?

Best,
Uthpala
On Mon, Feb 15, 2021 at 11:06 AM David Brusson <bru...@unistra.fr> wrote:

Hi,
Just to clarify some points, we are using :
- Intel compilers suite 2020 update 4 with Intel MKL (except for SCALAPACK)
- Openmpi-4.0.3
- Scalapack-2.1.0
You are raising an interesting point with MPI_INC = $(I_MPI_ROOT)/include64/ in the Vasp makefile.include. It's under the #GPU stuff comment, so I never paid attention to it, but it should be something like MPI_INC = $(MPIROOT)/include to fit our system. To be honest, I don't think it's important, because Vasp itself compiles and runs well with that include file (I'm not even sure they really use that variable), but it would be cleaner to fix it.
Greetings,
David
On 15/02/2021 at 16:21, Uthpala Herath wrote:

So it seems that you are using the Intel MKL library instead of the gnu lapack etc. ones. Also, what is MPI_INC = $(I_MPI_ROOT)/include64/? For me it is /opt/intel/oneapi/mpi/2021.1.1.

Best,
Uthpala
On Mon, Feb 15, 2021 at 9:23 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
Dear Uthpala,
Thank you for your message.
1-) Load these modules (if you have them): openmpi, intel, scalapack, lapack
2-) make std with the modified makefile.include file attached

Franck O.

On 2021-02-15 14:35, Uthpala Herath wrote:
Hi Franck,

I tried to compile vasp with gnu compilers and openmpi, but the following error popped up:

mpif90 -ffree-form -ffree-line-length-none -w -O2 -I/usr/local//include -c wave.f90
wave.f90:1208:15:
W1%CPTWFP=>W%CPTWFP(:,:,:,ISPIN)
1
Error: Assignment to contiguous pointer from non-contiguous target at (1)
wave.f90:1209:15:
W1%CPROJ =>W%CPROJ (:,:,:,ISPIN)
1
Error: Assignment to contiguous pointer from non-contiguous target at (1)
wave.f90:1183:15:
W1%CPTWFP=>W%CPTWFP(:,NB,NK,ISP)
1
Error: Assignment to contiguous pointer from non-contiguous target at (1)
wave.f90:1184:15:
W1%CPROJ =>W%CPROJ(:,NB,NK,ISP)
1
Error: Assignment to contiguous pointer from non-contiguous target at (1)
wave.f90:682:20:
----
Fatal Error: Can't open module file ‘wave.mod’ for reading at (1): No such file or directory
compilation terminated.

I've attached the makefile.include I used. Please take a look to see if I'm missing something. I have never used gnu and openmpi to compile vasp before (I have always used intel compilers and mkl libraries), so I don't know what's going on.

Best,
Uthpala
On Mon, Feb 15, 2021 at 7:09 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
Please could you test it with openmpi if you have time? I can't compile vasp with intelmpi (mpiifort). Alternatively, we can give you access to our data center so that you can test it.
On 2021-02-15 00:51, Uthpala Herath wrote:
mpi4py is a python library. I install it with pip install mpi4py (python2), but sometimes it comes with python 2 itself.
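When mixing MPI stacks, it matters which MPI mpi4py was built against. A hedged sketch for rebuilding it from source against the MPI wrapper currently in PATH, assuming (per mpi4py's install documentation) that the MPICC environment variable is honoured:

```shell
# Rebuild mpi4py from source against the mpicc currently in PATH.
# --no-binary forces a source build instead of a prebuilt wheel.
reinstall_mpi4py() {
    env MPICC="$(command -v mpicc)" \
        pip install --no-binary=mpi4py --force-reinstall mpi4py
    # Check which MPI library it ended up linked against:
    python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
}
# reinstall_mpi4py   # run after loading the intended MPI module
```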
On Sun, Feb 14, 2021 at 6:37 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:
how did you install mpi4py with intelmpi?
On 2021-02-14 16:32, Uthpala Herath wrote:
I think this occurs only when running wannier90 in parallel in your case. What happens if you remove the para_com parameter only from the wannier90.x call? Also, wannier90 3.1 is the version I use for wannier90.x.

Best,
Uthpala
On Sun, Feb 14, 2021 at 7:52 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
The wannier90 computation does not work because the fermi energy of the vasp calculation is not correct, and therefore the writing of wannier90.win is not correct either. I am all the same convinced that this is related to the parallelization. I did a test that works by replacing, for example, cmd = para_com_dft + " " + p["path_bin"] + "vaspDMFT" with cmd = p["path_bin"] + "vaspDMFT". I am attaching the RUNDMFT.py file which I have "deparallelized" (removing para_com) and which ran to the end without error. So the problem is perhaps the parallelization, or at least related to it, either directly via the declaration of the variables in the RUNDMFT.py script, or in the generation of the libdmft.a libraries, or even the parallelization in the mlwf.F, charge.F, electron.F, main.F and us.F files. I will now compile with intelmpi: mpiifort, mpiicpc, mpiicc.
On 2021-02-14 09:19, Uthpala Herath wrote:
Yes, currently both my VASP and DMFTwDFT are compiled using intel compilers, i.e. mpiifort, mpiicc, mpiicpc. Did you check running wannier90.x in the directory?
On Sun, Feb 14, 2021 at 3:14 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:

You mean IntelMPI? mpiifort?
On 2021-02-14 09:07, Uthpala Herath wrote:
So we were right about the issue. The problem is that the wannier90.x calculation fails, which is a result of an improper wannier window being adjusted with the Fermi level of -18.51 eV. Try running wannier90.x wannier90 in this folder and confirm if this is the case. This is just very strange, as for the same calculation I get a different value. The VASP I use is compiled with intel compilers, and I don't know if that makes a difference.
On Sun, Feb 14, 2021 at 2:39 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
By changing the RUNDMFT.py file like you did and increasing Niter, the calculation stops when computing wannier90.

----- Starting DMFT loop : 2 -----
Running dmft.x...
Ed.out: [6.42127143 4.77163377]
Reading a file Delta.out
---i: 0 ['V1'] ---loc_idx: 0
---i: 0 ['V1'] ---loc_idx: 1
Ed: [[6.42127143 4.77163377]]
Setting up CTQMC for DMFT...
Eimp = [ 0. -1.64963766]
Executed file atom_d.inp
Br: kindx= [(7, 3), (9, 1), (5, -5), (7, -1), (6, 2), (3, -3), (5, 1), (1, -1), (4, -2), (4, 0), (3, 3), (6, -4), (4, 4), (8, -2), (7, -3), (2, 2), (10, 0), (5, 3), (1, 1), (5, -1), (6, 4), (0, 0), (8, 2), (7, 1), (2, -2), (6, 0), (9, -1), (8, 0), (4, 2), (5, -3), (5, 5), (4, -4), (3, 1), (3, -1), (6, -2), (2, 0)]
Br: Stage1: Computing F^ in direct base
Br: Stage2: Compressing F^+ according to its block diagonal form
Br: Stage3: Renumbers states -- creates superstates for ctqmc
Br: Stage4: F^dagger matrices between superstates evaluated
--- Running qmc for atom 0 iteration: 2 ---
Reading a file imp.0/Sig.out
Total energy = -41.103958 eV
Reading a file imp.0/Delta.inp
Reading a file G_loc.out
Reading a file sig.inp
Reading a file G_loc.out
Reading a file sig.inp
----- Starting DFT loop : 2 -----
--- Running vaspDMFT ---
Number of bands = 52
Fermi energy = -18.517335 eV
Running wannier90...
wannier90 calculation failed! Exiting.
On 2021-02-13 21:25, Uthpala Herath wrote:
Update:

I was running the calculation with the files you sent me and I've passed the second DFT+DMFT iteration without an issue. I did increase Ndft=50 from Ndft=2.

----- Starting DFT loop : 2 -----
--- Running vaspDMFT ---
Number of bands = 48
Fermi energy = -3.753579 eV
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
Running dmft0.x...
Running dmft.x...

- Uthpala
On Sat, Feb 13, 2021 at 3:09 PM Uthpala Herath <ukh...@mix.wvu.edu> wrote:

So sorry Franck, my bad. I've apparently pushed from a different computer with an older read_inputs.f90. Should be fixed now.

Best,
Uthpala
On Sat, Feb 13, 2021 at 3:00 PM Uthpala Herath <ukh...@mix.wvu.edu> wrote:

I only modified RUNDMFT.py in the bin directory. You were able to compile and run DMFTwDFT_eb before, yes?
On Sat, Feb 13, 2021 at 2:59 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:
I re-downloaded it and tried to compile, but the read_inputs.F90 file that you modified does not work.
On 2021-02-13 19:34, Uthpala Herath wrote:
Another thing I noticed is "Ndft": 2, which is too low to even attempt approximate convergence. Set it to a higher number and check again (maybe "Ndft": 50).

Best,
Uthpala
On Sat, Feb 13, 2021 at 12:09 PM Uthpala Herath <ukh...@mix.wvu.edu> wrote:

In the meantime, I updated DMFTwDFT_eb/RUNDMFT.py to detect if the wannier90.x calculation is done properly. You can just copy that file from the repo, replace it in your local DMFTwDFT_eb/bin and test it again.
On Sat, Feb 13, 2021 at 11:52 AM Uthpala Herath <ukh...@mix.wvu.edu> wrote:

This seems to be the same issue as with the previous version of the code you were using: the fact that wannier90.chk is not generated. I'm currently running a calculation with the files you gave me to see. The problem happens when wannier90.x is run after the vasp calculation in the 2nd scf cycle. It doesn't produce the wannier90.chk file. The code seems to be continuing because there are files from the previous cycle that are sufficient for the DMFT step, but the self-energies would not be updated.

One weird thing I noticed is that in your first vasp calculation your fermi energy is around 5 eV, and then inside the DMFT folder for the DMFT scf cycle it is -21 eV, which is really weird. I think this has something to do with parallelization. Can you create the file para_com_dft.dat, put mpirun -np 4 in it, and check again?
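For reference, creating that file is a one-liner (para_com_dft.dat and the mpirun -np 4 content are exactly as described above; the fallback behaviour is stated earlier in the thread):

```shell
# The DFT step reads its mpirun prefix from para_com_dft.dat,
# falling back to para_com.dat when the file is absent.
echo "mpirun -np 4" > para_com_dft.dat
cat para_com_dft.dat   # prints: mpirun -np 4
```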
On Sat, Feb 13, 2021 at 9:09 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Dear Uthpala,
The calculation seems to go to the end, reporting "Calculation complete.", but:

1-) ksum_output_dmft.x and ksum_output_dmft0.x contain the following message:

wannier90.chk must be present!!
wannier90.chk must be present!!
2-) ksum_error_dmft.x and ksum_error_dmft0.x contain the following message:

mpirun has exited due to process rank 6 with PID 0 on
node hpc-login1 exiting improperly. There are three reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
orte_create_session_dirs is set to false. In this case, the run-time cannot
detect that the abort call was an abnormal termination. Hence, the only
error message you will receive is this one.
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
You can avoid this message by specifying -quiet on the mpirun command line.
On 2021-02-13 04:42, Uthpala Herath wrote:
Hi Franck,

Nope. That's just a warning. Don't worry about that message.

Best,
Uthpala
On Fri, Feb 12, 2021 at 9:36 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Dear Uthpala, there is a point I did not notice during the calculation... YES, although the calculation runs to the end showing "Calculation complete.", I notice there is a message like the one you mentioned. Is this problematic, given that the calculation finished?
----- Starting DFT loop : 2 -----
--- Running vaspDMFT ---
Number of bands = 48
Fermi energy = -19.334225 eV
rm: cannot remove 'wannier90.chk.fmt': No such file or directory
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
Running dmft0.x...
Running dmft.x...
Ed.out: [6.40106533 4.75296882]
Reading a file Delta.out
---i: 0 ['V1'] ---loc_idx: 0
---i: 0 ['V1'] ---loc_idx: 1
Ed: [[6.40106533 4.75296882]]
Setting up CTQMC for DMFT...
Eimp = [ 0. -1.64809651]
.
.
.
.
.
Calculation complete.
On 2021-02-13 01:36, Uthpala Herath wrote:
This looks weird. Can you compress your directory and send it to me? I'll try to see what is going on.
On Fri, Feb 12, 2021 at 7:34 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Here is exactly the message that appears at the end:

----- Starting DFT loop : 2 -----
--- Running vaspDMFT ---
--- Running vaspDMFT ---
Number of bands = 48
Fermi energy = -21.362563 eV
Fermi energy = -21.362563 eV
Total energy = -41.104116 eV
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
Running XHF0...
On 2021-02-13 01:26, Uthpala Herath wrote:
In the output, can you see something like this?

----- Starting DFT loop : 15 -----
--- Running vaspDMFT ---
Augmentation difference = 0.056119
Charge difference = 0.186340
Number of bands = 16
Fermi energy = 5.235547 eV
rm: cannot remove 'wannier90.chk.fmt': No such file or directory
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
On Fri, Feb 12, 2021 at 7:20 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

1-) There is no error message in the wannier90.wout file, and here is the end of the file:

Starting a new Wannier90 calculation ...
Time to get kmesh 0.123 (sec)
Reading overlaps from wannier90.mmn : File generated by VASP: SrVO3
2-) there is no .werr file
3-) The end of the OUTCAR file in the DMFT directory seems to be okay, since it looks like the one present in root.
If we look at the message displayed during the calculation just before it freezes, it says that the wannier calculation finished fine:
--- Running vaspDMFT ---
Number of bands = 48
Fermi energy = -21.362563 eV
Total energy = -41.104116 eV
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
Running XHF0...
On 2021-02-13 00:49, Uthpala Herath wrote:
Okay, this means that the wannier90.x run hasn't completed successfully. Can you check what wannier90.wout says? Or if there is a .werr file associated with wannier90? Also the OUTCAR, to see if the DFT calculation has completed successfully within the DMFT directory?

- Uthpala
On Fri, Feb 12, 2021 at 6:42 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Thank you for your reply. I am installing this new version to test it. The ksum_error_XHF0 file is empty and the ksum_output_XHF0 file contains this message:

Hello, World! I am process 0 of 4 on hpc-login1.
Hello, World! I am process 1 of 4 on hpc-login1.
Hello, World! I am process 2 of 4 on hpc-login1.
Hello, World! I am process 3 of 4 on hpc-login1.
chk file does not exist! We are exiting
On 2021-02-12 21:25, Uthpala Herath wrote:
In this run, what is the output of ksum_output_XHF0 and ksum_error_XHF0?

Yes, I have tested the latest version (after the fix for UNI_mat.dat) for LaNiO3, SrVO3 and NiO. In the meantime, we have an updated version of our code that replaces XHF0.py and WAN90.py, doing the XHF0 calculation with dmft0.x, which also supports excluded bands in the wannier input. You could check that too in the meantime.

- Uthpala
On Fri, Feb 12, 2021 at 12:50 PM Franck <franck.nga...@ipcms.unistra.fr> wrote:

Dear Uthpala,
We recompiled vasp and it works fine (with mpif90). I don't know if it is the compilation of libdmft.a that maybe has errors. Yes, there is no problem with UNI_mat.dat. Did you run the latest version of the code (with Niter > 1)? The last example you sent me, which seemed to work, has been run since then. Our compilation is made with mpif90, mpic++, mpicc.
When I run the code in an interactive session with DMFT.py -dft vasp -dmft -v, here is, for example, the output, which hangs on "Running XHF0 ...":
Initializing calculation...
Initial self-energy file generated.
Incomplete DMFT calculation found.
Number of bands read from INCAR = 48
Running VASP in /home2020/home/ipcms/fngassam/DM/examples/SrVO3_vasp
DFT calculation complete.
Number of bands = 48
Fermi energy = 5.231161 eV
wannier90.win updated.
Running wannier90...
wannier90 calculation complete.
No Quantum Espresso results have been found in a ../ directory!
VASP results have been found in a ../ directory!
Copying DFT file OUTCAR to the current directory
Copying DFT file OSZICAR to the current directory
Copying DFT file POSCAR to the current directory
Copying DFT file POTCAR to the current directory
Copying DFT file KPOINTS to the current directory
Copying DFT file INCAR to the current directory
Copying DFT file WAVECAR to the current directory
Copying DFT file para_com.dat to the current directory
Copying DFT file DFT_mu.out to the current directory
Copying Wannier file wannier90.chk to the current directory
Copying Wannier file wannier90.eig to the current directory
Copying Wannier file wannier90.win to the current directory
Copying Wannier file wannier90.amn to the current directory
Copying DMFT file sig.inp to the current directory
Copying DMFT file INPUT.py to the current directory
DMFT_mu.out file does not exist! Copying from DFT_mu.out
para_com_dft.dat file does not exist! Copying from para_com.dat
None
DMFT initialization complete. Ready to run calculation.
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
*-*-*-*-*- Starting DMFT calculation -*-*-*-*-*
Calculation type : Charge self-consistent DFT+DMFT calculation
Wannier orbitals in correlated subspace:
O1 : ['p_z', 'p_x', 'p_y']
O2 : ['p_z', 'p_x', 'p_y']
O3 : ['p_z', 'p_x', 'p_y']
V1 : ['d_z2', 'd_xz', 'd_yz', 'd_x2y2', 'd_xy']
----- Starting DMFT loop : 1 -----
Executed file atom_d.inp
Br: kindx= [(7, 3), (9, 1), (5, -5), (7, -1), (6, 2), (3, -3), (5, 1), (1, -1), (4, -2), (4, 0), (3, 3), (6, -4), (4, 4), (8, -2), (7, -3), (2, 2), (10, 0), (5, 3), (1, 1), (5, -1), (6, 4), (0, 0), (8, 2), (7, 1), (2, -2), (6, 0), (9, -1), (8, 0), (4, 2), (5, -3), (5, 5), (4, -4), (3, 1), (3, -1), (6, -2), (2, 0)]
Br: Stage1: Computing F^ in direct base
Br: Stage2: Compressing F^+ according to its block diagonal form
Br: Stage3: Renumbers states -- creates superstates for ctqmc
Br: Stage4: F^dagger matrices between superstates evaluated
Running XHF0...
Running dmft.x...
index: 0 for atom: V1 , orbital: d_z2
index: 1 for atom: V1 , orbital: d_xz
index: 2 for atom: V1 , orbital: d_yz
index: 3 for atom: V1 , orbital: d_x2y2
index: 4 for atom: V1 , orbital: d_xy
Ed.out: [6.44789103 4.79610628]
Reading a file Delta.out
---i: 0 ['V1'] ---loc_idx: 0
---i: 0 ['V1'] ---loc_idx: 1
Ed: [[6.44789103 4.79610628]]
Setting up CTQMC for DMFT...
Eimp = [ 0. -1.65178475]
Executed file atom_d.inp
Br: kindx= [(7, 3), (9, 1), (5, -5), (7, -1), (6, 2), (3, -3), (5, 1), (1, -1), (4, -2), (4, 0), (3, 3), (6, -4), (4, 4), (8, -2), (7, -3), (2, 2), (10, 0), (5, 3), (1, 1), (5, -1), (6, 4), (0, 0), (8, 2), (7, 1), (2, -2), (6, 0), (9, -1), (8, 0), (4, 2), (5, -3), (5, 5), (4, -4), (3, 1), (3, -1), (6, -2), (2, 0)]
Br: Stage1: Computing F^ in direct base
Br: Stage2: Compressing F^+ according to its block diagonal form
Br: Stage3: Renumbers states -- creates superstates for ctqmc
Br: Stage4: F^dagger matrices between superstates evaluated
--- Running qmc for atom 0 iteration: 1 ---
Reading a file imp.0/Sig.out
Total energy = -41.104116 eV
Reading a file imp.0/Delta.inp
Reading a file G_loc.out
Reading a file sig.inp
Reading a file G_loc.out
Reading a file sig.inp
----- Starting DMFT loop : 2 -----
Running dmft.x...
index: 0 for atom: V1 , orbital: d_z2
index: 1 for atom: V1 , orbital: d_xz
index: 2 for atom: V1 , orbital: d_yz
index: 3 for atom: V1 , orbital: d_x2y2
index: 4 for atom: V1 , orbital: d_xy
Ed.out: [6.44789103 4.79610628]
Reading a file Delta.out
---i: 0 ['V1'] ---loc_idx: 0
---i: 0 ['V1'] ---loc_idx: 1
Ed: [[6.44789103 4.79610628]]
Setting up CTQMC for DMFT...
Eimp = [ 0. -1.65178475]
Executed file atom_d.inp
Br: kindx= [(7, 3), (9, 1), (5, -5), (7, -1), (6, 2), (3, -3), (5, 1), (1, -1), (4, -2), (4, 0), (3, 3), (6, -4), (4, 4), (8, -2), (7, -3), (2, 2), (10, 0), (5, 3), (1, 1), (5, -1), (6, 4), (0, 0), (8, 2), (7, 1), (2, -2), (6, 0), (9, -1), (8, 0), (4, 2), (5, -3), (5, 5), (4, -4), (3, 1), (3, -1), (6, -2), (2, 0)]
Br: Stage1: Computing F^ in direct base
Br: Stage2: Compressing F^+ according to its block diagonal form
Br: Stage3: Renumbers states -- creates superstates for ctqmc
Br: Stage4: F^dagger matrices between superstates evaluated
--- Running qmc for atom 0 iteration: 2 ---
Reading a file imp.0/Sig.out
Total energy = -41.104116 eV
Reading a file imp.0/Delta.inp
Reading a file G_loc.out
Reading a file sig.inp
Reading a file G_loc.out
Reading a file sig.inp
----- Starting DFT loop : 2 -----
--- Running vaspDMFT ---
Number of bands = 48
Fermi energy = -21.362563 eV
Total energy = -41.104116 eV
Running wannier90...
wannier90 calculation complete.
----- Starting DMFT loop : 1 -----
Running XHF0...
On 2021-02-12 18:06, Uthpala Herath wrote:
Dear Franck,

Hope you are doing well. I'm sorry it is still not working properly. First, were you able to compile vasp with the modifications? Did the latest fix regarding UNI_mat.dat improve the calculation?

Yes, we'll work together and figure it out.

Best,
Uthpala
On Fri, Feb 12, 2021 at 11:17 AM Franck <franck.nga...@ipcms.unistra.fr> wrote:
Dear colleague,

We've been trying to compile and run your DMFTwDFT code for over 4 months. We have compiled it by all possible methods. The non-self-consistent calculation (Niter = 1) works very well, but the fully charge self-consistent DFT+DMFT calculation (Niter > 2) does not work. This calculation is important for the end of my thesis, and since your code works when you compile it, would it be possible to give you access to our account on the supercomputer, provided with the compiler? I recall our compilation characteristics: Intel20 + openmpi-4.0.i20.
I copy here the person in charge of the supercompulator who himself also tried several times without success.
We will really appreciate your help.
Franck O.
--
Uthpala Herath
PhD Candidate
Department of Physics and Astronomy
West Virginia University
Morgantown, WV 26505
Tel. (304) 216-2535
Email: ukh...@mix.wvu.edu
Website: https://uthpalaherath.github.io/
Dear Uthpala,

I tried the latest version that you uploaded 18 hours ago. First of all, I want to be sure that I understood your message about compilation with wannier90: you said that you used wannier90-1.2 for the library libwannier.a and wannier90-3.1.0 for wannier90.x? If that is the procedure, it is what I followed. I tested a calculation with Niter = 2, Nit = 2 and Ndft = 90. The calculation seems to finish, but when I analyze the file root/vasp.out I get the error message "Fail to find mu, increase mu_iter", and when I do a "grep fermi OUTCAR" in root/DMFT, the Fermi energy is -5.4365 eV, while in the root OUTCAR the Fermi energy is 5.2144 eV. In addition to root/vasp.out, I am attaching the file root/OUTCAR. I really don't know what's wrong with the code. Maybe I will try TRIQS (https://triqs.github.io/triqs/latest/).

Best,
Franck O.
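[Editor's note: the diagnostic Franck describes, comparing the Fermi level in the DMFT subdirectory's OUTCAR against the one in the root OUTCAR, can be scripted. This is a minimal sketch; the paths OUTCAR and DMFT/OUTCAR are assumptions based on the directory layout described in the thread, and it relies on VASP writing the Fermi level on lines containing "E-fermi":]

```shell
#!/bin/sh
# Print the last reported Fermi energy from the root OUTCAR and from the
# DMFT subdirectory's OUTCAR, so the two values can be compared side by side.
# Paths are assumptions based on the layout described in this thread.
for f in OUTCAR DMFT/OUTCAR; do
    if [ -f "$f" ]; then
        # VASP prints lines like " E-fermi :   5.2144 ..."
        printf '%s : ' "$f"
        grep -i 'E-fermi' "$f" | tail -n 1
    else
        printf '%s not found\n' "$f"
    fi
done
```

A large mismatch between the two values (as in the -5.4365 eV vs 5.2144 eV case above) would point to the DMFT chemical-potential search going wrong rather than the DFT step itself.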