Workflow added but no VASP calculation is performed


zzy9...@gmail.com

Apr 8, 2019, 6:12:07 PM
to atomate
Hi, 

I followed all the documentation, configured everything, added the pseudopotential files, and got my database set up properly. I was able to add workflows and submit the job, but even though the submission succeeds, I don't get any output.

Here is the Python file that adds the workflow:


# Calculate the band structure of bilayer graphene
from pymatgen import Structure
from fireworks import LaunchPad
from atomate.vasp.workflows.presets.core import wf_bandstructure

struct = Structure.from_file('POSCAR')   # read the structure from a POSCAR file
wf = wf_bandstructure(struct)            # build the preset band-structure workflow
lpad = LaunchPad.auto_load()             # load the LaunchPad from my_launchpad.yaml
lpad.add_wf(wf)                          # add the workflow to the database


And if I do "qlaunch singleshot", this is the only output I get:
2019-04-08 17:56:42,376 INFO Hostname/IP lookup (this will take a few seconds)

And the status of the job is completed. 

The VASP calculation doesn't start, and it doesn't create an INCAR or any other input files.

Any help is appreciated! 

Thanks, 
Zoe

Anubhav Jain

Apr 9, 2019, 12:36:27 PM
to atomate
Hi Zoe,

Congrats on getting everything set up!

When you say "the status of the job is completed", do you mean your PBS/SLURM/etc job is completed, or that FireWorks says that your job is in the COMPLETED state?

I think the following checks/information are needed to debug further:

1. What is the state of your Firework? My guess is it will be FIZZLED. If it is FIZZLED, what is the error message in the launch object? (One way to pull this out of the database is sketched after this list.)
2. What are the contents of your FW_job.out and FW_job.error files (assuming you used the standard names from my_qadapter.yaml) in your output directory?
3. If you can send any other files in your directory (e.g., FW.json), that would also be helpful.
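
In case it helps, here is a minimal sketch (untested against your setup, using the same FireWorks Python API you used to add the workflow) of pulling the FIZZLED fireworks and whatever their failed launches stored:

from fireworks import LaunchPad

lpad = LaunchPad.auto_load()  # reads my_launchpad.yaml, same as when you added the workflow

# loop over all FIZZLED fireworks and print where each launch ran and what it stored
for fw_id in lpad.get_fw_ids({"state": "FIZZLED"}):
    fw = lpad.get_fw_by_id(fw_id)
    print(fw_id, fw.name)
    for launch in fw.launches:
        print("  launch dir:", launch.launch_dir)
        if launch.action is not None:
            # for a fizzled launch this typically includes the exception/traceback
            print("  stored data:", launch.action.stored_data)

The command-line equivalent, "lpad get_fws -s FIZZLED -d more", gives similar information.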

Zoe Zhu

Apr 9, 2019, 9:11:20 PM
to Anubhav Jain, atomate
Hi Anubhav, 

Thank you so much for your response! I got VASP to run; the issue had to do with how I tunnel from my database to localhost. However, the job still doesn't finish. Now I am running an elastic constant calculation.

1. The SLURM job says it's COMPLETED, but the state of my Firework is FIZZLED.
2. Here is the error message I got in FW_job.error:
 ERROR:custodian.custodian:
{ 'actions': [ { 'action': { '_set': { 'ALGO': 'Normal'}},
                 'dict': 'INCAR'}],
  'errors': [ 'Positive '
              'energy'],
  'handler': <custodian.vasp.handlers.PositiveEnergyErrorHandler object at 0x2b4687887780>}
ERROR:custodian.custodian:
{ 'actions': [ { 'action': { '_set': { 'IBRION': 3,
                                       'SMASS': 0.75}},
                 'dict': 'INCAR'}],
  'errors': [ 'POTIM'],
  'handler': <custodian.vasp.handlers.PotimErrorHandler object at 0x2b4687887da0>}
ERROR:custodian.custodian:
{ 'actions': None,
  'errors': [ 'Positive '
              'energy'],
  'handler': <custodian.vasp.handlers.PositiveEnergyErrorHandler object at 0x2b4687887780>}
ERROR:custodian.custodian:Unrecoverable error for handler: <custodian.vasp.handlers.PositiveEnergyErrorHandler object at 0x2b
Traceback (most recent call last):
  File "/n/home02/zzhu/.conda/envs/atomate_env/lib/python3.6/site-packages/custodian/custodian.py", line 320, in run 
    self._run_job(job_n, job)
  File "/n/home02/zzhu/.conda/envs/atomate_env/lib/python3.6/site-packages/custodian/custodian.py", line 446, in _run_job
    raise CustodianError(s, True, x["handler"])
custodian.custodian.CustodianError: (CustodianError(...), 'Unrecoverable error for handler: <custodian.vasp.handlers.Positive

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/n/home02/zzhu/atomate/codes/fireworks/fireworks/core/rocket.py", line 262, in run 
    m_action = t.run_task(my_spec)
  File "/n/home02/zzhu/atomate/codes/atomate/atomate/vasp/firetasks/run_calc.py", line 205, in run_task
    c.run()
  File "/n/home02/zzhu/.conda/envs/atomate_env/lib/python3.6/site-packages/custodian/custodian.py", line 330, in run 
    .format(self.total_errors, ex))
RuntimeError: 5 errors reached: (CustodianError(...), 'Unrecoverable error for handler: <custodian.vasp.handlers.PositiveEner
INFO:rocket.launcher:Rocket finished

Here is my FW_job.out file content: 
2019-04-09 16:39:04,330 INFO Hostname/IP lookup (this will take a few seconds)
2019-04-09 16:39:18,399 INFO Created new dir /n/home02/zzhu/atomate/testruns/elastic/launcher_2019-04-09-20-39-18-392015
2019-04-09 16:39:18,404 INFO Launching Rocket
2019-04-09 16:39:19,786 INFO RUNNING fw_id: 30 in directory: /n/home02/zzhu/atomate/testruns/elastic/launcher_2019-04-09-20-3
2019-04-09 16:39:20,607 INFO Task started: FileWriteTask.
2019-04-09 16:39:20,619 INFO Task completed: FileWriteTask 
2019-04-09 16:39:20,791 INFO Task started: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}}.
2019-04-09 16:39:20,867 INFO Task completed: {{atomate.vasp.firetasks.write_inputs.WriteVaspFromIOSet}} 
2019-04-09 16:39:21,039 INFO Task started: {{atomate.vasp.firetasks.run_calc.RunVaspCustodian}}.
2019-04-09 19:55:16,512 INFO Rocket finished

3. Here are my config files: 

db.json
{
    "host": "localhost",
    "port": 27021,
    "database": "2d_materials",
    "collection": "tasks",
    "admin_user": "<<admin_user>>",
    "admin_password": "<<admin_pwd>>",
    "readonly_user": "<<ro_user>>",
    "readonly_password": <<ro_pwd>>",
    "aliases": {}
}

FW_config.yaml 
CONFIG_FILE_DIR: <<home_directory>>/atomate/config_nersc/db.json

my_fworker.yaml 
name: fwork 
category: ''
query: '{}'
env:
    db_file: <<home_directory>>/atomate/config_nersc/db.json
    vasp_cmd: mpirun -n 2 <<vasp_directory>>/vasp.5.4.4.std
    scratch_dir: null

my_launchpad.yaml
host: localhost
port: 27021
name: 2d_materials
username: <<admin_user>>
password: <<admin_pwd>>
ssl: false
logdir: null
strm_lvl: INFO
user_indices: []
wf_user_indices: []


I really appreciate your help! 

Best,
Zoe 

___________________________________

Ziyan (Zoe) Zhu

Ph.D. Candidate

Department of Physics, Harvard University

zz...@g.harvard.edu, zzy9...@gmail.com

(310)-210-0580




Anubhav Jain

Apr 10, 2019, 12:37:53 PM
to Zoe Zhu, atomate
Hi Zoe,

It looks like VASP ran OK, but at the end of the run you had a positive energy. This usually happens when the structure itself is quite bad - e.g., atoms are too close to one another. Custodian will try to fix errors like this, but a positive energy one can be difficult to recover from. This one did not recover.

I would inspect the VASP files manually in that directory. Note that custodian will sometimes zip up old runs when it reruns a job, so you would need to check the zipped folders to find the original files. I'd pay particular attention to the following (a quick pymatgen sanity check is sketched below the list):

POSCAR / CONTCAR - is your structure reasonable?
OSZICAR - did something strange happen during the convergence procedure?
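
For reference, a quick check along these lines (just a sketch, assuming the POSCAR/CONTCAR and OSZICAR sit in the launch directory) can flag unphysically short interatomic distances and show how the energy evolved:

from pymatgen import Structure
from pymatgen.io.vasp.outputs import Oszicar
import numpy as np

# shortest interatomic distance in the structure (periodic images included)
s = Structure.from_file("POSCAR")   # or "CONTCAR" for the relaxed/deformed structure
d = s.distance_matrix
np.fill_diagonal(d, np.inf)         # ignore the zero self-distances on the diagonal
print("shortest interatomic distance (Angstrom):", d.min())

# total energy at each ionic step of the run
osz = Oszicar("OSZICAR")
print("E0 per ionic step:", [step["E0"] for step in osz.ionic_steps])

A shortest distance far below a typical bond length would point to the structure itself, consistent with the positive-energy error above.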

Best
Anubhav