One more question along the same lines. I have some FIZZLED workflows caused by unconverged VASP geometry relaxations, with errors like the one below — essentially, one of the steps in the double-relaxation workflow did not converge. To restart such jobs, do I have the same workaround choices you mentioned before, or is there something built in?
{'actions': None,
 'errors': ['Non-converging job'],
 'handler': <custodian.vasp.handlers.NonConvergingErrorHandler object at 0x2aaac8a2bcf8>}
Unrecoverable error for handler: <custodian.vasp.handlers.NonConvergingErrorHandler object at 0x2aaac8a2bcf8>. Raising RuntimeError
Traceback (most recent call last):
  File "/gpfs/backup/users/home/desa/.local/lib/python3.6/site-packages/custodian/custodian.py", line 320, in run
    self._run_job(job_n, job)
  File "/gpfs/backup/users/home/desa/.local/lib/python3.6/site-packages/custodian/custodian.py", line 446, in _run_job
    raise CustodianError(s, True, x["handler"])
custodian.custodian.CustodianError: (CustodianError(...), 'Unrecoverable error for handler: <custodian.vasp.handlers.NonConvergingErrorHandler object at 0x2aaac8a2bcf8>. Raising RuntimeError')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/gpfs/backup/users/home/desa/.local/lib/python3.6/site-packages/fireworks/core/rocket.py", line 262, in run
    m_action = t.run_task(my_spec)
  File "/gpfs/backup/users/home/desa/.local/lib/python3.6/site-packages/atomate/vasp/firetasks/run_calc.py", line 204, in run_task
    c.run()
  File "/gpfs/backup/users/home/desa/.local/lib/python3.6/site-packages/custodian/custodian.py", line 330, in run
    .format(self.total_errors, ex))
RuntimeError: 1 errors reached: (CustodianError(...), 'Unrecoverable error for handler: <custodian.vasp.handlers.NonConvergingErrorHandler object at 0x2aaac8a2bcf8>. Raising RuntimeError'). Exited...
Walltime used is = 07:56:56
CPU Time used is = 316:10:19
Memory used is = 5836620kb
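For context, the manual workaround I had in mind is just querying for FIZZLED fireworks and rerunning them through FireWorks' LaunchPad API — a rough sketch below, assuming a configured `my_launchpad.yaml`; the `fizzled_query` helper and its `name_contains` filter are my own convenience, not part of FireWorks:

```python
def fizzled_query(name_contains=None):
    """Build a Mongo-style query for FIZZLED fireworks, optionally
    filtered by a substring of the firework name (e.g. 'relax')."""
    query = {"state": "FIZZLED"}
    if name_contains:
        query["name"] = {"$regex": name_contains}
    return query

if __name__ == "__main__":
    # Requires a reachable LaunchPad/MongoDB; auto_load() reads my_launchpad.yaml.
    from fireworks import LaunchPad

    lpad = LaunchPad.auto_load()
    for fw_id in lpad.get_fw_ids(fizzled_query()):
        # rerun_fw resets the fizzled firework back to READY so the
        # next rlaunch/qlaunch picks it up again.
        lpad.rerun_fw(fw_id)
```

The equivalent one-liner from the command line would be `lpad rerun_fws -s FIZZLED`, but that just reruns the same job from scratch — hence my question about whether there is a built-in way to restart from the unconverged geometry instead.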