Just to add some information to the original query: I was originally testing with the PBS job adaptor, although targeting a TORQUE platform. Switching to the TORQUE job adaptor results in the job state correctly changing to 'Failed' when the wall time is exceeded, and I can access the TORQUE exit code via the exit_code attribute of the job object.
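For reference, this is roughly what my test looks like, using saga-python's standard job API (the hostname and the wall time value are placeholders):

```python
import saga

# Connect via the TORQUE adaptor (hostname is a placeholder).
js = saga.job.Service("torque+ssh://cluster.example.org")

jd = saga.job.Description()
jd.executable      = "/bin/sleep"
jd.arguments       = ["600"]   # runs longer than the wall time below
jd.wall_time_limit = 1         # minutes, so the job gets killed

job = js.create_job(jd)
job.run()
job.wait()

# With the TORQUE adaptor: state is 'Failed' and exit_code is populated.
# With the PBS adaptor against a PBS cluster: state is 'Done', exit_code is None.
print("state: %s, exit code: %s" % (job.state, job.exit_code))
```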
However, when testing with the PBS job adaptor against a PBS cluster, the job seems to disappear from the qstat output once it fails, so saga-python sees that the job is no longer listed, assumes it has completed, and switches the status to 'Done'. The exit_code attribute is None. This is causing some issues, but I presume it is related to the PBS deployment I'm targeting and there isn't much that can be done to address it on the saga-python side?
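In case it helps anyone else hitting this, a defensive check along these lines works around the misreported state for me. This is just a sketch of my own helper, not anything in saga-python itself:

```python
import saga

# Hypothetical workaround: on this PBS deployment a failed job drops out of
# qstat and gets reported as 'Done' with no exit code, so treat that
# combination as suspect rather than as a clean completion.
def effective_state(job):
    if job.state == saga.job.DONE and job.exit_code is None:
        return saga.job.FAILED   # or flag it for manual inspection
    return job.state
```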
One suggestion/question: might it be useful to map error codes within an adaptor to a string description of the cause of the error? Then, in the event of the job state being set to 'Failed', one could call job.error_info or similar to get a human-readable description of the error.
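By way of illustration, I'm imagining something like the following inside an adaptor. The error_info name is just my suggestion, not an existing saga-python attribute, and the table entries are examples only (the 271 entry reflects TORQUE's convention, as I understand it, of reporting 256 + signal number for signal-killed jobs, which is what a wall-time kill typically produces):

```python
# Purely illustrative per-adaptor lookup table mapping exit statuses to
# human-readable descriptions.
TORQUE_EXIT_STATUS = {
    0:   "job completed successfully",
    271: "job killed by SIGTERM (256 + 15), e.g. wall time exceeded or qdel",
}

def error_info(job):
    """Return a string description of the job's exit status."""
    return TORQUE_EXIT_STATUS.get(
        job.exit_code, "unknown error (exit status %s)" % job.exit_code)
```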