Chaining test dependencies is a bad idea, everybody knows that...
So, this is how I did it.
First, the rationale. 99% of our test base is either data-driven or BDD style, and we keep our tests short and focused. Even though we have specific tests for each portion of the process, occasionally we want to test the entire flow of the process itself. Because of this our QA team still wanted full workflow tests, while keeping the steps descriptive at the test level. This allows us to have workflow tests that our product management team can understand without any training.
To do this we took a few steps:
- Each workflow is its own test file/suite; that way we control the order of the tests simply by the order they appear in the file.
- The test file/suite as a whole is tagged with 'workflow'.
- We shortened "Run Keyword And Continue On Failure" to "Continue If Fail" (see the sketch after this list), because we use it frequently for verifications that shouldn't actually block the flow of the test: for example, in a login test, verifying that the company's logo is present.
- In the 'Test Setup' we check whether the variable ${workflow continue} is set to True:
- If it is, we flip it back to False.
- If it isn't, the test case fails and is tagged 'noncritical', with a message indicating that it failed because a preceding workflow test failed.
- The last step of each test case (not the teardown) is 'the test case is finished', in which we set a suite-level variable ${workflow continue} to True. That way, if any failure blocks the flow before that final step, the flag stays False and the remaining tests in this workflow won't run.
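For the curious, here's roughly what those pieces look like as a small Python keyword library. This is a minimal sketch, not our exact code: the library name and the 'Workflow Test Setup' keyword name are illustrative, and it assumes a Suite Setup seeds ${workflow continue} to True before the first test in the chain runs.

```python
# Sketch of a workflow-support keyword library. Assumes a Suite Setup
# initializes ${workflow continue} to True for the first test.
from robot.api.deco import keyword
from robot.libraries.BuiltIn import BuiltIn


class WorkflowSupport:
    ROBOT_LIBRARY_SCOPE = 'TEST SUITE'

    @keyword('Continue If Fail')
    def continue_if_fail(self, name, *args):
        # Thin alias for the long built-in name, used for soft
        # verifications that shouldn't block the flow of the test.
        BuiltIn().run_keyword_and_continue_on_failure(name, *args)

    @keyword('Workflow Test Setup')
    def workflow_test_setup(self):
        flag = BuiltIn().get_variable_value('${workflow continue}', False)
        if not flag:
            # Fail immediately and tag the test 'noncritical'.
            BuiltIn().fail('Not run because a preceding workflow test failed.',
                           'noncritical')
        # Flip the flag back to False; this test must reach its last
        # step to set it True again for the next test in the chain.
        BuiltIn().set_suite_variable('${workflow continue}', False)

    @keyword('The Test Case Is Finished')
    def the_test_case_is_finished(self):
        # Only reached when every earlier step passed.
        BuiltIn().set_suite_variable('${workflow continue}', True)
```

Each test then uses the setup keyword as its Test Setup and calls 'the test case is finished' as its last step.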
The final problem we encountered was re-running failed tests. To manage this we added a post-processing step using a ResultVisitor that went through and marked any test cases that preceded a failed test in a workflow as noncritical failures as well, so that on re-run they would also get run.
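As a sketch of that post-processing step: ExecutionResult, ResultVisitor, and --rerunfailed are standard Robot Framework pieces, but the class name, message text, and the output.xml path here are assumptions, not our exact script.

```python
# Sketch: rewrite output.xml so a workflow's passing predecessors of a
# failed test also count as (noncritical) failures for --rerunfailed.
from robot.api import ExecutionResult, ResultVisitor


class MarkWorkflowPredecessors(ResultVisitor):
    """Turn passing 'workflow' tests that precede a failure into
    noncritical failures so a re-run replays the whole chain."""

    def start_suite(self, suite):
        if not any(t.status == 'FAIL' for t in suite.tests):
            return  # nothing failed here; leave the suite alone
        for test in suite.tests:
            if test.status == 'FAIL':
                # Tests after the first failure were already failed by
                # the Test Setup check, so we can stop here.
                break
            if 'workflow' in test.tags:
                test.status = 'FAIL'
                test.tags.add('noncritical')
                test.message = ('Marked as failed so that --rerunfailed '
                                'picks up the whole workflow.')


if __name__ == '__main__':
    result = ExecutionResult('output.xml')
    result.visit(MarkWorkflowPredecessors())
    result.save('output.xml')
```

We run it against the output after a failed build, then kick off the re-run with robot --rerunfailed output.xml as usual.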
This process has worked pretty well for us when we want to verify that a complex workflow is continuing to work from beginning to end. I would not recommend using it for most of your test cases. This kind of method makes it harder to track specific bugs and will mask bugs that appear later in the flow if early steps fail. But, for that 1% of tests where it makes sense, I think it's worth doing.
-Frank B