Run a test as precondition in another test

quality goudengids

Jan 14, 2015, 7:27:32 AM
to robotframe...@googlegroups.com
Hi,

Suppose I have a test 1 that creates an item, does some checks, and deletes that same item in a teardown.

Next, I want to modify something in that item.
I would have to rerun all the keywords of test 1 and then additionally modify the existing item.
Is there a way to "first run test 1" as a precondition before test 2 can continue?

I would like to avoid rewriting keyword sequences that already exist.

A solution could be to put everything in 1 test case, but then it becomes rather big and a lot of things can go wrong.

Is there a way to create test case chains?

kind regards,

Geert

Skip Huffman

Jan 14, 2015, 7:42:27 AM
to quality goudengids, robotframework-users
Use test 1 as a Test Setup.

--
Skip Huffman
Senior Software Engineer in Test/Continuous Integration

Bryan Oakley

Jan 14, 2015, 7:42:34 AM
to quality goudengids, robotframework-users
No, there is not a way to create test case chains. It's generally best to make every test case completely independent of other test cases. You could move all of your logic into a keyword and then call that keyword from multiple test cases.


Skip Huffman

Jan 14, 2015, 7:46:21 AM
to oak...@bardo.clearlight.com, quality goudengids, robotframework-users
To expand a bit: put your code for "test 1" into a keyword instead. Use that keyword all by itself in one test and use it as a setup for other tests. See the user guide (2.2.6 Test setup and teardown) for examples of test setup keywords.
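
A minimal sketch of that pattern, based on the description in the original question (the item-handling keywords such as Create Item In Application, Item Should Exist, Modify Existing Item, Item Should Be Modified and Delete Item are placeholders for whatever your own keyword library provides):

*** Test Cases ***
Create An Item
    Create And Verify Item
    [Teardown]    Delete Item

Modify An Item
    [Setup]    Create And Verify Item    # reuse the whole "test 1" scenario as a precondition
    Modify Existing Item
    Item Should Be Modified
    [Teardown]    Delete Item

*** Keywords ***
Create And Verify Item
    Create Item In Application
    Item Should Exist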

quality goudengids

Jan 14, 2015, 7:51:41 AM
to robotframe...@googlegroups.com
Hi again,

Ok, thanks. I fully understand that dependencies are not a good practice.
I will abstract the test 1 scenario into an overall keyword and use that as a setup for test 2.

Thanks guys for the speedy feedback. Amazing support here!

fkberthold

Jan 15, 2015, 11:19:32 AM
to robotframe...@googlegroups.com
Making chained test dependencies is a bad idea; everybody knows that...

So, this is how I did it.

First, the rationale. 99% of our test base is either data-driven or BDD style, and we keep our tests pretty short and focused. Even though we have specific tests for each portion of the process, occasionally we want to test the entire flow of the process itself. Because of this, our QA team still wanted full workflow tests, while keeping them readable at the test level. This lets us have workflow tests that our product management team can understand without any training.

To do this we took a few steps:
  1. Each workflow is its own test file/suite; that way we can control the order of the tests simply by the order in which they appear in the file.
  2. The test file/suite as a whole is tagged with 'workflow'.
  3. We shorten "Run Keyword And Continue On Failure" to "Continue If Fail" because we use it frequently for verifications that won't actually block the flow of the test, for example verifying in a login test that the company's logo is present.
  4. In the 'Test Setup' we check to see if the variable ${workflow continue} is set to True:
    1. If it is set to True, then we flip it to False.
    2. If it's not set to True, then the test case fails and is tagged 'noncritical', with a message indicating that it failed because a preceding workflow test failed.
  5. The last step of each test case (not the teardown) is 'the test case is finished', in which we set the suite-level variable ${workflow continue} back to True. If anything fails before this step, the variable stays False, so the remaining tests in this workflow won't run (a sketch of this setup/finish wiring follows the list).
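
As a rough illustration of steps 3 to 5, the suite could be wired up something like this. This is my sketch of the description above, not the actual code; the login/order tests and the application-level keywords are placeholders, and I've assumed ${workflow continue} starts out True in the Variables table:

*** Settings ***
Force Tags        workflow
Test Setup        Workflow Test Setup

*** Variables ***
${workflow continue}    ${True}

*** Test Cases ***
Log In
    Open Application And Log In
    Continue If Fail    Page Should Contain Image    company_logo.png
    The Test Case Is Finished

Create Order
    Create A New Order
    Order Should Be Listed
    The Test Case Is Finished

*** Keywords ***
Continue If Fail
    [Arguments]    ${keyword}    @{args}
    # Shorthand for verifications that should not block the rest of the flow.
    Run Keyword And Continue On Failure    ${keyword}    @{args}

Workflow Test Setup
    # Fail (tagged 'noncritical') unless the previous workflow test reached its last step.
    Run Keyword Unless    ${workflow continue}
    ...    Fail    Not run because a preceding workflow test failed.    noncritical
    Set Suite Variable    ${workflow continue}    ${False}

The Test Case Is Finished
    # Last step of every test: allow the next test in the workflow to run.
    Set Suite Variable    ${workflow continue}    ${True}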
The final problem we encountered was re-running failed tests. To manage this we added a post-processing step using a ResultVisitor that goes through the output and also marks the test cases in a workflow that preceded a failed test as non-critical failures, so that on re-run they get run again as well.
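
For reference, a post-processing step along those lines might look roughly like this with robot.api. This is my own sketch of the idea, not the actual script; it assumes every workflow test carries the 'workflow' tag and that the re-run is driven by --rerunfailed:

from robot.api import ExecutionResult, ResultVisitor


class MarkWorkflowPredecessors(ResultVisitor):
    """If a workflow test failed, also mark the tests before it in the same
    suite as (non-critical) failures so --rerunfailed re-runs the whole flow."""

    def start_suite(self, suite):
        tests = list(suite.tests)
        failed = [i for i, t in enumerate(tests)
                  if 'workflow' in t.tags and t.status == 'FAIL']
        if not failed:
            return
        for test in tests[:failed[0]]:
            if 'workflow' in test.tags and test.status == 'PASS':
                test.status = 'FAIL'
                test.tags.add('noncritical')
                test.message = 'Marked as failed so the whole workflow is re-run.'


result = ExecutionResult('output.xml')
result.visit(MarkWorkflowPredecessors())
result.save('output.xml')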

This process has worked pretty well for us when we want to verify that a complex workflow continues to work from beginning to end. I would not recommend it for most of your test cases: this kind of method makes it harder to track down specific bugs, and it will mask bugs that appear later in the flow if early steps fail. But for that 1% of tests where it makes sense, I think it's worth doing.

This lets us run tests as an overall workflow.

-Frank B