We are hoping to reuse the same test suite in multiple environments where we know up front that certain tests will always fail in certain environments (but work in others), and we want to exclude those specific tests when running in the respective environments. For example, let's say we have tests 1-20 and environments A-E, and the following combinations we'd like to skip:
A (skip 19,20)
B (run them all)
C (skip 4, 15, 16, 17, 20)
D (skip 11)
E (run them all)
We launch Robot from the command line using pybot, and initially we included test cases by name, adding them to each environment's list as they proved to be stable. However, we're now at the point where most tests are stable in most environments, so the reverse (only listing the tests to skip) is preferable and more maintainable as new tests get added to the suite. Plus we have a short list in our configuration file (which is used to launch pybot) that kind of self-documents the tests that don't work in that environment.
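Our current include-by-name approach is conceptually along these lines (a sketch using pybot's standard --test and --argumentfile options; the file name and test names are placeholders):

# env-A.args - tests known to be stable in environment A
--test First test case - we use long names
--test Some other stable test

pybot --argumentfile env-A.args tests/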
There is the ability to tag tests and exclude them via the command line using --exclude; however, the examples we see talk more in terms of using tags to categorize tests. Nevertheless, we looked into leveraging this to create a tag for each test that contains the test name, allowing us to exclude tests by name. For example:
*** Test Cases ***
First test case - we use long names
    [Tags]    ${TEST NAME}
However, the tag for this test case doesn't get set as intended, because ${TEST NAME} is an automatic variable that is only available while the test case is running, not when the file is parsed. The following should work, but of course we'd like to avoid duplicating each test case name and introducing errors there:
*** Test Cases ***
First test case - we use long names
    [Tags]    First test case - we use long names
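With explicit name tags like that in place, the exclusion would presumably look something like this on the command line (quoting is needed because the tag contains spaces; Robot's own tag matching is case- and space-insensitive):

pybot --exclude "First test case - we use long names" tests/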
Certainly a few other ideas come to mind that are less desirable:
1. Create a separate test suite file per environment, and reference (via a superset resource file) the subset that we'd like to run.
2. Pass the "environment" as a variable to the test suite, and internally have it skip certain test cases conditionally based on the environment (see the sketch below).
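A minimal sketch of idea 2, assuming an ${ENV} variable passed on the command line (Pass Execution If is a standard BuiltIn keyword; note it passes the test rather than truly skipping it, since pybot-era Robot has no separate skip status):

*** Test Cases ***
Fourth test case
    Pass Execution If    '${ENV}' == 'C'    Known to fail in environment C
    # ... actual test steps here

Launched with:

pybot --variable ENV:C tests/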
Does anyone have any other creative ideas for how to do this type of thing? I'm hoping for the ability to exclude tests by test case name via the command line since, as I say, this allows our configuration file to "self-document" the test cases we are skipping, but I'm open to other suggestions as well. And of course, yes, the Robot script itself could be this "documentation", but having a simple, clean list of test cases (outside a long Robot script) is going to be more easily consumable by others within Dev/QA.
Thanks,
Bill