Dynamic Test Case Generation (Again...)


Benjamin Kemper

Oct 27, 2014, 4:07:29 PM
to robotframe...@googlegroups.com
Hey all,

I know I'm a little late to the party, so if there are any new developments I'd love to hear about them.

As for Ricardo's question, I was able to get the new API working, but only from a script that calls into the RF API. Since I want to keep my test logic in RF, I didn't want to write an external script to run the tests (we have several types of tests and only one requires dynamic generation, so I didn't want to make an exception; I want them all to run the same way). When I tried using the API from within my library keyword (a suite within a suite), it worked, but the XML parsing fails because there is a <suite> node inside a <suite> node in the output.xml, which apparently is not allowed.

If anyone is still interested in the above, I can put together a working example.

I'm now trying a different take (more hackish, not a public API), using the following code:
def my_keyword_that_is_supposed_to_add_test_cases_dynamically_to_the_current_suite(self):
    import os
    from robot.running.context import EXECUTION_CONTEXTS  # not a public API

    # A dynamically generated dict mapping directories to the tests inside them
    tests_by_dir = {'dirname': ['test1', 'test2']}

    # The suite currently being executed
    current_suite = EXECUTION_CONTEXTS.current.suite
    for dirname, tests in tests_by_dir.items():
        if not tests:
            continue
        for test in tests:
            t = current_suite.tests.create(
                '{dir}: {testname}'.format(dir=dirname, testname=test),
                tags=[dirname])
            t.keywords.create('Run Test In Directory',
                              args=[os.path.join(self.dir, dirname), test])

In theory, calling this keyword should append the new test cases to the current suite and run them, but the log only shows that the test cases were added; they are not actually run.

I'll continue searching for a way to get this to work, but if anyone has any input I'd also love to hear about it.

Thanks,
Benjamin.

Pekka Klärck

Oct 28, 2014, 6:44:01 PM
to kempe...@gmail.com, robotframework-users
Hi Benjamin,

The reason the tests you add aren't actually run is that you add them to
the result object, not to the executed suite. Looking at Robot's code
briefly, I didn't see an easy way to get access to the executed suite,
but I'm certain there's some possibility. Using non-public APIs, like
your current code already does, is obviously risky, because there can
be big changes to the internal execution logic in major releases.

It would be interesting to hear what your use cases are. There may be
other solutions available, or we can think about how Robot's APIs could
be enhanced to make what you want easier.

Cheers,
.peke



--
Agile Tester/Developer/Consultant :: http://eliga.fi
Lead Developer of Robot Framework :: http://robotframework.org

Benjamin Kemper

Oct 29, 2014, 3:02:21 AM
to Pekka Klärck, robotframework-users
Hi,

I'm in the process of rebuilding our team's validation system and have decided to use RF as our testing framework. Most of our tests would be easy to rewrite in RF, but one of our biggest test suites is written in another system (a makefile-based one).

We have several directories of tests (~1800 tests) that are managed (build and test rules) by a complex makefile system that automatically generates the correct rules for building and testing them. Since I don't want to rewrite everything in RF, I want to keep using that system (and have users keep writing tests in the original makefile system), and only hook it into RF.

I'm able to get the list of available directories and the tests in each directory by querying our makefiles. I want to use this list to run the tests through RF, to get both the same user experience and the benefits (reporting, libraries, ...).

Since the makefiles will be constantly evolving, I can't generate the RF test suite once and save it, so I need to regenerate it before every run.

Here is what I've tried already:
  1. Dynamically generate the test suite using a script before every run - seems like the most reasonable solution, but I didn't like it...
  2. Dynamically generate the test suite using keywords - I liked this one better since it maintains the same user experience, but because I can't launch test suites from a keyword, it requires two invocations of RF...
  3. Dynamically generate the test suite using a script that imports the RF API and uses the TestSuite class to build the test suite, run it and collect the results - if I could do this from inside a keyword it would be perfect, but since it has to be a completely external script with a different invocation method, I didn't like it either... (see the sketch below)
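
For reference, a minimal sketch of what option 3 looks like with the public robot.api TestSuite (available since RF 2.8); the suite contents and the OperatingSystem keyword below are only placeholders:

from robot.api import TestSuite

# Build a suite in memory; the names and the keyword used are placeholders.
suite = TestSuite('Dynamically generated tests')
suite.imports.library('OperatingSystem')
test = suite.tests.create('Makefile test in dir1', tags=['dir1'])
test.keywords.create('Run', args=['make -C dir1 test1'])

# Running the suite writes output.xml; log.html/report.html can be generated from it with rebot.
result = suite.run(output='output.xml')
assert result.return_code == 0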
Maybe I'm spoiled, but I would really love it if every one of my tests could be executed using a single "pybot tests/mytest.txt"...

Thanks for the help and interest!

Benjamin.
--
Benjamin.

Tatu Aalto

Oct 30, 2014, 4:05:30 AM
to kempe...@gmail.com, robotframework-users, Pekka Klärck

Ugh

Something just crossed my mind regarding option 2: you could gather the info with keyword(s) and then launch a new pybot process, for example using the Process library, to run the required tests, and later combine the results into a single result (a rough sketch follows).
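
A rough sketch of that idea as a small Python helper (using subprocess directly instead of the Process library keywords); the pybot options and file names are placeholders, and the partial outputs can be combined afterwards with rebot:

import subprocess

def run_generated_suite(suite_path, output_xml):
    """Run one generated suite in a separate pybot process (placeholder options)."""
    return subprocess.call(['pybot', '--output', output_xml,
                            '--log', 'NONE', '--report', 'NONE', suite_path])

# After all the sub-runs, combine the partial outputs into one result, e.g.:
#   rebot --name Combined --output combined.xml sub1.xml sub2.xml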

-Tatu
Sent from my mobile

sandeep s

Oct 30, 2014, 4:31:17 AM
to Tatu Aalto, kempe...@gmail.com, robotframework-users, Pekka Klärck
In our project we generate test cases/suites dynamically based on the check-ins done by developers. We have a mapper database that runs a comparison engine and generates the suite/test case names, and the rest is pulled into pybot for execution...

This technique has solved many of our verification needs in an automated way (defect verification, code refactoring regressions, changelist-only verification, etc.) and fits right at the heart of our agile process...

_Sandeep

Pekka Klärck

Nov 3, 2014, 7:13:24 PM
to Benjamin Kemper, robotframework-users
2014-10-29 9:01 GMT+02:00 Benjamin Kemper <kempe...@gmail.com>:
>
> We have several directories of tests (~1800 tests) that are managed (Build
> and test rules) by a complex makefile system that automatically generates
> the correct rules for testing and building the tests. Since I don't want to
> re-write everything in RF, I wanted to keep using it (and users keep write
> tests in the original makefile system), but only hook it to RF.
>
> I'm able to get the list of available directories and tests in each
> directory by querying our makefiles. I want to use this list to run the
> tests through RF, to get both the same user experience and the benefits
> (reporting, libraries...).
>
> Since the makefiles will be constantly evolving, I can't run it once and
> save the RF test suite so I need to generate the test suite before every
> run.

Interesting use case.

> Here is what I've tried already:
>
> 1. Dynamically generate the test suite using a script before every run - Seems
> like the most reasonable solution, but I didn't like it...

This is what I would have recommended to you. I like that generating
the test data is conceptually separated from execution. This would also
be very easy to understand and maintain.

> 2. Dynamically generate the test suite using keywords - I liked this one better
> since it maintains the same user experience, but because I can't launch test
> suites from a keyword, it requires two invocations of RF...

Personally I'd use a real programming language for generating the tests
rather than Robot itself.

> 3. Dynamically generate the test suite using a script that import RF APIs and
> uses TestSuite class to dynamically build the test suite, run it and
> collect results - If I could do this from inside a keyword it would be
> perfect! but since it has to be in a complete external script, thus has a
> different invocation method, I didn't like it.....

Yeah, generating a test suite dynamically and running it shouldn't be
too complicated. Depending on the context, I might use this instead of
the first approach. It's a problem if you want to run `pybot
tests.txt` and not `python tests.py`, though.

> Maybe I'm spoiled, but I would really love it if every one of my tests could
> be executed using a single "pybot tests/mytest.txt"...

I don't understand why you are so attached to running `pybot
something`. It is actually very common to have a custom script that runs
the tests (run_tests.sh, run.py, whatever.bat, ...), sets common
command line options, sets up the environment, and so on. These scripts
can either use the pybot (or jybot or ipybot) command or use Robot's
Python API (robot.run) directly. Often such scripts also accept free
arguments that they pass directly to the executed tests.

For example, you could have a script that accepts both paths to your
normal Robot tests and paths to Makefiles. In the latter case it would
automatically generate the tests, either on the file system (option 1
above) or in memory (option 3), and run them afterwards. For users the
experience would be mostly the same regardless of the input data.
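
For illustration, a minimal sketch of such a wrapper; the generate_suite_from_makefile() helper and the option values are assumptions, not an existing Robot API:

#!/usr/bin/env python
import sys
from robot import run  # accepts the same options as the pybot command line


def generate_suite_from_makefile(makefile):
    # Assumed helper: query the makefile for its tests and write them out as a
    # Robot suite file, e.g. with a generator like the one shown later in this thread.
    out_path = makefile + '_tests.txt'
    # ... write the suite file here ...
    return out_path


def main(paths):
    suites = [generate_suite_from_makefile(p) if p.endswith('Makefile') else p
              for p in paths]
    return run(*suites, outputdir='results', loglevel='INFO')


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))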

Cheers,
.peke

Taylor, Martin

Nov 4, 2014, 10:19:29 AM
to pekka....@gmail.com, Benjamin Kemper, robotframework-users
We have a similar use case at Texas Instruments, Inc. We need a suite of test cases to verify the Lua APIs embedded in our TI-Nspire products. We have a set of Perl scripts that generate RF test cases using a set of pre-defined custom RF keywords. Then we run the generated test suites on all our various platforms including PC, Mac, iPad and TI-Nspire handheld devices. Yes, it’s a 2-step process, but we don't really see any other way to do this. Once generated for a given version of the product software, the test cases can be run multiple times across the various platforms.

Cheers,
Martin

Benjamin Kemper

Nov 7, 2014, 4:29:25 AM
to Taylor, Martin, pekka....@gmail.com, robotframework-users
Hey,

Thanks for the responses and the suggestions. Looks like I'll go down this route as well.

I wanted to keep my test running system as simple as possible, with a single entry point (pybot), so I think I will simply create a test suite that generates the other test suites. This way I keep the simple interface, just with two steps.

If I come to any interesting conclusions from my work, I'll report back with my results.

Regards,
Benjamin.

--
Benjamin.

kb

Nov 23, 2014, 12:47:47 AM
to robotframe...@googlegroups.com, cmta...@ti.com, pekka....@gmail.com
I've been struggling with a similar issue and I'm not sure of the best way to proceed. I have Python test files in a couple of subdirectories, where each test file represents a single class. I also have a set of argument lists, and each test class needs to be instantiated once for every argument list in the set so I can execute a few methods (keywords). I have a Variables file that creates a list of objects for all the possible combinations of test classes and argument lists. The test suite file then uses that list of objects, but I can't figure out how to actually execute the methods (keywords):

*** Settings ***
Library     Collections
Variables  variables.py
*** Test Cases ***
Example
    @{keys} =  Get Dictionary Keys  ${test objects}
    :For   ${key}   IN    @{keys}
    \  Set Tags     ${key}
    \  ${class} =   Get From Dictionary     ${test objects}   ${key}
    \  ${class}.test Me  <------- "Fails ... No keyword with name '${class}.test Me' found."

I wasn't aware of the TestSuite API, but if I can't manually create a test suite file, I doubt I'd be successful in creating one dynamically.  Any suggestions would be most welcome.

Benjamin Kemper

Nov 23, 2014, 2:50:19 AM
to robotframe...@googlegroups.com, cmta...@ti.com, pekka....@gmail.com
Hi Karri,

I'm far from an RF expert, so I don't know if you can actually instantiate Python classes inside a test case definition, but this is what I would do.

I would create a simple library that has a keyword taking the class name and its arguments as parameters. This keyword would instantiate the class, run the test and report the result of the test run. Then I would write a script that generates a test suite file with a test case for each combination, because the way you wrote it above, with the for loop, it is a single test case and will be reported as such. (A sketch of such a keyword follows.)
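
A minimal sketch of that keyword, assuming the test classes live in importable modules and expose the test_me() method your example calls (the class path format is an assumption):

import importlib


class DynamicClassRunner(object):
    """Hypothetical library with one keyword that instantiates and runs a test class."""

    def run_test_class(self, class_path, *args):
        """Import 'class_path' (e.g. 'mypackage.mymodule.MyTest'), instantiate it with
        the given arguments and run its test_me() method. Any exception raised by
        test_me() fails the Robot test case."""
        module_name, class_name = class_path.rsplit('.', 1)
        cls = getattr(importlib.import_module(module_name), class_name)
        instance = cls(*args)
        instance.test_me()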

I had a similar makefile-based case and here is an excerpt from the code I wrote:
suite = {} # Contains all the test information

for dirname, tests in suite.items():
    if len(tests) == 0:
        continue
    with open(os.path.join(outputdir, 'make_test_{dir}_{arch}.txt'.format(
            dir=dirname, arch=test_arch)), 'w') as outfile:
        outfile.write('| *Setting*    |     *Value*           |\n')
        outfile.write('| Library      | Process               |\n')
        outfile.write('| Library      | MyDefaultsLib         | {arch}     |\n'.format(arch=test_arch))
        outfile.write('| Library      | MyMake                | {dir}      |\n\n'.format(dir=self.dir.replace('\\', '\\\\')))
        outfile.write('| *Test Case*  | *Action*              | *Argument* |\n')
        for test in tests:
            outfile.write('| {testname}   | Run Test In Directory | {dir} | {testname} |\n'.format(
                testname=test,  dir=dirname) )


In my case I had to execute makefile-based tests in different directories and with different parameters, so I wrote two libraries (MyDefaultsLib and MyMake) which create the required commands and then run them using the Process library. I assume it should be similar in your case.

You can even put the test suite generation code inside that library, and then you simply call it twice, once for generating and once for executing.

I hope this helps.

Benjamin.

kb

Nov 23, 2014, 11:07:39 PM
to robotframe...@googlegroups.com, cmta...@ti.com, pekka....@gmail.com

Benjamin,

Thank you so much for your detailed and quick reply!  I wasn't aware of the Process Library; the Robot Framework libraries are something that I should have taken the time to review.  Your point about my attempt producing a single test case was very valid.

So I tried your suggestion at work today and it solved my problem!  I created the test suite manually to try it out, but in the future I'm thinking of using a configuration file and a python script that works much like yours. 

One of the differences between invoking a test script through the Process API versus using a keyword/method is that the test's stdout doesn't go into log.html. The tests we use produce about 50 lines of output, so it's not much, but it would be nice to have that output when a test fails. That's the last piece of the puzzle for me.

Thank you again for the help and the details in your response.  You saved me a lot of head-banging!

Karri

Benjamin Kemper

Nov 24, 2014, 2:03:17 AM
to karri...@gmail.com, robotframework-users, Martin Taylor, Pekka Klärck
Glad to be of help.

Check out the documentation of the Process library, especially: http://robotframework.org/robotframework/latest/libraries/Process.html#Process%20configuration (See the "Standard output and error streams" section).
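
For the stdout question, the result object returned by Run Process also has stdout and stderr attributes that can be logged from the test. Another option, sketched below, is to launch the external command from a small Python library keyword and print its output, since anything a keyword prints is shown in log.html (the command handling here is only a placeholder):

import subprocess


def run_external_test(command):
    """Run 'command' (a placeholder) and fail the test if it returns a non-zero code.
    The captured output is printed, so it shows up in log.html."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output, _ = proc.communicate()
    print output  # logged by Robot at INFO level
    if proc.returncode != 0:
        raise AssertionError('Command %r failed with rc %d' % (command, proc.returncode))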




--
Benjamin.