On 8 Aug 2013, at 23:59, br...@gibson-consulting.com.au wrote:
> I'm in the process of doing some testing using stb tester - it's
> awesome! thanks for all the work everyone's done with it :-D - and was
> wondering about whether others are using it as a standalone tool, or
> whether anyone's using it with some kind of test framework? And/or with
> another tool for managing tests...
Hi Bryce
You're right: stb-tester is designed to sit below a higher-level test
management system. All that system needs is the ability to run a
command-line program (stbt run), check its exit status, and gather any
logs.
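For example, the whole integration can be as small as this python
sketch (the test-script name and log filename are placeholders):

    import subprocess

    # Run the test, capturing stdout/stderr as the test log.
    with open("stbt.log", "w") as log:
        status = subprocess.call(
            ["stbt", "run", "tests/epg.py"],
            stdout=log, stderr=subprocess.STDOUT)

    # "stbt run" exits with a non-zero status if the test failed.
    if status == 0:
        print("PASS")
    else:
        print("FAIL (exit status %d)" % status)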
I've looked into both Robot Framework and Cucumber. Both of them require
their own language for defining test cases, which would then have to be
translated into an stb-tester python script. This is my programmer bias
speaking, but I prefer to write test scripts directly in a fully fledged
programming language. (With a real programming language you do run the
risk of writing unnecessarily complex code, so you have to be vigilant
with code reviews etc; but still, the flexibility is worth it IMO.)
Another test management system we investigated is TestRail[1] (a paid
closed-source application that integrates with Jira, a ticket-tracking
system). TestRail has a web interface and an HTTP API, so the idea would
be to write a wrapper script that runs "stbt run" and then sends an HTTP
POST request to TestRail with the results. You can use TestRail to
manage the results from your manual *and* automated tests.
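I haven't actually written that wrapper, but a rough sketch (using the
python "requests" library and TestRail's "add_result_for_case" endpoint;
check [1] for the exact API -- the URL, run id, case id and credentials
below are all made up) would be:

    import subprocess
    import requests

    status = subprocess.call(["stbt", "run", "tests/epg.py"])

    # TestRail status ids: 1 = Passed, 5 = Failed (see [1]).
    resp = requests.post(
        "https://example.testrail.com/"
        "index.php?/api/v2/add_result_for_case/1/42",
        json={"status_id": 1 if status == 0 else 5,
              "comment": "stbt run exit status: %d" % status},
        auth=("user@example.com", "api-key"))
    resp.raise_for_status()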
At YouView we initially used Jenkins to run stb-tester scripts. See [2]
for instructions on converting "stbt run" output to a format suitable
for Jenkins's xUnit test-results plugin. But Jenkins isn't really
designed for this type of thing; setting up test variations to run, and
in particular reporting results and triaging test failures, are *very*
tedious.
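The gist of [2] is a wrapper that writes a JUnit-style XML file for
Jenkins's plugin to pick up. Roughly like this (the suite and test names
are placeholders; [2] has the real details):

    import subprocess

    status = subprocess.call(["stbt", "run", "tests/epg.py"])

    # Minimal JUnit-style XML that Jenkins's xUnit plugin understands.
    failure = "" if status == 0 else '<failure message="stbt run failed"/>'
    with open("results.xml", "w") as f:
        f.write(
            '<testsuite name="stb-tester" tests="1" failures="%d">'
            '<testcase name="epg">%s</testcase>'
            '</testsuite>' % (0 if status == 0 else 1, failure))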
Now we're using a simple script called the "runner" that takes a list of
tests to run, runs them, and generates an html report. The best thing
about our runner is that you can teach it about known failure modes (for
example: the video-capture hardware froze, or known defects in your
system under test) and it automatically classifies test failures in the
html report to save you triaging time. We've just open-sourced this
script; it's in the stb-tester repository under "extra/runner/". Hooks
for custom user logging and failure-reason classification are under
development on the branch "runner-hooks". One day soon I'll write an
article showing an example of the html report that the runner generates,
but in the meantime you can try it out yourself! It is fairly well
documented.
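To be clear, the snippet below is *not* the runner's actual hook API
(see the "runner-hooks" branch for that); it just illustrates the
classification idea: grep the test's log for patterns that identify
known failure modes, and label the failure accordingly. The patterns
and bug id are made up:

    import re

    # Table of known failure modes: a regex to look for in the test's
    # log, and the label to show in the report.
    KNOWN_FAILURES = [
        (r"No video", "video-capture hardware froze"),
        (r"MatchTimeout:.*missing-icon", "known defect BUG-123"),
    ]

    def classify(logfile):
        log = open(logfile).read()
        for pattern, reason in KNOWN_FAILURES:
            if re.search(pattern, log):
                return reason
        return "unclassified - needs manual triage"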
Whatever you end up doing, I'll be very interested to hear about your
approach and any lessons learned, so please do post a follow-up to this
list.
> We're expecting to have a lot of tests with many different
> permutations on a consistent theme
This is something that Robot Framework handles quite well with its
tabular test scripts, and that stb-tester doesn't handle quite so well.
At the moment we tend to put the core test logic into a separate python
module, and have several one-line python scripts that just call a
function in that module with differing parameters.
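For example (the module, function and key names here are invented for
illustration):

    # epg.py -- shared module with the core test logic:
    import stbt

    def navigate_to_channel(channel):
        stbt.press("KEY_EPG")
        stbt.wait_for_match("%s-logo.png" % channel)

    # test_bbc_one.py -- one of several one-line test scripts:
    import epg
    epg.navigate_to_channel("bbc-one")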
I've been thinking about this for a while and I think the best solution
is to extend "stbt run" so that it passes any additional command-line
arguments on to the test script, and to extend the "runner" script so
that it does the same, and shows those arguments as columns in the html
report.
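A parameterised test script would then look something like this
(hypothetical, since the argument-passing doesn't exist yet):

    import sys
    import stbt

    # Hypothetical: "stbt run" would forward extra arguments to the
    # script, e.g.:  stbt run tests/channel_surf.py bbc-one 20
    channel = sys.argv[1]
    repetitions = int(sys.argv[2])
    for _ in range(repetitions):
        stbt.press("KEY_EPG")
        stbt.wait_for_match("%s-logo.png" % channel)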
Cheers,
Dave.
[1] http://docs.gurock.com/testrail-api/start
[2] http://stb-tester.com/jenkins.html