stb-tester in a test framework


br...@gibson-consulting.com.au

Aug 8, 2013, 6:59:55 PM
to stb-t...@googlegroups.com
Hi :-)

I'm in the process of doing some testing using stb tester - it's awesome! thanks for all the work everyone's done with it :-D - and was wondering about whether others are using it as a standalone tool, or whether anyone's using it with some kind of test framework? And/or with another tool for managing tests...

We're expecting to have a lot of tests with many different permutations on a consistent theme - e.g. wireless testing with a whole pile of different wireless configurations - and we're trying to decide how to tackle this problem, and are wondering what solutions others have found :-)

I'd wondered whether integrating stb tester into a test framework - I'm thinking Robot Framework (www.robotframework.org), though initially I'd just contemplated something like Cucumber - is something that anyone else has done, or has some experience with?

It seems logical, as stb tester is more about managing a set top box and less of an overall testing framework. I.e. it's a lower-level piece in a testing stack, but not the whole stack. Hence, integrating it into a higher-level framework can make for more maintainable and manageable tests.

Basically, I was just wondering what others are doing :-)

Cheers :-)
Bryce

David Röthlisberger

Aug 9, 2013, 6:21:08 AM
to br...@gibson-consulting.com.au, stb-t...@googlegroups.com
On 8 Aug 2013, at 23:59, br...@gibson-consulting.com.au wrote:
> I'm in the process of doing some testing using stb tester - it's
> awesome! thanks for all the work everyone's done with it :-D - and was
> wondering about whether others are using it as a standalone tool, or
> whether anyone's using it with some kind of test framework? And/or with
> another tool for managing tests...

Hi Bryce

You're right: stb-tester is designed to sit below a higher-level test
management system. All that system needs is the ability to run a
command-line program (stbt run), check its exit status, and gather any
logs.
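That contract - run a command, check its exit status, collect the output - can be sketched in a few lines of Python. This is a minimal illustration of the idea, not stb-tester code; in a real wrapper the command would be something like `["stbt", "run", "test.py"]` (here a stand-in command is used so the sketch is self-contained).

```python
import subprocess
import sys

def run_test(command):
    """Run one test command; return (passed, combined stdout+stderr log)."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

# Stand-in commands: one that passes, one that fails.
ok, log = run_test([sys.executable, "-c", "print('tuned to channel 1')"])
failed, _ = run_test([sys.executable, "-c", "import sys; sys.exit(1)"])
```

Anything that can do this - a CI server, a test-management tool, or a ten-line script - can sit above stb-tester.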

I've looked into both Robot Framework and Cucumber. Both of them require
their own language for defining test cases, which would then have to be
translated into an stb-tester python script. This is my programmer bias
speaking, but I prefer to write test scripts directly in a fully fledged
programming language. (With a real programming language you do run the
risk of writing unnecessarily complex code, so you have to be vigilant
with code reviews etc; but still, the flexibility is worth it IMO.)

Another test management system we investigated is TestRail[1] (a paid
closed-source application that integrates with Jira, a ticket-tracking
system). TestRail has a web interface and an HTTP API, so the idea would
be to write a wrapper script that runs "stbt run" and then sends an HTTP
POST request to TestRail with the results. You can use TestRail to
manage the results from your manual *and* automated tests.
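A sketch of such a wrapper, under the assumption that the TestRail endpoint is `add_result_for_case` and that `status_id` 1 means "Passed" and 5 means "Failed" (taken from the TestRail API docs linked below; the URL and auth header here are placeholders):

```python
import json
import urllib.request

TESTRAIL_URL = "https://example.testrail.net/index.php?/api/v2"  # placeholder

def build_result(exit_status, log):
    # Assumed TestRail convention: status_id 1 == "Passed", 5 == "Failed".
    return {"status_id": 1 if exit_status == 0 else 5, "comment": log}

def post_result(run_id, case_id, result, auth_header):
    """Send one test result to TestRail (would perform a real HTTP POST)."""
    req = urllib.request.Request(
        "%s/add_result_for_case/%d/%d" % (TESTRAIL_URL, run_id, case_id),
        data=json.dumps(result).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header})
    return urllib.request.urlopen(req)

payload = build_result(0, "all steps passed")
```

The wrapper would call `build_result` with the exit status from "stbt run" and then `post_result` with your run and case IDs.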

At YouView we initially used Jenkins to run stb-tester scripts. See [2]
for instructions on converting "stbt run" output to a format suitable
for Jenkins's xUnit test-results plugin. But Jenkins isn't really
designed for this type of thing; setting up test variations to run, and
in particular reporting results and triaging test failures,
is *very* tedious.

Now we're using a simple script called the "runner" that takes a list of
tests to run, runs them, and generates an html report. The best thing
about our runner is that you can teach it about known failure modes (for
example: the video-capture hardware froze, or known defects in your
system under test) and it automatically classifies test failures in the
html report to save you triaging time. We've just open-sourced this
script; it's in the stb-tester repository under "extra/runner/". Hooks
for custom user logging and failure-reason classification are under
development on the branch "runner-hooks". One day soon I'll write an
article showing an example of the html report that the runner generates,
but in the meantime you can try it out yourself! It is fairly well
documented.

Whatever you end up doing, I'll be very interested to hear about your
approach and any lessons learned, so please do post a follow-up to this
list.

> We're expecting to have a lot of tests with many different
> permutations on a consistent theme

This is something that Robot Framework handles quite well with its
tabular test scripts, and that stb-tester doesn't handle quite so well.
At the moment we tend to put the core test logic into a separate python
module, and have several one-line python scripts that just call a
function in that module with differing parameters.

I've been thinking about this for a while and I think the best solution
is to extend "stbt run" so that it passes any additional command-line
arguments on to the test script, and to extend the "runner" script so
that it does the same, and shows those arguments as columns in the html
report.
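With that (proposed, not yet existing) argument-forwarding in place, the test-script side would be straightforward - pick the parameters off `sys.argv` instead of hard-coding them in per-permutation scripts. A sketch, with hypothetical parameter names:

```python
import sys

def parse_test_args(argv):
    """Expect e.g. ["WPA2", "6"]: security mode and wireless channel."""
    security, channel = argv
    return security, int(channel)

# In a real run this would be sys.argv[1:]; a literal list stands in here.
security, channel = parse_test_args(["WPA2", "6"])
```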

Cheers,
Dave.

[1] http://docs.gurock.com/testrail-api/start
[2] http://stb-tester.com/jenkins.html

David Röthlisberger

Aug 18, 2013, 11:08:57 AM
to br...@gibson-consulting.com.au, stb-t...@googlegroups.com
On 9 Aug 2013, at 11:21, David Röthlisberger wrote:
> On 8 Aug 2013, at 23:59, br...@gibson-consulting.com.au wrote:
>> I'm in the process of doing some testing using stb tester - it's
>> awesome! thanks for all the work everyone's done with it :-D - and was
>> wondering about whether others are using it as a standalone tool, or
>> whether anyone's using it with some kind of test framework? And/or with
>> another tool for managing tests...
>
> Now we're using a simple script called the "runner" that takes a list of
> tests to run, runs them, and generates an html report. The best thing
> about our runner is that you can teach it about known failure modes (for
> example: the video-capture hardware froze, or known defects in your
> system under test) and it automatically classifies test failures in the
> html report to save you triaging time. We've just open-sourced this
> script; it's in the stb-tester repository under "extra/runner/". Hooks
> for custom user logging and failure-reason classification are under
> development on the branch "runner-hooks". One day soon I'll write an
> article showing an example of the html report that the runner generates,
> but in the meantime you can try it out yourself! It is fairly well
> documented.

A demonstration of the report generated by the scripts in
stb-tester/extra/runner: http://stb-tester.com/runner.html

br...@gibson-consulting.com.au

Aug 19, 2013, 6:34:58 PM
to stb-t...@googlegroups.com, br...@gibson-consulting.com.au, da...@rothlis.net
Thanks David, it looks good :-)

For the moment I'm quickly doing a proof-of-concept using stb-tester with Robot - seems to be going well :-)

One thing I noticed is that when using the save-movie argument, stbt checks to ensure that the executable name is stbt-run (or argv[0] anyway) - this is on line 716 of stbt.py.

Is there a specific reason for that?

As part of the Robot integrating-ness, I wanted to use the save-video functionality, but am not using the stbt-run executable (instead have created a Robot library that contains most of the same functionality as stbt-run) - and hence the save-video parameter (when calling init_run) is "ignored".

For now, I've just removed the 'and os.path.basename(sys.argv[0]) == "stbt-run"' aspect of the if statement in stbt.py; but was sort of wondering what the purpose is, and whether it's worth keeping :-)

Cheers :-)
Bryce

David Röthlisberger

Aug 20, 2013, 3:14:47 AM
to stb-t...@googlegroups.com, br...@gibson-consulting.com.au
On 19 Aug 2013, at 23:34, br...@gibson-consulting.com.au wrote:
>
> One thing I noticed is that when using the save-movie argument, stbt checks to ensure that the executable name is stbt-run (or argv[0] anyway) - this is on line 716 of stbt.py.
>
> Is there a specific reason for that?
>
> As part of the Robot integrating-ness, I wanted to use the save-video functionality, but am not using the stbt-run executable (instead have created a Robot library that contains most of the same functionality as stbt-run) - and hence the save-video parameter (when calling init_run) is "ignored".
>
> For now, I've just removed the 'and os.path.basename(sys.argv[0]) == "stbt-run"' aspect of the if statement in stbt.py; but was sort of wondering what the purpose is, and whether it's worth keeping :-)

Normally I'd suggest using `git annotate` to find the commit that
introduced that line, to see if the commit message answers your
question. But in this case the commit message† isn't helpful -- it
doesn't mention *why* stbt.py checks `argv[0] == "stbt-run"`.

† https://github.com/drothlis/stb-tester/commit/2ee7b003

I don't think the argv check is necessary. The only caller of
`stbt.Display.__init__` is `stbt.init_run`, and the only callers of
`init_run` are `stbt-run` and `stbt-record`. `stbt-record` sets
`save_video = False` so the argv check is redundant.
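To paraphrase the condition being discussed (a hypothetical reconstruction for illustration, not the exact stbt.py source): the original check gates video recording on both the flag and the executable name, while the flag alone is sufficient.

```python
import os

def should_record(save_video, argv0):
    # Original form: record only when the program is invoked as "stbt-run".
    return save_video and os.path.basename(argv0) == "stbt-run"

def should_record_simplified(save_video):
    # stbt-record always passes save_video=False, so the argv check is
    # redundant; the flag alone decides.
    return bool(save_video)
```

This is why dropping the `argv[0]` clause, as Bryce did for the Robot library, changes nothing for the existing callers.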
