We like the features of TestTrack TCM www.seapine.com/tttcm.html but I
hope it isn't going to be a lot of work to get it integrated with
RobotFramework.
--Carl
I believe most people simply use their preferred version control
system for storing the tests. There has been some discussion about a
web based management system but although some prototype code has been
written [1] this project hasn't yet moved much forward.
> We like the features of TestTrack TCM www.seapine.com/tttcm.html but I
> hope it isn't going to be a lot of work to get it integrated with
> RobotFramework.
I also hope integrating these tools isn't too complicated. Let us know
how everything works out and especially if RF could be enhanced
somehow to make such integration easier.
Cheers,
.peke
--
Agile Tester/Developer/Consultant :: http://eliga.fi
Lead Developer of Robot Framework :: http://robotframework.org
+ historical reporting capabilities (that perhaps Risto.py can do) with
trending per test-case and for the suite,
+ ability to record manual testcases (e.g., Mabot),
+ being able to estimate the execution time of a selection of (manual and
automated) tests,
+ and other features like ability to declare whether testcases are enabled
for automation (as the keywords become available).
+ and I want all of this to be easy for management to use without training
or multi-step installation.
If there is a way to do it all with FOSS software, then that would be
fantastic!
--Carl
I don't know anything about TestTrack TCM, so I don't think I can help
you there.
However, if we do finish our integration with TestLink, I can post the
steps somewhere, and maybe that could be of some help to you.
Speaking of that:
I think it would be good if there were a place where RF users could
share their experiences, and tips-and-tricks with other RF users.
Right now a lot of good information just sinks into the abyss of the
discussion forum here. Not saying that this isn't a good forum for
discussions, but it would be nice to have a complement, like a user
wiki or something. Maybe we can use the "pages" (http://groups.google.com/
group/robotframework-users/web) of this group for this? (I haven't
tried what can be done with them yet)
Pekka, and the rest of the RF team, do you have any opinions on this?
BRs
Magnus
On Jan 13, 4:54 am, "Carl Dichter" <carl.dich...@gmail.com> wrote:
> Test Case Management can be a lot of things. I'm talking about:
>
> + historical reporting capabilities (that perhaps Risto.py can do) with
> trending per test-case and for the suite,
> + ability to record manual testcases (e.g., Mabot),
> + being able to estimate the execution time of a selection of (manual and
> automated) tests,
> + and other features like ability to declare whether testcases are enabled
> for automation (as the keywords become available).
> + and I want all of this to be easy for management to use without training
> or multi-step installation.
>
> If there is a way to do it all with FOSS software, then that would be
> fantastic!
>
> --Carl
>
> -----Original Message-----
> From: Pekka Klärck [mailto:pekka.kla...@gmail.com]
> Sent: Tuesday, January 12, 2010 4:39 PM
> To: carl.dich...@gmail.com
>
> Cc: robotframework-users
> Subject: Re: Test Case Management?
>
> 2010/1/12 Carl Dichter <carl.dich...@gmail.com>:
> > Is anyone using RobotFramework with a Test Case Management tool.
>
> I believe most people simply use their preferred version control
> system for storing the tests. There has been some discussion about a
> web based management system but although some prototype code has been
> written [1] this project hasn't yet moved much forward.
>
> [1]http://groups.google.com/group/robotframework-users/browse_thread/thr...
> 376bd0e370168/ab7b66b649bff856?#ab7b66b649bff856
>
> We like the features of TestTrack TCM www.seapine.com/tttcm.html but I
Hi Magnus,
I think that is a great idea. We have to investigate which wiki we could
use for that purpose. The problem with the Google Code wiki is that it is
not easy to allow anonymous changes, and that is what a wiki
is all about.
Br,
Juha
1. In Robot, test case names start with TCnnnn: where nnnn is the
number of a TestCase record in TestTrack. I played with the idea of
sourcing the robot test definition in TestTrack, but I think it is too
cumbersome to try to do that given the limitations of the editing
tools in TestTrack.
2. There will be a separate "results publisher" tool that takes the
output.xml from a robot run, and for each test with a TCnnnn in the
name, it will post the result to TestTrack. It does this by searching
for TestRun records that are associated with the TestCase number in
question. By keeping the publishing as a separate step you can have
TestTrack reflect the run results of only the robot runs that you are
interested in posting.
Sorry I don't have any code to share, but some quick proof-of-concept
scripts did not take long (wrestling with the TT SOAP interface
was the biggest obstacle), and it looked pretty straightforward to
have TT start showing robot results.
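To give an idea of the overall shape, a rough sketch of the publishing
step could look something like this (the TestTrack call itself is left
as a placeholder; nothing here is TestTrack's actual SOAP API):

import re
import xml.etree.ElementTree as ET

def publish(output_xml):
    # Walk output.xml from a robot run and pick up tests whose name
    # starts with a TCnnnn TestTrack id.
    root = ET.parse(output_xml).getroot()
    for test in root.findall('.//test'):
        match = re.match(r'(TC\d+)', test.get('name', ''))
        if not match:
            continue
        status = test.find('status').get('status')
        # Placeholder: here you would search TestTrack for the TestRun
        # records associated with this TestCase number and post the result.
        print('%s -> %s' % (match.group(1), status))

publish('output.xml')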
Chris
On Jan 12, 5:41 pm, Carl Dichter <carl.dich...@gmail.com> wrote:
> Is anyone using RobotFramework with a Test Case Management tool.
>
> We like the features of TestTrack TCM www.seapine.com/tttcm.html but I
Why not use tags for this purpose instead?
Cheers,
.peke
I'm trying to remember, I made that choice a while ago... I think it
was because:
- our TestTrack test cases didn't have 'short names', only a longer
description, so I figured I might as well use the test case number in
the robot test name since I wanted something short and unique.
Sometimes the names have an additional short description in the form:
TCnnn: <short description>.
- It would have required adding a [tags] line to every test, whereas
the majority would not otherwise need test-specific tags (most of my
other tags are done at the suite level)
- if tags were used, you'd be forced to always use --tagstatcombine or
--tagstatexclude because every test would create a unique tag.
I can still pick and choose tests to run with -t tcnnnn* if needed, so
overall I don't think I really lost anything by not going with tags in
this case. Tags are still used for 'performance', 'smoke', component
areas, etc., because then tags apply to groups of tests.
Chris
On Jan 14, 6:53 pm, Pekka Klärck <pekka.kla...@gmail.com> wrote:
> 2010/1/14 Chris Prinos <chrispri...@gmail.com>:
Kris
#!/usr/bin/python
# Convert a Robot Framework output.xml (given as the first command line
# argument) into a TestLink results XML printed on stdout.
import sys
import re
from datetime import datetime
from lxml import etree

out = etree.Element('results')
x = etree.parse(sys.argv[1])
tests = x.xpath('//suite/test')
for test in tests:
    # Use the SLF-nnn prefix of the test name as the TestLink external id.
    n = test.attrib['name']
    match = re.match(r'^(SLF-\d*)\s.', n)
    if match:
        n = match.group(1)
    status = test.find('status')
    s = status.attrib['status']
    ts = status.attrib['starttime']
    ts = datetime.strptime(ts, '%Y%m%d %H:%M:%S.%f')
    ts = ts.strftime('%Y-%m-%d %H:%M:%S')
    t = status.text
    e_tc = etree.Element('testcase', external_id=n)
    e_tester = etree.Element('tester')
    e_tester.text = 'robot'
    e_tc.append(e_tester)
    e_ts = etree.Element('timestamp')
    e_ts.text = ts
    e_tc.append(e_ts)
    e_result = etree.Element('result')
    e_result.text = s[0].lower()  # PASS -> 'p', FAIL -> 'f'
    e_tc.append(e_result)
    if t:
        e_notes = etree.Element('notes')
        e_notes.text = t
        e_tc.append(e_notes)
    out.append(e_tc)

print(etree.tostring(out, pretty_print=True))
--
Radek
What we've done in our TestLink integration is quite similar to
Radek's solution, with some differences. I'm using xslt to make the
transformation, and I'm putting the TestLink ids in tags instead.
For each RF test case, we create a corresponding TestLink test case,
and then tag the RF test case with "testlink_PRE-1234" where PRE-1234
is the TestLink id, including prefix.
We've also created a specific user in TestLink (called
"robotframework") for tests run from RF.
Then after each test run we just run the output.xml through an xslt
parser (Saxon in our case) along with the stylesheet below.
Unfortunately, the last step is still manual: open TestLink, go to
Execute, click on a test case, and choose to import the XML file.
I, too, want to move towards using TestLink's web services, but this
hasn't happened yet either. I've also gotten some good tips from Pekka
on how to use the internal RF API to post-process the results. Maybe
I'll get around to doing the long-planned improvements soon...
Cheers,
Magnus
==================================================================
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <results>
      <xsl:for-each select="//test">
        <xsl:if test="starts-with(tags/tag[starts-with(text(),'testlink')],'testlink')">
          <testcase external_id="{substring-after(tags/tag[starts-with(text(),'testlink')],'testlink_')}">
            <tester>robotframework</tester>
            <xsl:choose>
              <xsl:when test="status/@status ='PASS'">
                <result>p</result>
                <notes>Executed automatically using Robot Framework.</notes>
              </xsl:when>
              <xsl:otherwise>
                <result>f</result>
                <notes><xsl:value-of select="status"/> - Executed automatically using Robot Framework.</notes>
              </xsl:otherwise>
            </xsl:choose>
          </testcase>
        </xsl:if>
      </xsl:for-each>
    </results>
  </xsl:template>
</xsl:stylesheet>
==================================================================
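By the way, if someone prefers to drive the transformation from Python
instead of Saxon, lxml can apply the stylesheet too. A minimal sketch,
with placeholder file names (lxml only supports XSLT 1.0, but the
stylesheet above only uses 1.0 features, so it should still work):

from lxml import etree

# Apply the stylesheet above to a Robot Framework output.xml.
# File names are placeholders; adjust to your environment.
transform = etree.XSLT(etree.parse('robot_to_testlink.xsl'))
result = transform(etree.parse('output.xml'))
with open('testlink_results.xml', 'wb') as f:
    f.write(etree.tostring(result, pretty_print=True))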
You're right, this will probably be slower, but the code is very
readable (as opposed to my XSLT).
To me, post-processing seems a lot simpler than having listeners run
at the end of each test case.
Radek, I assume that the build number and browser name are information
that you pass into the RF test execution by command-line variables or
variable files, right?
Then, why don't you just pass the same values into the post-
processing? (if you trigger it from your runner script) I think that
beats having it in tags, actually, unless you want it to be clearly
visible in the RF report as well.
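For example, a post-processing script along the lines of Radek's could
take those values on its command line and put them into the notes; a
small sketch (the invocation and field layout are just illustrative):

import sys
from lxml import etree

# Hypothetical invocation from a runner script:
#   python publish_results.py output.xml 1.2.345 firefox
output_xml, build, browser = sys.argv[1:4]

results = etree.Element('results')
for test in etree.parse(output_xml).xpath('//suite/test'):
    status = test.find('status')
    tc = etree.SubElement(results, 'testcase', external_id=test.attrib['name'])
    etree.SubElement(tc, 'result').text = status.attrib['status'][0].lower()
    # Record the values passed in from the runner instead of using tags.
    etree.SubElement(tc, 'notes').text = 'build %s, browser %s' % (build, browser)

print(etree.tostring(results, pretty_print=True))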
I agree that this discussion is really interesting, and I would really
like us to conclude the things said here (when we're done) and put it
somewhere more easily reachable (if not as a standard part of RF).
Tell you what, I'll start by creating a page here at the user-group,
and then we'll see where it goes from there. I don't have much time to
spend on this, unfortunately, so I'll need help in getting the right
things on the page.
Cheers,
Magnus
On Apr 8, 11:23 pm, Pekka Klärck <pekka.kla...@gmail.com> wrote:
> Hi all,
>
> The discussion related to integrating Robot Framework with different
> test management tools is really interesting. If it's possible to
> create generic solutions, it would be great to have them available
> somewhere with adequate documentation. It's possible to add them into
> the main project as new supporting tools, or alternatively we can
> start a new project to collect similar smallish tools.
>
> 2010/4/8 Radek <radek...@gmail.com>:
Please help me in filling this page with interesting and useful tips
on how to integrate RF with TCMs.
Cheers,
Magnus
Yes, I currently pass the browser name by command-line variable and I
planned the same for the build number.
I wanted to have the results in TestLink as soon as possible to be
able to track them. That's why I picked the listener, but it can be
more error-prone. I might move to post-processing because the import
can be repeated in case some errors occur.
I'm having trouble with TestLink's XMLRPC in 1.9beta - it's buggy and
does not work at the moment.
--
Radek
I have personally tried the TestLink API. There is already sample
code in Python. I have tested it on TestLink version 1.8.5 and on
1.9.4 beta.
What I miss is how to actually use the ${TEST_STATUS} or "Run Keyword
If Test Failed" in every test case teardown and accordingly trigger
the TestLink API script to update the status of the test case as
"pass" or "fail".
Probably with "http://robotframework.googlecode.com/svn/tags/
robotframework-2.1.3/doc/libraries/BuiltIn.html#Run Keyword If"
For every test case in RF you would also need to pass the test case ID
which is the actual test case ID from TestLink.
An example from the TestLink API client:

def reportTCResult(self, tcid, tpid, buildid, status, notes):
    data = {"devKey": self.devKey, "testcaseid": tcid,
            "testplanid": tpid, "buildid": buildid,
            "status": status, "notes": notes}
    return self.server.tl.reportTCResult(data)

result = client.reportTCResult(1111, 2222, 33, "p", "executed")
So, what I do is create the buildid from the timestamp of when
RF runs and pass it to reportTCResult. The testcaseid is passed
explicitly for every test case, so one needs to know the internal
id of the test case from TestLink; the same goes for testplanid.
You could also maintain the IDs as variables in a resource file that
is used specifically for TestLink stuff.
Note that the above code is written for TestLink 1.8.5, which is why
platformid is not present. TestLink version 1.9.4 requires
platformid.
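As a rough sketch, a small keyword library could wrap that call and be
invoked from a test teardown with ${TEST_STATUS}; the class and keyword
names below are made up, and only the tl.reportTCResult call is from
the actual API:

import xmlrpclib  # xmlrpc.client in Python 3

class TestLinkReporter(object):
    """Hypothetical RF library wrapping the reportTCResult call shown above."""

    def __init__(self, url, devkey):
        self.server = xmlrpclib.ServerProxy(url)
        self.devkey = devkey

    def report_result_to_testlink(self, tcid, tpid, buildid, status, notes=''):
        # Map Robot Framework's PASS/FAIL to TestLink's one-letter codes.
        code = 'p' if status == 'PASS' else 'f'
        data = {'devKey': self.devkey, 'testcaseid': int(tcid),
                'testplanid': int(tpid), 'buildid': int(buildid),
                'status': code, 'notes': notes}
        return self.server.tl.reportTCResult(data)

A test teardown could then call Report Result To Testlink with the ids
from a resource file plus ${TEST_STATUS} and ${TEST_MESSAGE}, or via
"Run Keyword If Test Failed" as mentioned above.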
I will play around a bit more this weekend :)
On Apr 9, 10:08 am, Magnus <magnus.smedb...@gmail.com> wrote:
> Ok, the ball is now in motion...
> I wrote a start for a page herehttp://groups.google.com/group/robotframework-users/web/integrating-r...
XSLT is a pretty horrible language for anything non-trivial. The code
you get when using a good XML parser such as ElementTree or lxml is
much better.
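As an illustration, the tag-based lookup from Magnus's stylesheet would
look roughly like this with lxml (a sketch following the same
structure, not a tested tool):

from lxml import etree

# Collect (TestLink id, status, message) for every test tagged testlink_<id>.
results = []
for test in etree.parse('output.xml').xpath('//test'):
    tags = test.xpath("tags/tag[starts-with(text(), 'testlink_')]/text()")
    if not tags:
        continue
    external_id = tags[0].split('testlink_', 1)[1]
    status = test.find('status')
    results.append((external_id, status.attrib['status'], status.text or ''))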
Magnus:
> I wrote a start for a page here
> http://groups.google.com/group/robotframework-users/web/integrating-rf-with-tcm-tools
> You should also be able to find it by clicking on pages on the left
> hand side menu of any discussion (unless you're reading this as an
> email of course).
Cool! This will also give practical knowledge about the usefulness of
Pages in Google Groups in general.
ambi:
> What I miss is how to actually use the ${TEST_STATUS} or "Run Keyword
> If Test Failed" in every test case teardown and accordingly trigger
> the TestLink API script to update the status of the test case as
> "pass" or "fail".
> Probably with "http://robotframework.googlecode.com/svn/tags/
> robotframework-2.1.3/doc/libraries/BuiltIn.html#Run Keyword If"
If you want to do updates at run time, it's probably better to use the
listener interface:
http://robotframework.googlecode.com/svn/tags/robotframework-2.1.3/doc/userguide/RobotFrameworkUserGuide.html#using-listener-interface
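A minimal sketch of such a listener (using listener interface version
2; the actual reporting call is only a placeholder):

class TestLinkListener(object):
    """Reports each test's result at run time instead of post-processing."""
    ROBOT_LISTENER_API_VERSION = 2

    def end_test(self, name, attrs):
        # attrs contains, among others, 'status' ('PASS'/'FAIL') and 'message'.
        status = attrs['status']
        message = attrs.get('message', '')
        # Placeholder: call your TestLink client here, e.g. reportTCResult(...).
        print('%s: %s %s' % (name, status, message))

It is taken into use with the --listener option when starting test
execution.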
About using tags for IDs:
One reason why this approach might be a good idea is that you
can create links from tags to external systems:
http://robotframework.googlecode.com/svn/tags/robotframework-2.1.3/doc/userguide/RobotFrameworkUserGuide.html#creating-links-from-tag-names
I'd be very interested to learn more about how you are using RF with
SpiraTeam. What exactly does it do for you? Are you storing test cases
in SpiraTeam, or just using SpiraTeam to log results?
Thanks,
Mark
We don't use SpiraTeam but the simpler SpiraTest, although since
SpiraTeam contains the features of SpiraTest it should work the same.
We are using the SOAP API provided in SpiraTest to add "runs" to
existing test cases, which includes pass/fail status and detailed
failure information.
Regards,
Chris
My intention was to keep the test case names in RF the same as in
TestLink. Since it is possible to retrieve the IDs based on the
test case name, I don't need anything special in RF. So no extra tags.
That's the way I learned it when I started to work: format test case
names in a specific way, x_y_z_01, where x, y, z are (sub)domain names
(you can have more or fewer levels if needed). That way you get a
folder structure (suite) like \x\y\z. All this comes back in RF and
TestLink.
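For reference, looking up the internal id from a test case name over
the XML-RPC API could look something like this (a sketch; the server
URL is a placeholder and the exact parameters should be checked against
your TestLink version):

import xmlrpclib  # xmlrpc.client in Python 3

server = xmlrpclib.ServerProxy('http://testlink.example.com/lib/api/xmlrpc.php')

def get_testcase_id(devkey, name):
    # tl.getTestCaseIDByName returns the matching test case(s) with internal ids.
    return server.tl.getTestCaseIDByName({'devKey': devkey, 'testcasename': name})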
I will see if I can add some more info on this and maybe create a
separate page in case you want to use IronPython instead of Python
with RF.
But it will be for the coming months ... still some other libraries to
write to interface with my test object before I even need to update it
in TestLink ...
Initially I was thinking about post-processing, but an on-the-fly
update might be more useful for monitoring a long run ...