Thanks for starting this discussion and submitting the issue. I thought
about this a bit more and realized there are some problems with this
idea in exactly this format. I added a comment to the issue explaining
my concerns and the same points are also below:
1) Different options may have been used when the logs were originally
generated and when the combined report is generated later. If, for
example, different --critical/--noncritical options are used, tests in
the logs don't fully match tests in the report. Even worse, options
like --include/--exclude can lead to a situation where a test in the
report doesn't even exist in a log file, or vice versa.
2) It is somewhat complicated to tell which log file matches which
output file. This is especially problematic if there are multiple
output files and the process should be fully automated. We would
probably need to come up with some convention (such as x-output.xml
matching y-log.html), but that means that everyone must adapt to this
convention.
3) If we anyway need to read the output files to be able to produce a
combined report, generating the log files isn't that big a task.
I also submitted another issue that is nearly identical to your idea
[1]. Individual log files per combined suite are also the core of that
idea, but this time those logs are generated at the same time as the
report. This resolves the above problems and also allows getting
individual outputs when you don't have earlier generated log files.
What do you think, is this new enhancement idea enough? Am I missing
some benefits that your approach has compared to it?
[1] http://code.google.com/p/robotframework/issues/detail?id=880
Cheers,
.peke
--
Agile Tester/Developer/Consultant :: http://eliga.fi
Lead Developer of Robot Framework :: http://robotframework.org
I appreciate the problems you have with my original proposal and think that your alternative would be OK with ONE BIG CAVEAT. I've added this comment to Issue 880:
The main issue I have with any re-generated log file is that to date the rebot process is not intelligent about embedded HTML references. All our in-house Robot Framework test engines have a feature where they automatically capture a screenshot of the SUT and embed this file as an HTML reference in the produced log.html file. Such references look like this:
<tr><td colspan="3"><a href="FAIL_Screen_1_280~01.png"><img src="FAIL_Screen_1_280~01.png" width="320px"></a></td></tr>
The captured FAIL_Screen_1_280~01.png file exists in the same folder as the original log.html and output.xml files.
If this issue's proposed solution is implemented, instead of re-using existing log.html files as I proposed in Issue 876, then rebot MUST be made smart enough to recognize these "in the same folder" HTML references and correct them if it creates a new log.html file in a different folder. For example, using the sample folder structure that I attached to Issue 876, if rebot was to produce say a log1.html file at the top BAT folder level, and if this log file contained an HTML reference to a captured screen shot like the one shown above that originally came from BAT/Devices/Dev_NCAS/log.html, then rebot should correct the HTML reference like this:
<tr><td colspan="3"><a href="BAT/Devices/Dev_NCAS/FAIL_Screen_1_280~01.png"><img src="BAT/Devices/Dev_NCAS/FAIL_Screen_1_280~01.png" width="320px"></a></td></tr>
Without this href=... and img src=... correction the rebot log files are not very useful to my team.
Thanks,
Martin
In the end the problem of big log files was fixed using a totally
different approach, and both the issue you submitted and the related
issue I submitted myself were WontFixed. In the new approach the log
is split totally transparently (except that you need to use the
--splitlog option) and it seems to work very well. You can read more
about it in the issue describing the functionality [1] and test it
with the soon-to-be-released beta 3.
[1] http://code.google.com/p/robotframework/issues/detail?id=898
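For reference, a minimal invocation might look like the following; the
output file names here are illustrative, not from the thread, and
--splitlog is the only new option involved:

```shell
# Split the generated log into lazily loaded parts instead of one
# huge log.html. File names below are examples only.
rebot --splitlog --log combined_log.html output1.xml output2.xml
```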
I'll be interested to try the new split log format in the Beta 3 release. Can you tell me if it correctly handles embedded images (e.g. screenshot on failure) so that the rebot-generated log files don't get the wrong URL path to the image files?
Thanks,
Martin
-----Original Message-----
From: pekka....@gmail.com [mailto:pekka....@gmail.com] On Behalf Of Pekka Klärck
Sent: Thursday, July 07, 2011 2:25 PM
To: Taylor, Martin
Cc: robotframework-users
B3 is out so go ahead and give it a try!
> Can you tell me if it correctly handles embedded images (e.g. screenshot on failure)
> so that the rebot-generated log files don't get the wrong URL path to the image files?
Could you explain this problem a bit more thoroughly? I'm not aware of
this kind of problem and don't think anything related to it has
changed. If there's a bug somewhere, hopefully it can be fixed.
I just didn't really understand the problem fully and even asked for
clarification in issue 880.
> In the original file, BAT/PC_WinXP_Pro_SP3/CAS_TS/log.html, pybot generates
> a relative path to embedded image files like this:
>
> <td class="msg"><td></td></tr><tr><td colspan="3"><a
> href="FAIL_Screen_1_254.png">
>
> <img src="FAIL_Screen_1_254.png"
> width="320px"></a><p>FAIL_Screen_1_254.png</td></tr>
>
> When we run rebot at the BAT folder level, to produce a consolidated report
> for all BAT runs on all platforms and variants of the product, the embedded
> screen file references are “dumbly copied” from the output.xml file to the
> newly generated log file. Since they’re relative paths, that were relative
> to lower-level folders (e.g. BAT/PC_WinXP_Pro_SP3/CAS_TS/), the embedded
> images no longer show correctly in the newly generated log files because
> they’re now treated as relative to a higher-level folder in the folder tree.
Now I understand the problem, and unfortunately I don't think this can
be fixed on Robot's side. Most teams that take screenshots or
otherwise produce external files also copy them to the folder where
the final log file is located when processing results. If we
automatically changed the paths, that approach would stop working.
Trying to detect which messages to manipulate would also be rather
tricky and error prone.
I think you have three solutions for your problem:
1) Copy the external files to the directory where you generate the log
file so that links work automatically.
2) Create a custom tool that manipulates the output.xml file and fixes
the links. This ought to be pretty easy because the tool only needs to
work in your context.
3) See whether a command line option could be added to Robot that
enables automatic path manipulation. I'm afraid this is a rather big
task, but I may be wrong. In any case, it's very unlikely that the NSN
sponsored development team has time to look at this in the foreseeable
future.