Thank you for your interest in iBUGS. I took a look at your scripts
and it seems like you've done nothing wrong. The thing is that some of
the tests in AspectJ fail if you don't have a working X Window setup.
Can you try re-running the tests with a running X server? If you're
working remotely on another machine, VNC might be a solution.
Nevertheless, even then there are usually only a few tests that fail
in pre-fix and not in post-fix. If you're trying to reproduce a
failure, you need to look for the tests specified in the <testsforfix>
tag in the file repository.xml. Not all bugs have this tag. To make
sure that a test for a fix actually exposes the problem, execute the
test in the post-fix version and make sure it does not fail.
Afterwards, copy the files that are relevant for the test from
post-fix to pre-fix and run the test again. It should now fail.
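A rough sketch of that check as shell commands. The fix id 12345 is a
placeholder, and the exact parameters the ant targets take are an
assumption based on your scripts:

  ant -DfixId=12345 -Dtag=post-fix runharnesstests   # the <testsforfix> test should pass
  # ... copy the test-relevant files from post-fix to pre-fix ...
  ant -DfixId=12345 -Dtag=pre-fix runharnesstests    # the same test should now fail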
I hope this information helps. Let me know if you have more questions.
And once again, thank you for your interest in iBUGS.
Regards,
Valentin
> Is there a simple (easily automated) way to copy the tests from
> post-fix to pre-fix?
I do have tools for that, but they are undocumented and hence I don't
include them. For AspectJ the heuristic is to copy all files in
"org.aspectj/modules/tests/" .
Does that help?
Regards,
Valentin
> I am confused.
Let's see if I can help you.
>
> I use the sequence below for each version (I run several).
>
> (* using proper parameters such as -DfixId=... -Dtag=pre-fix *)
> ant checkoutversion
> ant buildversion
> ant buildversion (* some versions build tests only after the second buildversion *)
> ant buildtests
> ant runharnesstests
Looks good.
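For reference, the same sequence as a small script. Whether every
target needs both parameters is an assumption on my part, and the fix
id is a placeholder:

  #!/bin/sh
  FIXID=12345
  for TAG in pre-fix post-fix; do
    ant -DfixId=$FIXID -Dtag=$TAG checkoutversion
    ant -DfixId=$FIXID -Dtag=$TAG buildversion
    ant -DfixId=$FIXID -Dtag=$TAG buildversion   # some versions need the second build
    ant -DfixId=$FIXID -Dtag=$TAG buildtests
    ant -DfixId=$FIXID -Dtag=$TAG runharnesstests
  done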
>
> The results don't match the results from repository.xml. I noted
> that the sizes of the test suites are different (as shown in the
> <pre/post-fix-testcases> tags in repository.xml). Is the result from
> repository.xml counting both harness and unit tests? Note I am only
> considering harness tests.
Yes, the results from repository.xml also include unit tests.
>
> As for Valentin's reply: I am running the experiments on a machine
> running Ubuntu 9.04 with GNOME. I ran the command top and Xorg shows
> up as running (I guess this is the window manager). I tried to run
> startx but I got the message below.
>
> =======================
> X: warning; process set to priority -1 instead of requested priority 0
>
> Fatal server error:
> Server is already active for display 0
>
> If this server is no longer running, remove /tmp/.X0-lock
> and start again.
>
> …
> =======================
>
> The value of the variable DISPLAY is “:0.0”. I am guessing that this
> may not be the problem. Are there any other configurations that I
> need to be aware of?
Uhm, it looks like something's wrong with your X setup. This has
nothing to do with executing AspectJ tests. Can you execute other
tools that use X from the shell that you're using to execute the
tests? It is usually not possible to do this if you're logged in using
ssh.
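One quick way to check is a sketch like this (xdpyinfo is a standard
X utility; Xvfb is an optional headless fallback, not something iBUGS
requires):

  # Can this shell reach an X server?
  echo $DISPLAY
  xdpyinfo >/dev/null 2>&1 && echo "X is reachable" || echo "no X connection"

  # Headless fallback: start a virtual X server and point DISPLAY at it.
  Xvfb :1 -screen 0 1024x768x16 &
  export DISPLAY=:1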
Regards,
Valentin
Sorry for the delay, but I was on holiday.
> I ran the harness tests with xterm now. I executed the last sequence
> of commands sent ten times for each version and noted that three of
> the versions (124808, 59895, 46280) changed their results across
> these executions.
> Is that OK?
Hmm, I've never witnessed this behavior. Could you let me know which
tests are changing their outcome?
Valentin
Hi!
This looks really weird. Before I try to reproduce this, can you further narrow down the tests that fail? One way might be to generate individual runs for all JUnit tests and compare the console output.
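As a starting point, a rough sketch of such a comparison at the
harness level, reusing the runharnesstests target from your sequence
(run counts and log names are placeholders):

  # Repeat the test run and diff the logs to spot tests whose
  # outcome changes between executions.
  for i in 1 2 3; do
    ant runharnesstests > run-$i.log 2>&1
  done
  diff run-1.log run-2.log
  diff run-2.log run-3.log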
Valentin