Failing a test from a Listener


Aviram

Sep 12, 2006, 7:34:57 AM
to testng...@googlegroups.com
I am trying to fail a test from a listener, and for some reason it doesn't work.
When I call iTestResult.setStatus(ITestResult.FAILURE);
it does nothing; in the report the test is still passed and green.
If I call setThrowable(new Exception("BLAH"));
I see it in the report, but the test still passes and is green.
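
Roughly what the listener does (a simplified sketch; the class name is just for illustration):

    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    // Simplified version of my listener; the real one also scans the log.
    public class FailingListener extends TestListenerAdapter {
        @Override
        public void onTestSuccess(ITestResult result) {
            // Neither call changes the status shown in the report.
            result.setStatus(ITestResult.FAILURE);
            result.setThrowable(new Exception("BLAH"));
        }
    }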

What am I doing wrong?

Alexandru Popescu

Sep 12, 2006, 8:33:12 AM
to testng...@googlegroups.com
Aviram, you shouldn't try this :-). A listener/reporter is meant to observe
the real results, not to override them. TestNG makes sure it sets
the real flags when the test is run.

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator

Aviram

Sep 12, 2006, 9:06:14 AM
to testng...@googlegroups.com
So I'm in a bit of a problem.
I have tests that run; if they pass, I look in the log file for interesting messages, and if I see any I want to fail the test.

Can you think of a way to do it?

Adding a call to a check in every test is not an option; there are too many tests...

B.t.w., what use do the setter methods in ITestResult have, only in your own code?



Alexandru Popescu

Sep 12, 2006, 10:28:03 AM
to testng...@googlegroups.com
On 9/12/06, Aviram <testng...@opensymphony.com> wrote:
>
> So I'm in a bit of a problem.
> I have tests that run; if they pass, I look in the log file for interesting messages, and if I see any I want to fail the test.
>
> Can you think of a way to do it?
>
> Adding a call to a check in every test is not an option; there are too many tests...

Unfortunately, I don't understand what the problem is. Can you give more detail?

>
> B.t.w., what use do the setter methods in ITestResult have, only in your own code?

From the perspective of listeners/reporters, ITestResult should be used
only in read-only mode. Maybe in the future we will separate this
so that things are clearer.

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator


Aviram

Sep 12, 2006, 11:02:57 AM
to testng...@googlegroups.com
A test passes (no exceptions).
During that test, messages are logged using a logger that was created by something I initialized in @BeforeSuite.

Now for the problem:
even though the test passed, I have some log messages which indicate that something went wrong, say an exception that never reached TestNG but is still interesting.

I want to fail the test in that case.

That means I have to check the log for each test and fail it if I find anything. Any ideas how to do it?
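
For what it's worth, the capturing part could be sketched like this with java.util.logging (illustrative only; the real logger is the one set up in @BeforeSuite):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.LogRecord;

    // Collects WARNING-or-worse records emitted while a test runs.
    public class CapturingHandler extends Handler {
        private final List<LogRecord> suspicious = new ArrayList<LogRecord>();

        @Override
        public void publish(LogRecord record) {
            if (record.getLevel().intValue() >= Level.WARNING.intValue()) {
                suspicious.add(record);
            }
        }

        // Returns what was captured and resets for the next test.
        public List<LogRecord> drain() {
            List<LogRecord> copy = new ArrayList<LogRecord>(suspicious);
            suspicious.clear();
            return copy;
        }

        @Override public void flush() {}
        @Override public void close() {}
    }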



Cédric Beust ♔

Sep 12, 2006, 11:48:34 AM
to testng...@googlegroups.com
You could have another test, verify(), that depends on your first test method. After the test is run, verify() will be invoked; it will check the logs and fail if they don't contain the right thing.
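
Something like this (a minimal sketch; logIsClean() is a placeholder for your log check):

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class SomeTest {
        @Test
        public void testSomething() {
            // ... the actual test ...
        }

        // Invoked only after testSomething has passed.
        @Test(dependsOnMethods = "testSomething")
        public void verify() {
            Assert.assertTrue(logIsClean(), "suspicious messages found in the log");
        }

        private boolean logIsClean() {
            return true; // placeholder: parse the log produced by the test here
        }
    }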

Would this work?

--
Cedric



Aviram

Sep 12, 2006, 11:54:35 AM
to testng...@googlegroups.com
That makes me add verify() to all my tests,
and I've got many, many tests.


Cédric Beust ♔

Sep 12, 2006, 11:57:41 AM
to testng...@googlegroups.com
On 9/12/06, Aviram <testng...@opensymphony.com> wrote:

That makes me add verify() to all my tests,
and I've got many, many tests.

Ok.

Can the methods that output these logs verify that they're adding the right thing? Ideally, you really want to fail right then and not later...

--
Cédric

Aviram

Sep 12, 2006, 12:21:48 PM
to testng...@googlegroups.com
In onTestStart I start listening to the log for the current test,
and in onTestSuccess/onTestFailure I check what I got.

It's working (adding to the Reporter and all that), except that I can't fail the tests.

Actually, in what you output at the end of the run they show as failed, but not in the report.
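
Sketched, it looks something like this (illustrative only, reusing the CapturingHandler sketch from my earlier message):

    import java.util.logging.LogRecord;
    import java.util.logging.Logger;
    import org.testng.ITestResult;
    import org.testng.Reporter;
    import org.testng.TestListenerAdapter;

    public class LogWatchListener extends TestListenerAdapter {
        private final CapturingHandler handler = new CapturingHandler();

        @Override
        public void onTestStart(ITestResult result) {
            // Start watching the log for this test (root logger here).
            Logger.getLogger("").addHandler(handler);
        }

        @Override
        public void onTestSuccess(ITestResult result) {
            Logger.getLogger("").removeHandler(handler);
            for (LogRecord record : handler.drain()) {
                // Visible in the report output, but the test stays green.
                Reporter.log("suspicious: " + record.getMessage());
            }
        }
    }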



Aviram

Sep 13, 2006, 11:58:35 AM
to testng...@googlegroups.com
Sorry for bumping.

So I take it there is no way to do such a thing?
(Something that runs right after a test and can fail the test.)



Cédric Beust ♔

Sep 13, 2006, 2:29:20 PM
to testng...@googlegroups.com
Not right now, no, but I don't think it would be very hard to change the current behavior so that TestNG will check if the listener altered the ITestResult object, and if it did, just skip the rest of the logic...

--
Cedric

Alexandru Popescu

Sep 13, 2006, 4:31:46 PM
to testng...@googlegroups.com
On 9/13/06, Cédric Beust ♔ <cbe...@google.com> wrote:
> Not right now, no, but I don't think it would be very hard to change the
> current behavior so that TestNG will check if the listener altered the
> ITestResult object, and if it did, just skip the rest of the logic...
>

For me this behavior just doesn't make sense. I would rather refine
the ITestResult so that the listeners are receiving a read-only object
(things will be much more clear).

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator

Cédric Beust ♔

Sep 13, 2006, 4:34:15 PM
to testng...@googlegroups.com
On 9/13/06, Alexandru Popescu <the.mindstor...@gmail.com> wrote:
On 9/13/06, Cédric Beust ♔ <cbe...@google.com> wrote:
> Not right now, no, but I don't think it would be very hard to change the
> current behavior so that TestNG will check if the listener altered the
> ITestResult object, and if it did, just skip the rest of the logic...
>

For me this behavior just doesn't make sense. I would rather refine
the ITestResult so that the listeners are receiving a read-only object
(things will be much more clear).

Sure, that's a possibility too, but don't you agree that a mechanism allowing for what Aviram is requesting would be nice?  And cheaper than AOP...

--
Cédric

Alexandru Popescu

Sep 13, 2006, 4:39:20 PM
to testng...@googlegroups.com
On 9/13/06, Cédric Beust ♔ <cbe...@google.com> wrote:
>
>
>
> On 9/13/06, Alexandru Popescu
> <the.mindstor...@gmail.com> wrote:
> > On 9/13/06, Cédric Beust ♔ <cbe...@google.com> wrote:
> > > Not right now, no, but I don't think it would be very hard to change the
> > > current behavior so that TestNG will check if the listener altered the
> > > ITestResult object, and if it did, just skip the rest of the logic...
> > >
> >
> > For me this behavior just doesn't make sense. I would rather refine
> > the ITestResult so that the listeners are receiving a read-only object
> > (things will be much more clear).
>
>
> Sure, that's a possibility too, but don't you agree that a mechanism
> allowing for what Aviram is requesting would be nice? And cheaper than
> AOP...
>

I don't know the details of the scenario, and I must confess that at
first sight I get the feeling that failing a test this way is
the wrong way of doing it.

Considering that I am checking that other piece of code anyway, why not
do it in the real test? How am I going to explain it to the other
developers? They see no assertions in the real test, but they still
see a failure. Another option may be some @AfterMethod or
something similar that does these checks.

Unfortunately, I haven't seen the real code, so I may be wrong.

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator


Cédric Beust ♔

Sep 13, 2006, 4:43:21 PM
to testng...@googlegroups.com
On 9/13/06, Alexandru Popescu <the.mindstor...@gmail.com> wrote:

> Sure, that's a possibility too, but don't you agree that a mechanism
> allowing for what Aviram is requesting would be nice?  And cheaper than
> AOP...
>

I don't know the details of the scenario, and I must confess that at
first sight I get the feeling that failing a test this way is
the wrong way of doing it.

Considering that I am checking that other piece of code anyway, why not
do it in the real test? How am I going to explain it to the other
developers? They see no assertions in the real test, but they still
see a failure. Another option may be some @AfterMethod or
something similar that does these checks.

Unfortunately, I haven't seen the real code, so I may be wrong.

Fair enough.  Let's hear more from the original poster...

--
Cédric

Aviram

Sep 14, 2006, 2:43:38 AM
to testng...@googlegroups.com
Well, the first idea was that even if a test passed (which just means no exception got thrown by it), some exceptions might have happened during the process that are interesting even if they were caught; that is why the idea of checking the log produced while each test was running came up.
The thing is that adding verify and setup functions to each test is obviously a wrong idea, so we thought of using a listener for that.
Now I see I misunderstood the concept of the listener...
I thought of a before/after-method thing, but I had two problems. One is technical, and I might just not know how to do it: how to create one before/after method for all the tests.
The second is how a person looking at the report will know which test failed; he will see that the @AfterMethod failed, and then he needs to look up which method it ran after.

If you come up with a mechanism that allows me to fail a test from the @AfterMethod, that could be the answer.
(That's if it's actually possible to have one @Before/@AfterMethod for everything; I'm thinking of alwaysRun but haven't tried that yet.)



Cédric Beust ♔

Sep 14, 2006, 11:44:29 AM
to testng...@googlegroups.com
Hi Aviram,

On 9/13/06, Aviram <testng...@opensymphony.com> wrote:

Well, the first idea was that even if a test passed (which just means no exception got thrown by it), some exceptions might have happened during the process that are interesting even if they were caught; that is why the idea of checking the log produced while each test was running came up.

I still have problems with this idea.

You're talking about a test that can fail in two ways, but that second way will still not cause that test to fail.  This seems very counterintuitive to me. 

Why can't that test method verify the log itself?  Or catch the exception itself? 

The second is how a person looking at the report will know which test failed; he will see that the @AfterMethod failed, and then he needs to look up which method it ran after.

That's another reason why you should fail in the very method that fails, not in a listener that is applied to all test methods...

--
Cédric

Alexandru Popescu

Sep 14, 2006, 1:26:48 PM
to testng...@googlegroups.com
On 9/14/06, Aviram <testng...@opensymphony.com> wrote:
>
> Well, the first idea was that even if a test passed (which just means no exception got thrown by it), some exceptions might have happened during the process that are interesting even if they were caught; that is why the idea of checking the log produced while each test was running came up.

This is somehow very interesting: it pretty much looks like your
product will continue to run in real life, but somehow during testing
you want to make it fail. Unfortunately, I have to say that this looks
like a real problem with the system: if it is a failure (as you want
to point out with your tests) but your system is hiding it, then I
would say this is a design problem.

> The thing is that adding verify and setup functions to each test is obviously a wrong idea, so we thought of using a listener for that.
> Now I see I misunderstood the concept of the listener...
> I thought of a before/after-method thing, but I had two problems. One is technical, and I might just not know how to do it: how to create one before/after method for all the tests.
> The second is how a person looking at the report will know which test failed; he will see that the @AfterMethod failed, and then he needs to look up which method it ran after.
>
> If you come up with a mechanism that allows me to fail a test from the @AfterMethod, that could be the answer.
> (That's if it's actually possible to have one @Before/@AfterMethod for everything; I'm thinking of alwaysRun but haven't tried that yet.)
>

Now, answering whether we can help with this scenario: I guess my answer
would be no for the moment. Maybe an approach for you would be to have
these test classes form a hierarchy and have an @AfterMethod checking
the log in the parent class.
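
A minimal sketch of that hierarchy idea (names are invented, and note Aviram's concern still applies: a failure here is reported against the configuration method rather than the test method itself):

    import org.testng.Assert;
    import org.testng.annotations.AfterMethod;

    // Parent class for all test classes; subclasses inherit the check.
    public abstract class LogCheckedTestBase {

        // Runs after every test method in every subclass.
        @AfterMethod(alwaysRun = true)
        public void checkLog() {
            Assert.assertTrue(logIsClean(), "suspicious messages found in the log");
        }

        // Placeholder: parse the log produced during the test.
        protected abstract boolean logIsClean();
    }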

In the past I think I have suggested a new feature that would allow
you to plug additional @Configuration methods into your tests without
really defining them in place.

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator


Aviram

Sep 17, 2006, 2:55:41 AM
to testng...@googlegroups.com
Well, the error is not so critical that the application should fail, but it is important for the developers to see.

About how they will know: I use the Reporter to add info to the method.

The thing is that the same verification is done for all the tests, so adding it to each test doesn't seem right to me.



Alexandru Popescu

Sep 17, 2006, 6:21:11 AM
to testng...@googlegroups.com
On 9/17/06, Aviram <testng...@opensymphony.com> wrote:
>
> Well, the error is not so critical that the application should fail,

Still, you want a test to fail... and this is the part I don't understand.

> but it is important for the developers to see.
>

I agree with this part, and a reporter will do this.

> About how they will know: I use the Reporter to add info to the method.
>
> The thing is that the same verification is done for all the tests, so adding it to each test doesn't seem right to me.

I have suggested a possible improved approach for implementing the
annotation transformers, which would help you achieve this scenario
(even if I completely disagree with it ;-]).
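
For reference, the existing transformer hook looks roughly like this (a generic sketch; whether it is available depends on your TestNG version, and the timeout change is only an example):

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Method;
    import org.testng.IAnnotationTransformer;
    import org.testng.annotations.ITestAnnotation;

    // Rewrites every @Test annotation at runtime; registered via
    // -listener on the command line or in testng.xml.
    public class GlobalTransformer implements IAnnotationTransformer {
        public void transform(ITestAnnotation annotation, Class testClass,
                              Constructor testConstructor, Method testMethod) {
            annotation.setTimeOut(60 * 1000); // example: a global timeout
        }
    }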

./alex
--
.w( the_mindstorm )p.
TestNG co-founder
EclipseTestNG Creator


Aviram

Sep 17, 2006, 9:10:37 AM
to testng...@googlegroups.com
Well, you've got a good point...
I'll start with just making the report for now and see if it's a good solution.

I still think there should be a way to fail a test from outside the test itself.
For example, that verification needs to be done for all tests; adding a call to each test is not such a good idea, and it should be something more generic.



BK Lau

Sep 17, 2006, 11:47:53 PM
to testng...@googlegroups.com
Folks:
I encountered this situation last year while testing Eclipse.
What happened was that Eclipse generated its own OSGi log, mostly plugin-loading messages. So part of the test was to look at the log before and after a test executed and parse it to see if there was anything "interesting" worth raising an alarm about. We still passed/failed a test according to its outcome within the test, but we logged a warning if there was a suspicious log entry lurking after the test.
I think a better method is to introduce pre/post-method hooks that can be associated with each test method.
So you could use global hooks that apply to all methods, or specific hooks for specific methods.
Such a hook could just be a Thread subclass with the suite info and test method passed to it as arguments. You could register as many hooks as you like for a test method. Each hook should run independently of the others, and each could do some "pre-condition" and post-condition "last rite" verification, processing, etc. (including setting pass/fail bits?).

So the @Test annotation for a method could have new attribute values along the lines of
pre-hooks=.....
post-hooks=....
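
A hypothetical shape for such a hook (this is only the proposal sketched out, not an existing TestNG API; every name is invented):

    import java.lang.reflect.Method;

    // Hypothetical hook interface; hooks would be registered per method
    // or globally, and several could be attached to one test method.
    public interface TestMethodHook {
        void before(Method testMethod);  // pre-condition checks
        void after(Method testMethod);   // post-condition "last rite" checks
    }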

Aviram, I think failing a test based on parsing the log is probably too confusing, IMO.
Logging a warning message through TestNG's Reporter.log() might be the correct approach to consider.

What do you folks think?

-BK-



Aviram

Sep 18, 2006, 7:24:17 AM
to testng...@googlegroups.com
If the test passed (green), nobody will go looking for warnings.


BK Lau

Sep 18, 2006, 11:07:49 AM
to testng...@googlegroups.com
Hi folks:

This morning I was wondering whether Aviram's issue could have wider use, or could be recast into a more generic usage context.
In fact, I just found one case which I would like to share here.

First, some terminology for ease of discussion:

Def #1: Test contents - the unit tests and their associated helper libraries, which are used to test the "test substrate".
Def #2: Test substrate - the software component(s) that we are testing against.

In most unit tests, it is taken for granted that we can easily write tests directly against the test substrate, usually just a Java class, for example.
But this is not true all the time.

Take the example below, where the tests call component A, which in turn calls component B:

Test contents ----<<tests>>---> componentA -----> componentB

If we are testing component A directly, then method PASS/FAIL with respect to component A is fairly obvious.
Next take component B. If component B can only be tested indirectly, by invoking component A as a proxy (since component B could be very complex or intractable), then we might actually look for side effects (a log, for example) produced by component B, on top of the fact that the method call into component A (the proxy) undoubtedly has to pass first.

So in the above scenario, a PASS for B mandates that A passed first, followed by B passing.
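
A tiny sketch of that indirect check (all types here are invented for illustration):

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class ProxyTest {
        static class ComponentB {
            boolean errorLogged;         // observable side effect
            void run() { /* complex, reachable only through A */ }
        }

        static class ComponentA {
            final ComponentB b = new ComponentB();
            boolean doWork() { b.run(); return true; }
        }

        @Test
        public void passInAThenCheckB() {
            ComponentA a = new ComponentA();
            Assert.assertTrue(a.doWork());        // A must pass first
            Assert.assertFalse(a.b.errorLogged);  // then check B's side effect
        }
    }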

-BK-


