Random Tests Failing. (I know... please read)


Steve Messer

Sep 28, 2015, 10:26:50 PM
to SpecFlow
The SpecFlow output even says that random tests are failing.

I have read all the posts about crosstalk between tests.
I have looked for patterns. I have grouped different tests together, and different tests fail randomly while tests that were failing randomly start passing.
I create my resources using the BeforeScenario hook. There are no web browser or database dependencies.

With about 162 tests, they never fail, regardless of which subset of tests I run.
Once I have 200 or more tests, one or two randomly fail nearly every time.

I can't give code examples as the code is part of a medical rules engine.

My code is ported straight from Cucumber/IronRuby to SpecFlow, meaning the tests were known to pass 100% of the time before SpecFlow.

I am looking for ideas of how to narrow this down.

Background:
1. All my tests use the same step definitions.
1a. Rule data in tables.
1b. Claim data in tables.
2. Run the rule data and claim data against the rule engine code.
3. Compare the rule engine output with the expected results (table).

Step 3 is where it randomly falls apart.
For example, I expect the rule engine output to have 3 flags (the rules engine creates flags).
Sometimes I get 3 flags, sometimes 2, 1, or none.

Wild guess: it would appear that either rule data or claim data is crossing between tests.
I can't imagine how, as everything is created in BeforeScenario.

Suggestions??? 

Dan Ryan

Sep 29, 2015, 3:25:49 AM
to SpecFlow
You are going to have some shared data amongst your tests somewhere (the dreaded shared fixture). If you really can't work out why, I would add lots of trace logs throughout the code. Tools such as log4net can assign the logger a context (see the logical thread context in log4net); you can use this feature to relate your trace logs back to the tests. After that, some logs should look suspicious (e.g. two rows returned from a database where you expected three) and help you reason about the problem.
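A sketch of how that might look with a SpecFlow hook and log4net (the hook class and property name are illustrative, and it assumes your pattern layout includes %property{scenario} so the title gets printed):

```csharp
using log4net;
using TechTalk.SpecFlow;

[Binding]
public class LoggingHooks
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingHooks));

    [BeforeScenario]
    public void TagLogsWithScenario()
    {
        // Stamp every log line on this logical thread with the scenario title,
        // so trace output can be related back to the test that produced it.
        LogicalThreadContext.Properties["scenario"] =
            ScenarioContext.Current.ScenarioInfo.Title;
        Log.InfoFormat("Starting scenario: {0}", ScenarioContext.Current.ScenarioInfo.Title);
    }
}
```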

Sam Holder

Sep 29, 2015, 3:30:48 AM
to specflow
I would start by looking at anything in your tests or supporting code which is static, as this can potentially be shared between multiple tests and could be the cause of the issue. Try to remove the static dependencies, or add logging around their access to check whether the tests are touching them in an unexpected way.
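For instance, if there were a hidden static cache somewhere (RuleCache and Rule here are purely hypothetical names), wrapping its access in logging would make cross-test usage visible:

```csharp
using System.Collections.Generic;
using log4net;

public static class RuleCache
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(RuleCache));
    private static readonly List<Rule> Rules = new List<Rule>();

    public static void Add(Rule rule)
    {
        // If the count is ever non-zero at the start of a scenario,
        // state is leaking from a previous test.
        Log.DebugFormat("RuleCache.Add; count before add = {0}", Rules.Count);
        Rules.Add(rule);
    }
}
```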


Dan Ryan

Sep 29, 2015, 4:57:09 AM
to SpecFlow
This is good advice. Related to this: if you are using a dependency injection container, try to avoid single-instance registrations, for the reasons above.
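For example, with Autofac as the container (RuleEngine is a made-up service name), the difference is one registration call:

```csharp
using Autofac;

var builder = new ContainerBuilder();

// Shared by every resolution — any state the instance accumulates
// leaks from one scenario into the next:
builder.RegisterType<RuleEngine>().SingleInstance();

// A fresh instance per resolution, so scenarios cannot share state:
builder.RegisterType<RuleEngine>().InstancePerDependency();
```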

Mark Levison

Sep 29, 2015, 6:23:52 AM
to spec...@googlegroups.com

While you're at it, consider putting loggers on any setters. Once the bug is found, consider eliminating the setters. Many hard-to-fathom problems come from misused setters.
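A logged setter might look like this (FlagCount and the logger field are illustrative):

```csharp
private int flagCount;

public int FlagCount
{
    get { return flagCount; }
    set
    {
        // An unexpected write between scenarios points straight
        // at carried-over state.
        Log.DebugFormat("FlagCount changing from {0} to {1}", flagCount, value);
        flagCount = value;
    }
}
```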

Cheers

Steve Messer

Sep 29, 2015, 11:44:54 AM
to SpecFlow
Thanks for the tips.

1. I have nothing static of my own. The only static items are generated by SpecFlow, one in each generated test (private static TechTalk.SpecFlow.ITestRunner testRunner;).
2. I am not using any DI (I had some extension methods, removed them; no change).
3. On setters: I use reflection to turn the untyped SpecFlow/Cucumber tables into strongly typed objects. Since a given table has columns that aren't always required, I don't know until run-time which columns will be provided. So, using the column name, I determine the type and set the object's public field. I don't believe this qualifies as a setter, if I understand your tip correctly. This is something I can't change.
4. I already use log4net, so I can try the logging tip.
5. This is really strange, as my step definitions are exactly as they were with Ruby and Cucumber, just converted to C# for SpecFlow.
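The conversion in point 3 might be sketched like this (names are illustrative, not the actual code; it assumes the target type has public fields named after the table columns):

```csharp
using System;
using TechTalk.SpecFlow;

static T FromRow<T>(TableRow row) where T : new()
{
    var obj = new T();
    foreach (var column in row.Keys)
    {
        // Optional columns may be absent from a given table.
        var field = typeof(T).GetField(column);
        if (field == null)
            continue;

        // Use the field's declared type to convert the raw cell text.
        field.SetValue(obj, Convert.ChangeType(row[column], field.FieldType));
    }
    return obj;
}
```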

Sam Holder

Sep 29, 2015, 12:05:59 PM
to specflow
Are your tests running in parallel? Or is there some state which is retained which is causing the tests to fail?

Mark Levison

Sep 29, 2015, 12:38:59 PM
to spec...@googlegroups.com

On setters: I wasn't thinking this was likely a SpecFlow problem. I assume that there is some state being carried over.

I assume that means a business or rules object has gotten into a strange state. I find that often happens because of setters.

Perhaps I should create an anti-setters campaign, just like Francesco did for anti-if.

Cheers
Mark

Steve Messer

Sep 29, 2015, 3:22:29 PM
to SpecFlow
I looked, and I don't know how to turn parallel execution on or off; I am using the default setup.

I am trying to find some state that may be retained, etc. No luck so far.

If one scenario runs at a time and BeforeScenario runs each time, I am out of ideas.
Logging "seems" to show that everything is as I expect it to be when passing the necessary data to the rule logic.


Steve Messer

Sep 29, 2015, 3:27:02 PM
to SpecFlow
PS: The rule logic is designed not to have any internal state. In production, the rule logic is used across many threads and works fine.

I need to put it away for a while, not getting anywhere. 

Steve Messer

Sep 30, 2015, 11:14:18 AM
to SpecFlow
Thanks for all the pointers. I finally figured out my test failure "randomness".

It wasn't a static or shared-code issue.
This probably won't help anyone else, but I will explain it just in case.

When I convert my table data into strongly typed objects, I sort them by 4 of the columns to ensure I always process the data in the same order, regardless of how it is ordered in the feature file table.

I had inadvertently used DateTime.Now instead of DateTime.Today (a porting error) as one of the sort keys.

This caused the order to be based on millisecond differences instead of whole-day differences.

So how fast the system converted the table rows into objects determined the order of the data, which influenced the outcome of the tests "randomly".
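In other words (illustrative code, not the actual rules engine; claims, MemberId, and EffectiveDate are made-up names), the difference between the two sort keys is resolution:

```csharp
using System;
using System.Linq;

// DateTime.Now is evaluated per row with sub-second precision, so the
// tie-breaker reflects how fast each row happened to be converted:
var buggy = claims.OrderBy(c => c.MemberId)
                  .ThenBy(c => c.EffectiveDate ?? DateTime.Now);    // porting error

// DateTime.Today is the same value for every row converted on the same
// day, so ties compare equal and the ordering is stable run after run:
var stable = claims.OrderBy(c => c.MemberId)
                   .ThenBy(c => c.EffectiveDate ?? DateTime.Today); // intended
```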
