Hi all,
The option to rerun failed tests at the end is useful, but I don't see a way for it to properly report the results.
Expected Result: If I run 5 tests in the first round and 1 fails, and the failed test then passes on the rerun, the results should show all 5 tests as passing.
Actual Result: The results overwrite each other instead of appending to the previous results, so the report now only shows that 1 test passed and ignores the fact that 4 other tests passed during the previous run.
mvn -e verify -Pintegration-tests -Dcucumber.options="@rerun.txt"

@RunWith(Cucumber.class)
@CucumberOptions(format = {"rerun:rerun.txt", "com.trulia.infra.WebDriverInitFormatter",
        "json:target/cucumber.json", "html:target/cucumber"})
public class RunCukesIT {
}
Advice about this appreciated
--
Unfortunately you're not going to get a lot of help for that on this group - everyone is going to just tell you that you're doing it wrong and using Cucumber wrong. If this is something you need, my suggestion would be to write a rake task that automatically performs reruns and aggregates the passes/failures of multiple runs into a consolidated result.
On Tuesday, April 21, 2015 at 1:06:08 AM UTC-4, Samuel S wrote:
> Hi all,
> The option to rerun failed tests at the end is useful, but I don't see a way for it to properly report the results.
>
> Expected Result: If I run 5 tests in the first round and 1 fails, and the failed test then passes on the rerun, the results should show all 5 tests as passing.
> Actual Result: The results overwrite each other instead of appending to the previous results, so the report now only shows that 1 test passed and ignores the fact that 4 other tests passed during the previous run.
>
> mvn -e verify -Pintegration-tests -Dcucumber.options="@rerun.txt"
>
> @RunWith(Cucumber.class)
> @CucumberOptions(format = {"rerun:rerun.txt", "com.trulia.infra.WebDriverInitFormatter",
>         "json:target/cucumber.json", "html:target/cucumber"})
> public class RunCukesIT {
> }
>
> Advice about this appreciated
--
On Tuesday, 21 April 2015 at 17:01, Matt Metzger wrote:
Yes, I understand what this report would show. I also assume Samuel understands what this report would show, when he asked for a way to generate this sort of report. I can't answer what the purpose of such a report is, because I am not the one asking for it. I don't know what problems Samuel is solving, or how he is using Cucumber to solve those problems. It would be wrong of me to make assumptions about that, and tell him that he is using the tool wrong.

There's no doubt about it - in most cases, it makes sense to figure out why tests intermittently pass/fail, and address the root cause. Perhaps in Samuel's case, there is a very small subset of intermittent tests, and his team cannot justify the level of effort it would require to solve those. We simply don't know.

If Samuel came here and said "I have some tests that sometimes fail and sometimes pass, what should I do about this?" I would be echoing your comments, but that's not what he asked for.
Hey guys,
Sorry for the late response, I never got the initial notifications about this thread.
Why am I (and others on the web) asking for such a feature? Because the failures come not from our code, but from occasional hiccups in the third-party tools we are using. For example, with a login test case, Appium will sometimes enter my login name as "Samule" and this will break the test.
This will happen 1 out of 20 times. It would be incredibly counterproductive to try to predict and code around every possible failure a third-party tool can throw at us. This is why we want an immediate test retry, and for that test to NOT count as a failure.
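In other words, the desired behaviour can be sketched outside of Cucumber as a plain helper that wraps only the flaky third-party call. The names below (`Retry`, `withRetries`) are hypothetical, not an existing Cucumber or Appium API:

```java
// Hypothetical helper: run a flaky action up to maxAttempts times and
// only report a failure if every attempt failed. Earlier failed attempts
// are swallowed, so a test that eventually passes counts as passing.
public final class Retry {
    public static void withRetries(int maxAttempts, Runnable flakyAction) {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                flakyAction.run();
                return; // success on this attempt: earlier failures don't count
            } catch (RuntimeException e) {
                lastFailure = e; // remember the failure and retry immediately
            }
        }
        throw lastFailure; // failed every attempt: report the last failure
    }
}
```

A step definition could then wrap just the call that misbehaves, e.g. `Retry.withRetries(3, () -> loginPage.typeUsername("Samuel"))`, where `loginPage` is a made-up page object.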
Thanks again for hearing us out.
PS I am using Cucumber-JVM
On Tue, Apr 21, 2015 at 12:57 AM, Samuel S <smers...@gmail.com> wrote:
> Hi all,
> The option to rerun failed tests at the end is useful, but I don't see way
> for it to properly report the results.
>
> Expected Result: If I run 5 tests during the first round and 1 fails... then
> the second round that I run it, AND the failed test passes, then the results
> should show all 5 tests as passing
The rerun functionality AFAIK is meant to be used during development
to re-run tests.
"during the first round" makes me think that you are trying to fix
flickering tests by re-running them a few times till they pass. If
this is correct, have you considered the harder but more valuable task
of fixing the cause?
> Actual Result: The results overwrite each other instead of smartly appending
> to the past results, so the result now only shows that 1 test passed and
> ignores the fact that 4 other tests passed during the previous run.
>
> mvn -e verify -Pintegration-tests -Dcucumber.options="@rerun.txt"
mvn -e verify -Pintegration-tests -Dcucumber.options="--plugin json:rerun_result.json @rerun.txt"
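With the rerun results in a second file, the two JSON reports still have to be merged before a consolidated report can be generated. A rough sketch of that merge step in plain Java, assuming the reports have already been parsed into lists of maps by some JSON library; the flat "status" field here is a simplification of the real Cucumber JSON schema, where the status lives inside each step's result:

```java
import java.util.*;

// Sketch: overlay rerun results onto the original run's results,
// keyed by feature uri + scenario line. Parsing/serialising the
// actual report files is left to a JSON library (e.g. Gson).
public final class ReportMerger {
    @SuppressWarnings("unchecked")
    public static List<Map<String, Object>> merge(
            List<Map<String, Object>> original,
            List<Map<String, Object>> rerun) {
        // Index the rerun scenarios by (feature uri, scenario line).
        Map<String, Map<String, Object>> rerunByKey = new HashMap<>();
        for (Map<String, Object> feature : rerun) {
            List<Map<String, Object>> elements = (List<Map<String, Object>>)
                    feature.getOrDefault("elements", Collections.emptyList());
            for (Map<String, Object> scenario : elements) {
                rerunByKey.put(feature.get("uri") + ":" + scenario.get("line"), scenario);
            }
        }
        // Replace each originally-failing scenario that was rerun.
        for (Map<String, Object> feature : original) {
            List<Map<String, Object>> elements = (List<Map<String, Object>>)
                    feature.getOrDefault("elements", Collections.emptyList());
            for (int i = 0; i < elements.size(); i++) {
                String key = feature.get("uri") + ":" + elements.get(i).get("line");
                Map<String, Object> rerunResult = rerunByKey.get(key);
                if (rerunResult != null) {
                    elements.set(i, rerunResult); // take the rerun result
                }
            }
        }
        return original;
    }
}
```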
>
>
> @RunWith(Cucumber.class)
> @CucumberOptions(format = {"rerun:rerun.txt",
> "com.trulia.infra.WebDriverInitFormatter",
> "json:target/cucumber.json","html:target/cucumber"})
> public class RunCukesIT {
> }
>
>
> Advice about this appreciated
>
On Sat, May 9, 2015 at 8:18 AM, Samuel S <smers...@gmail.com> wrote:
> Hey guys,
> Sorry for late response, I never got the initial notifications about this thread.
>
> Why am I (and others on the web) asking for such a feature? Because the failures come not from our code, but from occasional hiccups in the third-party tools we are using. For example, with a login test case, Appium will sometimes enter my login name as "Samule" and this will break the test. This will happen 1 out of 20 times. It would be incredibly counterproductive to try to predict and code around every possible failure a third-party tool can throw at us. This is why we want an immediate test retry, and for that test to NOT count as a failure.
So if the issue is within Appium, why don't you try and fix that
instead of asking for a workaround in Cucumber?
On Sat, May 9, 2015 at 9:18 AM, Samuel S <smers...@gmail.com> wrote:
> Hey guys,
> Sorry for the late response, I never got the initial notifications about this thread.
>
> Why am I (and others on the web) asking for such a feature? Because the failures come not from our code, but from occasional hiccups in the third-party tools we are using. For example, with a login test case, Appium will sometimes enter my login name as "Samule" and this will break the test.

Ouch! I wouldn't want to use an automation library that has this kind of bug. Is there nothing better out there?

> This will happen 1 out of 20 times. It would be incredibly counterproductive to try to predict and code around every possible failure a third-party tool can throw at us. This is why we want an immediate test retry, and for that test to NOT count as a failure.

Thanks for providing context and a concrete example. That makes a big difference.

How would you like to specify what to retry, and how many times it should be retried?
On Saturday, May 9, 2015 at 1:08:31 AM UTC-7, Aslak Hellesøy wrote:
> Ouch! I wouldn't want to use an automation library that has this kind of bug. Is there nothing better out there?

I'm not sure how familiar you are with the Automation QA market, but Appium (and its older brother Selenium) are two of the hottest tools in Silicon Valley. The odds of convincing management that it's worth the investment of switching tools are slim to none.

> Thanks for providing context and a concrete example. That makes a big difference. How would you like to specify what to retry, and how many times it should be retried?

Here is the exact example of how we are doing it for our Selenium tests using TestNG. Note, the code below will execute an immediate rerun. You could hardcode the re-run count, or pass it in as a Jenkins build parameter.
On Sat, May 9, 2015 at 6:46 PM, Samuel S <smers...@gmail.com> wrote:
> I'm not sure how familiar you are with the Automation QA market, but Appium (and its older brother Selenium) are two of the hottest tools in Silicon Valley. The odds of convincing management that it's worth the investment of switching tools are slim to none.

I'm reasonably familiar with the automation market. I was one of the first 3 contributors to Selenium back in 2004 and have used it regularly since. I've also written and contributed to a couple of other popular automation tools, and I have published several books on the topic. I regularly deliver training courses in BDD/Cucumber/automation.

I'm well aware that Appium is one of the most popular automation tools for Android and iOS, but I haven't used it beyond simple examples. From my friends who develop mobile apps I keep hearing it's quite buggy. I find it hard to believe it's so buggy that it can't fill in text fields reliably though. Do you have a source for that? A link to a bug report?

When a widely-adopted open source tool has severe bugs in basic functionality, users will attempt to fix it. If bugs still don't get fixed, it's usually because of poor project management, or because the code is so complex nobody knows how to fix it. What happens next is either a fork, or a complete replacement by a new and better tool. Companies switch to new and better tools and technologies all the time. Companies where technology decisions are made by management and not developers are usually the last ones to switch to a new technology.

> Here is the exact example of how we are doing it for our Selenium tests using TestNG. Note, the code below will execute an immediate rerun. You could hardcode the re-run count, or pass it in as a Jenkins build parameter.

The problem with this approach is that it would apply to *all* failing tests. That could have a pretty negative knock-on effect. If we were to add support for this in Cucumber, it would have to use a mechanism that allows users to easily target the automatic retry to specific scenarios.

It seems to me this would work best with a tagged after hook. Something like this:

@After("@non-deterministic")
public void retry(Scenario scenario) {
    if (scenario.getRetries() <= 3) scenario.retry();
}
@RunWith(ExtendedCucumber.class)
@ExtendedCucumberOptions(
        retryCount = 3,
        detailedAggregatedReport = true
)
@CucumberOptions(
        format = ["pretty", "json:Reports/Cucumber/TestResults.json"],
        tags = ["@test"],
        glue = "src/test/groovy",
        features = "src/test/resources"
)
public class RunCukesIT {
}