For example, a rate calculation feature. The user story says that, as defined in the attached Excel sheet, a sequence of lookups, summations, and adjustments leads to an end result. Based on different inputs, we get different outputs. The challenge is to validate the intermediate assertion points (for the end-to-end behaviour of a single user story that has multiple intermediate steps). For example:
Scenario Outline
| Input 1 | Input 2 | Assertion 1 | Assertion 2 | Assertion 3 | Assertion 4 | ... | End result |
In this case, using a Scenario Outline keeps the spec short, and it's easy to see multiple scenarios within the same table. However, the devs argue that it takes time to bind and pass the tests when we use a Scenario Outline, and they prefer specs with only plain Scenarios. Using only Scenarios restricts our ability to add multiple cases, as the spec gets really long. This will impact coverage of variables and variations.
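A minimal sketch of the kind of outline being described, with invented column names and values (the real lookups and adjustments live in the attached Excel sheet, which we don't have):

```gherkin
# Hypothetical: one row per case, with intermediate assertion columns
Scenario Outline: Rate calculation with intermediate assertions
  Given a base rate lookup key of <input1>
  And an adjustment factor of <input2>
  When the rate is calculated
  Then the looked-up base rate is <base>
  And the adjusted subtotal is <subtotal>
  And the end result is <result>

  Examples:
    | input1 | input2 | base | subtotal | result |
    | A      | 1.10   | 100  | 110      | 110    |
    | B      | 0.95   | 200  | 190      | 190    |
```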
My question is: is it true that we should avoid Scenario Outlines as much as possible? When should we use a Scenario Outline, and when not?
I'm not sure if similar topics have been discussed before; apologies if this is a repeat.
Regards and Thanks,
Nilofar
--
You received this message because you are subscribed to the Google Groups "Behaviour Driven Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to behaviordrivendeve...@googlegroups.com.
To post to this group, send email to behaviordriv...@googlegroups.com.
Visit this group at https://groups.google.com/group/behaviordrivendevelopment.
For more options, visit https://groups.google.com/d/optout.
Thanks for your reply.
That leads to another question: when do we use a Scenario Outline?
For example, when we have 20 scenarios that can be combined into one Scenario Outline, it looks better and simpler.
I also see that 20 scenarios in a single feature file is not good practice. However, it's hard to get test coverage (positive and negative scenarios) with just one or two scenarios per spec file.
After a few months of BDD, I have feature files with mostly 50+ scenarios. And I am not even testing edge cases; these are all valid scenarios for that user story. Maybe the user stories need to be simplified?
Any suggestion would help.
On Sunday, July 24, 2016 at 12:49:20 PM UTC-4, apremdas wrote:

Yes, you should avoid Scenario Outlines and large tables. They have the following faults:
1. they are much harder to implement
2. they are error prone, because of the example data they rely upon
3. they tend to give poor error messages when something goes wrong
4. they hide assumptions/rules in their values, instead of stating them clearly
5. their maintenance cost is much higher, particularly when a business rule changes

The only thing that they do well is make long specs shorter, but they do this in a really poor way. The way to make long specs shorter is to name things and use higher levels of abstraction, and to avoid writing scenarios that say HOW something is done. HOW has no place in scenarios; they should be about WHAT you are doing and WHY you are doing it.

Generally speaking, if you are writing scenarios that have lots of variables, values, and intermediate checkpoints (e.g. When Then When Then etc.) then you are writing pretty poor quality scenarios. The solution is not to use tools that make writing these sorts of scenarios easier. The solution is to write better scenarios.

Without a proper example, there isn't much more to be said. Hope that's useful.

On 23 July 2016 at 19:53, nilofar nissa <nilofa...@gmail.com> wrote:

> We have been using BDD for the past 6 months; we often debate when to use Scenario Outline. Most of our user stories define a single calculation with multiple steps.
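The "name things and use higher levels of abstraction" advice can be illustrated with a hypothetical Gherkin sketch (the rule names and steps below are invented for illustration, not taken from the thread):

```gherkin
# Instead of a wide outline whose columns hide the rules in their
# values, each business rule gets one named scenario stating WHAT
# happens, not HOW it is computed.
Scenario: High-risk regions attract a rate surcharge
  Given a quote for a high-risk region
  When the rate is calculated
  Then the rate includes the high-risk surcharge

Scenario: Standard regions get the base rate
  Given a quote for a standard region
  When the rate is calculated
  Then the rate is the base rate
```

The intermediate lookups and summations then become assertions in the step implementations (or unit tests), rather than columns in the spec.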
On Wednesday, March 22, 2017 at 9:22:01 PM UTC+2, apremdas wrote:

> > I also see that 20 scenarios in a single feature file is not good practice. However, it's hard to get test coverage (positive and negative scenarios) with just one or two scenarios per spec file.
>
> Test coverage has nothing to do with BDD. The words test and coverage are illusions in this space. Tests imply you are proving something; the best a passing scenario can do is give you some confidence that the behaviour that existed when the scenario last ran is probably still the same. This in itself is sufficient to make great strides with development, but please don't think that it 'proves' anything. Coverage is also something of a misnomer in BDD. It's easy to get very high levels of code coverage with BDD because scenarios exercise a lot of code. This does not mean that the code is tested, though.

As a development methodology, BDD inherits from TDD, which ideally directs that code should only be written in response to a defined behavior/test in order to implement that specific part of the system. As such, coverage is not the issue, since all code should be covered by a behavior or it wouldn't have been written to begin with. If you are retrofitting BDD onto an existing code base this will not be the case. Then you are using BDD as a documentation and automation tool rather than as a development methodology.
I'm not sure I understand the distinction being made between "confidence that the behavior ... is still the same" and testing. Since the defined criteria of the system is the behaviors described in your features then having the same behavior means the system is working properly. If there is some aspect of the system that is not captured in the behavior descriptions then they should be revised.
As for the original question, if different inputs lead to different outputs then there need to be scenarios for each type of behavior. For example, if you are creating an order processing system that has a volume discount then you would need to have a scenario for an order that doesn't qualify for the discount and an order that does. On the other hand, if you have a behavior with multiple observable outcomes you may want to abstract them into a single state of success or, depending on the specific business requirements, spell them out in the scenario. For example, if you have an action that the user takes which updates the local database, makes an external API call to update another service and sends a notification to a system admin then you may write the scenario as:
Given I am the appropriate user type
When I execute the specified action
Then the action completes successfully

or

Given I am the appropriate user type
When I execute the specified action
Then the local system is updated
And the external system is called
And the admin is notified

In the first scenario option you will move the multiple Then steps into the step implementation. It will be up to the people involved to determine what level of detail is most appropriate for everyone.
Handling error conditions is where I have found a proliferation of scenarios. There seem to be many more error cases than success cases for many of the behaviors I have encountered. While they are the exception cases, I still think it is critical to define the behaviours for those scenarios. I haven't found a way to get around that without compromising on the completeness of the description of the system behaviour.
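The volume-discount example above might be sketched as two named scenarios (the threshold and quantities here are invented for illustration):

```gherkin
# One scenario per side of the business rule, so the rule itself
# is stated in the scenario names rather than hidden in a table.
Scenario: Order below the volume threshold gets no discount
  Given an order of 5 units
  When the order is priced
  Then no volume discount is applied

Scenario: Order at or above the volume threshold gets the discount
  Given an order of 100 units
  When the order is priced
  Then the volume discount is applied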
On 23 Mar 2017, at 10:17, Andrew Premdas <apre...@gmail.com> wrote:

On 22 March 2017 at 20:19, <da...@listonfire.com> wrote:
> As a development methodology BDD inherits from TDD, which ideally directs that code should only be written in response to a defined behavior/test in order to implement that specific part of the system. As such, coverage is not the issue, since all code should be covered by a behavior or it wouldn't have been written to begin with. If you are retrofitting BDD onto an existing code base this will not be the case. Then you are using BDD as a documentation and automation tool rather than as a development methodology.

The idea that code driven by integration tests has the same level of coverage as code driven by unit tests is clearly flawed (or at best has a very loose definition of coverage).
Clearly high level abstraction tests cover more code, but the coverage is much thinner.
Code driven by scenarios does not inherit all the characteristics of code driven by unit tests; quite the opposite. Code driven by scenarios can provide a framework in which TDD can be applied with greater efficacy.
But there is no guarantee that TDD is even used. If you remember the two circles diagram from the RSpec book you can see that often very little time is spent in the inner circle.
Basically, if you are not writing really good, fast unit tests before your code then you are not doing TDD as far as I'm concerned, and a lot of the time that's perfectly reasonable.
> I'm not sure I understand the distinction being made between "confidence that the behavior ... is still the same" and testing. Since the defined criteria of the system is the behaviors described in your features, then having the same behavior means the system is working properly. If there is some aspect of the system that is not captured in the behavior descriptions then they should be revised.

It's about understanding what your tests can do and what they cannot do. A lot of the scenarios I see are written and viewed in a way that treats them as more than they actually are. I've seen projects managed on just whether scenarios are complete and green, which means nothing without reviewing and looking at the application itself.
If you think your tests can prove your application works, you will write lots and lots of scenarios to prove that your application works under all sorts of different circumstances. If you realise from the beginning that proof is unobtainable, then it just might help you write a lot fewer scenarios, and question the value and costs of each scenario you write.
We regularly hear on this mailing list about test suites that take many hours to run. The simplest way to make these tests faster is to remove all the scenarios you don't need.
> As for the original question, if different inputs lead to different outputs then there need to be scenarios for each type of behavior. [...]
> Given I am the appropriate user type
> When I execute the specified action
> Then the action completes successfully
>
> or
>
> Given I am the appropriate user type
> When I execute the specified action
> Then the local system is updated
> And the external system is called
> And the admin is notified
>
> In the first scenario option you will move the multiple Then steps into the step implementation. It will be up to the people involved to determine what level of detail is most appropriate for everyone.
>
> Handling error conditions is where I have found proliferation of scenarios. [...] I haven't found a way to get around that without compromising on the completeness of the description of the system behaviour.

The above is all good stuff. The way to handle error conditions is to realise that you don't have to define behaviour for every exceptional case, and that you can apply the same principle of abstraction you used above in your example. For example, instead of

When foo goes wrong
Then I should see foo failed

When bar goes wrong
Then I should see bar failed

... twenty other failure conditions

you can say

When something goes wrong
Then I should see something failed
Most of the time this has more value than the twenty failure scenarios, even if the twenty scenarios are already written. The reason to write and run scenarios is to enhance development: to make refactoring possible, to make adding functionality possible (regression), and to have some confidence that nothing has been broken recently. Tests can never show that your application works as intended; the only way to do that is to look at your application.
All best,

Andrew
On 23 Mar 2017, at 10:17, Andrew Premdas <apre...@gmail.com> wrote:

On 22 March 2017 at 20:19, <da...@listonfire.com> wrote:
> The idea that code driven by integration tests has the same level of coverage as code driven by unit tests is clearly flawed (or at best has a very loose definition of coverage).

Hi Andrew,

I'm not sure it's that "clearly". Dan Worthington-Bodart has several fantastic examples[1][2] of builds running many hundreds or even thousands of functional (HTML round-trip) tests in less than 10 seconds (seriously, he's bonkers), with phenomenal coverage.
> Clearly high level abstraction tests cover more code, but the coverage is much thinner.

Again, I challenge your use of "clearly." I find the boundaries are a lot more blurred. I've seen codebases with tons of code-level tests that have glaring holes in coverage, and other codebases with almost exclusively application-level tests that are a joy to confidently change because they catch practically anything stupid that I do (and I do lots of stupid!).
Additionally, I should point out that both BDD and TDD cover both code-level and application-level design. Neither one is high- or low-level; that's just a misconception.
> Code driven by scenarios does not inherit all the characteristics of code driven by unit tests; quite the opposite. Code driven by scenarios can provide a framework in which TDD can be applied with greater efficacy.

Maybe. I would say the concerns are orthogonal, and depend on your choice of tooling, the amount of ground you cover in your tests, and how rigorous you are in the design in terms of separation of concerns.
> But there is no guarantee that TDD is even used. If you remember the two circles diagram from the RSpec book you can see that often very little time is spent in the inner circle.

Here I talk about the "surface area" of a codebase. Some applications are very thin and flat, almost all surface area. Here, testing through APIs is the most effective way I know to reduce uncertainty and gain confidence. Other apps have lots of "guts": think algorithmic trading systems or complex ETLs. In this case I'll want to test as close to the algo as I can, often with generative, property-based testing or fuzzing to cover lots of cases.
> Basically if you are not writing really good fast unit tests before your code then you are not doing TDD as far as I'm concerned, and a lot of the time that's perfectly reasonable.

I'm going to challenge your definition of TDD there. It may be that most of your experience has been with application code with lots of "insides" and not much surface area, but for me that is a code smell, and I prefer to refactor towards smaller components with explicit APIs. TDD isn't about writing good, fast unit tests; it's about writing just enough code examples or tests to drive your design. Of course faster tests are better, but you can generate a lot of "TDD theatre" that provides very little design or testing benefit.
> It's about understanding what your tests can do and what they cannot do. A lot of the scenarios I see are written and viewed in a way that treats them as more than they actually are. I've seen projects managed on just whether scenarios are complete and green, which means nothing without reviewing and looking at the application itself.

Right, but evidence of misapplication of a tool doesn't mean the tool is bad; it means it is being misapplied.
> If you think your tests can prove your application works you will write lots and lots of scenarios to prove that your application works under all sorts of different circumstances. [...]

The scenario exists to drive the design of the application. You don't have to use a slow, interpreted scenario runner like Cucumber or SpecFlow. Most of the BDD scenarios I've ever written have been in Java.
> We regularly hear on this mailing list about test suites that take many hours to run. The simplest way to make these tests faster is to remove all the scenarios you don't need.

I agree with this, and generally with not using scenarios as tests, but as an indicator of which tests you might want to write to increase confidence in the behaviour of the application (see the other current thread for more on this).
> As for the original question, if different inputs lead to different outputs then there need to be scenarios for each type of behavior. [...]
> Given I am the appropriate user type
> When I execute the specified action
> Then the action completes successfully
>
> or
>
> Given I am the appropriate user type
> When I execute the specified action
> Then the local system is updated
> And the external system is called
> And the admin is notified
> In the first scenario option you will move the multiple Then steps into the step implementation. It will be up to the people involved to determine what level of detail is most appropriate for everyone.
> Handling error conditions is where I have found proliferation of scenarios. There seem to be many more error cases than success cases for many of the behaviours I have encountered. While they are the exception cases, I still think it is critical to define the behaviours for those scenarios. I haven't found a way to get around that without compromising on the completeness of the description of the system behaviour.
> The above is all good stuff. The way to handle error conditions is to realise that you don't have to define behaviour for every exceptional case, and that you can apply the same principle of abstraction you used above in your example, e.g. instead of
>
> When foo goes wrong
> Then I should see foo failed
>
> When bar goes wrong
> Then I should see bar failed
>
> ... twenty other failure conditions
>
> you can say
>
> When something goes wrong
> Then I should see something failed
That depends on the business impact of each error. You may be able to group the same types of error together, but often different errors require different kinds of recovery, alerting, audit, and other business concerns.
If the behaviour is simply “When an error occurs, Then I should see the corresponding message”, then you could definitely drive this from a single data-driven scenario with a list of errors and corresponding messages.
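A sketch of that single data-driven scenario, with invented error names and messages (this is one of the few cases where the thread's advice actually favours a Scenario Outline, because each row states the same rule with different data):

```gherkin
Scenario Outline: Each error shows its corresponding message
  When the error "<error>" occurs
  Then I should see the message "<message>"

  Examples:
    | error            | message                   |
    | payment-declined | Your payment was declined |
    | out-of-stock     | That item is out of stock |
```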