Hi Nicholas,
Thanks for your reply.
My step definitions are actually intended to be client-side code that
performs I/O against the server under test. The I/O is asynchronous,
via an I/O thread pool, which lets every thread dispatch many
different scenarios concurrently. The stress system makes assertions
on reply latency and correctness, and we also record various server
performance metrics for later analysis.
I have a system like this in production, and it scales to a million
simultaneously active pseudo-clients (probably many more; I've never
tried) by means of several stress client machines. Each pseudo-client
runs a scenario. That scenario requires a complex setup and represents
a client with very particular characteristics and behavior, which in
turn causes the tested server to behave differently. Highly common
scenarios are performed by a large number of clients, in proportion to
their frequency in production.
Unfortunately, because I had not used BDD when designing that system,
I have no elegant means of formulating the scenarios and reusing the
implementation. Instead, I have dozens of hard-coded scenarios, which
means that any new feature requires coding in the stress client to
support it.
That is actually a waste, since I already have acceptance tests which
cover the entire range of functionality; there is no need for further
coding to support new features. Hence the idea of reusing the SpecFlow
acceptance-test scenarios for stress testing as well.
What I would ideally like to do is write a few dozen SpecFlow
scenarios (the steps and step definitions should already exist), and
then launch each one multiple times (via the xUnit API or the SpecFlow
API), based on some configuration that specifies how many
pseudo-clients should run each scenario.
For that purpose I need SpecFlow to be safe for:
1) Concurrent execution of scenarios on multiple threads (i.e. no
statics or trivially shared state)
2) Each thread running multiple scenarios via asynchronous, event-
driven I/O (i.e. no ThreadStatic fields either)
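To illustrate why point 2 rules out ThreadStatic state: with async I/O, a scenario's continuation can resume on a different thread-pool thread than the one it started on, so per-thread state is silently lost or picked up by the wrong scenario. A minimal sketch of my own (not SpecFlow code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class ThreadStaticHazard
{
    // Per-thread state like this is exactly what makes a framework unsafe
    // for async, event-driven scenario execution.
    [ThreadStatic] static string currentScenario;

    static async Task RunAsync(string name)
    {
        currentScenario = name;
        int before = Thread.CurrentThread.ManagedThreadId;
        await Task.Delay(10); // simulated async I/O; may resume on another thread
        int after = Thread.CurrentThread.ManagedThreadId;
        // After the await, currentScenario may be null (or another scenario's
        // name) if the continuation resumed on a different pool thread.
        Console.WriteLine(
            $"{name}: thread {before} -> {after}, state = {currentScenario ?? "<lost>"}");
    }

    public static Task Main() =>
        Task.WhenAll(RunAsync("ScenarioA"), RunAsync("ScenarioB"));
}
```

The output is nondeterministic by nature; on a busy thread pool you will routinely see the state come back as <lost>.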
Thanks,
- Te
On Sep 5, 5:58 pm, Nicholas Swarr <nsw...@gmail.com> wrote:
> Let's turn this into a requirement. I'm making assumptions on what sort of
> test feedback you're trying to find...system out of memory, response time,
> spurious threading errors, etc. Feel free to rewrite the scenario to reflect
> what you're trying to verify about your system.
>
> Given the app is fully configured for production use
> When a thousand requests are sent concurrently
> Then the average response should be under one second
>
> If you're trying to perform stress tests then you could factor the
> interesting Step Definitions into helper methods. You could then execute
> the grouping of helper methods as the expected workflow. The "when" could
> spawn the requests from the scenario above.
>
> Or, without the helper methods, just call the grouping of Step Definitions
> themselves. It's just my preference to use helpers, I guess. You may have
> run into situations like this when you have a story for logging into your
> app. Then you'll have another story whose "given" expects you to be logged
> in already. You wouldn't run the login scenario first. You'd factor out
> that workflow into a helper and use that to login.
>
> The benefit here is that you're removing SpecFlow execution from the
> important part of your stress test. BDD testing frameworks are generally
> being run serially to verify each area of functionality. I guess running
> whole scenarios concurrently is a somewhat unnatural use of the tool.
>