Making performance tests with SpecFlow


Gregory Estimbre

Mar 5, 2014, 10:28:55 AM3/5/14
to spec...@googlegroups.com
Hi all,

We use SpecFlow on our project to create all the business tests for our application.

We now want to run some performance tests (measuring the time a function takes to complete, and failing the test if it takes too long) on specific functions of the application. I would like to know if anyone has already done this kind of thing and can give me some feedback.

A first basic idea would be to write a specific scenario, measure the time to complete it with a stopwatch (or something similar), and fail the test if the time is too long.

Given Existing Element1 of type Type1
When I Copy and paste the Element1
Then new Element2 is created in less than 5 seconds

Seems to be very basic.

Since SpecFlow is able to produce statistics and metrics for each scenario, including execution time (http://qtp-automate.blogspot.fr/2013/10/creating-basic-specflow-tests-in-visual.html), my question is: is it possible to retrieve that time metric and fail the test if it exceeds some value, using an existing part of the SpecFlow API?


Thanks to all

Gregory


Oliver Friedrich

Mar 5, 2014, 10:49:44 AM3/5/14
to spec...@googlegroups.com
I haven't done anything like this but there's an idea that might be less invasive than adding "in less than 5 seconds" to the last step of all of your scenarios.
  1. You could use tags like @lessthan5, @lessthan10, etc.
  2. Start your time measurement in a [BeforeScenario] method and stop it in [AfterScenario] - e.g. save a Stopwatch instance in the current ScenarioContext: use ScenarioContext.Current.Set(stopwatch) and ScenarioContext.Current.Get<Stopwatch>().
  3. Also in [AfterScenario], check for the existence of a tag that starts with "lessthan", extract the number of seconds from it, and check that against the elapsed time of the stopwatch. Fail the scenario if necessary.
This will allow you to add time measurement and constraints without touching your steps.
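The three steps above could be sketched roughly as follows. This is a minimal sketch, assuming SpecFlow's hook attributes and the `ScenarioContext.Current` API; the binding class name and the exact tag format are illustrative:

```csharp
using System;
using System.Diagnostics;
using System.Globalization;
using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class ScenarioTimingHooks
{
    [BeforeScenario]
    public void StartStopwatch()
    {
        // Store a running stopwatch in the current scenario context.
        ScenarioContext.Current.Set(Stopwatch.StartNew());
    }

    [AfterScenario]
    public void CheckElapsedTime()
    {
        var stopwatch = ScenarioContext.Current.Get<Stopwatch>();
        stopwatch.Stop();

        // Look for a tag like @lessthan5 (tags arrive without the '@').
        var tag = ScenarioContext.Current.ScenarioInfo.Tags
            .FirstOrDefault(t => t.StartsWith("lessthan"));
        if (tag == null)
            return;

        int maxSeconds = int.Parse(
            tag.Substring("lessthan".Length), CultureInfo.InvariantCulture);
        if (stopwatch.Elapsed.TotalSeconds > maxSeconds)
            throw new Exception(string.Format(
                "Scenario took {0:0.00}s, limit was {1}s",
                stopwatch.Elapsed.TotalSeconds, maxSeconds));
    }
}
```

A scenario tagged @lessthan5 would then fail in the AfterScenario hook if it ran longer than five seconds, without any timing code in its steps.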

--
As a family to John Paul II. - http://infamigliaalsanto.it/
As a family to John Paul II. - http://rodzinadojp2.pl/



Nithin Shenoy

Mar 6, 2014, 10:24:27 AM3/6/14
to spec...@googlegroups.com
With performance tests, measurements should typically be taken specifically around the action(s) that need to be measured. The problem with starting and stopping a stopwatch in the hooks is that the measurement may also include any setup and teardown the code is doing, the overhead of SpecFlow itself, or any other extraneous activity happening within the context of the test that may skew the numbers.

In addition, it's very important for performance tests to measure the scenario repeatedly over a set number of iterations, with the final measure reported as some sort of aggregate of the sample (e.g. taking an average of all the samples and throwing out outliers, reporting the standard deviation, etc.). Finally, you may want to consider having the notion of a 'warmup', where the action under test is performed a small number of times first, to make sure the system is warmed up with things properly cached.

I've had some experience trying to reuse existing integration tests for performance, and it always resulted in poor measurements. Integration tests often do a lot more than what you really want to measure. Perf tests should be targeted specifically at what you want to measure, and reduce extraneous overhead as much as possible.

That being said, I did do some small benchmark tests using SpecFlow for some APIs that I'm testing. This worked out OK as a quick-and-dirty micro-benchmark and suited my purpose. I called my API in my When, and structured it something like "When I call MyTestAPIThingy (.*) times", where the parameter allowed me to change the number of iterations (or 'samplesToTake' in the code below). In the When's step definition, I created a stopwatch and wrapped the start/stop around the API call. For example:

   var perfSamples = new List<long>();
   for (int i = 0; i < samplesToTake; i++)
   {
       // Time only the API call itself, not the loop bookkeeping.
       var watch = Stopwatch.StartNew();
       var responses = MyTestAPIThingy();
       watch.Stop();
       Trace.WriteLine("    Elapsed time: " + watch.ElapsedMilliseconds);
       perfSamples.Add(watch.ElapsedMilliseconds);
   }

You can then take the perfSamples list, put it in the ScenarioContext.Current dictionary, and then retrieve it in the scenario's "Then" to do your math and assert that it's less than whatever your goal is for this scenario.
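Such a "Then" step could look roughly like this. This is a sketch, assuming the samples were stored under a "perfSamples" key in the When step; the step text, the key name, and the use of MSTest's Assert are all illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class PerfAssertionSteps
{
    [Then(@"the average response time should be less than (.*) milliseconds")]
    public void ThenAverageResponseTimeShouldBeLessThan(long goalMs)
    {
        // Retrieve the samples collected in the When step.
        var perfSamples = ScenarioContext.Current.Get<List<long>>("perfSamples");

        double average = perfSamples.Average();
        Assert.IsTrue(average < goalMs, string.Format(
            "Average was {0:0.0} ms, goal was {1} ms", average, goalMs));
    }
}
```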

For something that is not an API-level test, you could do something similar, but try to wrap the stopwatch around the specific action. In your example above, consider changing your When to something like "When I copy and paste Element1 20 times". In the step definition, wrap the actual copy-and-paste action with the stopwatch to take the measurement x number of times. Your Then might then be something like "Then New Element2 should be created on average in less than 5 seconds". In that step definition, retrieve the perfSamples from ScenarioContext.Current and do the necessary math for the assertion. Remember, if you decide not to do a warmup, you may need to do some extra math on your perf samples to throw out outliers, or exclude the first x samples when calculating the average.
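The "extra math" mentioned above (skipping warmup iterations and aggregating the rest) could be a small helper like the following. This is a sketch; the class and method names are illustrative, and the standard deviation here is the population form:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PerfMath
{
    // Drop the first warmupCount samples, then compute the mean and the
    // (population) standard deviation of the remaining measurements.
    public static void Summarize(IList<long> samples, int warmupCount,
        out double mean, out double stdDev)
    {
        var measured = samples.Skip(warmupCount).ToList();
        double m = measured.Average();
        double sumSq = measured.Sum(s => (s - m) * (s - m));
        mean = m;
        stdDev = Math.Sqrt(sumSq / measured.Count);
    }
}
```

The assertion in the Then step can then be made against the mean (or mean plus some multiple of the standard deviation, for a stricter check).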

Thanks,
Nithin

Gáspár Nagy

Apr 8, 2014, 3:25:44 AM4/8/14
to spec...@googlegroups.com
I agree.

SpecFlow+ Runner (aka SpecRun, http://www.specflow.org/plus/) performs a separate measurement of the execution of the "When" steps (the ones that should contain the application execution code), which is also included in the test report. The server component of SpecRun can even collect this data, which you can then query through OData.

Br,
Gaspar