default report output


alex bagehot

Apr 22, 2014, 7:36:29 AM
to gat...@googlegroups.com
Hi,

I have been testing out the open workload model support in 2.0.0 snapshot -
Great stuff overall, I have a couple of thoughts:

1) meanNumberOfRequestsPerSecond is confusing (or could cause confusion)

I can make this metric change just by changing the test duration, because it includes the time taken to complete all the requests. This becomes obvious when response times are long, e.g. 10 seconds, which could happen if we break a system we are just starting to tune.

for example: scn.inject(constantUsersPerSec(1).during(10 seconds))

================================================================================

2014-04-22 10:46:33                                          20s elapsed

================================================================================

---- Global Information --------------------------------------------------------

> numberOfRequests                                      10 (OK=10     KO=0     )

> minResponseTime                                    10000 (OK=10000  KO=-     )

> maxResponseTime                                    10000 (OK=10000  KO=-     )

> meanResponseTime                                   10000 (OK=10000  KO=-     )

> stdDeviation                                           0 (OK=0      KO=-     )

> percentiles1                                       10000 (OK=10000  KO=-     )

> percentiles2                                       10000 (OK=10000  KO=-     )

> meanNumberOfRequestsPerSecond                       0.50 (OK=0.50   KO=-     )

---- Response Time Distribution ------------------------------------------------

> t < 800 ms                                             0 (  0%)

> 800 ms < t < 1200 ms                                   0 (  0%)

> t > 1200 ms                                           10 (100%)

> failed                                                 0 (  0%)

================================================================================




================================================================================

2014-04-22 10:50:25                                          30s elapsed

================================================================================

---- Global Information --------------------------------------------------------

> numberOfRequests                                      20 (OK=20     KO=0     )

> minResponseTime                                    10000 (OK=10000  KO=-     )

> maxResponseTime                                    10000 (OK=10000  KO=-     )

> meanResponseTime                                   10000 (OK=10000  KO=-     )

> stdDeviation                                           0 (OK=0      KO=-     )

> percentiles1                                       10000 (OK=10000  KO=-     )

> percentiles2                                       10000 (OK=10000  KO=-     )

> meanNumberOfRequestsPerSecond                       0.67 (OK=0.67   KO=-     )

---- Response Time Distribution ------------------------------------------------

> t < 800 ms                                             0 (  0%)

> 800 ms < t < 1200 ms                                   0 (  0%)

> t > 1200 ms                                           20 (100%)

> failed                                                 0 (  0%)

================================================================================


The true rate is 1, but 0.50 and 0.67 are reported for an injection rate of 1.
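For reference, the reported numbers are consistent with total requests divided by total elapsed wall-clock time, response time included. A minimal sketch of that arithmetic (my reconstruction, not Gatling's actual code):

```scala
// 10 requests injected over 10s, each taking 10s to respond, so the
// run lasts ~20s: dividing by total elapsed time halves the true rate.
def meanRequestsPerSecond(requests: Int, elapsedSeconds: Double): Double =
  requests / elapsedSeconds

val at20s = meanRequestsPerSecond(10, 20.0)  // 0.50, as in the first report
val at30s = meanRequestsPerSecond(20, 30.0)  // ~0.67, as in the second
```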


It would be worth splitting this metric into

meanRequestArrivalRatePerSecond and

meanRequestCompletionRatePerSecond

Then we would see a value of 1/sec for each, assuming the server isn't overloaded. It would make it much easier to validate that Gatling is providing the correct arrival rate, and to see whether or not the server is "keeping up" with the desired completion rate, which is the number reported by benchmarks.
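A sketch of what those two metrics could look like. The names and the windowing choice are my assumptions, not Gatling's API: arrival rate measured over the injection window, completion rate over the window in which responses land.

```scala
// Hypothetical split of the single throughput metric. Each request is
// a (startMillis, endMillis) pair; names are illustrative only.
final case class RequestRecord(startMillis: Long, endMillis: Long)

def eventRatePerSec(events: Seq[Long], windowStart: Long, windowEnd: Long): Double =
  events.count(e => e >= windowStart && e < windowEnd) * 1000.0 / (windowEnd - windowStart)

// constantUsersPerSec(1) during 10 seconds, each request taking 10s:
val records = (0 until 10).map(i => RequestRecord(i * 1000L, i * 1000L + 10000L))

// Arrival rate over the 10s injection window: 1.0/sec, as requested.
val arrival = eventRatePerSec(records.map(_.startMillis), 0L, 10000L)

// Completion rate over the 10s window in which responses land: also 1.0/sec.
val completion = eventRatePerSec(records.map(_.endMillis), 10000L, 20000L)

// The current single metric divides by the whole ~20s run instead: 0.5/sec.
val reportedToday = records.size * 1000.0 / 20000.0
```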



2) don't report the standard deviation with latency figures

Latency is very unlikely to follow a normal distribution, so providing the standard deviation risks people calculating percentiles from it; since the underlying distribution is not normal, that will in most cases produce the wrong latency figure.

The spread can be conveyed by min/max/percentiles1/percentiles2 (or another statistic).

The use of the mean is also discouraged in favour of the median (i.e. the 50th percentile).

We could then rearrange into a mostly ascending order:

> minResponseTime                                     9999 (OK=10000  KO=-     )

> medianResponseTime                                 10000 (OK=10000  KO=-     )

> percentiles1                                       10000 (OK=10000  KO=-     )

> percentiles2                                       10000 (OK=10000  KO=-     )

> maxResponseTime                                    10001 (OK=10000  KO=-     )
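To make the normality point concrete, here is a toy sample (values made up) where the mean and a normal-assumption percentile are badly misleading while the median and empirical percentiles are not:

```scala
// Nine 100ms responses and one 10s outlier: a long-tailed sample,
// nothing like a normal distribution.
val samples = Vector.fill(9)(100.0) :+ 10000.0

val mean = samples.sum / samples.size                // 1090.0 ms, dragged up by the tail
val stdDev =
  math.sqrt(samples.map(x => math.pow(x - mean, 2)).sum / samples.size)  // 2970.0 ms

// A percentile estimated as mean + z * stdDev assumes normality;
// for p90 (z ~ 1.2816) it gives roughly 4896ms...
val normalP90 = mean + 1.2816 * stdDev

// ...while the empirical p90 (nearest-rank) is 100ms.
val sorted = samples.sorted
val empiricalP90 = sorted(math.ceil(0.9 * sorted.size).toInt - 1)
```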


Thanks,

Alex

Stéphane Landelle

May 20, 2014, 12:21:15 PM
to gat...@googlegroups.com
That's a bug.
Investigating.



alex bagehot

May 20, 2014, 1:02:43 PM
to gat...@googlegroups.com

Thanks

Stéphane Landelle

May 20, 2014, 1:06:13 PM
to gat...@googlegroups.com
Damn, actually, what would your definition of constantRate(1) during(10 seconds) be?

0 -> start 1 user
1 -> start 1 user
2 -> start 1 user
3 -> start 1 user
4 -> start 1 user

so you'd start a total of 5 users, and nothing at t=5sec?

alex bagehot

May 21, 2014, 3:59:49 AM
to gat...@googlegroups.com

assuming each user only sends 1 request without pausing

None have completed at t=5.

5 requests were sent in 5 seconds, so a sent/arrival rate of 1, which validates the input of constantRate(1). We asked for an arrival rate of 1 and got it.

Arrival/sent here means Gatling applied that rate to the SUT. There could be a bottleneck between Gatling and the SUT, resulting in none or only some of the requests making it through. Typically we can only measure the true arrival rate from within the SUT or at the SUT boundary.

The completion rate would be undefined at t=5, and would stay undefined until the first request completes. Because the injection is constant, from t=10 the completion rate will also be 1, assuming the SUT can sustain the load.

Anything else doesn't mean a lot, as it mixes up different measurements.
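The timeline above can be checked with a few lines of Scala (names and windowing are illustrative assumptions, not Gatling code):

```scala
// 5 users started at t=0..4s, each sending one request that takes 10s,
// so completions land at t=10..14s.
val startsMillis = (0 until 5).map(_ * 1000L)
val endsMillis   = startsMillis.map(_ + 10000L)

def completedBy(tMillis: Long): Int = endsMillis.count(_ <= tMillis)

// At t=5s nothing has completed: a completion rate is undefined.
val completedAtFive = completedBy(5000L)  // 0

// Over the 5s window [10s, 15s) all 5 responses arrive:
// 1 completion/sec, matching the arrival rate of 1/sec.
val completionRate =
  endsMillis.count(e => e >= 10000L && e < 15000L) / 5.0
```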

If the user injection rate is changing/ramping then it gets more involved. For example: a ramp over 100 seconds up to 100 users per second, followed by a constant 100 users per second for 100 seconds. If we include the ramp in the final throughput value then it will be less than 100, even if the system was able to sustain 100 users per second for 100 seconds.

We can use one summary stat object for each part of the load profile, as both t-digest and HdrHistogram support add/union. Reporting the above should then be possible.
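As a toy illustration of the per-phase idea (a stand-in for HdrHistogram's `add` or a t-digest merge, using raw sample vectors instead of real sketches):

```scala
// One mergeable summary per load-profile phase. A real implementation
// would use HdrHistogram or t-digest; concatenating raw samples is just
// the simplest structure with the same add/union property.
final case class PhaseSummary(samplesMillis: Vector[Double]) {
  def merge(other: PhaseSummary): PhaseSummary =
    PhaseSummary(samplesMillis ++ other.samplesMillis)

  // Nearest-rank percentile over the stored samples.
  def percentile(p: Double): Double = {
    val sorted = samplesMillis.sorted
    sorted(math.ceil(p / 100.0 * sorted.size).toInt - 1)
  }
}

// Made-up latencies: a slower ramp phase and a faster steady phase.
val ramp   = PhaseSummary(Vector(120.0, 150.0, 200.0))
val steady = PhaseSummary(Vector(95.0, 100.0, 105.0, 110.0))

// Report each phase separately, then union for a whole-run view.
val wholeRun = ramp.merge(steady)
```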

Stéphane Landelle

May 21, 2014, 10:57:17 AM
to gat...@googlegroups.com
I think inject should be fine now: https://github.com/excilys/gatling/pull/1885

alex bagehot

May 22, 2014, 6:11:46 PM
to gat...@googlegroups.com
thanks, basic question - how do you run Gatling from sbt/eclipse, to have a look at those changes?

Stéphane Landelle

May 23, 2014, 1:32:56 AM
to gat...@googlegroups.com
Running Gatling directly from sbt: Pierre's new sbt plugin: https://github.com/gatling/gatling-sbt
Integrating an sbt project into eclipse:
  1. https://github.com/typesafehub/sbteclipse (declare the plugin in the global file)
  2. Or... still use the gatling maven archetype (requires installing m2e-scala)

alex bagehot

May 23, 2014, 3:56:23 AM
to gat...@googlegroups.com
apologies, I meant from source rather than the snapshot - more like in eclipse with the debugger etc.

alex bagehot

May 26, 2014, 2:36:04 PM
to gat...@googlegroups.com
thanks - with the snapshot from the 26th I retried the earlier test of the reported request rate, got the same numbers, and have opened an issue for it.


