================================================================================
2014-04-22 10:46:33 20s elapsed
================================================================================
---- Global Information --------------------------------------------------------
> numberOfRequests 10 (OK=10 KO=0 )
> minResponseTime 10000 (OK=10000 KO=- )
> maxResponseTime 10000 (OK=10000 KO=- )
> meanResponseTime 10000 (OK=10000 KO=- )
> stdDeviation 0 (OK=0 KO=- )
> percentiles1 10000 (OK=10000 KO=- )
> percentiles2 10000 (OK=10000 KO=- )
> meanNumberOfRequestsPerSecond 0.50 (OK=0.50 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms 0 ( 0%)
> 800 ms < t < 1200 ms 0 ( 0%)
> t > 1200 ms 10 (100%)
> failed 0 ( 0%)
================================================================================
================================================================================
2014-04-22 10:50:25 30s elapsed
================================================================================
---- Global Information --------------------------------------------------------
> numberOfRequests 20 (OK=20 KO=0 )
> minResponseTime 10000 (OK=10000 KO=- )
> maxResponseTime 10000 (OK=10000 KO=- )
> meanResponseTime 10000 (OK=10000 KO=- )
> stdDeviation 0 (OK=0 KO=- )
> percentiles1 10000 (OK=10000 KO=- )
> percentiles2 10000 (OK=10000 KO=- )
> meanNumberOfRequestsPerSecond 0.67 (OK=0.67 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms 0 ( 0%)
> 800 ms < t < 1200 ms 0 ( 0%)
> t > 1200 ms 20 (100%)
> failed 0 ( 0%)
================================================================================
1) split meanNumberOfRequestsPerSecond into arrival and completion rates
The true rate is really 1/sec, but 0.50 and 0.67 are reported for an injection rate of 1. It would be worth splitting this metric out into
meanRequestArrivalRatePerSecond and
meanRequestCompletionRatePerSecond
Then we would see a value of 1/sec for each, assuming the server isn't overloaded. That would make it much easier to validate that Gatling is providing the correct arrival rate, and whether or not the server is "keeping up" with the desired completion rate, which is the number reported by benchmarks.
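A small sketch of why the single metric is misleading, using hypothetical timestamps that reconstruct the stats above (injection rate of 1 request/s, every response taking 10 s):

```python
# Hypothetical reconstruction: 1 request/s injected, each taking 10 s.
# Timestamps are in seconds from the start of the run.
send_times = list(range(20))                     # sent at t = 0, 1, ..., 19
complete_times = [t + 10 for t in send_times]    # each completes 10 s later

elapsed = 20
sent = sum(1 for t in send_times if t < elapsed)           # 20
completed = sum(1 for t in complete_times if t < elapsed)  # 10

arrival_rate = sent / elapsed        # 1.0: the constantRate(1) input, validated
reported_rate = completed / elapsed  # 0.5: what meanNumberOfRequestsPerSecond shows
```

With the split metrics, arrival would read 1.0 and completion 0.5 in this window, making it obvious that the server is still catching up rather than Gatling injecting at the wrong rate.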
2) don't report the standard deviation with latency figures
Latency is very likely not normally distributed, so providing the standard deviation risks people deriving percentiles from it, which will in most cases produce the wrong latency figure because the underlying distribution is not normal.
The spread can be conveyed by min/max/percentiles1/2 (or another statistic).
The mean is also discouraged in favour of the median (i.e. the 50th percentile).
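To illustrate with a made-up skewed sample (99 fast responses and one slow outlier), the normal-model estimate of a percentile lands nowhere near the real one:

```python
import statistics

# Hypothetical skewed latency sample, in milliseconds.
latencies = [10] * 99 + [1000]

mu = statistics.fmean(latencies)      # 19.9
sigma = statistics.pstdev(latencies)  # ~98.5

# Under a normality assumption, mean + 1 sigma is the ~84th percentile.
normal_p84 = mu + sigma                                     # ~118.4
actual_p84 = sorted(latencies)[int(0.84 * len(latencies))]  # 10
med = statistics.median(latencies)                          # 10
```

The normal model predicts ~118 ms at the 84th percentile when the real value is 10 ms, and the mean (19.9) is double the median (10): exactly the trap the standard deviation invites.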
We could then rearrange into a mostly ascending order:
> minResponseTime 9999 (OK=10000 KO=- )
> medianResponseTime 10000 (OK=10000 KO=- )
> percentiles1 10000 (OK=10000 KO=- )
> percentiles2 10000 (OK=10000 KO=- )
> maxResponseTime 10001 (OK=10000 KO=- )
Thanks,
Alex
Assuming each user only sends one request, without pausing:
At t=5, none have completed.
5 requests have been sent in 5 seconds, so a sent/arrival rate of 1, which validates the constantRate(1) input. We asked for an arrival rate of 1 and got it.
Arrival/sent here means Gatling applied that rate to the SUT. There could be a bottleneck between Gatling and the SUT, resulting in none or only some of the requests getting through; typically we can only measure the true arrival rate from within the SUT or at the SUT boundary.
The completion rate would be undefined at t=5, and remains undefined until the first request completes. Because the injection is constant, from t=10 onwards the completion rate will also be 1, assuming the SUT can sustain the load.
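A sketch of that definition (a hypothetical helper, using the same 10 s response time assumption), measuring the completion rate from the first completion onward and treating it as undefined before then:

```python
def completion_rate(complete_times, now):
    """Completions per second, measured from the first completion onward.
    Returns None while the rate is undefined (no completions yet, or a
    zero-length measurement window)."""
    done = sorted(t for t in complete_times if t <= now)
    if not done or now == done[0]:
        return None
    # completions after the first one, over the window since the first one
    return (len(done) - 1) / (now - done[0])

complete_times = [t + 10 for t in range(20)]  # sends at 0..19, 10 s latency
completion_rate(complete_times, 5)    # None: undefined before first completion
completion_rate(complete_times, 20)   # 1.0: steady completion rate
```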
Anything else doesn't mean a lot, as it mixes up different measurements.
If the user injection rate is changing/ramping up then it gets more involved. For example, a 100-second ramp up to 100 users per second, followed by 100 seconds at that constant rate. If we include the ramp in the final throughput value, it will come out below 100 even if the server sustained 100 users per second for the whole constant phase.
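The back-of-envelope arithmetic for that ramp example:

```python
# Ramp 0 -> 100 users/s over 100 s, then 100 users/s constant for 100 s.
ramp_users = (0 + 100) / 2 * 100   # linear ramp injects 5000 users
steady_users = 100 * 100           # constant phase injects 10000 users

overall_rate = (ramp_users + steady_users) / 200  # 75.0 users/s
steady_rate = steady_users / 100                  # 100.0 users/s
```

So a single run-wide figure reports 75 users/s for a server that demonstrably sustained 100, purely because the ramp is averaged in.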
We could use one summary-stats object for each part of the load profile, since both t-digest and HDR Histogram support add/union. Reporting the above should then be possible.
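The shape of that per-phase scheme can be sketched with a toy stand-in for a mergeable sketch (a real t-digest or HDR histogram keeps a compact summary rather than raw samples; the add/union API mirrors what those structures offer):

```python
import bisect

class PhaseStats:
    """Toy stand-in for a mergeable sketch (t-digest / HDR histogram):
    keeps raw sorted samples instead of a compact summary."""
    def __init__(self, samples=None):
        self.samples = sorted(samples or [])

    def add(self, value):
        bisect.insort(self.samples, value)

    def union(self, other):
        # Merging two phases' stats gives the run-wide view on demand.
        return PhaseStats(self.samples + other.samples)

    def percentile(self, p):
        s = self.samples
        return s[min(len(s) - 1, int(p / 100 * len(s)))]

ramp = PhaseStats([120, 150, 400])    # response times during the ramp
steady = PhaseStats([100, 110, 115])  # response times at constant load
overall = ramp.union(steady)          # report per phase and combined
```

One object per phase lets the report show the steady-state percentiles separately from the ramp, while the union still yields the whole-run figures.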