Incorrect req/sec getting simulated


CyberNinja

Sep 1, 2016, 9:14:53 PM
to Gatling User Group
I created a scenario to run with 50 users and a pace of 5 sec, with the objective of achieving 10 req/sec for the block of requests (1, 2 and 3).
But what is actually happening is that the "Global Information" section of the report shows 10 req/sec, and each request (1, 2 and 3) shows 3.3 req/sec.

I was assuming that my setup should have shown 30 req/sec under "Global Information" and 10 req/sec for each request type. What am I missing here?

def workload(step: Int, pacing: Int, duration: Int) = scenario(s"Workload Step $step")
  .during(duration seconds) {
    pace(pacing seconds)
      .exitBlockOnFail {
        feed(conversationIdFeeder)
          .feed(requestIdFeeder)
          .group("Request1") { exec(Request1) }
          .feed(requestIdFeeder)
          .group("Request2") { exec(Request2) }
          .feed(requestIdFeeder)
          .group("Request3") { exec(Request3) }
      }
  }


setUp(
  workload(1, 5, 250).inject(
    nothingFor(10 seconds),
    rampUsers(50) over (50 seconds),
    nothingFor(220 seconds)
  )
).protocols(httpProtocol)


Requests            Total  OK    KO  % KO  Req/s
Global Information  2464   2464  0   0%    10.855
Request1            822    822   0   0%    3.621
Request2            821    821   0   0%    3.617
Request3            820    820   0   0%    3.612
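For reference, the arithmetic behind that expectation can be sketched in plain Scala (a sketch only; the variable names are mine, not Gatling's, and the values are taken from the scenario above):

```scala
// Closed-model throughput math for a pace loop: each of the `users`
// virtual users starts one iteration every `pacingSeconds`, and every
// iteration fires `requestsPerIteration` requests, provided an
// iteration finishes within the pace window.
val users = 50
val pacingSeconds = 5
val requestsPerIteration = 3

val iterationsPerSec = users.toDouble / pacingSeconds          // 10.0
val perRequestTypePerSec = iterationsPerSec                    // 10.0 req/s each
val globalReqPerSec = iterationsPerSec * requestsPerIteration  // 30.0 req/s

println(s"expected: $perRequestTypePerSec req/s per type, $globalReqPerSec global")
```

If the observed per-type rate is only 3.3 req/s, one hedged reading is that each iteration was actually taking around 15 s (50 / 3.3), i.e. longer than the pace window, so pace never got a chance to throttle anything.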

CyberNinja

Sep 5, 2016, 2:21:20 AM
to Gatling User Group
Is there any fundamental limitation in Gatling that I am missing? I added more users, keeping the other settings the same, but the load still isn't increasing.

Stéphane LANDELLE

Sep 5, 2016, 10:50:57 AM
to gat...@googlegroups.com
my2cents: caching

Stéphane Landelle
GatlingCorp CEO


--
You received this message because you are subscribed to the Google Groups "Gatling User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gatling+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

CyberNinja

Sep 5, 2016, 6:16:45 PM
to Gatling User Group
Caching should improve response times, which is not happening. Why would caching affect the load being injected? Can you please add a few more cents and elaborate? The three requests in my simulation are REST APIs.

CyberNinja

Sep 5, 2016, 8:22:25 PM
to Gatling User Group
I tried with disableCaching at the httpProtocol level as well, but no luck. Any advice would be really helpful. Thank you!

CyberNinja

Sep 6, 2016, 10:29:21 PM
to Gatling User Group
I removed during and pace from the scenario and tried to inject 10 users per second using constantUsersPerSec. This should have given me 10 req/sec for each request type. Still, the rate for each request is getting limited to 2 req/sec! I understand that if the combined response time is > 1 sec the RPS would drop, but 90% of response times are less than 1 sec, so there is no reason why it should limit the rate to 2 req/sec. So frustrating. I have spent 3 days on this investigation. As I am out of options now, my manager has advised me to explore another tool, but I really don't want to give up :-(

Attaching simulation.log

def workload(step: Int, pacing: Int, duration: Int) = scenario(s"Workload Step $step")
  .exitBlockOnFail {
    feed(conversationIdFeeder)
      .feed(requestIdFeeder)
      .group("Request1") { exec(Request1) }
      .feed(requestIdFeeder)
      .group("Request2") { exec(Request2) }
      .feed(requestIdFeeder)
      .group("Request3") { exec(Request3) }
  }


setUp(
  workload.inject(
    nothingFor(10 seconds),
    rampUsers(10) over (10 seconds),
    constantUsersPerSec(10) during (200 seconds),
    nothingFor(10 seconds)
  )
).protocols(httpProtocol)
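For comparison with the pace-based scenario, under an open injection model the arrival rate alone sets the request rate. A sketch of the expected numbers for a setup like the one above (plain Scala; the names are mine):

```scala
// constantUsersPerSec(10) starts 10 fresh virtual users every second;
// each runs the scenario once, firing 3 requests (Request1..3).
// In an open model, server response times change concurrency, not the
// arrival rate, so the request rate should hold regardless of latency.
val usersPerSec = 10
val requestsPerUser = 3

val expectedPerRequestType = usersPerSec            // 10 req/s each
val expectedGlobal = usersPerSec * requestsPerUser  // 30 req/s

println(s"expected: $expectedPerRequestType req/s per type, $expectedGlobal global")
```

An observed rate far below this therefore points at something serializing the users before their first request, rather than at the server. That reading is consistent with the slow custom feeder identified later in the thread, but it is an inference, not something the logs here prove.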

Stéphane LANDELLE

Sep 7, 2016, 4:33:00 AM
to gat...@googlegroups.com
Your maths are wrong.
You don't have a constant number of concurrent users as you use "rampUsers". You only get the numbers you expect during the plateau, but the global stats account for the ramp up and the ramp down.

Stéphane Landelle
GatlingCorp CEO



CyberNinja

Sep 7, 2016, 6:43:12 AM
to Gatling User Group
Where has it gone wrong?
I am assuming your response was related to my first scenario (which had during and pace): I was ramping up 50 users over 50 sec and then kept them running for 250 sec with a pace of 5 sec. So 50 users with a pace of 5 sec should have given me 10 req/s for each request. I understand "Global Information" includes ramp up and ramp down; I sent the table only for reference. I see the same stats even for the steady-state duration of 250 sec.

The second scenario, which uses constantUsersPerSec with 10 users, should have fired 10 req/s for each request regardless of server response. All I see is 2 req/s.

CyberNinja

Sep 8, 2016, 7:38:33 PM
to Gatling User Group
Alright, I have figured out the problem. Nothing is wrong with the scenario configuration. One of the feeders in my scenario was calling an external jar, and that call was taking considerable time to generate feed values.
I believe the execution time of feeders isn't logged, so it went unnoticed (please correct me if I am wrong). Now I pre-generate the required feeds and load them with a CSV feeder.
Thank you for your help.
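That pre-generation approach can be sketched like this (plain Scala, standard library only; the file name, row count, and value format are hypothetical stand-ins, and the commented line shows where Gatling's csv() feeder would pick the file up):

```scala
import java.io.PrintWriter

// Pre-generate expensive values once, before the simulation runs,
// instead of computing them inside a custom feeder on every feed call.
val out = new PrintWriter("request-ids.csv")
out.println("requestId") // header row: the key the feeder exposes
(1 to 1000).foreach { i =>
  // stand-in for the expensive work done by the external jar
  val value = f"req-$i%06d"
  out.println(value)
}
out.close()

// In the simulation, load the pre-built file instead:
// val requestIdFeeder = csv("request-ids.csv").circular
```

Because csv() feeders are parsed at simulation start-up, none of this cost lands inside the measured iterations.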


Stéphane LANDELLE

Sep 11, 2016, 3:01:03 PM
to gat...@googlegroups.com
Glad you found it out.
All the built-in feeders that we provide perform data fetching and parsing on simulation start-up, so they can't suffer from such a flaw.
Performance-wise, doing something like tons of SQL requests at runtime would be a bad idea.

Stéphane Landelle
GatlingCorp CEO



CyberNinja

Sep 11, 2016, 7:43:45 PM
to Gatling User Group

Thanks. The external jar was doing heavy encryption.

One question: in the following scenario, I was intending to inject the load in steps of 2 tps (2 tps -> 4 tps -> 6 tps and so on), but it didn't happen accurately. 4/8/12 tps got injected correctly, whereas 2/6/10 tps show hiccups. If Gatling uses AsyncHttpClient, why would this happen?

The scenario had the following type of setup:

def workload(step: Int, pacing: Int, duration: Int) = scenario(s"Workload Step $step")
  .during(duration seconds) {
    pace(pacing seconds)
      .exitBlockOnFail { ...




Stéphane LANDELLE

Sep 12, 2016, 7:58:30 AM
to gat...@googlegroups.com
Could you provide a full gist, please?

Stéphane Landelle
GatlingCorp CEO



CyberNinja

Sep 12, 2016, 7:14:06 PM
to Gatling User Group
Simulation log for the test attached. Scenario definition and setup definition below.

def pVusers: Int = Integer.getInteger("vusers", 20)
def pPacing: Int = Integer.getInteger("pacing", 10) // in seconds
def pTps: Int = Integer.getInteger("tps", 2) // pVusers / pPacing

def pRampTime: Int = Integer.getInteger("rampUpTime", 5) // in seconds
def pStepTime: Int = Integer.getInteger("stepTime", 60) //in seconds 1200
def pNumSteps: Int = Integer.getInteger("numSteps", 20) //number of steps of incremental load
//scenario
def workload(step: Int, pacing: Int, duration: Int) = scenario(s"STEP NO $step").during(duration seconds) {
  pace(pacing seconds)
    .exec(
      exitBlockOnFail {
        feed(requestIdFeeder)
          //.group("ION PRVS GF") {
          .exec(session => {
            session.set("url", spBrokerURL)
          })
          .group("RequestGroup1") { exec(request1) }
          .feed(requestIdFeeder)
          .group("RequestGroup2") { exec(request2) }
          .feed(requestIdFeeder)
          .group("RequestGroup3") { exec(request3) }
      })
}

val stepsParams = 1 to pNumSteps map { i =>
  workload(i, pPacing, (pNumSteps - i + 3) * (pRampTime + pStepTime))
    .inject(
      nothingFor((i - 1) * (pRampTime + pStepTime) seconds),
      rampUsers(pVusers) over (pRampTime seconds),
      nothingFor(pStepTime seconds))
}
println("Load Pattern:" + stepsParams + "\n")

setUp(stepsParams:_*)
.protocols(httpProtocol)
.assertions(
 global.responseTime.max.lessThan(10000)
)
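The staircase this setup is meant to produce can be sanity-checked without running Gatling. A sketch of each step's start offset and scenario duration, mirroring the expressions above with the default parameter values (plain Scala; `schedule` is my name, not part of the simulation):

```scala
val pVusers = 20; val pRampTime = 5; val pStepTime = 60; val pNumSteps = 20

// For step i: its users wait (i - 1) full step periods, ramp in over
// pRampTime, then the during() loop keeps them iterating until the
// scenario duration of (pNumSteps - i + 3) periods expires.
val period = pRampTime + pStepTime // 65 s per step
val schedule = (1 to pNumSteps).map { i =>
  val startOffset = (i - 1) * period
  val duration = (pNumSteps - i + 3) * period
  (i, startOffset, duration)
}

schedule.take(3).foreach { case (i, start, dur) =>
  println(s"step $i: starts at ${start}s, runs for ${dur}s")
}
```

Each step adds pVusers = 20 concurrent users pacing at pPacing = 10 s, i.e. 20 / 10 = 2 extra iterations/sec per step, which is the intended 2 tps staircase described in the message above.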

CyberNinja

Sep 12, 2016, 7:15:20 PM
to Gatling User Group
Sorry, simulation log attached now.

CyberNinja

Sep 20, 2016, 7:54:08 PM
to Gatling User Group
Did anyone have a chance to look into this? I am facing this issue quite frequently.
The issue is: I was intending to inject the load in steps of 2 tps (2 tps -> 4 tps -> 6 tps and so on), but it didn't happen accurately. 4/8/12 tps got injected correctly, whereas 2/6/10 tps show hiccups. If Gatling uses AsyncHttpClient, why would this happen?

Here is a graph from another test.
