Throttle Usage


William Hoad

Sep 5, 2014, 5:14:59 AM
to gat...@googlegroups.com
Hi,
I have recently started using Gatling to run performance tests, ramping up to a peak of 600 rps.

I know I can simply inject these ramps:

constantUsersPerSec(100) during(1 min)
constantUsersPerSec(200) during(1 min)
constantUsersPerSec(300) during(1 min)

This setup does work, but from what I have seen of others, using .throttle is the right way to do it.
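To make the arithmetic of that stepped profile concrete, here is a small plain-Scala sketch (the `SteppedProfile` object and its names are hypothetical helpers, not part of the Gatling DSL) modelling the three steps and the total load they inject:

```scala
// Hypothetical model of the stepped open-model profile above:
// (users per second, duration in seconds) for each step.
object SteppedProfile {
  val steps: Seq[(Int, Int)] = Seq((100, 60), (200, 60), (300, 60))

  // Total users injected over the whole ramp.
  def totalUsers: Int = steps.map { case (rate, secs) => rate * secs }.sum

  // Injection rate at second t into the test (0 once the ramp is over).
  def rateAt(t: Int): Int = {
    val starts = steps.scanLeft(0) { case (acc, (_, secs)) => acc + secs }
    steps.zip(starts).collectFirst {
      case ((rate, secs), start) if t >= start && t < start + secs => rate
    }.getOrElse(0)
  }
}
```

So the three one-minute steps inject 36,000 users in total, with the rate jumping at the 60 s and 120 s marks.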

But the throttle functionality just doesn't seem to work. In the few set-ups that do compile, nothing happens. I have searched every bit of documentation I can find and read many other topics from this group, but just can't get it to produce anything.


(Probably more than you need to spot my mistake, but here is what I have been trying to do.)

object RESTrequest {

    val 1Feeder = ssv("file1.ssv").circular
    val 2Feeder = ssv("file2.ssv").circular
    val 3Feeder = ssv("file3.ssv").circular

    val headers_10 = Map( "Content-Type" -> """application/json""",
                          "Accept-Charset" -> """ISO-8859-1,utf-8;q=0.7,*;q=0.7""",
                          "Keep-Alive" -> """115""",
                          "Connection" -> """keep-alive""",
                          "X-Requested-With" ->  """XMLHttpRequest"""
)
    
    val initiateC2C = feed(1Feeder)
      .feed(2Feeder)
      .feed(3Feeder)
      .exec(http("Post")
        .post("/callback/${username}")
        .basicAuth("${username}", "${password}")
        .headers(headers_10)
        .body(StringBody("""{"var1":"${calling}","var2":"${called}"}""")).asJSON

        )       
  }

val scn = scenario("TestScenario").exec(RESTrequest.initiateC2C)


setUp(scn).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf)

//or perhaps
setUp(scn.inject(atOnceUsers(2000)).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf))
I have tried all sorts of different setUp configurations, with every combination of brackets, but am just not having any luck.

It is likely there is some key bit of the Gatling system I don't understand, but any help would be greatly appreciated.

Stéphane Landelle

Sep 5, 2014, 5:20:30 AM
to gat...@googlegroups.com
setUp(scn).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf)

Won't compile, as you don't define an injection profile.

setUp(scn.inject(atOnceUsers(2000)).throttle(jumpToRps(2), holdFor(10 seconds)).protocols(httpConf))

LGTM. Just note that your throttle will be lifted after 10 seconds, and probably everything will blow up in your face then, as you inject 2,000 users :)

What's your problem exactly? What kind of rps profile do you get, and how is it different from what you expect?





--
You received this message because you are subscribed to the Google Groups "Gatling User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gatling+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Albert Einstein

Sep 5, 2014, 6:23:40 AM
to gat...@googlegroups.com

I also have problems understanding the difference between "inject" and "throttle" and how they interact.

It seems (from trying different combinations) that "inject" is essential and throttle optional. But the documentation implies that "throttle" can be applied directly to a scenario
(see https://github.com/gatling/gatling/blob/master/src/sphinx/general/scenario.rst). This gives me a compilation error: "value throttle is not a member of io.gatling.core.structure.ScenarioBuilder"

When I try:

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)

I get 3 requests fired almost concurrently.

When I try:

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)

I get 3 requests fired one per sec.

Why does the holdFor time (10 vs 100 sec) make a difference, since the whole thing should not take more than 3 sec anyway?

Any insight greatly appreciated.

Stéphane Landelle

Sep 5, 2014, 6:55:38 AM
to gat...@googlegroups.com
I love it when "Albert Einstein" has some problems understanding something I did, serves you right! ;)

I just pushed some documentation improvements, but still:

  • You have to understand that throttle is a bottleneck. You still have to define inject, and inject sufficient load to reach your bottleneck.
  • It seems you're confusing atOnceUsers and constantUsersPerSec
  • Your 2 set-ups should produce the same thing; looks like a bug. Will try to reproduce.
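The first bullet can be captured in one line: a throttle only caps the rate it is given, it never generates load. A toy sketch (hypothetical helper name, not a Gatling API):

```scala
// Throttle is a bottleneck, not a generator: the achieved request rate
// is the injected demand capped at the throttle limit.
def achievedRps(injectedRps: Double, throttleRps: Double): Double =
  math.min(injectedRps, throttleRps)
```

If you inject only 1 user/sec against jumpToRps(100), the throttle is never reached; if you inject 100 users/sec against jumpToRps(2), you observe 2 rps.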

Stéphane Landelle

Sep 5, 2014, 7:01:06 AM
to gat...@googlegroups.com
Mmm, can't reproduce.

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)
and
setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)

give me exactly the same thing, as expected (the scenario has only one request):
  • 1 request at t0
  • 1 request at t0 + 1s
  • 1 request at t0 + 2s
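That release schedule follows directly from the throttle model: all three users arrive at t0 and jumpToRps(1) lets one through per second. A sketch (hypothetical helper, not part of Gatling):

```scala
// atOnceUsers(n) under jumpToRps(r): all n requests are queued at t0
// and released evenly, r per second.
def releaseTimes(users: Int, rps: Int): Seq[Double] =
  (0 until users).map(i => i.toDouble / rps)
```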

Alex Bagehot

Sep 5, 2014, 8:09:38 AM
to gat...@googlegroups.com
if you carry on past t0 + 1 second, the request rate changes once the holdFor expires, which gives a different result for the 2 configs at, say, 20 seconds into the test.
The result (throughput) will be the same between the 2 for t < 10s and t > 100s.
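The effect described here (a looping user running flat out once the throttle is lifted) can be modelled in a few lines; the function name and parameters are assumptions, not from the thread:

```scala
// Cumulative requests from a closed (looping) scenario under a throttle
// that is lifted at holdFor: paced at capRps while the throttle holds,
// then limited only by response time.
def requestsBy(t: Double, holdFor: Double, capRps: Double, respTime: Double): Double =
  if (t <= holdFor) capRps * t
  else capRps * holdFor + (t - holdFor) / respTime
```

With capRps = 1 and holdFor = 10, the count grows by 1 per second for the first 10 s, then by 1/respTime per second, which mirrors the sudden jump from OK=10 to OK=1138 in the console output below (the exact post-throttle rate depends on the real response time).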


Thanks
Alex


================================================================================

2014-09-05 13:00:42 0s elapsed

---- scn_sleep -----------------------------------------------------------------

[ ] 0%

waiting: 3 / running: 0 / done:0

---- Requests ------------------------------------------------------------------

> Global (OK=0 KO=0 )


================================================================================



================================================================================

2014-09-05 13:00:47 5s elapsed

---- scn_sleep -----------------------------------------------------------------

[--------------------------------------------------------------------------] 0%

waiting: 0 / running: 3 / done:0

---- Requests ------------------------------------------------------------------

> Global (OK=5 KO=0 )

> req_sleep (OK=5 KO=0 )

================================================================================



================================================================================

2014-09-05 13:00:52 10s elapsed

---- scn_sleep -----------------------------------------------------------------

[--------------------------------------------------------------------------] 0%

waiting: 0 / running: 3 / done:0

---- Requests ------------------------------------------------------------------

> Global (OK=10 KO=0 )

> req_sleep (OK=10 KO=0 )

================================================================================



================================================================================

2014-09-05 13:00:57 15s elapsed

---- scn_sleep -----------------------------------------------------------------

[--------------------------------------------------------------------------] 0%

waiting: 0 / running: 3 / done:0

---- Requests ------------------------------------------------------------------

> Global (OK=1138 KO=0 )

> req_sleep (OK=1138 KO=0 )

================================================================================



================================================================================

2014-09-05 13:01:02 20s elapsed

---- scn_sleep -----------------------------------------------------------------

[--------------------------------------------------------------------------] 0%

waiting: 0 / running: 3 / done:0

---- Requests ------------------------------------------------------------------

> Global (OK=2326 KO=0 )

> req_sleep (OK=2326 KO=0 )

================================================================================

Alex Bagehot

Sep 5, 2014, 8:37:36 AM
to gat...@googlegroups.com


On Fri, Sep 5, 2014 at 11:23 AM, Albert Einstein <not.n...@gmail.com> wrote:
>
> I also have problems understanding the difference between "inject" and
> "throttle" and how they interact.
>

it is worth understanding where throttling came from:
https://github.com/gatling/gatling/issues/625

it was introduced before injection rates (e.g. constantUsersPerSec()):
https://github.com/gatling/gatling/issues/693

so it was created in a world where only closed models (where N users are injected and they loop) were possible.
the workaround for tools that can only do this (like JMeter, LoadRunner, etc.) when you want to fix the rate is to provide pacing (LoadRunner), a constant throughput timer (JMeter) or throttling (Gatling). see for example http://blazemeter.com/blog/how-use-jmeters-throughput-constant-timer - "...we have a fixed quantity of users but we need to test the application with a different number of requests per minute. Using the Thread Group parameters we can manage the number of users but not the frequency of requests. So, how can we deal with this scenario?..."

the closed workload model, where users loop, is similar to a call center: a fixed number of identifiable users (e.g. "Alice"), or analogous software processes, loop around doing the same or similar tasks, and their throughput is determined by the response time of the tasks they apply to the system. I can't think of many real-world examples where throttle would be applied to that closed workload.

the open workload model, where independent users arrive at a rate (public websites, physical retail shops, API requests, etc.), has the rate pre-existing, so there is generally no need to throttle in this case either, as the injection rate should provide it. there are use cases, though, where to make tests simpler, or to hit some complex test constraint, as in Joerg's case, you could throttle per scenario or globally.


the question then may be whether you should inject open (constantUsersPerSec) or closed (atOnceUsers). there are several threads on that topic.
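The closed/open distinction above boils down to two formulas; this sketch (hypothetical helper names) shows why pacing or a throttle is needed to fix the rate in a closed model:

```scala
// Closed model: throughput is a consequence of response time.
// n looping users each complete 1/(respTime + thinkTime) iterations/sec.
def closedRps(users: Int, respTime: Double, thinkTime: Double): Double =
  users / (respTime + thinkTime)

// With pacing (LoadRunner), a constant-throughput timer (JMeter) or a
// throttle (Gatling), each iteration is stretched to a fixed period,
// so the rate becomes a test input instead of a consequence.
def pacedRps(users: Int, pacingSeconds: Double): Double = users / pacingSeconds
```

Note that in the closed formula the rate drops as the system slows down; in the paced one it stays fixed as long as responses are faster than the pacing period.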

Thanks,
Alex

Albert Einstein

Sep 5, 2014, 8:40:08 AM
to gat...@googlegroups.com
Here is the output I get for setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf):

================================================================================
2014-09-05 13:29:55                                           0s elapsed
---- Scenario Name -------------------------------------------------------------
[                                                                          ]  0%

          waiting: 3      / running: 0      / done:0    
---- Requests ------------------------------------------------------------------
> Global                                                   (OK=0      KO=0     )

================================================================================


================================================================================
2014-09-05 13:29:55                                           0s elapsed
---- Scenario Name -------------------------------------------------------------
[##########################################################################]100%
          waiting: 0      / running: 0      / done:3    
---- Requests ------------------------------------------------------------------
> Global                                                   (OK=3      KO=0     )
> Post                                                     (OK=3      KO=0     )
================================================================================

Simulation finished
Generating reports...
Parsing log file(s)...
Parsing log file(s) done

================================================================================
---- Global Information --------------------------------------------------------
> request count                                          3 (OK=3      KO=0     )
> min response time                                     47 (OK=47     KO=-     )
> max response time                                     48 (OK=48     KO=-     )
> mean response time                                    47 (OK=47     KO=-     )
> std deviation                                          0 (OK=0      KO=-     )
> response time 95th percentile                         48 (OK=48     KO=-     )
> response time 99th percentile                         48 (OK=48     KO=-     )
> mean requests/sec                                  15.54 (OK=15.54  KO=-     )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms                                             3 (100%)
> 800 ms < t < 1200 ms                                   0 (  0%)
> t > 1200 ms                                            0 (  0%)
> failed                                                 0 (  0%)
================================================================================


and for

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)

================================================================================
2014-09-05 13:28:34                                           0s elapsed
---- Scenario Name -------------------------------------------------------------
[                                                                          ]  0%

          waiting: 3      / running: 0      / done:0    
---- Requests ------------------------------------------------------------------
> Global                                                   (OK=0      KO=0     )

================================================================================


================================================================================
2014-09-05 13:28:36                                           2s elapsed
---- Scenario Name -------------------------------------------------------------
[##########################################################################]100%
          waiting: 0      / running: 0      / done:3    
---- Requests ------------------------------------------------------------------
> Global                                                   (OK=3      KO=0     )
> Post                                                     (OK=3      KO=0     )
================================================================================

Simulation finished
Generating reports...
Parsing log file(s)...
Parsing log file(s) done

================================================================================
---- Global Information --------------------------------------------------------
> request count                                          3 (OK=3      KO=0     )
> min response time                                     22 (OK=22     KO=-     )
> max response time                                     39 (OK=39     KO=-     )
> mean response time                                    28 (OK=28     KO=-     )
> std deviation                                          7 (OK=7      KO=-     )
> response time 95th percentile                         37 (OK=37     KO=-     )
> response time 99th percentile                         38 (OK=38     KO=-     )
> mean requests/sec                                   1.45 (OK=1.45   KO=-     )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms                                             3 (100%)
> 800 ms < t < 1200 ms                                   0 (  0%)
> t > 1200 ms                                            0 (  0%)
> failed                                                 0 (  0%)
================================================================================
BTW, I am using 2.0.0-RC2.

Alex Bagehot

Sep 5, 2014, 8:42:17 AM
to gat...@googlegroups.com
ok so we forgot to check whether your scenario loops?
mine:

val scn_sleep_closed = scenario("scn_sleep").exec(forever() { req_sleep })

note the use of forever(){...}

?

Stéphane Landelle

Sep 5, 2014, 8:43:12 AM
to gat...@googlegroups.com
My test didn't loop.

Which version do you use?
Could you share your Simulation, please?

Albert Einstein

Sep 5, 2014, 9:13:50 AM
to gat...@googlegroups.com
Many thanks for the responses.

I am not looping (not knowingly, at least).
The simulation is listed at the top of the thread.
I am using 2.0.0-RC2.

RE: the behaviour being different for holdFor of 10 vs 100 seconds: there are only 3 requests being sent at 1 rps, so the whole thing should take place within the first 10 seconds in both cases. Am I missing something?

I am getting the impression I should be using only inject rather than inject + throttle, as I am trying to run a ramp test with steps of constant request rates.

Stéphane Landelle

Sep 5, 2014, 9:18:40 AM
to gat...@googlegroups.com
Could be related to this: https://github.com/gatling/gatling/issues/2132
Could you upgrade to RC3, please?

Albert Einstein

Sep 5, 2014, 10:23:07 AM
to gat...@googlegroups.com
Upgraded to RC3, but it did not make a difference. Still getting different behaviour for the two values of holdFor.

Stéphane Landelle

Sep 5, 2014, 10:35:19 AM
to gat...@googlegroups.com
The simulation is listed at the top of the thread.

Well, I'm sure it's not. What you first posted doesn't even compile (variable names such as 1Feeder are illegal; they can't start with a number).
Could you share the exact Simulation, please?

You can send privately if that's an issue for you.

Alex Bagehot

Sep 5, 2014, 10:58:23 AM
to gat...@googlegroups.com
yep, apologies both, I got mixed up with the other example on this thread.
I couldn't reproduce either.
have you tried the following?

.inject(constantUsersPerSec(1) during(3 seconds) )

which worked for me.

Unless your system has 3 users/requests arriving at exactly the same time, and there is a component in the real system that is not in the SUT (system under test) that throttles those requests to 1 rps, then the above may read and work better.

clearly there's a defect there somewhere, so it should continue to be followed up with Stéphane


Albert Einstein

Sep 5, 2014, 11:36:10 AM
to gat...@googlegroups.com
OK I can't fool you :-)
Here is a simulation that I just ran that displays the behaviour I mentioned with RC3:
--------------------------------------------------------------------------------
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import io.gatling.core.controller.throttle._

class TestThrottle extends Simulation {
  

  object RESTrequest {


    val headers_10 = Map( "Content-Type" -> """application/json""",
                          "Accept-Charset" -> """ISO-8859-1,utf-8;q=0.7,*;q=0.7""",
                          "Keep-Alive" -> """115""",
                          "Connection" -> """keep-alive""",
                          "X-Requested-With" ->  """XMLHttpRequest"""
)
   
    val initiateRequest = exec(http("Post")
        .post("/firstLevel/user1")
        .basicAuth("user1", "1234")
        .headers(headers_10)
        .body(StringBody("""{"firstItem":"Item1","secondItem":"Item2"}""")).asJSON

        )       
  }
 
  val httpConf = http
    .baseURL("http://127.0.0.1:8080/external/api")
    .acceptHeader("application/json")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.14) Gecko/20110302 Oracle/3.6-4.0.1.el5_6 Firefox/3.6.14")

  //Now, we can write the scenario as a composition
  val scn = scenario("Scenario Name").exec(RESTrequest.initiateRequest)

 setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(100 seconds)).protocols(httpConf)
}

--------------------------------------------------------------------------------

William Hoad

Sep 5, 2014, 11:41:26 AM
to gat...@googlegroups.com
Thanks again for your help guys

Regrettably, the site and APIs we are hitting are all within our internal system, so we can't put them out (and if we did, you wouldn't get anything back). Albert here just put up a scrubbed version of what we are trying to achieve. This works fine using just the injects (of however many requests we want to send), and we can create the flow we need within those injections. I believe we are implementing an open workload model, where we need to check how many concurrent users our service can handle using ramps. We pull the user data out of .ssv files to fill in the user details each time (the next packet is just the next entry down in those files).

It would be really good to know what is happening with these throttles, but I think you have pointed us towards the solution for today's problem :) We want to keep using the platform, though, so we obviously want to find out what is happening.

The Gatling docs are good, clear and concise, but just lack some of the finer details when it comes to using different aspects together.

thanks again

Stéphane Landelle

Sep 5, 2014, 11:55:25 AM
to gat...@googlegroups.com
I changed your sample to:

package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import io.gatling.core.controller.throttle._

class TestThrottle extends Simulation {

object RESTrequest {

val headers_10 = Map("Content-Type" -> """application/json""",
"Accept-Charset" -> """ISO-8859-1,utf-8;q=0.7,*;q=0.7""",
"Keep-Alive" -> """115""",
"Connection" -> """keep-alive""",
"X-Requested-With" -> """XMLHttpRequest""")

val initiateRequest = exec(http("Post")
 .get("/")
//.get("/firstLevel/user1")
//.basicAuth("user1", "1234")
.headers(headers_10)
//.body(StringBody("""{"firstItem":"Item1","secondItem":"Item2"}""")).asJSON
)
}

val httpConf = http
.acceptHeader("application/json")
.doNotTrackHeader("1")
.acceptLanguageHeader("en-US,en;q=0.5")
.acceptEncodingHeader("gzip, deflate")
.userAgentHeader("Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.14) Gecko/20110302 Oracle/3.6-4.0.1.el5_6 Firefox/3.6.14")

//Now, we can write the scenario as a composition
val scn = scenario("Scenario Name").exec(RESTrequest.initiateRequest)

setUp(scn.inject(atOnceUsers(3))).throttle(jumpToRps(1), holdFor(10 seconds)).protocols(httpConf)
}

I get the proper expected behavior, whatever the holdFor duration: a constant 1rps from t0 to t2s:

[Inline image 1: graph showing a constant 1 rps from t0 to t0 + 2s]

