[Gatling 2] Best way to ramp up


Stefan Magnus Landrø

Jul 3, 2013, 3:28:16 AM
to gat...@googlegroups.com
Hi, 

What's the best way to warm up the application server using Gatling? As in, for one minute have 2 concurrent users, then the next minute have 4 concurrent users, etc.

Cheers 

Stefan

Stéphane Landelle

Jul 3, 2013, 3:40:07 AM
to gat...@googlegroups.com
Hi,

Tricky question, it really depends on your application.
Does your application use a cache? If so, maybe one iteration is enough to load the whole cache. Or not.
Does your application run on a JVM? (Well, I know it does :) ) If so, the JIT uses heuristics to determine which parts of your code to compile. On HotSpot, the default CompileThreshold is 10,000.

So really, it depends. But it's more a matter of the number of iterations than the duration of the ramp.
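To make that concrete, one way to get those iterations in is a separate warm-up scenario that loops before the measured one starts. A rough sketch with the Gatling 2-era DSL (the endpoint, the iteration count, and the `realScenario` name are made up for illustration, not from this thread):

```scala
// Sketch only: loop a couple of users through many iterations so the
// JIT and any caches are warm before the measured scenario starts.
val warmUp = scenario("warm-up")
  .repeat(5000) {
    exec(http("warm-up request").get("/some/hot/path"))
  }

setUp(
  warmUp.inject(atOnce(2 users)),
  // delay the real load until the warm-up has had time to finish
  realScenario.inject(nothingFor(2 minutes), atOnce(10 users))
)
```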

Stéphane


2013/7/3 Stefan Magnus Landrø <stefan...@gmail.com>

Stefan

--
You received this message because you are subscribed to the Google Groups "Gatling User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gatling+u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Stefan Magnus Landrø

Jul 7, 2013, 5:18:35 PM
to gat...@googlegroups.com
Well, actually I was probably looking for this:

rampRate(10 usersPerSec) to(20 usersPerSec) during(10 minutes)

The thing is, we're terminating SSL in the application server, and the SSL handshake is pretty heavy, so we need to make sure we don't overload it in the beginning, and that we use a connection pool.

Is it correct to say, referring to the above code, that in the beginning we'll be running 10 users per sec, after 5 minutes 15 users per sec, and after 10 minutes 20 users per sec?

Also, we're struggling to limit the number of connections opened to the server, since it can't handle more than about 1000 concurrent connections (in production we'll be using a load balancer to handle this kind of issue).

Is there a way in gatling to see how many connections are currently being used?

It would be nice to have a debug statement somewhere around line 993 in NettyAsyncHttpProvider.

Also, it would be nice to see how many connections are currently in the pool for a certain URI. Maybe the easiest is to reduce the idle connection timeout: since none of them will be idle, we'll see the debug statement in IdleChannelDetector more frequently?

Final question: will a user use the same connection for the whole scenario, or will a new connection be checked out (opened)/checked in on every request? What about pauses?

Cheers Stefan 





2013/7/3 Stéphane Landelle <slan...@excilys.com>




--
BEKK Open
http://open.bekk.no

TesTcl - a unit test framework for iRules

Stéphane Landelle

Jul 8, 2013, 4:43:26 AM
to gat...@googlegroups.com
Gatling is about users and scenarios are workflows, so 10 usersPerSec means that you'll be injecting 10 new users every second.
By default, each user has its own connections (and so its own connection pool).

Please read this. If your use case is still about SOAP webservices, you probably have to share connections.
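For reference, connection sharing is switched on at the protocol level. A minimal sketch with the Gatling 2-era API (the base URL is a placeholder):

```scala
// Sketch: one shared connection pool for all virtual users,
// instead of the default pool-per-user behaviour.
val httpConf = http
  .baseURL("https://target.example.com")
  .shareConnections
```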

Cheers,

Stéphane


2013/7/7 Stefan Magnus Landrø <stefan...@gmail.com>

Nicolas Rémond

Jul 8, 2013, 4:47:43 AM
to gat...@googlegroups.com
Is it correct to say, referring to the above code, that in the beginning we'll be running 10 users per sec, after 5 minutes 15 users per sec, and after 10 minutes 20 users per sec?

Absolutely.
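Since the ramp interpolates linearly, the instantaneous injection rate at time t is from + (to - from) * t / duration. A quick sketch in plain Scala (function name ours, just to show the arithmetic for the 10-to-20 users/sec ramp over 10 minutes):

```scala
// Linear interpolation of the injection rate over the ramp.
def rateAt(from: Double, to: Double, durationSec: Double, tSec: Double): Double =
  from + (to - from) * tSec / durationSec

rateAt(10, 20, 600, 0)    // 10.0 users/sec at the start
rateAt(10, 20, 600, 300)  // 15.0 users/sec at the 5-minute mark
rateAt(10, 20, 600, 600)  // 20.0 users/sec at the end
```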

Stéphane Landelle

Jul 8, 2013, 4:51:45 AM
to gat...@googlegroups.com
"running 10 users per sec" has no meaning. You'll be starting 10 new users per sec, but the number of concurrent users running at a given time depends on the scenario duration.


2013/7/8 Nicolas Rémond <nicolas...@gmail.com>

Nicolas Rémond

Jul 8, 2013, 4:52:57 AM
to gat...@googlegroups.com
Right, it's not the number of running users, but the admission rate of new users.

Stefan Magnus Landrø

Jul 8, 2013, 5:41:58 AM
to gat...@googlegroups.com
Yep, sorry, my wording was wrong - I meant starting, of course.
However, it doesn't make sense to me that a user has a connection pool, since a user is not performing several requests concurrently, but rather sequentially - or am I wrong? What would make sense, though, is that a user opens a persistent connection and reuses it for all requests against the same host/port.

  




2013/7/8 Stéphane Landelle <slan...@excilys.com>

Stéphane Landelle

Jul 8, 2013, 5:46:38 AM
to gat...@googlegroups.com
That's more of an implementation detail. Using keep-alive with only sequential requests is like having one connection pool of size 1 per user.
But someday we'll maybe be able to implement concurrent connections (resource fetching, AJAX, etc.). Browsers use connection pools (usually ~6 concurrent connections per host).


2013/7/8 Stefan Magnus Landrø <stefan...@gmail.com>

Stefan Magnus Landrø

Jul 8, 2013, 6:16:05 AM
to gat...@googlegroups.com
Yes, agreed.

So I presume that, with shareConnections(), the user checks a connection out of/into the pool for every request?


2013/7/8 Stéphane Landelle <slan...@excilys.com>

Stéphane Landelle

Jul 8, 2013, 6:25:01 AM
to gat...@googlegroups.com
Yep

Stefan Magnus Landrø

Jul 8, 2013, 6:52:03 AM
to gat...@googlegroups.com
Perfect. I looked at the AHC code last night and, if I understand correctly, if maximumConnectionsPerHost is exceeded, a new connection is created anyway, but not checked into the pool. That's kinda strange, isn't it?


2013/7/8 Stéphane Landelle <slan...@excilys.com>

Stéphane Landelle

Jul 8, 2013, 6:58:40 AM
to gat...@googlegroups.com
maximumConnections and maximumConnectionsPerHost are misleading. They are about the max number of connections stored in the pool, not the max number of connections alive.

We (the guys working on AHC) would like to redesign the AHC connection pools in AHC 2. I'm afraid I won't be able to work on the Netty part before September... One of the difficult tasks is deciding what to do when the limit is reached. I would favor a Connection refused exception, but some users would like the framework to queue requests until a connection becomes available. Complex stuff... Please create a thread on the AHC mailing list if you're interested in this topic.

Stefan Magnus Landrø

Jul 8, 2013, 10:07:41 AM
to gat...@googlegroups.com
The best would probably be to have it configurable in AHC. Load balancers typically do some buffering to protect the backend, but that's probably hard to do. I'll create a thread on the topic.


2013/7/8 Stéphane Landelle <slan...@excilys.com>

Pedro Vilaça

Aug 8, 2013, 6:41:43 AM
to gat...@googlegroups.com
Hi guys,

Related to this topic: could you tell me if it's possible to configure the test to guarantee that we have a fixed number of requests in flight?

Example: 150 requests in flight -> when we receive 10 responses -> send 10 new requests

We (the guys working on AHC) would like to redesign AHC connection pools in AHC 2. I'm afraid I won't be able to work on the Netty part before september... One of the difficult task is to decide on what to do when the limit is reached. I would favor a Connection refused exception, but some users would like the framework to pool the requests until a connection becomes available. Complex stuff... Please create a thread on the AHC ML if you're interested in this topic.

With this, do you want to simulate the behaviour of a load balancer with a rate limit, where, when that limit is reached, requests are sent to a queue on the load balancer?

Thanks,
Pedro

Stéphane Landelle

Aug 9, 2013, 3:24:47 AM
to gat...@googlegroups.com
Not supported currently.


2013/8/8 Pedro Vilaça <pmvi...@gmail.com>

Pedro Vilaça

Aug 9, 2013, 6:14:34 AM
to gat...@googlegroups.com
Any plans to implement it? Do you already have an issue for that?

Cheers,
Pedro

Spencer Chastain

Aug 13, 2013, 10:04:18 AM
to gat...@googlegroups.com

First off, for Stephane and crew - thank you for Gatling. I started using it about a month ago - I have no previous experience with load testing or Scala (though I've torn through the Odersky book) - and it has been easy and pretty pleasant to use. I even submitted a pull request yesterday to support PATCH requests :)

The in-flight question is also of interest to me, as that's what I've been using Gatling to profile: how many in-flight users different services can handle under different server configurations. I have a tiny sim that I tweak and run a whole lot per service to narrow in on my max load numbers. If Gatling had a mode where you could point it at a URL and have it figure out the max request rate, that'd make my life a lot better. I know that's not really what Gatling is built to do - but Gatling is capable, and it is useful information. I started using Gatling to narrow in on the data because I needed to learn it for a broader functional load test I'm implementing, which will re-use a lot of this.

Also, with regard to injection rates, I'd really like something that lets me say "let all users complete before injecting more users". This is important for connection pool warm-up, especially if you're limiting the number of connections in your pool. For example, I only allow 100 connections per server in my pool. So the start of my simulation looks like:

atOnce(100 users), nothingFor(10 seconds), constantRate(20 usersPerSec) during (2 minutes)

I'd really like to replace that "nothingFor" with a "waitForUsers" or some such.  I can think of some other instances where that might be useful as well.

Thanks!

Stéphane Landelle

Aug 27, 2013, 8:34:06 AM
to gat...@googlegroups.com
Interesting idea. Will think about it (not trivial).

BTW, thanks for your help on this ML.


2013/8/13 Spencer Chastain <secha...@gmail.com>