Ramping Users


Steven Zaluk

Mar 10, 2014, 1:55:47 PM
to gat...@googlegroups.com
What is the proper inject syntax to do the following:

ramp to 100 users over 60 seconds, then maintain that load for 3 minutes?

I tried this but I don't think it is correct:

inject(rampUsers(100) over (60), constantUsersPerSec(100) during (180))

I am using 2.0.0-SNAPSHOT.

Thanks,
Steve

Stéphane Landelle

Mar 10, 2014, 4:02:33 PM
to gat...@googlegroups.com
What do you mean by "maintain that load"?
Do you mean the number of users, or the number of requests per second?



Steven Zaluk

Mar 11, 2014, 6:37:18 AM
to gat...@googlegroups.com

Sorry, yes I meant number of users.

Stéphane Landelle

Mar 11, 2014, 7:08:01 AM
to gat...@googlegroups.com
Maybe I'm completely missing the point, but I've always felt that this way of creating load didn't make sense (which is why Gatling doesn't have a built-in for controlling the number of alive users yet).

I mean, I get:
  • Starting a given number of new users, or starting them at a given rate: you know how many users you expect to arrive on your web site / log in to your application => that's inject
  • Having users browse for a given duration => that's loops, such as during
  • Shaping the number of requests per second to match your observed/expected analytics => that's throttling (new in 2M4); a quick sketch below.
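Concretely, something like this (an untested sketch; the numbers are made up, and the base URL and requests are placeholders):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class InjectAndThrottleSketch extends Simulation {
  val httpConf = http.baseURL("http://example.com") // placeholder
  val scn = scenario("browse").exec(http("home").get("/"))

  setUp(
    scn.inject(rampUsers(100) over (60 seconds)) // arrivals => inject
  ).throttle(
    reachRps(100) in (30 seconds), // rps shaping => throttling
    holdFor(3 minutes)
  ).protocols(httpConf)
}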
But how do you get a number of alive users in the first place? Did you monitor the number of sessions on your server? But then, you have to tell apart users that stay longer from users that leave and are replaced by new ones.

You could do that the JMeter way (at least, that's how I know it's usually done with that tool): wrap your scenario in a loop, and clear all user state (like cookies and cache) to make the virtual user look like a new one.
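For illustration, a rough sketch of that in the current DSL (untested, and assuming the flushCookieJar and flushHttpCache session actions are available in your snapshot; the base URL and requests are placeholders):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class JMeterStyleSketch extends Simulation {
  val httpConf = http.baseURL("http://example.com") // placeholder

  // loop each vuser for the test duration, wiping state between
  // iterations so it looks like a new user each time round
  val scn = scenario("looping-vusers")
    .during(10 minutes) {
      exec(http("home").get("/"))
        .exec(flushCookieJar) // clear cookies...
        .exec(flushHttpCache) // ...and the cache
    }

  setUp(scn.inject(atOnceUsers(100))).protocols(httpConf)
}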

So, basically, I'd like some arguments to convince me this does make sense and we should implement it for 2M5.

Cheers,

Stéphane

James Pickering

Apr 24, 2014, 9:10:22 AM
to gat...@googlegroups.com
One advantage of the JMeter-style approach is that start times for scenarios can be randomised to a degree. If your scenarios are starting at regular intervals, then they won't step on each other's toes, which could mean you miss certain types of issues, like race conditions and deadlocks, that only occur when they crash into each other.

That said, it's easy enough to hack together something like the JMeter approach using code that's already there. We're using something like:

scenario(myScenario.name)
  .during(1 hour) { // run for an hour
    pace(20 seconds, 30 seconds) // start an iteration every 20 to 30 seconds
      .exec(myScenario)
  }

So it may not be necessary to add any specific code in 2M5 to support this use case - or related use cases like LoadRunner's vuser_init, Action, vuser_end execution flow.

And yes, that's why I submitted the patches that fixed exec(scenario) and introduced pace.

Nicolas Rémond

Apr 24, 2014, 9:14:21 AM
to gat...@googlegroups.com
What if the first element of your scenario is a pause element of a random duration? Won't that do the same?
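i.e. something like this (untested; myScenario is a placeholder chain, inside a simulation with the usual imports):

// a pause of random duration (here uniform between 0 and 30 seconds)
// as the first element, to spread out the vusers' start times
val scn = scenario("spread-starts")
  .pause(0 seconds, 30 seconds) // random initial offset
  .exec(myScenario)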

Regards
nicolas

James Pickering

Apr 24, 2014, 9:22:52 AM
to gat...@googlegroups.com
Yes, that'd work too.

The approach we went with probably had more to do with the fact that we were migrating from JMeter, and management had grown used to requirements and reporting in terms of vusers.

alex bagehot

Apr 24, 2014, 12:10:00 PM
to gat...@googlegroups.com
Hi,

I think it is worth taking time over this.

If you read the following paper:

and other books on the subject, you'll find they typically describe 2 or 3 types of workload model: open, partly open, and closed workloads.



I don't have any numbers, but my guess is that most systems under test with tools like these have open or partly open workloads. Of the 10 sites in the paper above, only 2 were found to be definitely closed (clearly not a large enough sample!! but it would appear to be reasonable).

It turns out that a lot of popular load tools, like JMeter, The Grinder and LoadRunner, model closed systems unless you apply the standard tweaks: there is a virtual user (implemented as a thread) that loops around, starting a new session on each iteration.

Others, like httperf, model open workloads.
And Gatling is capable of modelling all three.


To convert a tool that can only model a closed workload into one that can model open or partly open workloads, the other tools mostly add a feature called pacing, which enforces a constant start (arrival) rate for the looping users.
This works well if you get the pacing right, but from my own experience it can be very difficult to get right if you want an accurate test in a highly variable environment like a (retail) web site.

  • If you set the pacing too high, you have to create more threads than necessary, which can cause scheduling issues on the load generator, or require more generator servers.
  • If you set it too low, you risk running out of pacing time and delaying the start of the next vuser iteration. If session start times are delayed, problems like coordinated omission can occur.
  • If your (session duration / ramp time) ratio is too high, the load can be uneven: if the start of a session is a login and the end an order confirmation, then all the logins will happen at the same time, and some time later all the order confirmations will happen at the same or similar times.
  • Similarly to the previous point, if the start (and therefore end) times of the vuser sessions are uneven, load can be starved as vusers stop doing work when their sessions end and wait for the pacing time to complete.

These problems can be avoided, but I have seen them happen in highly experienced teams.


For Gatling testing open or partly open systems, pacing is not needed, as Gatling can already provide an arrival rate out of the box. What is not present is a guarantee about the inter-arrival time distribution: there is already an exponential think time for inter-request pauses, but the same needs to be provided for the injection rate, so that we get overlapping requests with the right mean arrival rate.
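For reference, drawing such inter-arrival times is a one-liner in plain Scala (this is just the inverse-CDF sampling maths, not part of the Gatling DSL):

import scala.util.Random

// For Poisson arrivals at mean rate lambda (users/second), the gaps
// between arrivals are exponentially distributed with mean 1/lambda;
// inverting the exponential CDF turns a uniform sample into such a gap.
def nextInterArrivalSeconds(lambda: Double): Double =
  -math.log(1.0 - Random.nextDouble()) / lambda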


If your system is closed, then injecting the number of concurrent users and looping them with during is the way to model it. A call center data entry system would be a good example: the operators loop round the scenarios as they take different calls. If the data entry system slows down, the applied load backs off, as the operators cannot proceed to the next steps in their workflows.
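In the current DSL that closed model looks roughly like this (untested; takeCall and the numbers are placeholders):

import io.gatling.core.Predef._
import scala.concurrent.duration._

class CallCenterSketch extends Simulation {
  val takeCall = pause(1 second) // placeholder: one call's data entry chain would go here

  // a fixed population of 50 operators, all present from the start,
  // each looping through calls for the duration of the test
  val operators = scenario("call-center")
    .during(3 minutes) {
      exec(takeCall)
    }

  setUp(operators.inject(atOnceUsers(50)))
}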

In terms of the DSL, for me the current approach is a breath of fresh open air, with a clear separation between the scenario and the test parameters.

If pacing makes it easier for people to migrate from other existing tools, then it's a good idea to provide it in the DSL, but it needs some careful words around why the inject method is recommended when the system you are testing is (partly) open, i.e. possibly most systems being tested.

Hope this has been constructive, and that there are no major errors!

Thanks,
Alex

cee...@gmail.com

Apr 25, 2014, 3:56:59 AM
to gat...@googlegroups.com
Some detail on why exponential arrivals may benefit from being the default, over and above provoking overlapping requests / contention:

The postscript comments emphasise open workloads, the author's assumed workload for "internet or web" applications in the analysis in the post.

Thanks,
Alex