I know Gatling does a lot of the calculating for you, but the basic math isn't hard.
Let's say 1 user needs 60 seconds on average to complete your scenario, and that it will perform 1 hit in that time.
In order to put 1 hit per second on your server in that case you need 60 users. Agreed?
If that same user does 10 hits in those 60 seconds, it only takes 60/10 = 6 users to achieve the same result. So if you want to put 10,000 requests/sec on that same site with the 10-hit scenario, it will take 6 × 10,000 = 60,000 users.
Plug in your own numbers here.
As for ramp-up: it depends on the type of test you're doing. If you know the limits of your application well, I usually go for about 30 minutes. If you don't, or you think it's likely you will reach those limits, a break test needs to be performed first to get a ballpark figure. In that case, taking it slow is advisable: ramping fast makes it harder to see exactly when the first bottleneck is reached.
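Worked through in Gatling's Scala DSL, that arithmetic might look like the closed-model injection below. This is only a sketch: the class name, base URL, scenario, and 30-minute ramp are illustrative placeholders, and the syntax assumes a recent Gatling 3.x release.

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ClosedModelSimulation extends Simulation {

  // Placeholder protocol and scenario -- substitute your own.
  val httpProtocol = http.baseUrl("http://system-under-test.invalid")

  // Assume the scenario takes ~60 s per user and performs 10 hits,
  // i.e. each user contributes 1/6 request per second on average.
  val scn = scenario("TenHitJourney")
    .exec(http("home").get("/"))
    .pause(6)

  setUp(
    // 10,000 rps / (1/6 rps per user) = 60,000 users,
    // ramped slowly so the first bottleneck is easy to spot.
    scn.inject(rampUsers(60000).during(30.minutes))
  ).protocols(httpProtocol)
}
```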
Now... 10,000 hits/sec is quite a lot of load. Are you sure that's realistic? Do you feel the infrastructure you are using is supposed to handle that much?
I have my scenario set up and now I want to achieve a load of 10,000 requests per second. How do I do that? How do I ramp up, and how many users should I use? I'm not exactly sure how the users and ramp features work...
--
You received this message because you are subscribed to the Google Groups "Gatling User Group" group.
Hi Floris, Lira,
It may be worth considering whether concurrent users is the best way to model this.
Gatling allows us to drive throughput at the user level. It can, for example, inject 2,000 new users per second; assuming each does 5 requests on average, that produces the 10k rps.
You need to step back and ask a couple of questions before determining whether user throughput or user concurrency is the right model:
Is my system an administrative type system like a call center with a fixed number of users?
If the site is slow responding does this block or delay the arrival of the next user session to your site?
What is the average session length in pages?
If the answers are: no, no and <7 then user throughput may be the best model for your site.
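If those answers do point to user throughput, an open-model injection along these lines is one way to express it in Gatling's Scala DSL. Again a sketch only: the rates and durations are illustrative, the syntax assumes a recent Gatling 3.x release, and `scn` / `httpProtocol` stand for a scenario and protocol defined as usual.

```scala
setUp(
  scn.inject(
    // Ramp gently so the first bottleneck is visible...
    rampUsersPerSec(1).to(2000).during(30.minutes),
    // ...then hold 2,000 new users/s; at ~5 requests per user
    // that works out to roughly 10,000 requests/s.
    constantUsersPerSec(2000).during(15.minutes)
  )
).protocols(httpProtocol)
```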
Thanks
Alex
Hi Stéphane,
It's a good point.
If I work that logic through, though, "users per second" injection can only produce an equal or greater number of unique user sessions (session objects) than a static closed loop of vusers, which is most easily seen when the SUT starts to slow down significantly.
The number of session objects in memory at any one time is typically a function of:
1. the arrival rate of new users (or whatever request triggers session object creation),
2. the total duration the user interacts with the system, and
3. the idle timeout of the session object container.
There is no concurrent user input parameter there.
The first one is most easily / naturally modelled by an open workload.
The second two are unrelated to whether you choose open or closed.
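That relationship can be sketched with Little's law. The numbers below are made up purely for illustration, not measurements:

```scala
// Steady-state estimate:
// sessions in memory ≈ arrival rate × (active time + idle timeout)
val arrivalRate = 2000.0    // new users per second (the open-model input)
val activeTime  = 60.0      // seconds each user interacts with the system
val idleTimeout = 30 * 60.0 // container session timeout, e.g. 30 minutes
val inMemory    = arrivalRate * (activeTime + idleTimeout)
// ≈ 3,720,000 session objects -- and no "concurrent users" parameter appears
```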
However, that is not to say you shouldn't validate that the test produces the number of session objects you expect from production measurements or other sources.
Given you have designed the DSL so that the scenarios are orthogonal to the injection method, it should be easy to demonstrate this (or prove it wrong).
So if anything, consideration of session objects leads to modelling an open workload, to guarantee that the rate of session-object creation is maintained uncoordinated with the system being tested.
WDYT?
thanks,
Alex