You received this message because you are subscribed to the Google Groups "Gatling User Group" group.
Hm, there is no attachment ;-)
It's not really standard to test a system like this. Normally there is always a ramp-up/break test first, with a gradual ramp-up to some point well above the expected maximum peak load, to find out where the limits of the system are, before we start doing more elaborate things.
This particular scenario will start quite suddenly, and if the system under test doesn't handle it, you have no real way of determining which resource limits you are running up against. Sure, your test may match reality better, but its predictive value is probably going to be limited to a simple yes/no binary answer to the question "can the system handle this?"
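For what it's worth, the gradual ramp-up I mean can be pictured as a simple stepped profile that climbs past the expected peak. The numbers and the helper below are purely illustrative (this is not Gatling DSL, just a sketch of the idea):

```python
# Hypothetical helper: build a stepped ramp profile for a break test.
# Levels and durations are made up for illustration.
def ramp_profile(peak_users, steps, step_seconds):
    """Return (start_second, concurrent_users) pairs climbing
    linearly from peak_users/steps up to peak_users."""
    return [(i * step_seconds, peak_users * (i + 1) // steps)
            for i in range(steps)]

# Ramp to 150% of an expected 10,000-user peak, in 10 steps of 5 minutes,
# so you can see at which step the system starts to degrade.
for t, users in ramp_profile(peak_users=15000, steps=10, step_seconds=300):
    print(f"t={t:>5}s  target users: {users}")
```

Watching at which step response times or error rates degrade tells you which resource limit you hit, which is exactly the information a sudden spike test doesn't give you.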
No, it was just the tablet being too good at hiding little icons :)
As for the spikes you're predicting - if you already know what the system can handle, and what resource is going to be the limiting factor, then this type of test could be used to confirm the theory. But you should already know what will happen.
However, it does leave the question of what kind of clients we are looking at here. Real humans respond differently to response-time delays than programs do. If a sudden spike causes the clients to retry, or humans to hit reload, then the resulting follow-on load could take down something that looked fine in your test scenario.
As an aside - what exactly is measured in that 'transactions' graph? I haven't seen anything like loadrunner transactions in gatling scripts, so what are we measuring there?
The actual scenario is 100,000 mobile phones that I know will retrieve a QR code over the span of 2 hours. The actual pickup time is randomized within this interval on the phone. I skimmed the wiki article on the exponential distribution, but don't quite understand how I can use pauseExp to achieve the same effect.
Should the pauseExp have a mean of 1 hour?
Do I still have to ramp all clients at once, or should I ramp up on the average load?
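A quick simulation may help with the intuition here (the numbers come from the scenario above; the simulation itself is my own sketch, not Gatling code): if 100,000 phones each pick a uniformly random moment in a 2-hour window, the gaps between successive pickups are approximately exponentially distributed with mean 7200 s / 100,000, roughly 72 ms, not 1 hour.

```python
import random

# Scenario from the thread: 100,000 phones, each picking a uniformly
# random pickup moment inside a 2-hour window.
N = 100_000
WINDOW_S = 2 * 60 * 60  # 7200 seconds

random.seed(42)
starts = sorted(random.uniform(0, WINDOW_S) for _ in range(N))

# Inter-arrival gaps between consecutive pickups.
gaps = [b - a for a, b in zip(starts, starts[1:])]
mean_gap = sum(gaps) / len(gaps)

# For large N, uniform arrivals approximate a Poisson process, whose
# inter-arrival times are exponential with mean WINDOW_S / N.
print(f"mean gap: {mean_gap * 1000:.1f} ms")  # close to 72 ms
```

So an exponential pause between arrivals would need a mean on the order of milliseconds, sized to the arrival rate, rather than a mean of 1 hour per client.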
So in gatling, transactions really mean 'responses', whereas in LR a transaction is a set of requests related to a single user action.
Confusing stuff.
Groups aren't used much, are they? Is there documentation on the subject?
Myeah, the LR naming has flaws too. I don't claim it's perfect, just that having one name mean two different things in two different tools brings confusion.
Group as a name isn't great either. An LR vuser group is a set of users all running the same script, which is what gatling calls a scenario, I believe. Whereas an LR test scenario includes one or more vuser groups that may or may not run concurrently. (But likely do...)
It seems to me there isn't much overlap between communities, or there would have been a lot less naming conflicts ;)