Deciding Ramp-up Times for Performance Tests


karz

Aug 26, 2014, 4:27:07 AM
to LR-Loa...@googlegroups.com

How are ramp-up times defined, and how are they calculated prior to test execution? What is the basic idea behind calculating ramp-up time? Are there any industry standards for calculating it?

aravind sai kuchibhatla

Aug 30, 2014, 7:11:26 PM
to LR-Loa...@googlegroups.com
Kudos, Asif. That is one of the best ways of explaining it. Very clear.

In brief, we need to understand from the client how users arrive at the application and at what rate they start using it. Here is one more example: in most companies, employees come to the office around 9 AM, and as soon as they arrive they start checking their mail. So the rate at which they start using the mail application is very high at that moment. The ramp-up rate for testing that application should therefore be high, to verify that the servers can sustain such peak-arrival conditions and remain available.
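
To put rough numbers on that, here is a quick back-of-envelope sketch in Python. All the figures below are made-up assumptions for illustration, not measurements from any client:

# Back-of-envelope: derive a ramp-up rate from an arrival window.
# All numbers here are illustrative assumptions, not real measurements.
peak_users = 500            # say, employees opening the mail app after 9 AM
arrival_window_s = 10 * 60  # assume they all log in within ~10 minutes

rate = peak_users / arrival_window_s          # vusers per second
print(f"ramp-up rate: {rate:.2f} vusers/sec")
print(f"i.e. start 1 vuser every {1 / rate:.1f} s in the scenario")
# -> ramp-up rate: 0.83 vusers/sec
# -> i.e. start 1 vuser every 1.2 s in the scenario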

Hope that makes sense, Asif.

Thanks
Aravind.





asif rehman

Aug 29, 2014, 3:06:47 AM
to LR-Loa...@googlegroups.com
Ramp-up and ramp-down can greatly affect the "availability" aspect of the web and application servers.

*To be precise: ramp-up and ramp-down can affect your CPU utilization, memory utilization, and response time.*

I will give you a very generic example. A server basically has a pool of worker threads created at application startup. The pool size is generally specified by the server administrators. Assume the server thread pool is 300. This doesn't mean that 300 threads are created during boot; rather, only a minimum of 10 (depending on the MIN value set by the admin, it could be 20, 30, ...) are spawned at startup.

Say each thread takes 1 second to spawn and you ramp up 40 vusers in one second. Since only 10 threads are readily available, 30 more have to be created (spawned) to serve the remaining requests. If one spawn takes 1 second, creating 30 worker threads takes 30 seconds (due to queuing). This is how your transaction response time gets affected. Also, if parallel thread creation is enabled, multiple threads can be created simultaneously, which can consume CPU and memory resources heavily. You may see a huge spike in the graph.
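
You can see that arithmetic in a tiny Python simulation. This is a minimal sketch assuming strictly serial spawning at 1 second per thread (the same simplification as above); real servers behave differently:

# Rough model of the example above: 40 vusers arrive at once, the pool
# has 10 warm threads, and each extra worker takes 1 s to spawn, one at
# a time. All figures are illustrative assumptions.
warm_threads = 10
arrivals = 40
spawn_time_s = 1.0

extra = max(0, arrivals - warm_threads)       # 30 threads still to spawn
for i in (1, 15, 30):
    if i <= extra:
        # the i-th queued request waits for i serial spawns
        print(f"request {warm_threads + i}: ~{i * spawn_time_s:.0f} s queued")
# -> request 11: ~1 s queued
# -> request 25: ~15 s queued
# -> request 40: ~30 s queued
# That queuing delay lands directly on transaction response time.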

In the worst case, if resources are not available, you may see threads being killed or timing out, depending on the server configuration.

Hence, I always recommend asking your client to provide a real-world usage pattern report. Some clients have applications with a huge user ramp-up at a specific time of day (for example, 9:00 AM), with perhaps 30-40 users logging into the application within a couple of seconds. The servers need to be tuned for such extreme cases. If you complacently set the ramp-up to one user every 2-3 seconds, the web/app servers will be unprepared for real-world conditions.
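
To see how far apart those two schedules really are, here is a quick comparison in Python. The helper below is just an illustration, not a LoadRunner API, and the figures follow the example above:

# Time until all 40 vusers are running under two ramp-up schedules.
def ramp_duration(total_vusers, vusers_per_step, step_interval_s):
    """Seconds until every vuser has been started."""
    steps = -(-total_vusers // vusers_per_step)   # ceiling division
    return steps * step_interval_s

# Burst-like, as in the 9 AM login rush: 20 vusers started every second.
print("burst-like:", ramp_duration(40, 20, 1), "s")   # -> 2 s

# Complacent: 1 vuser every 3 seconds.
print("complacent:", ramp_duration(40, 1, 3), "s")    # -> 120 s

# Only the first schedule forces the server to spawn worker threads the
# way the real morning burst does; the second hides the problem.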

Thanks,
Asif


