Latency Introduced by LoadRunner


amit gupta

Feb 25, 2016, 10:31:08 AM
to LR-Loa...@googlegroups.com
Hello,

While running load tests through LoadRunner, my development team has raised a concern that LoadRunner itself introduces latency, and that the higher response times measured on the application are caused by LoadRunner and the load generators (LGs) that simulate the load.

We run around 500 Vusers across 4 load generators in the same subnet as our application. The development team wants to compare the timings they get from the application logs, which differ from what LoadRunner reports.

Note: We are doing web service performance testing of SOAP requests using web_custom_request.

Questions:

1. If LoadRunner introduces latency, what percentage of the measured response time does it account for?

Please suggest how to handle this situation without it turning political :) i.e., is there any good reference or data point I can present?

Thanks
Amit Gupta

James Pulley

Feb 25, 2016, 10:54:02 AM
to LoadRunner
Here is a way to put this to rest. Turn on W3C time-taken in the logging for your web server or web tier. This will track every single request, with resolution potentially up to the microsecond (a millionth of a second), depending upon web server type. Note this tracks every single request individually. A page may be made up of hundreds of components, each of which represents one "hit" or "request" to the system. So your developers may be viewing apples to apple trees when comparing the data.
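As a concrete example, assuming an Apache httpd web tier (an assumption; the thread does not say which server is in use), adding %D to the access-log format records each request's service time in microseconds:

```apacheconf
# %D = time taken to serve the request, in microseconds.
# On IIS, enable the "time-taken" field under W3C Extended
# logging instead (logged in milliseconds).
LogFormat "%h %l %u %t \"%r\" %>s %b %D" combined_timed
CustomLog "logs/access_log" combined_timed
```

Whichever server you use, the point is the same: get a per-request service time into the log so it can be lined up against LoadRunner's per-request timings.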

Next, take one of your existing scripts. Re-save it to a new script name. Regenerate the new script in URL mode only. This will provide a script where every request is individual. You will see that on requests which are repeated you will have items such as web_url("logo_n", ...), where n is the instance number of the common request, made multiple times during the execution of the test. Remove the index number, so you have the base request as the label. This label will be used for Automatic Transactions.
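A hypothetical sketch of what this looks like in the regenerated URL-mode script (URL and label invented for illustration; your recorded attributes will differ):

```c
/* As recorded in URL mode: repeated requests get indexed labels. */
web_url("logo_2",
    "URL=http://example.com/images/logo.gif",
    "Resource=1",
    "RecContentType=image/gif",
    LAST);

/* After editing: drop the index so every instance of the request
   shares one label, which Automatic Transactions will aggregate
   into a single timing record named "logo". */
web_url("logo",
    "URL=http://example.com/images/logo.gif",
    "Resource=1",
    "RecContentType=image/gif",
    LAST);
```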

Make all of the necessary updates for dynamic data, parameterization, error checking and branch execution for flow control.  You should be able to pull much of this from your other script which was working.

Next, go into your run time settings.   Turn on Automatic transactions.  Also, disable cache recognition.  You want all of the requests to be fulfilled from the server and not from local cache on the client.

Make note of your IP address, so you can isolate your own requests in the web server logs.

Ideally, truncate the web server logs to zero before the test.

Run your one user performance test.

Now, take a representative item, such as logo.gif. Look at the number of times this was requested by your virtual user. Make note of all of the timings.

Go to the web server logs.  Pull all of the requests for logo.gif.  Look at the times collected by the time-taken field.   This represents the processing time for the request on the server, including the time spent to send the data back to the client less the final TCP ACK at the termination of the connection.    

This will ensure that you get an oranges to oranges comparison on both the same timing record/sampling basis.
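The log-side half of that comparison can be scripted. A minimal sketch in C, assuming the time-taken value is the last whitespace-separated field on each log line (as with Apache's %D); adjust the parsing to your server's actual log format:

```c
#include <stdio.h>
#include <string.h>

/* Average the time-taken values for one resource across web server
   log lines.  Assumption: time-taken is the last whitespace-separated
   field on each line, in microseconds. */
double mean_time_taken(const char *lines[], int n, const char *resource)
{
    double total = 0.0;
    int hits = 0;
    for (int i = 0; i < n; i++) {
        if (strstr(lines[i], resource) == NULL)
            continue;                          /* different resource */
        const char *last_space = strrchr(lines[i], ' ');
        double us;
        if (last_space && sscanf(last_space + 1, "%lf", &us) == 1) {
            total += us;
            hits++;
        }
    }
    return hits ? total / hits : 0.0;          /* 0.0 when no hits */
}
```

Compare the per-resource mean from this with the elapsed time LoadRunner records for the same label, remembering the units: Apache's %D is microseconds, while IIS time-taken is milliseconds.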

For complex operations where there is some overhead LoadRunner tracks an item called "wasted time" for every single timing record.  This is the overhead from LoadRunner of the request and processing.  This "wasted time"  is automatically culled from every request as part of the reporting, so you have transaction_time-wasted_time as a default on the Analysis reporting.   If you wish to view the values for wasted time, and think time, then take any analysis session database, open it up and look for the [ Event Meter ] table.   You will see a column for "value" which is the duration of the timing record event plus columns for tracked think time and tracked wasted time.
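The adjustment itself is simple arithmetic; the function below mirrors the transaction_time - wasted_time calculation described above (column semantics taken from the Event Meter description; the numbers used with it are invented for illustration):

```c
/* Reported transaction time = raw recorded duration ("value" in the
   Event Meter table) minus tracked wasted time, and minus tracked
   think time when Analysis is set to exclude think time. */
double net_transaction_time(double value, double wasted_time,
                            double think_time, int exclude_think_time)
{
    double net = value - wasted_time;          /* always culled */
    if (exclude_think_time)
        net -= think_time;                     /* optional in Analysis */
    return net;
}
```

For example, a raw 2.75 s record with 0.15 s wasted time and 1.0 s think time reports 1.6 s when think time is excluded, 2.6 s otherwise.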

This is a classical, "you have called my child ugly" problem.  The devs don't like your result and don't understand your tool so they will attack it.

Now, if you have engaged in poor behavior such as running your users on the controller, running them all on one host without a control set (single user on another computer), running them on a virtual machine which has clock float issues and not patched with HP's VMWARE patch, then you are the victim of self-induced problems in test design. 

Run the one user. Compare the results at a single request level to the logs at the time of that test. This should put the "latency" issue to bed. You can also show them the tracked overhead (where it exists) for "wasted time" or "think time." Design issues are a different matter.

James Pulley

amit gupta

Feb 25, 2016, 4:06:51 PM
to LR-Loa...@googlegroups.com
Thanks, James, for the insight. I agree that LoadRunner is much closer to the real user experience than the timestamps the application logs can provide. The application logs also miss things like the load balancer and any network latency. In the end it is all about convincing the devs: performance results are usually read as question marks over application performance, while the devs always claim the application is highly performant.

Also, can you point me to the link for HP's VMware patch? I just want to be doubly sure the admin team did not miss it, since my HP ALM installation, and even the LGs, all run on VMware.

Thanks
Amit

--
You received this message because you are subscribed to the Google Groups "LoadRunner" group.
To unsubscribe from this group and stop receiving emails from it, send an email to LR-LoadRunne...@googlegroups.com.
To post to this group, send email to LR-Loa...@googlegroups.com.
Visit this group at https://groups.google.com/group/LR-LoadRunner.
To view this discussion on the web visit https://groups.google.com/d/msgid/LR-LoadRunner/9deea7e0-e9ac-4b1a-b5a7-b4bc113e7f03%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Ruslan Kholyavkin

Feb 25, 2016, 4:06:53 PM
to LR-Loa...@googlegroups.com


Keep in mind that even if the environment is on the same subnet, you need to be sure it is identical while you run both tests: the test without instrumentation and the test with instrumentation.

Make sure the environment is not running background activity you do not control, for example the database running scheduled jobs, or processes triggered by a user or by the application.

Make sure caching is warm for both the DB and the app server, or at least stays at the same level in both cases.

Also keep in mind that it is good practice to run the test at least 3 times and collect min, max, 90th percentile and standard deviation across those runs. You will be surprised to see varying results from LoadRunner (or any other performance tool) between runs, and it will not be easy for the developers to arrive at the same statistics using only the entries from the log files (web/app server/DB) and turn them into something meaningful to analyze properly.

While you run the test you are not just clicking on pages: you are inserting and updating data, and depending on the size of the data you insert/update and then select, you may also see differences.

Someone downloading a huge file or heap dump (or a log file from an environment on the same subnet) can skew results too.

The main point is that transaction times will never exactly match your previous run, because there are too many uncontrolled activities.

If you run the same test with instrumentation and get different results from LoadRunner and the logs, ask your developers how they calculate each transaction pool. I bet you will not hit a given transaction only once during your test scenario, which means you end up with a pool of times for the same transaction. Those times will also vary, depending on caching, code complexity and the other cross-transaction activity going on while you run your test. Which comes back to the same point: how do they generate the same kind of report LoadRunner does, based not on a single hit but on multiple hits and conditions, so it can be compared with the LoadRunner report?

I hope this helps.

Ruslan

James Pulley

Feb 25, 2016, 4:10:28 PM
to LoadRunner
TrustIV has an excellent page on the Virtualization challenge.   This will have all of the information you need.   Remember, turn off VMOTION on all of your load test components.  The last thing you want is a portion of your test architecture relocated during a test.  Also, consider using a physical reference generator for control purposes.

http://trustiv.co.uk/2013/06/using-virtual-machines-load-generators

