Here is a way to put this to rest. Turn on the W3C time-taken field in the logging for your web server or web tier. This will track every single request, potentially with resolution down to the microsecond (a millionth of a second), depending upon the web server type. Note that this tracks every request individually. A page may be made up of hundreds of components, each of which represents one "hit" or "request" to the system. So your developers may be comparing apples to apple trees when they look at the data.
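As a hedged illustration of what that looks like on an Apache-based tier (the exact mechanism varies by server; on IIS you would tick the time-taken field in the W3C extended logging options instead), the %D token in mod_log_config appends the service time in microseconds to every logged request. The format name and log path below are assumptions:

# httpd.conf sketch -- assumes the standard mod_log_config module.
# %D appends the time taken to serve the request, in microseconds.
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_timed_log" timed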
Next, take one of your existing scripts and re-save it under a new script name. Regenerate the new script in URL mode only. This will produce a script where every request is individual. You will see that repeated requests carry labels such as web_url("logo_n", ...), where n is the instance number of the common request made multiple times during the execution of the test. Remove the index number so that the base request name is the label. This label will be used for Automatic Transactions.
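For illustration only (the label, URL and attributes below are hypothetical; yours will come from your own recording), a URL-mode script might show a repeated resource like this, and the step label is what feeds the Automatic Transaction name once the index is removed:

/* As recorded in URL mode -- a repeated occurrence of the same
   resource gets an instance index appended to its label. */
web_url("logo.gif_2",
    "URL=http://server/images/logo.gif",   /* hypothetical URL */
    "Resource=1",
    "RecContentType=image/gif",
    LAST);

/* Edited -- index removed so every occurrence shares one label,
   which becomes the Automatic Transaction name. */
web_url("logo.gif",
    "URL=http://server/images/logo.gif",
    "Resource=1",
    "RecContentType=image/gif",
    LAST);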
Make all of the necessary updates for dynamic data, parameterization, error checking and branch execution for flow control. You should be able to pull much of this from your other script, which was working.
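A minimal sketch of what those updates tend to look like follows; the parameter names, boundaries, check text and branch condition are all hypothetical, so lift the real ones from your working script:

Action()
{
    /* Correlation: capture a dynamic value from the next server response. */
    web_reg_save_param("SessionID",              /* hypothetical parameter name */
        "LB=sessionid=",                         /* hypothetical left boundary  */
        "RB=\"",                                 /* hypothetical right boundary */
        LAST);

    /* Error checking: fail the step if the expected text is missing. */
    web_reg_find("Text=Welcome",                 /* hypothetical check text */
        "Fail=NotFound",
        LAST);

    web_url("login",
        "URL=http://server/login?user={pUser}",  /* {pUser} drawn from a data file */
        LAST);

    /* Branch execution for flow control. */
    if (strcmp(lr_eval_string("{SessionID}"), "") == 0) {
        lr_error_message("No session id captured -- abandoning this iteration.");
        return 0;
    }

    return 0;
}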
Next, go into your run-time settings. Turn on Automatic Transactions and disable the browser cache simulation. You want all of the requests to be fulfilled from the server and not from the local cache on the client.
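If you want to enforce that from inside the script as well, rather than relying on the run-time setting alone, web_cache_cleanup() empties the emulated browser cache. A minimal sketch:

Action()
{
    /* Empty the emulated browser cache so every request is served by
       the web server rather than being satisfied locally. */
    web_cache_cleanup();

    /* ... recorded requests follow ... */
    return 0;
}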
Make note of your load generator's IP address so you can isolate its requests in the web server logs.
Ideally, roll or clear the web server logs so you are starting from zero.
Run your one-user performance test.
Now, take a representative item, such as logo.gif. Look at the number of times it has been requested by your virtual user and make note of each of the recorded times.
Go to the web server logs and pull all of the requests for logo.gif. Look at the values collected in the time-taken field. This represents the processing time for the request on the server, including the time spent sending the data back to the client, less the final TCP ACK at the termination of the connection.
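If you would rather pull those numbers out programmatically than by eye, a small C sketch along these lines works against a W3C-style log. The log file name, the position of the time-taken field and the whitespace-delimited layout are assumptions; check the #Fields: line in your own log and adjust accordingly:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Scan a W3C-style access log for one resource (logo.gif here) and
   report each time-taken value plus the average. */
#define TIME_TAKEN_FIELD 14   /* 1-based field index, hypothetical */

int main(void)
{
    FILE  *log = fopen("ex250101.log", "r");   /* hypothetical file name */
    char   line[4096];
    double total = 0.0;
    long   count = 0;

    if (log == NULL) {
        perror("fopen");
        return 1;
    }

    while (fgets(line, sizeof line, log) != NULL) {
        char *field;
        int   idx = 0;

        /* Skip directive lines and requests for other resources. */
        if (line[0] == '#' || strstr(line, "logo.gif") == NULL)
            continue;

        for (field = strtok(line, " \t\r\n"); field != NULL;
             field = strtok(NULL, " \t\r\n")) {
            if (++idx == TIME_TAKEN_FIELD) {
                printf("time-taken: %s\n", field);
                total += atof(field);
                count++;
                break;
            }
        }
    }
    fclose(log);

    if (count > 0)
        printf("requests: %ld  average time-taken: %.3f\n", count, total / count);
    return 0;
}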
Comparing on this basis ensures an oranges-to-oranges comparison: both sides are measured on the same timing-record/sampling basis.
For complex operations where there is some overhead, LoadRunner tracks an item called "wasted time" for every single timing record. This is LoadRunner's own overhead for the request and its processing. This "wasted time" is automatically subtracted from every request as part of the reporting, so the Analysis reporting defaults to transaction_time minus wasted_time. If you wish to view the values for wasted time and think time, open any Analysis session database and look for the [Event Meter] table. You will see a "value" column, which is the duration of the timing-record event, plus columns for tracked think time and tracked wasted time.
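If you would rather see those values at run time than dig into the Analysis database, LoadRunner exposes them per transaction. A minimal sketch with a manually wrapped transaction (the transaction name and URL are hypothetical; with Automatic Transactions on, the runtime does this wrapping for you):

Action()
{
    lr_start_transaction("logo.gif");

    web_url("logo.gif",
        "URL=http://server/images/logo.gif",   /* hypothetical URL */
        "Resource=1",
        LAST);

    /* Log the duration, wasted time and think time tracked so far
       for this transaction before closing it. */
    lr_output_message("duration=%f wasted=%f think=%f",
        lr_get_transaction_duration("logo.gif"),
        lr_get_transaction_wasted_time("logo.gif"),
        lr_get_transaction_think_time("logo.gif"));

    lr_end_transaction("logo.gif", LR_AUTO);
    return 0;
}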
This is a classic "you have called my child ugly" problem. The devs don't like your result and don't understand your tool, so they will attack it.
Now, if you have engaged in poor practice, such as running your users on the controller, running them all on one host without a control set (a single user on another computer), or running them on a virtual machine that has clock-drift issues and has not been patched with HP's VMware patch, then you are the victim of self-induced problems in test design.
Run the one user. Compare the results at the single-request level to the logs from the time of that test. This should put the "latency" issue to bed. You can also show them the tracked overhead (where it exists) for "wasted time" or "think time." Design issues are a different matter.
James Pulley