It is unclear from your description whether Mach-L and Mach-R are load
generators or combined load generator and controller machines. If they are
controllers, then you should not be executing virtual users on them; that is
considered bad practice. You are also switching between LoadRunner versions.
You need to simplify your diagnosis: keep the machines constant, keep the
version of LoadRunner constant, and change one element at a time. Because
you are changing multiple elements concurrently, you are having a
predictably difficult time separating the influence of one item from the
influence of the others. This is a testing process issue.
The second set of issues surrounds your test bed conditions. By not having
identical load generators, you make your job of comparing one to the other
much harder. One load generator may be overloaded while the other is healthy
from a resource perspective. It is considered a best practice to have
matching load generators with matching initial conditions on each prior to
your test execution.
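One way to enforce that matching is to diff the two generators' recorded spec sheets before each run. Here is a minimal Python sketch; the field names are illustrative assumptions, not drawn from any LoadRunner API:

```python
def compare_generators(spec_a, spec_b):
    """Report every field where two load-generator spec sheets differ.

    Specs are plain dicts (hypothetical fields, e.g. cpu_cores, ram_gb,
    loadrunner_version). Identical generators produce an empty report.
    """
    mismatches = {}
    for key in sorted(set(spec_a) | set(spec_b)):
        if spec_a.get(key) != spec_b.get(key):
            # Record the differing values side by side for the report.
            mismatches[key] = (spec_a.get(key), spec_b.get(key))
    return mismatches
```

Run it against the recorded specs of Mach-L and Mach-R before test execution; any non-empty report means a generator-to-generator comparison is already compromised.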
You have deliberately included an uncontrolled element in your test design:
the complex network between the application under test and Mach-L. What you
are seeing on Mach-L could very well be representative of
what a user would see from the same location given the number of application
turns and the amount of data being sent back and forth at the network layer.
But because you have this uncontrolled element in your test design, your job
of determining application scalability vs network scalability vs the LAN|WAN
antagonistic characteristics of your application becomes difficult. Collapse
your network for your application test. If you must involve locations across
an uncontrolled network element, then load a single virtual user in those
locations as a control element to measure the network's effect on the
application.
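The single-virtual-user control can be approximated outside LoadRunner with a simple timing probe run from each remote location. This is a minimal Python sketch, not a LoadRunner Vuser script, and the URL is a placeholder for the application under test:

```python
import time
import urllib.request

def probe_network(url, samples=3):
    """Time a few full round trips to the application from this location.

    Returns a list of elapsed seconds, one per sample. Point url at the
    application under test from each remote location you must include.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # drain the body so transfer time is included
        timings.append(time.perf_counter() - start)
    return timings
```

Comparing the timings gathered at each remote location against those from a generator on the collapsed network helps separate network effect from application behavior.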
Draw back to basics on test construction. Minimize the number of
uncontrolled elements in your test design and your test bed. Purchasing and
installing additional load generators is far less expensive than the cycles
you have likely already burned trying to figure out why this is occurring
and it will certainly be less than the cycles burned by the application
group if you provide them a spurious performance defect. As your company
has invested in performance testing tools and a performance testing staff,
it should also be willing to invest in a set of performance test
infrastructure to fortify the test results.
James Pulley