Welcome, and thank you for reading our paper so carefully.
Try as we might to keep it both expository and within the standard length (10-12 pp.), we failed miserably.
This paper is not for the faint of heart or those short on attention.
I had already shown, in a 2010 blog post, how to emulate web traffic (an open queueing system)
in theory using PDQ, but I couldn't have told you how
to implement it on a real test rig.
That's where Jim comes in.
In a CMG 2012 paper, Jim had written about how he had actually emulated web
traffic for the State of Nevada in a verifiable way! However, although I could see that we
were more or less 'on the same page', I had some difficulty following exactly how he went about it.
A couple of years later, the same question appeared in this forum. Thus, at CMG 2014,
I suggested to Jim that we ought to join forces and write the definitive paper.
I naively assumed it would be fairly straightforward to combine our separate but related
views of this topic. Wrong! It took nine months to sort through the rapidly exponentiating details.
> Principle A says that Z in each of the load generators should be scaled
> with N such that the ratio N/Z remains constant. When N goes up, Z should go
> up. However, in Table 3 (Web.gov simulated web user loads), the value of
> Z(ms) is decreasing while N is held fixed, i.e., N/Z is not constant. On
> the other hand, in Table 1.b, when N is increasing it is clear that the
> N/Z ratio is maintained fixed.
It's now a year since we wrote the manuscript, so I may be a bit rusty on the details.
Jim's test rig was implemented and used years before we wrote this paper. Moreover,
the number of threads was prescribed and fixed at N=200 in that test environment to guarantee that
maximum available load was being applied, and there was only a limited
amount of time available to carry out all the testing. The question Jim wanted to address was:
what value of Z would produce web-like requests into the SUT at that fixed N?
You can view Table 3 as Jim verifying for himself that the CoV is close
to 1.0 at N=200 with Z=6.25 seconds. This is the only data we had to
work with. He could've arrived at the same Z value by varying N/Z, but that option was
excluded (for "business" reasons) in these tests.
Jim achieved the goal of Principle A but got there via a different, more constrained, route.
That's why Principle A is referred to elsewhere in the paper as the "royal road" for arriving at statistically
independent web-like requests. It's not the only way. It's the best way if you have no other testing
constraints to deal with.
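To make the Table 3 idea concrete, here is a minimal sketch (mine, not Jim's actual rig) of what "checking the CoV" means: merge the request streams of N emulated users, each pausing for a think time Z between requests, and compute the coefficient of variation of the merged interarrival times at the SUT. The function name and parameters are my own invention, and the exponential think-time distribution and negligible service demand are simplifying assumptions, not a description of the real test environment.

```python
import random
import statistics

def interarrival_cov(n_users, think_z, n_events=20_000, seed=1):
    """CoV of merged request interarrival times at the SUT for n_users
    emulated users, each with exponentially distributed think time
    think_z (service demand assumed negligible relative to Z)."""
    rng = random.Random(seed)
    # Epoch of each user's next request.
    next_t = [rng.expovariate(1.0 / think_z) for _ in range(n_users)]
    arrivals = []
    for _ in range(n_events):
        i = min(range(n_users), key=next_t.__getitem__)  # earliest next request
        arrivals.append(next_t[i])
        next_t[i] += rng.expovariate(1.0 / think_z)      # schedule user i again
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# With the Table 3 values, the merged stream looks Poisson-like (CoV near 1):
cov = interarrival_cov(200, 6.25)
print(round(cov, 2))
```

Note that scaling N and Z together, e.g. `interarrival_cov(400, 12.5)`, keeps the same mean arrival rate N/Z into the SUT, which is the Principle A route.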
Jim didn't know about Principle A, and I never thought about Principle B, until we wrote this joint paper.
Today, with that knowledge and if N did not have to be fixed, we would approach things using Principle A.
Maybe Jim can offer a more detailed explanation.