Crib Sheet for Emulating Web Traffic


DrQ

Oct 13, 2016, 1:14:14 PM
to Guerrilla Capacity Planning
Our paper, entitled "How to Emulate Web Traffic Using Standard Load Testing Tools", is now available online and will be presented at the upcoming CMG conference in November. The motivation for this work harks back to a Guerrilla forum in 2014 that centered on essentially the same topic as the title of our paper.

Because the paper is long and the focus is the converse of what people usually have in mind when it comes to load testing, I've put together the following crib notes to help performance engineers get through it (since they're the ones who stand to benefit most).

  1. Standard load testing tools have a finite number of virtual users
  2. Web traffic is characterized by an indeterminate number of users
  3. Attention is usually focused on the performance of the SUT (system under test)
  4. We focus on the DVR (driver) side performance for web traffic
  5. Examine distribution of arriving requests and their mean rate
  6. Web traffic should be a Poisson process (just like A.K. Erlang used in 1909)
  7. That requires statistically independent arrivals (i.e., no correlations)
  8. We also refer to these as asynchronous requests
  9. Standard virtual users become correlated in the queues of the SUT
  10. We refer to these as synchronous requests
  11. We decouple them by reducing the length of queues in the SUT
  12. This is achieved by increasing the think delay Z as the load N is increased (Principle A in the paper)
  13. The traffic then approaches a constant mean rate λ_rat = N/Z as the SUT queues shrink
  14. Check that the traffic is indeed Poisson by measuring the coefficient of variation (CoV) of the interarrival times
  15. Must have CoV = 1 for a Poisson process (Principle B in the paper)
This blog post has more background.
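
To make items 12-15 concrete, here is a minimal simulation sketch (mine, not from the paper) using Python and NumPy. It assumes exponentially distributed think times and ignores the SUT's service time entirely, which is the limiting case Principle A drives toward: the merged request stream from N users is then Poisson with mean rate N/Z, and its interarrival CoV comes out at 1 (Principle B). All numeric values are illustrative only.

import numpy as np

rng = np.random.default_rng(42)

def emulate_arrivals(N, Z, cycles=2000):
    """Merge the request epochs of N users whose think times are Exp(Z)."""
    think = rng.exponential(scale=Z, size=(N, cycles))
    per_user = np.cumsum(think, axis=1)      # each user's arrival epochs
    horizon = per_user[:, -1].min()          # window where all N users are still active
    merged = np.sort(per_user.ravel())
    return merged[merged <= horizon]

def interarrival_cov(arrivals):
    gaps = np.diff(arrivals)
    return gaps.std() / gaps.mean()

# Principle A: scale Z with N so the offered rate N/Z stays constant.
target_rate = 10.0                           # requests per second (illustrative value)
for N in (50, 100, 200, 400):
    Z = N / target_rate
    arr = emulate_arrivals(N, Z)
    rate = len(arr) / arr[-1]
    # Principle B: a Poisson stream has CoV = 1 for its interarrival times.
    print(f"N={N:4d}  Z={Z:6.1f} s  rate={rate:6.2f}/s  CoV={interarrival_cov(arr):.3f}")

On a real rig, Z would be the load generator's think-time setting, and the interarrival times would be taken from the SUT's access log rather than a simulated stream.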





benikenobi

Jun 8, 2017, 1:59:08 PM
to Guerrilla Capacity Planning
Hi,

First of all thanks for the paper. 

I have read it and thought I had understood it, but reading the JMeter example I found something that puzzles me a lot.

Principle A says that Z in each of the load generators should be scaled with N such that the ratio N/Z remains constant. When N goes up, Z should go up. However, in Table 3 (Web.gov simulated web user loads), the value of Z (ms) is decreasing while N is held fixed, i.e., N/Z is not constant. On the other hand, in Table 1.b, when N increases it is clear that the N/Z ratio is kept fixed.

Of course, I know the paper is right, so there is something very important that I am not catching.

I would appreciate it if somebody could help me understand this. I am very interested in understanding it.

Thanks in advance.

DrQ

Jun 8, 2017, 3:42:24 PM
to Guerrilla Capacity Planning
> First of all thanks for the paper.
> I have read it and thought I had understood it, but reading the JMeter
> example I found something that puzzles me a lot.

Welcome, and thanks also go to you for reading our paper so carefully. 
Try as we might to keep it both expository and standard length (10-12 pp.), we failed miserably. 
This paper is not for the faint of heart or those short on attention. 

I had shown already how to emulate web traffic (an open queueing system) in theory using PDQ 
in my 2010 blog post, but I couldn't have told you how to implement it on a real test rig. 
That's where Jim comes in. 

In a CMG 2012 paper, Jim had written about how he had actually emulated web
traffic for the State of Nevada in a verifiable way! However, although I could see that he and I 
were 'on the same page' more or less, I had some difficulties following exactly how he went about it. 
Another couple of years later, the same question appeared in this forum. Thus, at CMG 2014, 
I suggested to Jim that we ought to join forces and write the definitive paper. 

I naively assumed it would be fairly straightforward for us to combine our separate but related 
views of this topic. Wrong!  It took nine months to sort through the rapidly exponentiating details.

> Principle A says that Z in each of the load generators should be scaled
> with N such that the ratio N/Z remains constant. When N goes up, Z should go
> up. However, in Table 3 (Web.gov simulated web user loads), the value of
> Z (ms) is decreasing while N is held fixed, i.e., N/Z is not constant. On
> the other hand, in Table 1.b, when N increases it is clear that the N/Z
> ratio is kept fixed.

It's now a year since we wrote the manuscript, so I may be a bit rusty on the details. 

Jim's test rig was implemented and used years before we wrote this paper. Moreover, 
the number of threads was prescribed and fixed at N=200 in that test environment to guarantee that 
maximum available load was being applied, and there was only a limited 
amount of time available to carry out all the testing. The question Jim wanted to address was,
what Z value will produce web-like requests into the SUT at that N value?

You can view Table 3 as Jim proving to himself that he gets a CoV near 1.0 
at N=200 with Z=6.25 seconds. That was the only data we had to 
work with. He could've arrived at the same Z value by varying N/Z, but that option was 
excluded (for "business" reasons) in these tests.
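
To see how that constrained route plays out, here is a deliberately crude closed-loop toy (again mine, not Jim's rig and not from the paper): N virtual users with a constant think time Z feeding a single FIFO server with a constant service demand S. None of the numbers correspond to Table 3. When Z is small, the users get serialized behind the SUT queue (synchronous requests: the interarrival CoV collapses toward 0 and the throughput is pinned at the server's capacity 1/S). Once Z is large enough that the queue stays short, the throughput tracks N/Z and the CoV heads back toward 1.

import heapq
import numpy as np

rng = np.random.default_rng(7)

def sweep_point(N, Z, S, n_requests=100_000):
    """Closed loop: N users think for a fixed Z, send a request to a single
    FIFO server with fixed service demand S, then think again. Returns the
    measured arrival rate into the SUT and the interarrival CoV."""
    pending = list(rng.uniform(0.0, Z, size=N))   # randomized start-up phases
    heapq.heapify(pending)
    server_free = 0.0
    arrivals = np.empty(n_requests)
    for i in range(n_requests):
        t = heapq.heappop(pending)                # chronologically next request
        arrivals[i] = t
        start = max(t, server_free)               # FIFO: wait behind earlier requests
        server_free = start + S
        heapq.heappush(pending, server_free + Z)  # this user thinks, then returns
    gaps = np.diff(arrivals[n_requests // 10:])   # drop the warm-up transient
    return 1.0 / gaps.mean(), gaps.std() / gaps.mean()

N, S = 200, 0.05                                  # 200 threads, 50 ms service demand (made up)
for Z in (1.0, 4.0, 16.0, 64.0, 256.0):
    rate, cov = sweep_point(N, Z, S)
    print(f"Z={Z:7.1f} s  rate={rate:7.2f}/s  CoV={cov:.2f}")

On the real rig the CoV comes from measured interarrival times and the think-time distribution matters as well; the toy is only meant to show why sweeping Z at fixed N moves the CoV in the right direction.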

Jim achieved the goal of Principle A but got there via a different, more constrained, route.
That's why Principle A is referred to elsewhere in the paper as the "royal road" for arriving at statistically 
independent web-like requests. It's not the only way. It's the best way if you have no other testing 
constraints to deal with.
 
He didn't know about Principle A, and I never thought about Principle B, until we wrote this joint paper. 
Today, with that knowledge and if N did not have to be fixed, we would approach things using Principle A.

Maybe Jim can offer a more detailed explanation. 


> Of course, I know the paper is right, so there is something very important that I am not catching.
> I would appreciate it if somebody could help me understand this. I am very interested in understanding it.

Hopefully, this background info helps. If not, keep asking. :)

benikenobi

Jun 9, 2017, 11:32:10 AM
to Guerrilla Capacity Planning
Thanks very much for your detailed response.

With your response, plus reading the related blog posts (I found at least three), things are clearer now.

I'll come back here if needed :)

Thanks a lot

