Open or closed?


Radu Brumariu

Nov 11, 2017, 5:33:55 PM
to guerrilla-cap...@googlegroups.com
Hi, I am getting started with PDQ and it’s unclear to me if I need an open or a closed model.

I am planning on modeling a multi tiered architecture but I want to start small and enhance the model progressively. 
I am thinking of starting with a simple M/M/N model to represent the load balancer. But I am not sure if I need an open or a closed model.

Can someone clarify this for me please?

Also let me know if I need to provide more information.

Thanks,
Radu

DrQ

Nov 11, 2017, 6:08:17 PM
to Guerrilla Capacity Planning
Hi Radu,

It's really very simple. Just ask yourself, is the number of requests that can be in the system or subsystem that you want to model an unbounded number or a finite number?
  1. For a so-called open queueing model, like M/M/m, there can be an unbounded number of requests in the queue (Q), as long as it's not infinite (in which case, you can't calculate anything sensible). An example might be the front-end tier of a website. That queue or waiting line can have any arbitrary length (in the PDQ model). This has nothing to do with any 'finite' limitations that might exist in a real system, such as buffer-size allocation. In PDQ, we would ignore that constraint when defining the model. It can be matched up later, when we've calculated the PDQ output metrics.
  2. For a so-called closed queueing model, there can only be a finite number of requests (N) present in the system, where the bound might come from the number of threads or processes that can represent requests to be serviced. Even if N = 1,000 requests, there can never be more than that number in the system (on average). This can only be true if the system is in some sense "closed off" from an arbitrary number of requests arriving from the outside. In other words, only those requests that can get a thread are allowed into the system. An example might be an RDBMS at the back-end of a website.
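To make the distinction concrete, here is a sketch in base R (closed-form formulas rather than PDQ calls; all parameter values are illustrative): in the open case the arrival rate is the input and Q is an output, while in the closed case the population N is the input.

```r
S <- 0.010   # service time: 10 ms (illustrative)

# Open model: lambda is an input, the queue length Q is an output.
lambda <- 50                 # req/sec arriving from "outside"
rho    <- lambda * S         # utilization
Q_open <- rho / (1 - rho)    # mean number in an M/M/1 system

# Closed model: the population N is an input (exact MVA recursion
# for one queueing node plus a think-time delay Z).
mva <- function(N, S, Z) {
  Q <- 0
  for (n in 1:N) {
    R <- S * (1 + Q)   # residence time at the node
    X <- n / (R + Z)   # throughput of the whole loop (Little's law)
    Q <- X * R         # queue length at the node
  }
  c(X = X, R = R, Q = Q)
}
closed <- mva(N = 100, S = S, Z = 1.0)  # 100 circulating requests
```

However large lambda gets in the open case, Q simply grows; in the closed case no metric can ever imply more than N requests in the system.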

Radu Brumariu

Nov 11, 2017, 8:16:35 PM
to guerrilla-cap...@googlegroups.com
Thanks for the explanation.
Wouldn’t the front-end tier also be a closed system? Just like the database, the web servers/nginx/haproxy also have a limited number of preforked processes/threads. How about a nodejs app that’s single-threaded but evented?

--
You received this message because you are subscribed to the Google Groups "Guerrilla Capacity Planning" group.
To unsubscribe from this group and stop receiving emails from it, send an email to guerrilla-capacity-...@googlegroups.com.
To post to this group, send email to guerrilla-cap...@googlegroups.com.
Visit this group at https://groups.google.com/group/guerrilla-capacity-planning.
For more options, visit https://groups.google.com/d/optout.

DrQ

Nov 11, 2017, 8:45:28 PM
to Guerrilla Capacity Planning
Yes, the front-end could also be considered limited by the number of network connections that can be allocated by the listen queue (buffer).

What I meant was, for a given arrival rate into the front-end from an indeterminate number of web users out on the internet, the steady-state queue length (Q) corresponds to the number of requests in the system, e.g., M/M/m. In the PDQ view, Q is an output for an open model whereas N (e.g., the number of database users) is an input parameter (or metric) for a closed PDQ model.

Another consideration is how you want to abstract the real system. At a real website, there are likely going to be a (large) number of front ends that receive their traffic from a load balancer. The job of the load balancer is to make sure that the number of network connections is never exhausted. So, in any observation window, I can measure the rate of HTTP GET arrivals, even though I have no idea how many (N) users are generating those requests.

Take a look at our paper, "How to Emulate Web Traffic Using Standard Load Testing Tools", where we try to make the distinction on the basis of asynchronous and synchronous arrivals: "Figure 1 is a graphical representation of the difference between synchronous arrivals in a closed system and asynchronous arrivals in an open system."

As a matter of practice, when unsure, it can be easier to start with all open queues forming the queueing circuit (or queueing network), replacing some of them with closed queues later, if necessary, in order to get the overall PDQ outputs to calibrate with other measurements, such as total system response time (R).


Radu Brumariu

Nov 12, 2017, 12:45:26 AM
to guerrilla-cap...@googlegroups.com
Thanks again for a very thorough explanation. I will try the open circuit approach.

Much appreciated!

Radu

DrQ

Nov 12, 2017, 1:07:19 AM
to Guerrilla Capacity Planning
No worries and feel free to send us your modeling attempts, if you want more feedback.

Meantime, here's another example to show the difference b/w open and closed queues.

Imagine the front end was represented by a simple M/M/1 queue such that, in the early part of the business day the arrival rate was 1000 gets/sec (on average) and the CPU is only 10% utilized. Later in the day, during the peak busy window, the arrival rate rises to 5000 gets/sec and the CPU becomes 50% busy. The queue length will then be longer b/c there will be more requests in the system (the logical equivalent of N) waiting to be serviced. Conversely, a back-end database, being a closed system, can only accommodate N = 2000 requests (for example). The maximum back-end request population remains fixed throughout the day.
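The arithmetic behind this example can be checked with the closed-form M/M/1 relations (the service time below is implied by the stated utilizations, not measured):

```r
# rho = lambda * S. Being 10% busy at 1000 gets/sec implies S = 0.0001 s.
S <- 0.10 / 1000
rho_early <- 1000 * S                 # 0.10, early in the day
rho_peak  <- 5000 * S                 # 0.50, peak busy window
Q <- function(rho) rho / (1 - rho)    # mean requests in an M/M/1 system
Q(rho_early)   # about 0.11 requests in the system
Q(rho_peak)    # 1 request: the queue grows with the arrival rate
```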



Radu Brumariu

Nov 13, 2017, 10:30:46 PM
to guerrilla-cap...@googlegroups.com
Hi,
here is my first attempt.

I have a load balancer with 5 haproxy instances behind it (they proxy requests to other services, but I'm trying to model just the entry point first).
As observed over 5 business days, the avg arrival rate is 200 req/sec, with an avg latency of 230 ms.

------------------------
library(pdq)

arrivalRate = 200   # req/sec
serviceTime = 0.230 # seconds
maxConn = 4096      # haproxy will accept up to 4096 connections (in our config);
                    # the rest queue in the socket listen backlog
serverPool = maxConn * 5 # I am not certain if this is my actual server count

Init("Conveyor-LB")
CreateOpen("Customers", arrivalRate)
CreateMultiNode(serverPool, "Haproxy", CEN, MSQ)
SetDemand("Haproxy", "Customers", serviceTime)

Solve(CANON)
Report()
------------------------

I want to obtain the throughput and the utilization of the setup as I increase the number of requests.
If I just loop over increasing arrival rates, I obtain a linear throughput until it runs over the available capacity. This doesn't feel right, especially the linear aspect. Additionally, I am pretty sure that the service time will not remain constant as I ramp up the number of requests.

Any suggestions on how to improve this ?




DrQ

Nov 13, 2017, 11:07:03 PM
to Guerrilla Capacity Planning
This is super! I was easily able to cut & paste this into RStudio (my preferred modeling environment). 

Great starting point (keep it simple) and it's so much easier to discuss an actual PDQ model than pie-in-the-sky guesses. Thanks for sending it along.

Notice I say starting point b/c we can iterate from here, over and over, until we converge on something that you eventually believe captures what you want to model.

So, you're starting out with a single M/M/m queue called "Haproxy", where you set m = maxConn * 5. That's fine.
  • Where does the '5' come from?
  • Did you really mean to have just m = 5 queueing servers?
  • What you have is m = 20480 servers accommodating an arrivalRate of 200 req/sec.
  • So, their individual utilization is less than 0.25% (0.2246 percent). Is that what you intended?
  • That would seem to be slightly over-engineered, unless the future arrival rate will grow 200x. Will it?
  • You "observe over 5 business days". Is that the basis for the averages you're using? Might be rather a long measurement period  and not steady state.
  • Presumably, there are daily periodicities and you might want to choose a steady-state window within a day rather than across 5 days.
To address your other questions, the throughput X will always grow linearly in an open queue in order to obey Little's law: X = ρ/S.
Since ρ = λ*S, when you increase the arrival rate, the utilization ρ increases in proportion and thus so does X. It increases linearly up to saturation.

A metric that will grow nonlinearly is the residence time or response time R. It will have an "elbow" shape as you increase λ and therefore ρ. Another is the queue length.
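These shapes can be seen directly from the single-server (M/M/1) closed forms; a quick base-R check with the model's service time (arrival rates here are illustrative):

```r
S <- 0.230                           # sec, as in the PDQ model
lambda <- seq(0.5, 4.0, by = 0.5)    # req/sec, below saturation 1/S ~ 4.35
rho <- lambda * S                    # utilization law
X <- rho / S                         # X = rho/S: identical to lambda (linear)
R <- S / (1 - rho)                   # residence time: the nonlinear "elbow"
cbind(lambda, X, R)
```

X tracks lambda exactly, while R climbs ever more steeply as rho approaches 1.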

I wouldn't get distracted by whether or not the mean service time S is truly constant. It's perfectly fine to assume it is constant and review that assumption later if the model refuses to calibrate with your other data, or if you have demonstrable data showing that S is clearly not constant in the steady-state window of interest.


Back atcha ... 😊



Radu Brumariu

Nov 13, 2017, 11:49:06 PM
to guerrilla-cap...@googlegroups.com
Hi,

* I have 5 machines (haproxy), each with a limit of 4096 max connections
* If m = 5 I get an error from PDQ: "Arrival rate 200.000 for stream 'Customers' exceeds saturation thruput 21.739 of node 'Haproxy' with demand 0.046".
* There are other considerations as to why there are 5 machines, unrelated to the performance of the system
* If I look at daily averages, they range from 158 to 278 req/sec. Not much variation in the response time either


DrQ

Nov 14, 2017, 12:19:34 AM
to Guerrilla Capacity Planning
>> Not much variation in the response time either

And what is that response time, pray tell?


DrQ

Nov 14, 2017, 12:44:07 AM
to Guerrilla Capacity Planning
I just noticed that you stated earlier, "As observed over 5 business days, the avg arrival rate is 200 req/sec, with an avg latency of 230 ms", which suggests you might mean R = 230 ms per request. Is that correct? If so, then what is the service time inside "haproxy"? You originally used S = 230 ms for the service time. R = S + W.

Radu Brumariu

Nov 14, 2017, 10:45:20 AM
to guerrilla-cap...@googlegroups.com
Here is the data

| day | avg latency (ms) | call count | arrival rate (req/s) |
|-----+------------------+------------+----------------------|
|   1 |     219.640 |   13402893 |    155.12608 |
|   2 |     230.897 |   12957504 |    149.97111 |
|   3 |     252.011 |   19258354 |    222.89762 |
|   4 |     278.888 |   20332638 |    235.33146 |
|   5 |     158.004 |   16014755 |    185.35596 |
|   6 |     242.859 |   21902988 |    253.50681 |
| Avg |   230.38317 |   17311522 |    200.36484 |

Arrival rate is calculated by dividing the call count column (total requests within that day) by 86400 (seconds in a day).

The last row is the per-column average.

As observed from the metrics haproxy exposes, there is no connect/queueing time, so I considered W = 0.
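The arithmetic can be checked directly:

```r
# Daily call counts from the table above, divided by 86400 sec/day
calls <- c(13402893, 12957504, 19258354, 20332638, 16014755, 21902988)
rate  <- calls / 86400
round(rate, 5)    # reproduces the arrival-rate column
mean(rate)        # ~200.36 req/sec, the Avg row
```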

Radu



DrQ

Nov 14, 2017, 11:13:20 AM
to Guerrilla Capacity Planning
So, "latency" here means R and since there's no queueing (W = 0), you're using S = R for the service time.

This PDQ model, with 20480 servers, shows that:
  1. throughput will increase linearly up to saturation
  2. response time will remain flat at ~230 ms per request
Since there's no queueing until saturation at ~450x the current arrival rate, a queueing model doesn't tell you very much.
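The ~450x figure follows from the saturation throughput of an M/M/m queue, which is m/S:

```r
m <- 4096 * 5        # 20480 servers in the current PDQ model
S <- 0.230           # sec per request
lambda <- 200        # current req/sec
lambda_sat <- m / S           # ~89043 req/sec saturation throughput
lambda_sat / lambda           # ~445x headroom, i.e. the ~450x quoted above
```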

Is this credible? Does it agree with other performance data (e.g., queue lengths, utilizations) for this system?



Radu Brumariu

Nov 14, 2017, 12:10:50 PM
to guerrilla-cap...@googlegroups.com
One other data point: it was observed that a haproxy instance could only sustain 6000 simultaneous connections (even though the nominal max is higher). The reason for this limitation seems to be networking-related (D/SNAT, conntrack table size); CPU/mem/IO/network bandwidth are not saturated.

This means that there are actually 1380 (instead of 4096) "servers" per machine (0.230 * 6000 = 1380), right?

Another data point is that not all requests are the same. There are a few (4+) services with different service times being multiplexed by the haproxy. My intention was to model the overall system first, then to look deeper at the different service paths and see how their scaling affects the resources and availability of the others, given that they share the same entry queue.

----
library(pdq)

arrivalRate = 200 # calls / sec

serviceTime = 0.230 # seconds
maxConn = 1380

xc <- 0
yc <- 0
zc <- 0
for(server in 1:10) {
  serverPool <- maxConn * server
  Init("")

  CreateOpen("Customers", arrivalRate)

  CreateMultiNode(serverPool, "Haproxy", CEN, MSQ)
  SetDemand("Haproxy", "Customers", serviceTime) # set service time
 
  Solve(CANON)
 
  xc[server] <- server
  yc[server] <- GetUtilization("Haproxy", "Customers", TRANS) * 100
}
plot(xc, yc, type="l",xlim=c(1,10), ylim=c(0, 5), lwd=2, xlab="N Servers", ylab="Utilization")
title("Increasing number of servers")


xc <- 0
zc <- 0
serverPool <- maxConn * 1
for (client in 1:10) {
  Init("")
  CreateOpen("Customers", arrivalRate * client)

  CreateMultiNode(serverPool, "Haproxy", CEN, MSQ) # fixed: 'server' was left over from the loop above
  SetDemand("Haproxy", "Customers", serviceTime) # set service time

  Solve(CANON)

  xc[client] <- client
  zc[client] <- GetThruput(TRANS, "Customers")
  if(client == 1) {
    plot(xc, zc, type="l", xlim=c(1,10), ylim=c(0, 2500), lwd=2, xlab="Arrival rate multiple", ylab="Throughput")
    title("Increasing arrival rate")
  } else {
    points(xc, zc, type="l", lwd=2)
  }
}




DrQ

Nov 14, 2017, 1:50:58 PM
to Guerrilla Capacity Planning
FWIW, maybe this is how it's supposed to be, in that one doesn't want the front-end LB throttling arrivals.
In other words, that's why this part of the PDQ model has "infinite" bandwidth.

# Modified by NJG on Tue Nov 14 10:25:41 2017
library(pdq)

arrivalRate <- 200 # calls / sec
serviceTime <- 0.230 # seconds
maxConn     <- 1380 

xc <- 0
yc <- 0
zc <- 0
serverPool <- maxConn 
clients <- 1:299

for (client in clients) {
  Init("")
  CreateOpen("Calls", arrivalRate * client)
  
  CreateMultiNode(serverPool * 10, "Haproxy", CEN, MSQ) # 'server' was undefined here; 10x gives m = 13800, so saturation (m/S = 60000 calls/sec) lands just beyond client = 299
  SetDemand("Haproxy", "Calls", serviceTime) # set service time
  
  Solve(CANON)
  
  xc[client] <- GetThruput(TRANS, "Calls")
  yc[client] <- GetResponse(TRANS, "Calls")
}

# add space to RHS in plot window
par(mar=c(5, 4, 4, 6) + 0.1)
plot(clients, xc, type="b", cex=0.25,
     xlim=c(1, max(clients)), 
     ylim=c(0, arrivalRate * max(clients)), 
     col="black",
     xlab="Arrival rate multiple", ylab="Throughput (calls/sec)",
     main="Increasing Call Rate"
     )
par(new=TRUE)
plot(clients, yc, type="b", cex=0.25,
     xlim=c(1, max(clients)), ylim=c(0,0.500), col="red",
     xlab="", ylab="",
     axes = FALSE, bty = "n"
)
mtext("Response time (secs)", side=4, line=3)
axis(4, ylim=c(0,max(yc)), col="red", col.axis="red", las=1)

DrQ

Nov 14, 2017, 2:25:53 PM
to Guerrilla Capacity Planning
I think I screwed up the plot labels. 

Radu Brumariu

Nov 14, 2017, 2:31:44 PM
to guerrilla-cap...@googlegroups.com
Thanks a lot!
Is there value in going down the route of modeling the different backend services and their respective service times?
For example, to answer questions like: if service X became more popular (say 3x current traffic), what would capacity look like? Although given the current values, it looks way overprovisioned.

Thanks,
Radu


DrQ

Nov 14, 2017, 2:42:14 PM
to Guerrilla Capacity Planning
One of the great things about PDQ models is that you can totally ignore reality (whatever that is).

You can play around with any and all of the PDQ input parameters/metrics to your heart's content. Nothing will be harmed. The only proviso for multiple workloads is, you'll need to have the respective service times and arrival rates for each workload class you want to distinguish; either from data or good guesses. That's how they're defined in PDQ.
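As a rough sketch of where that leads (the class names, rates, and per-class service times below are made-up placeholders, not measured data), splitting the aggregate into classes lets you do per-service what-ifs:

```r
# Hypothetical per-class split of the ~200 req/sec aggregate
lambda <- c(svcA = 120, svcB = 50, svcC = 30)          # req/sec per class
S      <- c(svcA = 0.180, svcB = 0.300, svcC = 0.350)  # sec per request
m <- 1380                                              # servers, as above

rho <- sum(lambda * S) / m    # aggregate per-server utilization (~3.4%)
# Far below saturation, each class sees roughly its own service time:
R <- S / (1 - rho)            # crude per-class residence-time estimate

# What-if: svcA becomes 3x more popular
lambda3 <- lambda; lambda3["svcA"] <- 3 * lambda["svcA"]
rho3 <- sum(lambda3 * S) / m  # still only ~6.5% per server
```

In PDQ itself, the same structure is expressed by calling CreateOpen() once per workload class and giving each class its own SetDemand() on the shared node.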



Radu Brumariu

Nov 20, 2017, 11:01:34 AM
to guerrilla-cap...@googlegroups.com
If I were to obtain service times at higher arrival rates, how could I use them in the model? As I mentioned, I don’t think the real throughput is linear (b/c the service time will not be).


DrQ

Nov 22, 2017, 11:25:49 AM
to Guerrilla Capacity Planning
Multi-class workloads are discussed in the following chapters of my PDQ book.
  • 5.8 Multiple Workloads in Closed Circuits
  • 11 Client/Server Analysis with PDQ

Also study the examples included in the PDQ software distribution.



Radu Brumariu

Nov 28, 2017, 11:05:02 AM
to guerrilla-cap...@googlegroups.com
Hi,
looking through the book and examples, I can't find any example of a model where a multi-node queue handles multiple workstreams and distributes those streams to separate MSQs.
Is this possible to model in PDQ?

Thanks,
Radu

