After eating some tasty, tasty BBQ, I was looking over the code again
and noticed that on line 69 the demand being examined is that of the
current job only, not the demand across all nodes:
8<-------------
Ddev = node[k].demand[c];
-------------->8
That means my prior statement is incorrect: the loop is looking at
only the current job. However, the value of Dsat is not set back to
zero on each pass of the loop over jobs. When I set up my model, I put
the heaviest demand on the stream with the lowest lambda,
workStream_A. In the loop from the message above, once Dsat is set it
is not zeroed at the start of each iteration over the current job
(indexed by the integer c), so it persists through the rest of the
saturation calculations.
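To make the failure mode concrete, here is a minimal sketch of how I
read that scan. Only the Ddev assignment is quoted from the actual
source; the surrounding structure and the max comparison are my
assumptions about what MVA_Canon.c is doing:
8<-------------
/* Minimal sketch -- assumed structure, not the actual MVA_Canon.c
   source. Only the Ddev assignment is quoted from the real code. */
#define MAX_STREAMS 2
#define MAX_NODES   4

struct node_t { double demand[MAX_STREAMS]; } node[MAX_NODES];

void saturation_scan(int streams, int nodes) {
    double Dsat = 0.0;      /* declared once, outside the job loop */
    double Ddev;
    int c, k;

    for (c = 0; c < streams; c++) {     /* one pass per job (stream) */
        for (k = 0; k < nodes; k++) {   /* scan every queueing node  */
            Ddev = node[k].demand[c];   /* quoted from line 69       */
            if (Ddev > Dsat) {
                Dsat = Ddev;            /* track bottleneck demand   */
            }
        }
        /* Dsat is never reset here, so job c inherits the largest
           demand seen by any earlier job, not its own bottleneck. */
    }
}
-------------->8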
This got me to thinking. If I reverse the order of creation of
workStream_A and workStream_B, the code that I supplied initially
should work: since workStream_B has the smaller per-node demands, any
stale Dsat it leaves behind is smaller than workStream_A's true
maximum, so the comparison still finds the right bottleneck. I
modified the code to read:
8<--------------------------
#!/usr/bin/python
import pdq
pdq.Init("arrivalRate = 38.4")
# Define all of the queues
pdq.CreateNode("cpu", pdq.CEN, pdq.FCFS)
pdq.CreateNode("nic.sent", pdq.CEN, pdq.FCFS)
pdq.CreateNode("nic.recv", pdq.CEN, pdq.FCFS)
pdq.CreateNode("dat.cpu", pdq.CEN, pdq.FCFS)
# Create workStream_B and set the related service demands
pdq.streams = pdq.CreateOpen("workStream_B", 34.56)
pdq.SetDemand("cpu", "workStream_B", 0.00725)
pdq.SetDemand("nic.sent", "workStream_B", 0.00324)
pdq.SetDemand("nic.recv", "workStream_B", 0.00006)
pdq.SetDemand("dat.cpu", "workStream_B", 0.000982)
# Create workStream_A and set the related service demands
pdq.streams = pdq.CreateOpen("workStream_A", 3.84)
pdq.SetDemand("cpu", "workStream_A", 0.029)
pdq.SetDemand("nic.sent", "workStream_A", 0.01296)
pdq.SetDemand("nic.recv", "workStream_A", 0.00006)
pdq.SetDemand("dat.cpu", "workStream_A", 0.001379)
pdq.Solve(pdq.CANON)
pdq.Report()
-------------------------->8
And now the code works as expected. Huzzah! I just tested this on
another laptop and can't cut and paste the results, but the report was
generated correctly with the expected results.
Now the question is how to prevent this from happening again. I think
it could easily be remedied by setting Dsat to zero at the beginning
of each pass of the for loop over the jobs (line 57 of MVA_Canon.c):
8<---------------- begins on line 57 of MVA_Canon.c ----------------
for (c = 0; c < streams; c++) {
    Dsat = 0.0;   /* Added by James: reset Dsat with each job to
                     ensure the correct bottleneck is identified
                     per job. */
    sumR[c] = 0.0;
    X = job[c].trans->arrival_rate;
-------------------------------------------------------------------->8
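Before patching the library, the logic can be sanity-checked with a
toy standalone program (my own sketch, not PDQ code) that replays the
scan using the service demands from the model above, with the jobs in
the original failing order (workStream_A created first):
8<-------------
#include <stdio.h>

/* Toy demo (not PDQ source): replay the bottleneck scan with the
   demands from the model above, workStream_A first as in my original
   failing model. */
int main(void) {
    const char  *name[2]      = { "workStream_A", "workStream_B" };
    const double demand[2][4] = {
        { 0.029,   0.01296, 0.00006, 0.001379 },   /* workStream_A */
        { 0.00725, 0.00324, 0.00006, 0.000982 }    /* workStream_B */
    };
    double Dsat = 0.0, Ddev;
    int c, k;

    for (c = 0; c < 2; c++) {
        /* Dsat = 0.0; */   /* uncomment: the proposed fix */
        for (k = 0; k < 4; k++) {
            Ddev = demand[c][k];
            if (Ddev > Dsat) {
                Dsat = Ddev;
            }
        }
        printf("%s: Dsat = %g\n", name[c], Dsat);
    }
    /* As written, workStream_B should report Dsat = 0.029
       (workStream_A's bottleneck) instead of its own 0.00725; with
       the reset uncommented, each job should report its own
       bottleneck demand. */
    return 0;
}
-------------->8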
Once again, I have not actually tried the patch itself under my Linux VM.
Perhaps I'll give it a try tonight when I have more free time. Now I
have to go back to doing what a real engineer does these days:
generating PowerPoint presentations for managers.
James