Question about GCU normalization

8oom Choi

Aug 10, 2023, 3:23:22 AM
to googlecluste...@googlegroups.com

Thank you for your recent answer. I have another question about GCU.


According to the document:

"Borg measures CPU in internal units called “Google compute units” (GCUs): CPU resource requests are given in GCUs, and CPU consumption is measured in GCU-seconds/second. One GCU represents one CPU-core’s worth of compute on a nominal base machine. The GCU-to-core conversion rate is set for each machine type so that 1 GCU delivers about the same amount of useful computation for our workloads. The number of physical cores allocated to a task to meet its GCU request varies depending on the kind of machine the task is mapped to. We apply a similar scaling as for memory: the rescaling constant is the largest GCU capacity of the machines in the traces. In this document, we refer to these values as Normalized Compute Units, or NCUs; their values will be in the range [0,1]. All of the traces are normalized using the same scaling factors.

In the trace, most resources are described by a Resources structure, which typically contains a value for both CPU (in NCUs) and memory (normalized bytes)."
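
If I read this correctly, undoing the normalization is a single multiplication by a constant that is not published with the trace:

    GCU = NCU * (largest GCU capacity among the machines in the traces)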


Based on my understanding, if a record's average_usage CPU value is 0.004 and the rescaling constant (the largest GCU capacity among the machines in the traces) is 1000, then the de-normalized value is 4 GCU. Does this mean the instance is using about 4 CPU cores' worth of compute on its allocated machine?


Or, if the average_usage CPU value is 0.004 and the rescaling constant is 100, then the de-normalized value would be 0.4 GCU. Would that indicate the instance is using 40% of the allocated machine's CPU?


If the first interpretation is correct, does that mean we cannot determine the instance's actual CPU utilization from the trace?
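
To make the arithmetic concrete, here is a minimal Python sketch of the conversion I have in mind. The constants are made-up placeholders (as far as I know, the actual rescaling constant is not published), and the last step assumes that task usage and machine capacity are normalized with the same scaling factor, as the quoted passage says:

    # Hypothetical rescaling constant: the largest GCU capacity among the
    # machines in the traces. The real value is not published, so 100 is a
    # placeholder for illustration only.
    MAX_GCU_CAPACITY = 100.0

    def ncu_to_gcu(ncu):
        # Undo the normalization: GCU = NCU * rescaling constant.
        return ncu * MAX_GCU_CAPACITY

    avg_usage_ncu = 0.004                  # average_usage CPU value from a record
    usage_gcu = ncu_to_gcu(avg_usage_ncu)  # 0.4 GCU under the placeholder constant

    # If tasks and machines really share the same scaling factor, the constant
    # cancels in the ratio, so utilization relative to a specific machine can
    # be computed entirely in NCUs without knowing the constant.
    machine_capacity_ncu = 0.5             # hypothetical machine capacity in NCUs
    utilization = avg_usage_ncu / machine_capacity_ncu  # 0.008, i.e. 0.8%

    print(usage_gcu, utilization)

If that cancellation argument holds, perhaps per-machine utilization is recoverable after all, but I would appreciate confirmation.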
