Help needed to understand the pricing model in the Google Cluster Workload Traces 2019


Sreetama Mukherjee

Jul 11, 2020, 1:12:19 AM
to googlecluste...@googlegroups.com
Hi Team,

Could you please help me understand how to translate the resource requests in the trace into an approximate price that the user would finally have to pay?

For example, for this request:

[attached screenshot: a single trace request record with its resource_request_cpus and resource_request_memory values]

Given that resource_request_cpus and resource_request_memory are normalized values, how can I calculate an approximate price for this request if I use the following pricing model (ref: https://cloud.google.com/compute/resource-based-pricing)?

[attached screenshot: resource-based pricing table from the page above]

resource_request_memory: Borg measures main memory (RAM) in bytes. We rescale memory sizes by dividing them by the maximum machine memory size observed across all of the traces. This means that the capacity of a machine with this largest memory size will be reported as 1.0.

resource_request_cpus: Borg measures CPU in internal units called "Google compute units" (GCUs): CPU resource requests are given in GCUs, and CPU consumption is measured in GCU-seconds/second. One GCU represents one CPU-core's worth of compute on a nominal base machine. The GCU-to-core conversion rate is set for each machine type so that 1 GCU delivers about the same amount of useful computation for our workloads. The number of physical cores allocated to a task to meet its GCU request varies depending on the kind of machine the task is mapped to. We apply a similar scaling as for memory: the rescaling constant is the largest GCU capacity of the machines in the traces. In this document, we refer to these values as Normalized Compute Units, or NCUs; their values will be in the range [0,1].

All of the traces are normalized using the same scaling factors.
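
To make the question concrete, here is the rough conversion I have in mind, written as a small Python sketch. The largest-machine constants are assumptions I picked myself (the real scaling factors are not published), and the per-vCPU and per-GB rates are example N1 on-demand figures, so please treat all numbers as placeholders:

    # Sketch of one possible denormalize-then-price calculation.
    # ASSUMPTIONS (not from the trace documentation):
    #   MAX_MACHINE_CPUS, MAX_MACHINE_MEMORY_GB -- guesses at the largest
    #   machine size used as the normalization constant.
    MAX_MACHINE_CPUS = 64          # assumed largest-machine capacity, in cores/GCUs
    MAX_MACHINE_MEMORY_GB = 512    # assumed largest-machine memory, in GB

    # Example resource-based rates (USD per hour); check the pricing page
    # for current, region-specific values.
    CPU_PRICE_PER_VCPU_HOUR = 0.031611
    MEM_PRICE_PER_GB_HOUR = 0.004237

    def approximate_hourly_cost(request_cpus_norm, request_memory_norm):
        """Denormalize a trace request and price it per hour."""
        vcpus = request_cpus_norm * MAX_MACHINE_CPUS
        memory_gb = request_memory_norm * MAX_MACHINE_MEMORY_GB
        return vcpus * CPU_PRICE_PER_VCPU_HOUR + memory_gb * MEM_PRICE_PER_GB_HOUR

    # e.g. a request of 0.05 NCU and 0.02 normalized memory:
    print(f"${approximate_hourly_cost(0.05, 0.02):.4f} per hour")

Is this denormalize-then-price approach a reasonable way to think about it, or is there a better mapping?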




It would be very helpful if you could walk through the calculation for one example request.

Thanks and Regards,
Sreetama

john wilkes

Jul 14, 2020, 1:24:52 AM
to googlecluste...@googlegroups.com
Hi.
1. The data we provide is for internal workloads, not Cloud ones, so any mapping you do would at best be an approximation.
2. Because of the normalization, you have to determine for yourself what you think a reasonable mapping would be from the (normalized) sizes we report onto physical machine sizes. We are deliberately not providing that information.
3. As we report in the Borg paper, we have a much wider range of resource granularities than the range of VM sizes that are available (even on Google Cloud), so you will have to decide how best to round (up?) the request sizes that Borg sees to the next available VM size. (See the Borg paper for one set of data about the effects of doing that.)
john
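
As an illustration of John's point 3, rounding a denormalized request up to the smallest VM size that fits it in both dimensions could be sketched in Python as below; the machine catalogue and hourly prices here are invented placeholders, not real Cloud machine types:

    from typing import NamedTuple

    class MachineType(NamedTuple):
        name: str
        vcpus: int
        memory_gb: float
        hourly_usd: float   # placeholder price

    # Hypothetical catalogue, sorted from smallest to largest.
    CATALOGUE = [
        MachineType("small",   2,  8.0, 0.10),
        MachineType("medium",  4, 16.0, 0.20),
        MachineType("large",   8, 32.0, 0.40),
        MachineType("xlarge", 16, 64.0, 0.80),
    ]

    def round_up_to_vm(vcpus, memory_gb):
        """Return the smallest catalogue machine that satisfies both dimensions."""
        for m in CATALOGUE:
            if m.vcpus >= vcpus and m.memory_gb >= memory_gb:
                return m
        raise ValueError("request larger than any available machine type")

    # e.g. a denormalized request of 3 vCPUs and 10 GB rounds up to "medium".
    m = round_up_to_vm(3.0, 10.0)
    print(m.name, m.hourly_usd, "USD/hour")

Because the request is rounded up on both dimensions, this estimate is an upper bound relative to pricing the raw request; how far it overshoots depends on how well the catalogue matches the trace's request sizes, which is related to the effect John points to in the Borg paper.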
