Alright...@beaker says I must respond here, which I was planning to
anyway :)
So I think the main point is that we need to ensure the pricing model does
not become a defining characteristic.
There are three modes of consumption that I see. The first is allocation-
based consumption. An example would be EC2, where you allocate a VM with a
certain amount of CPU/memory and you pay for that allocation.
The second, an extension of the first, is a resource pool model. In this
case a resource pool, say 5 GHz of CPU and 10 GB of memory, is allocated,
and you can spin up as many VMs as you like to consume that pool of
resources. It counts as an extension of the first model because billing is
still based on allocation. The advantage is that you have much better
control over the resource pool, and if you know your applications well
(e.g., how much CPU/memory they consume), it can be much more
cost-effective. Both of these models require customers to pay for the
allocated resources.
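To make the difference concrete, here's a rough sketch of how billing works
out under the first two models. The hourly rates are purely made up for
illustration, not anything a real provider charges:

```python
# Hypothetical hourly rates -- purely illustrative, not real provider pricing.
PRICE_PER_GHZ_HOUR = 0.05   # $ per GHz-hour of allocated CPU
PRICE_PER_GB_HOUR = 0.02    # $ per GB-hour of allocated memory

def allocation_cost(vms, hours):
    """Model 1: pay for each VM's allocated CPU/memory, used or not."""
    return sum(
        (cpu_ghz * PRICE_PER_GHZ_HOUR + mem_gb * PRICE_PER_GB_HOUR) * hours
        for cpu_ghz, mem_gb in vms
    )

def pool_cost(pool_cpu_ghz, pool_mem_gb, hours):
    """Model 2: pay for the whole pool; VMs carved out of it cost nothing extra."""
    return (pool_cpu_ghz * PRICE_PER_GHZ_HOUR
            + pool_mem_gb * PRICE_PER_GB_HOUR) * hours

# Three VMs of 2 GHz / 4 GB each vs. one 5 GHz / 10 GB pool, over a 720-hour month.
print(allocation_cost([(2, 4), (2, 4), (2, 4)], 720))  # pay per allocated VM
print(pool_cost(5, 10, 720))                           # pay for the pool, any number of VMs
```

Either way, the bill is driven by what you allocate, not by what you
actually use.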
The third model is a true utility model of consumption. This model does
not require any allocation or resource pool; the cloud simply lets you use
resources as needed. For example, your application may only use 300 MHz of
CPU and 200 MB of memory during normal operations, and that's all you pay
for. If the application spikes to 1 GHz of CPU and 3 GB of memory for a
period of time, you pay for that too. This model makes costs much less
predictable but could potentially be a lot cheaper than the previous two.
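A similar sketch for the utility model, metering actual usage over time
instead of allocation, using the 300 MHz / 200 MB baseline and the
1 GHz / 3 GB spike from above (rates again made up):

```python
# Same hypothetical rates as before, applied to measured usage instead of allocation.
PRICE_PER_GHZ_HOUR = 0.05
PRICE_PER_GB_HOUR = 0.02

def utility_cost(usage_samples):
    """Model 3: bill each (hours, cpu_ghz, mem_gb) sample of actual usage."""
    return sum(
        hours * (cpu_ghz * PRICE_PER_GHZ_HOUR + mem_gb * PRICE_PER_GB_HOUR)
        for hours, cpu_ghz, mem_gb in usage_samples
    )

# 700 hours at the normal 300 MHz / 200 MB, plus a 20-hour spike at 1 GHz / 3 GB.
month = [(700, 0.3, 0.2), (20, 1.0, 3.0)]
print(utility_cost(month))  # typically far below paying for a fixed allocation
```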
So maybe change the title to "Utility or Allocation-based Consumption"
and clarify the different models in the text?