Thanks for your comments.
I am glad you agree existing pricing models obscure the value
proposition of cloud IaaS offers.
We also agree the ECU is presently meaningless.
The call for embracing the ECU is also a call to establish a consensus
measure of the ECU (with or without Amazon).
The existing consensus measures for memory (GB), storage (TB), and
bandwidth (GB) mean we are relatively close to the transparency you
describe.
I raised the issue of instance definitions and a single comprehensive
index only as a matter of preference.
The Cloud Price Normalization index (cloudpricecalculator.com) offers
buyers a means of direct comparison, but the need for a consensus
definition of ECU is the primary issue.
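For illustration, here is a rough sketch of the kind of per-unit
normalization such an index can perform. The reference bundle, offers,
and prices below are invented, and this is not the calculator's actual
method; the compute column only becomes meaningful once a consensus ECU
exists.

    # Hypothetical sketch of price normalization across IaaS offers.
    # The reference bundle and the offers below are made up for illustration.
    REFERENCE_BUNDLE = {"ecu": 1.0, "memory_gb": 1.7, "storage_gb": 160, "bandwidth_gb": 100}

    offers = [
        {"name": "Provider A small", "price_per_hour": 0.085,
         "ecu": 1.0, "memory_gb": 1.7, "storage_gb": 160, "bandwidth_gb": 100},
        {"name": "Provider B custom", "price_per_hour": 0.095,
         "ecu": 1.2, "memory_gb": 2.0, "storage_gb": 200, "bandwidth_gb": 150},
    ]

    def normalized_price(offer):
        """Price per reference bundle: the scarcest resource relative to the
        reference determines how many bundles the offer really delivers."""
        bundles = min(offer[k] / REFERENCE_BUNDLE[k] for k in REFERENCE_BUNDLE)
        return offer["price_per_hour"] / bundles

    for o in sorted(offers, key=normalized_price):
        print(f'{o["name"]}: ${normalized_price(o):.4f} per bundle-hour')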
Amazon dominates the confused status quo of IaaS, but even EC2 is tiny
compared to the overall consumption of on-premise compute resources.
This will not change until the price-performance of cloud improves.
The continuous improvement cycle that serves as the basis for all
infotech growth requires actually knowing how to measure price-performance.
Regards,
Dan
On Sat, Oct 30, 2010 at 6:41 AM, cloudsigma <rob...@cloudsigma.com> wrote:
> Dear Dan,
>
> As a vendor in the IaaS space, I completely agree that current pricing
> models are deliberately designed to mask the amount of computing
> performance actually being delivered and the raw resource being
> allocated. It isn't just an issue of pricing but also of resource
> bundling. These two go hand in hand. When you bundle resources
> together and create false constructs such as server instance sizes,
> you hide the real pricing. It can and will change.
>
<snip>
At that time we thought that by being early, we would get the unit
adopted. That was before Amazon gained real strength. Today, the
aspiration of a single unit of computing utility has proven to be utopian.
Here is a blog post comparing Cloud Sigma and Amazon offerings:
http://bit.ly/cWySC0
Assume we had a unit for the ECU: how could it be applied to a Cluster
Compute Instance, which is arbitrarily defined by Amazon and which is
sure to change at any time?
Cheers,
Miha
Apologies in advance for continuing this thread....
I think we are making the issue more complicated than necessary.
The issue is not unique to cloud computing.
Would the electric utility industry exist if utilities did not
provide customers a measure of what they are receiving?
Would people be willing to buy gasoline from a gas station that does
not give them a reliable metric to know what they are buying?
Does it make sense for different gas stations to use different metrics?
Does it seem likely customers would be happy if the cloud computing
industry decided to replace GB as the measure of memory, TB as the
measure of storage, or GB as the measure of bandwidth with the kind of
vague abstraction Amazon introduced with the ECU for compute resources?
Customers can buy on-premise computer equipment without a compute
metric because the unit of compute is the processor itself. This is
not the case with cloud computing where processor resources get sliced
and diced.
The issue here is fundamental and urgent. The cloud industry will not
move beyond experimentation if there is no consensus measure of what
we are selling.
Regards,
Dan
........................................
Daniel Berninger
President, goCipher Software
tel SD: +1.202.250.3838
sip HD: d...@danielberninger.com
w: www.cloudpricecalculator.com/blog
On Mon, Nov 1, 2010 at 5:00 AM, cloudsigma <rob...@cloudsigma.com> wrote:
> Dear Dan and Miha,
>
> Thanks for the additional information which is most interesting.
>
> In terms of the goal of creating some standardised unit of
> measurement, in my opinion this misses the point. There is no standard
> computing use. Computing is heterogeneous and to think that one
> measurement is directly relevant to all computing is erroneous. There
> isn't one definitive benchmark for measuring performance. Actually a
> system that performs very well for one use may be terrible for another
> use. That's why comparisons of a general form are correct in general
> and wrong specifically for practically everyone :-)
>
<snip>
But to process complex multiple transactions on multiple clouds, we are
going to need some agreed-upon metrics. There is work going on to define
these metrics, along with the work on interoperability (related), but it
hasn't gotten very far yet, partly because we're at such an early market
stage and the provider market is fragmented (and may stay that way for some
time).
Perhaps we can figure out ways to measure what customers are getting?
Amy Wohl
Amy D. Wohl
Editor, Amy Wohl's Opinions
1954 Birchwood Park Drive North
Cherry Hill, NJ 08003
856-874-4034
a...@wohl.com
www.wohl.com
The clock rating of a processor is next to meaningless. It can be used
to compare the relative speeds of two processors in the same family, but
nothing more. Clock speed can't be used to compare two processors from
different families from the same foundry, let alone different families
from different manufacturers.
There are a dozen or more variables that come into the mix. First is
super-scalar performance -- how many instructions get executed per
clock cycle. This depends on the instruction mix (fixed vs. floating
point), branch prediction, length of the pipeline, cache hit rate, and
gobs of other factors. Then there are cache contention issues. AMD
used to wipe the floor with Intel's front side bus until Intel got a lot
smarter. And the feature list goes on and on.
Then there are the number and speed of memory channels, memory speed,
cache speed, etc. All go into the mix.
Making the issue even more complex, if possible, is that nobody throws
out last year's processors to be stylish. Processors are kept in play
until either the leasing terms or energy consumption leads to retirement.
Comparing architectures as different as Intel and Sparc is even more
hopeless. Later Sparcs were big on what Sun called hardware threads
(closer to what CDC called barrel processors than cores), so a
production job architected to run on Sparc would have a performance
advantage that would not generalize.
It's hopeless, ladies and gentlemen. All you can do is run your
application mix and measure the results. The only alternative is bogo-mips.
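In that spirit, a minimal sketch of what "run your application mix and
measure the results" can look like; the two workloads below are
stand-ins, not a real benchmark suite, and should be replaced with the
operations your application actually performs.

    # Time a few representative workloads on the instance being evaluated.
    import time

    def cpu_bound(n=200_000):
        # stand-in for a compute-heavy task
        return sum(i * i for i in range(n))

    def memory_bound(n=1_000_000):
        # stand-in for a memory-traffic-heavy task
        data = list(range(n))
        return sum(data[::7])

    def measure(label, fn, repeats=5):
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        elapsed = time.perf_counter() - start
        print(f"{label}: {repeats / elapsed:.2f} runs/sec")

    measure("cpu-bound", cpu_bound)
    measure("memory-bound", memory_bound)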
> The clock rating of a processor is next to meaningless.
Which is why nobody in the industry has used it for anything beyond an identifying label for at least a decade and a half now! :-)
All of your points stand about the variables in the mix. Which brings us to:
> It's hopeless, ladies and gentlemen. All you can do is run your application mix and measure the results. The only alternative is bogo-mips.
Ahhh, benchmarking.
Anybody else remember when vendors would spend all their time trying to tweak their configs specifically for benchmarking loads, so the scores would be high but the real load performance might be completely different?
I remember working at an ISP where we were being benchmarked against other ISPs and our bonus payments were linked to relative performance: we ended up doing things like authenticating you before user/pass was sent down the line and then dropping the line after the fact if it failed in order to get an extra fraction of a second in the authentication. We would spot inbound calls from testing modems on CLIs and route them to quieter modem racks. We would traffic shape. We'd do anything! And it worked: we got the best marks in the UK.
That said, a benchmark of what is meant by a "large instance" for various workloads would be useful and would help buyers compare offerings from rival cloud providers.
A benchmarking system broken down into a range of common application configurations, making compute units more comparable, would be quite desirable, I think.
If I see an Amazon instance is benchmarked independently at 200 reqs/second for a common application configuration for a popular blogging tool, and a competing option that is 10% more expensive is rated at 300 reqs/second, I can make an assessment of "value".
If I can find x prime factors per week with one compute instance, and the same software on a competitor instance gets me 15% more but costs 12% more as well, again I can make an assessment of value.
Competitors can then distinguish themselves: "For high RAM and CPU workloads we're 8% cheaper"; "For blogging applications, you get 25% more requests per dollar than with the competition", etc. and can tailor their environments accordingly. Some players will come out on top for HPC applications. Others for data processing workloads, others again for web applications.
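To make that value arithmetic concrete, here is a small sketch. The
hourly baseline price is an assumption; only the 200 vs. 300
requests/second figures and the 10% price premium come from the example
above.

    # Requests per dollar for two hypothetical offers.
    def requests_per_dollar(reqs_per_sec, price_per_hour):
        return reqs_per_sec * 3600 / price_per_hour

    base_price = 0.10  # assumed hourly price for the baseline offer
    offers = {
        "Baseline instance (200 reqs/sec)": (200, base_price),
        "Competitor, 10% dearer (300 reqs/sec)": (300, base_price * 1.10),
    }

    for name, (reqs, price) in offers.items():
        print(f"{name}: {requests_per_dollar(reqs, price):,.0f} requests per dollar")

    # The competitor delivers 50% more throughput for 10% more money,
    # i.e. roughly 36% more requests per dollar.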
All we need to do is find a way to get it funded - would companies pay for this information do you think?
Paul Robinson
--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376
The Service Domain Manager (SDM) module distributes resources between different services according to configurable Service Level Agreements (SLAs). The SLAs are based on Service Level Objectives (SLOs). In the context of SDM, a service is defined as a scalable and manageable software product that performs a specific function for its users. SDM functionality enables you to manage resources for all kinds of scalable services. A scalable service is a service for which additional resources enable better performance.
SDM can manage services such as:
- Grid Engine clusters. The first version of the SDM module supports multi-clustering of Grid Engine clusters.
- Application servers
- Reassignment of software licenses between different organizational entities of a company
A service uses Key Performance Indicators (KPIs) to determine its need for a resource. Those KPIs are reported through a Service Adapter that acts as a proxy between a service and the SDM core system. The SDM supports the concept of a spare pool which enables SDM to withdraw unneeded computational resources (hosts, which can be physical hosts or virtualized machines) from services. SDM also enables you to power off unneeded or underutilized machines to minimize the environmental impact of your data centers.
The SDM module is designed to handle different kinds of services that have in common a need for the same set of resources. The SDM module enables those services to share their resources effectively and increase resource utilization.
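As a rough illustration only (this is not the SDM API; the class names
and the scaling loop below are invented), the SLO-driven spare-pool
behaviour amounts to a control loop along these lines:

    # Toy model of the SLO / KPI / spare-pool idea described above.
    class SparePool:
        def __init__(self, hosts):
            self.hosts = list(hosts)

        def acquire(self):
            return self.hosts.pop() if self.hosts else None

        def release(self, host):
            self.hosts.append(host)

    class Service:
        def __init__(self, name, slo_max_response_sec):
            self.name = name
            self.slo = slo_max_response_sec
            self.hosts = []

        def kpi_response_time(self):
            # A real service adapter would report a measured KPI;
            # here we fake one that improves as hosts are added.
            return 10.0 / max(len(self.hosts), 1)

    def rebalance(service, pool):
        # Grow while the SLO is violated; shrink back when there is slack.
        while service.kpi_response_time() > service.slo:
            host = pool.acquire()
            if host is None:
                break
            service.hosts.append(host)
        while service.hosts and service.kpi_response_time() * 2 < service.slo:
            pool.release(service.hosts.pop())

    pool = SparePool(["host1", "host2", "host3", "host4"])
    svc = Service("grid-engine-cluster", slo_max_response_sec=2.0)
    rebalance(svc, pool)
    print(svc.name, "has", len(svc.hosts), "hosts; spare pool:", pool.hosts)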
Sassa
2010/11/9 Miha Ahronovitz <mij...@sbcglobal.net>:
> On 11/9/2010 4:50 AM, Sassa wrote:
>>
>> Is there a promise that the amount of resource utilized is exactly the
>> amount of resource I need? (=controlled, measurable, or no loss
>> through sharing of resources)
>>
>> I.e. do I happen to ask for more resources when the KPI went down
>> because other guests are sharing my resources?
>>
> You ask the right questions. If by KPI you mean Key Performance
> Indicators, then in the example I described for Grid Engine, each SLA has
> Service Level Objectives. In HPC / EDA an SLO could mean "maintain response
> time under 5 minutes" (batch, compute-intensive processing), or 5 seconds in
> other instances. The resources are then allocated automatically. See the links
> in my previous post. SLOs are in essence KPIs specific to an SLA.
>
>
> Miha
>