Re: [ Cloud Computing ] Value proposition for Cloud computing is crystal clear...


Kevin Apte

Mar 26, 2009, 2:40:58 PM
to cloud-c...@googlegroups.com
The Wall Street Journal article "Internet Industry is on a Cloud" <http://online.wsj.com/article/SB123802623665542725.html>

does not do cloud computing any justice at all.

First: the value proposition of cloud computing is crystal clear. Averaged over 24 hours a day, 7 days a week, 52 weeks a year, most servers have a CPU utilization of 1% or less. The same is true of network bandwidth. Storage capacity on hard disks that can be accessed only from a specific server is also underutilized. For example, the capacity of the disks attached to a database server is used only when certain queries need to spill intermediate results to disk. At all other times that disk capacity is not used at all.

Isolated pools of compute, network, and storage are underutilized most of the time, yet must be provisioned for that hypothetical peak-capacity day, or even a peak-capacity hour. What if we could reengineer our operating systems, network/storage management, and all the higher layers of software so that hardware resources could be treated as a set of "Compute Pools", "Storage Pools", and "Network Pools"?

Numerous technical challenges have to be overcome to make this happen. This is what today's Cloud Computing Frameworks are hoping to achieve.

Existing software vendors, with their per-server and per-CPU pricing, have a lot to lose from this disruptive model.  A BI provider like "Vertica" <http://www.vertica.com>, hosted in the cloud, can compete very well with traditional data warehousing frameworks.  Imagine using a BI tool a few months a year, to analyze a year's worth of data, on temporarily provisioned servers with rented software.  The cost of an approach like this can be an order of magnitude less than the traditional buy, install, and maintain approach.

I think Sun's private cloud offering may be the tipping point that persuades mainstream, rather than cutting-edge, IT organizations to switch to a cloud approach.  With a private cloud, one could share compute, network, and storage resources amongst a set of business units, or even affiliated companies.

You can read a comparison of existing cloud offerings here:

         http://soarealworld.wordpress.com/2009/03/


Kevin Apte
technicalar...@gmail.com
http://soarealworld.wordpress.com

Jim Starkey

Mar 26, 2009, 3:26:40 PM
to cloud-c...@googlegroups.com
Kevin Apte wrote:
> The Wallstreet Journal article "Internet Industry is on a Cloud"
> <http://online.wsj.com/article/SB123802623665542725.html>
>
> does not do Cloud computing any justice at all.
>
> First: Value proposition of Cloud computing is crystal clear. Averaged
> over 24 hours, and 7 days a week , 52 weeks in a year most servers
> have a CPU utilization of 1% or less. The same is also true of
> network bandwidth. The storage capacity on harddisks that can be
> accessed only from a specific servers is also underutilized. For
> example, harddisk capacity of harddisks attached to a database server,
> is used only when certain queries that require intermediate results to
> be stored to the harddisk. At all other times the harddisk capacity
> is not used at all.
No, that's just an argument for multi-tenancy. There are many ways to
achieve that. One is by applications sharing a server. Another is
through consolidation with virtualization.

Utilization of cheap hardware is not the goal. Application service is.
Minimizing costs while delivering application service is also a goal.
If that means using a cheap disk as a boot and scratch device, that's ok.
>
> Isolated pools of computing, network and storage are underutilized
> most of the time, but must be provisioned for that hypothetical peak
> capacity day, or even a peak capacity hour. What if we could
> reengineer our Operating Systems, network/storage management as well
> as all the other higher layers of software to work in a way that we
> are able to treat hardware resources as a set of "Compute Pools",
> "Storage Pools" and "Network Pools"?
We got into the mess by trying to minimize administrative overhead
(which is why applications don't share operating system instances).
Balancing hardware and administrative costs is necessary.
Re-architecting operating systems, application platforms, and
applications may make sense to support multi-tenancy and dynamic
scalability. Re-architecting to use resource pools without addressing
the scalable platform question doesn't make any sense at all.
>
> Numerous technical challenges have to be overcome to make this happen.
> This is what today's Cloud Computing Frameworks are hoping to achieve.
>
> Existing software vendors with their per Server and per CPU pricing
> have a lot to lose from this disruptive model. A BI provider like
> "Vertica <http://www.vertica.com>" hosted in the cloud, can compete
> very well with traditional datawarehousing frameworks. Imagine, using
> a BI tool few months in a year, to analyze a year's worth of data,
> using temporarily provisioned servers and rented software. Cost of an
> approach like this can be an order of magnitude less than traditional
> buy, install and maintain approach.
Software has to be funded by revenue or you will be dependent solely on
open source. Given that most open source is clones of successful
commercial products (like Unix) or implementations of standards derived
from commercial products (MySQL, Postgres, et al.), I'm not sure you are
going to be happy with that.

If software platforms are going to evolve to support scalable, reliable
applications on cheap, expendable hardware, there is a bill to be paid.
You can want innovative software on the cheap, but you aren't likely to
get it.
>
> I think Sun's private cloud offering may be the tipping point that
> will persuade mainstream rather than cutting edge IT organizations to
> switch to a cloud approach. With a private cloud, one could share
> compute, network and storage resources amongst a set of business
> units, or even affiliated companies.
>
> You can read a comparison of existing cloud offerings here:
>
> http://soarealworld.wordpress.com/2009/03/
>
>
> Kevin Apte
> technicalar...@gmail.com
> <mailto:technicalar...@gmail.com>
> http://soarealworld.wordpress.com


--
Jim Starkey
President, NimbusDB, Inc.
978 526-1376

Peglar, Robert

Mar 27, 2009, 7:23:03 AM
to cloud-c...@googlegroups.com
Jim said:

>Utilization of cheap hardware is not the goal. Application service is.

>Minimizing costs while delivering application service is also a goal.
>If that means using a cheap disk as a boot and scratch device, that's ok.

Hmmm. Agree completely with the first three sentences. But the last
one, no: it's not OK to use cheap disk as a boot and scratch device if
the goal is to minimize cost while delivering application service.
Unless, that is, you do not value your boot images, or care about I/O,
or have the time to shovel boot images around to get machines back up.
Given the intelligent and reliable small (3U or less) disk subsystems
available today, there is no excuse to boot from cheap disk. Take one
outage in five years and you've blown the TCO of the 'use cheap disk'
method because of the human intervention necessary. Using a human to
crack the tin on a server to replace or rip out a blown disk is not cheap.

Rob

Jan Klincewicz

Mar 27, 2009, 9:10:57 AM
to cloud-c...@googlegroups.com
One of the cool technologies Citrix bought a few years ago (when they acquired Ardence) is now called "Provisioning Server", where boot images are delivered directly over the wire to physical or virtual machines.   The idea is to get rid of spinning mechanical (or virtual) disks at the SERVER level and provide a 1-to-many image (or n-to-many if that level of consolidation is not feasible) via PXE boot, or boot from a flash drive with TFTP code pointing to the provisioning server.

In this case, the actual boot image (or images) should be maintained on very robust (and redundant) storage on the back end, and servers can be provisioned (or re-provisioned) in a few seconds based on a physical (or virtual) MAC address.

It works pretty well in theory (and in proofs of concept), though there are still various complexities that create issues in lots of production environments (multiple PXE servers, network eccentricities). But it does move the paradigm from "lots of cheap disk" to "small amounts of expensive, but reliable disk."

OS and workload are sent on an "as needed" basis so as not to flood the network with traffic (though booting 100 servers simultaneously would obviously have some ramifications for DR, etc.). It's surprising how well it works on a single 100Mb segment.
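A back-of-envelope check of why a single 100Mb segment can cope: divide the boot working set by the effective link throughput. This is a sketch; the image size and overhead figures below are assumptions for illustration, not Citrix numbers.

```python
# Rough streaming-boot estimate (illustrative assumptions, not vendor figures).
link_mbps = 100          # shared 100 Mbit/s segment
efficiency = 0.6         # assumed TFTP/PXE protocol + contention overhead
working_set_mb = 200     # assumed OS blocks actually touched during boot

effective_mb_per_s = link_mbps * efficiency / 8   # megabytes per second
boot_seconds = working_set_mb / effective_mb_per_s
print(f"one server: ~{boot_seconds:.0f} s to stream its boot working set")
print(f"ten servers sharing the segment: ~{boot_seconds * 10:.0f} s worst case")
```

Because blocks are streamed on demand rather than copied up front, the practical number is usually better than this worst case.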
--
Cheers,
Jan

Greg Pfister

Mar 28, 2009, 5:14:47 PM
to Cloud Computing
Utilization of *** 1 % or less *** ???

Who fed them this? I have seen actual collected data from 1000s of
customers showing server utilization, and it's consistently 10-15%.
(Except mainframes.) (But including big proprietary UNIX systems.)

That said, I completely agree with Jim that this is an argument for
virtualization, not cloud computing.

I also agree with him that the whole issue shouldn't be about hardware
utilization, but about TCO. Notoriously hard to grip, of course, but
focusing on utilization misses major points.

Greg Pfister
http://perilsofparallel.blogspot.com/


Kevin Apte

Mar 28, 2009, 7:37:04 PM
to cloud-c...@googlegroups.com
I think I may have been guilty of exaggerating. But a surprising number of servers are at 1% utilization if averaged over 24 hours a day, 7 days a week, 365 days a year. For example, a database server may average 10 to 15% when it is in actual use. So when it is not in use, which may be 70% of the time, total utilization may average below 5%.  The 1% figure is not actually unrealistic.

This is what happens: companies have a one-set-of-servers-per-application policy. When the application was designed, 8 years ago, it had a utilization of 15% on the older servers. With newer servers the utilization drops to 5%.  Averaged over 365 days, 24 hours, etc., it can fall to 1%.
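The duty-cycle arithmetic above can be sketched directly (the figures are the illustrative numbers from this thread, not measurements):

```python
# Average a server's utilization over the full year, not just its busy hours.
busy_utilization = 0.15   # ~15% CPU while the application is actually in use
duty_cycle = 0.30         # in active use ~30% of the hours in a year
hw_speedup = 3.0          # assumed speedup of current servers over the old baseline

avg_utilization = busy_utilization * duty_cycle
print(f"24x7x365 average: {avg_utilization:.1%}")
print(f"same load on newer hardware: {avg_utilization / hw_speedup:.1%}")
```

A machine that looks respectably busy during working hours can thus show a low single-digit yearly average, and faster replacement hardware pushes the figure lower still.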

I am not sure of the figures, but a large fraction of companies have not virtualized in any significant way. The cautious laggards may jump straight into the world of cloud computing, and avoid the growing pains of using less-than-mature virtualization solutions.

Kevin

ademello

Mar 28, 2009, 7:43:39 PM
to Cloud Computing
Agreed.


Greg Pfister

Mar 29, 2009, 2:37:53 PM
to Cloud Computing
No, you're not exaggerating unless you were the person feeding data to
the WSJ; they had the 1% figure.

The 10-15% number I quoted was a 365.25 x 24 x 7 number, obtained by
data-mining actual CPU utilization logs collected from 1000s of
servers over months. I'd provide a pointer to the person who collected
it, but he's planning publication and hasn't done so yet.

I think somebody in the link to the WSJ just dropped a zero somewhere
in a cut and paste.

Greg Pfister
http://perilsofparallel.blogspot.com/


ademello

Mar 29, 2009, 6:44:51 PM
to Cloud Computing
Greg, I don't see the 1% figure quoted in the above-linked WSJ article.


Kevin Apte

Mar 29, 2009, 8:15:09 PM
to cloud-c...@googlegroups.com
Mea culpa. My 1% figure is not authoritative.  It is based on my experience with a specific set of servers:

J2EE application servers: only one application is allowed per cluster of servers. So if you had 15% utilization when you designed the application 8 years ago, on current servers it could be 5% or less.  With applications that are used only a few hours per week, 1% is certainly possible.

The other servers for which utilization is really low are departmental web servers and mail servers.

The fact that you are able to collect logs of CPU utilization across thousands of servers suggests that this is a bigger-than-typical installation.  Many smaller companies, or divisions of larger companies, are not this sophisticated.

Greg: anyway, my point has been that the WSJ article labels cloud computing as "puffery".

Kevin

dave corley

Mar 30, 2009, 9:40:55 AM
to cloud-c...@googlegroups.com
It's all about TCO, and specifically opex. But the case for capex as an argument for an immediate cloud and virtualization push is strong, although the WSJ uses mass-media brevity to make its point quickly.

The argument of better utilization for either cloud computing or virtualization can be misleading. Typically, resource deployments are based upon peak utilization. In the world of telecom, the terms P.01 or P.02 are used to signify peak utilization metrics. Because nothing in IT is certain, the probability that a certain quantity of resources will be oversubscribed is predicted (then measured once deployed) by a P-factor. P.01 indicates that the probability (in time) of an oversubscription event is 1%. Telecom companies size their switches and bandwidth capacity along critical paths to such a measure, knowing that the impact on the one-percenters during a peak is either a dropped call or, at best, a message to the caller: "Facilities oversubscribed. Please call back later." A callback a few seconds later usually results in a successful call, as statistics work in the caller's favor. The peak resource utilization event for critical resources for the telcos in the US is usually Mother's Day. But the telcos are willing to suffer some (expected) number of angry calls if the peak resources are exceeded. It's a balance between overspending on resources versus customer dissatisfaction and customer loss because some implicit or explicit service level agreement is breached.
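The P.01 sizing Dave describes has a classical closed form in telecom: the Erlang B formula gives the blocking probability for a given trunk count and offered load. A minimal sketch, using the standard iterative recurrence (the 100-Erlang offered load is a made-up figure for illustration):

```python
def erlang_b(trunks: int, offered_erlangs: float) -> float:
    """Blocking probability via the standard iterative Erlang B recurrence."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# Smallest trunk count that meets P.01 (1% blocking) for 100 Erlangs of traffic.
load = 100.0
trunks = 1
while erlang_b(trunks, load) > 0.01:
    trunks += 1
print(f"{trunks} trunks needed for P.01 at {load:.0f} Erlangs")
```

Note the answer is only modestly above the offered load itself: statistical multiplexing means you do not need anywhere near one trunk per potential caller, which is the same economics that cloud providers rely on.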

Virtualization and "on-demand" computing may not help in this scenario, nor can the (already existing) cloud computing of traditional telco switches. Resources (the voice network) are in general constrained to a single application: voice. So resources intended for "other" applications cannot be shifted to fill the peak utilization gap. But in some cases it may be good enough. For example, Walmart is "persuading" Microsoft to sell MS Office licensing by real-time use rather than per-seat installation. Makes sense. The likelihood that everyone will use an office app on Mother's Day or some other peak instant is low. So, for their 500,000 employees, of which an average of 100,000 are using an MS Office app at any one instant, it makes sense for Walmart to push Office to the cloud AND virtualize, and ask MSFT to back off to a real-use licensing policy.

In an environment in which multiple applications contend for the same resources (CPU, memory, power, cooling, people, etc.), peak overall utilization must still be considered. But the logic is that if the applications are judiciously selected to be synchronized, contra-synced, and counter-synced in time across the distribution of available resources, the average utilization becomes much higher than in the case of a single application contending for the resources.
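The effect can be seen in a toy model: two workloads with out-of-phase peaks multiplexed onto the same pool. The demand curves below are invented for illustration.

```python
import math

# Hourly demand for two illustrative workloads with out-of-phase peaks:
# a daytime interactive app (peak at noon) and a nightly batch job (peak at midnight).
day_app   = [0.1 + 0.8 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]
night_job = [0.1 + 0.8 * max(0.0, math.sin(math.pi * (h - 18) / 12)) for h in range(24)]

def avg_to_peak(demand):
    """Average-to-peak utilization ratio: 1.0 means perfectly flat demand."""
    return sum(demand) / len(demand) / max(demand)

combined = [a + b for a, b in zip(day_app, night_job)]
print(f"day app alone:   {avg_to_peak(day_app):.2f}")
print(f"night job alone: {avg_to_peak(night_job):.2f}")
print(f"multiplexed:     {avg_to_peak(combined):.2f}")
```

Each workload alone must be provisioned for a peak it touches only briefly; co-scheduling the anti-phased pair on shared capacity roughly doubles the average-to-peak ratio in this sketch.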

Dave

Rao Dronamraju

Mar 30, 2009, 11:57:45 AM
to cloud-c...@googlegroups.com

Dave, very good analogy from the telecom space.

I think it all boils down to the criticality of the service being provided and the value associated with it.

For instance, if a call is dropped due to non-availability of resources on a telecom cloud, the criticality and the value generally may not be high enough to cause that much of a business problem, unless the call is a call to 911, in which case you will have to pay all the lawyers in the world to settle the case…

Whereas if a business cannot fulfill peak resource demand, the implications could be far more severe.

For instance, if, say, Amazon.com cannot fulfill the peak demand of online Christmas shoppers, they might go to some other site to purchase their gifts. This might result in a substantial dollar amount of lost business.

Interesting to note that Walmart is "persuading" Microsoft to sell MS Office licensing by real-time use rather than per-seat installation.

Will Walmart sell TVs on a per-use basis?... I do not use a TV at least 10 hours a day, 5 to 6 days a week.


Greg Pfister

Mar 30, 2009, 12:05:02 PM
to Cloud Computing
Whoops, sorry, I misremembered. Should have gone back and checked.

Thanks for catching that.

Greg Pfister
http://perilsofparallel.blogspot.com/

Greg Pfister

Mar 30, 2009, 12:24:55 PM
to Cloud Computing
On Mar 29, 7:15 pm, Kevin Apte <technicalarchitect2...@gmail.com>
wrote:
> Mea Culpa. My 1% figure is not authoritative. It is based on my experience
> with a specific set of servers:
>
> J2EE Application Servers: Only one application is allowed per cluster of
> servers. So if you had 15% utilization when you
> designed the application 8 years ago, on current servers it could be 5% or
> less. With applications that are used only few
> hours per week, 1% is certainly possible.

I'm not saying some servers aren't as low as 1%; my figure is an
average. See below.

> The other set of servers for which utilization is really low are:
> departmental web servers and mail servers.
>
> The fact that you are able to collect logs of CPU utilization, and the fact
> that there are thousands of servers, suggests that this is a
> bigger than typical installation. Many smaller companies or divisions of
> larger companies are not this sophisticated.

Actually, it was across a very large set of companies that hired IBM
Global Services to manage their systems. Once a month, along with a
bill, each company got a report on outages, costs, ... and
utilization.

A friend of mine heard of this, and asked, "Are you, by any chance,
archiving those utilization numbers anywhere?" When the answer came
back "Yes" -- you can guess the rest. He drew graphs of the number of
servers at a given utilization level. He was astonished that for every
category of server he had data on, the graphs all peaked between 10%
and 15%. In fact, the mean, the median, and the mode of the
distributions were all in that range.

Which also indicates that it's a range. Some were nearer zero, and
some were out past 90%. That yours was 1% is no shock.
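The shape Greg describes (a tight cluster around 10-15% with tails toward zero and past 90%) is easy to check mechanically. A sketch with synthetic data, since the real IBM Global Services logs are not public; the sample counts and parameters are invented:

```python
import random
import statistics

random.seed(1)
# Synthetic stand-in for per-server utilization logs: most servers cluster
# near ~12%, a few sit near zero, a few run past 90%.
samples = ([random.gauss(0.12, 0.03) for _ in range(900)]
           + [random.uniform(0.00, 0.02) for _ in range(60)]
           + [random.uniform(0.85, 0.95) for _ in range(40)])
samples = [min(max(s, 0.0), 1.0) for s in samples]   # clamp to [0%, 100%]

mean = statistics.mean(samples)
median = statistics.median(samples)
print(f"mean {mean:.1%}, median {median:.1%}")
```

The long right tail pulls the mean slightly above the median; when mean, median, and mode all land in the same narrow band, as in Greg's data, the bulk of the distribution really is that tightly clustered.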

Anyway, this *average* is really the most reliable number I know of in
this area. Since I know so few numbers that are as solid as this, I am
prone to jump on anybody saying otherwise. Sorry you got in the way of
my leap this time. :-)

Greg Pfister
http://perilsofparallel.blogspot.com/


Miha Ahronovitz

Mar 30, 2009, 1:27:14 PM
to cloud-c...@googlegroups.com
> He was astonished that for every category of server he had data on, the graphs all peaked between 10% and 15%.

Greg,

This is no surprise to me, as HPC packages like Sun Grid Engine, working on batch jobs, can push utilization close to 90%. We had data showing that without a workload manager of some sort, average utilization is 10% to 15%, confirming what you discovered.

This means that worldwide, 85% to 90% of installed computing capacity is sitting idle. Grids improved this utilization rate dramatically, but grid adoption was limited. Clouds, the newer incarnation of grids, will increase the utilization percentages. But in the process, it makes us ask whether a blind race for maximum utilization is what we want.

First, there are technical considerations: in large parallel computations, some jobs need exclusive host access.
Then, once we have billing, and can make advance reservations of compute resources for which we pay, the focus will shift from the search for maximum utilization to maximum revenue.

In other words, we will not care if we have 0% utilization as long as someone is paying a bill to reserve the 100% capacity for whatever reason.
The cloud owners will worry only about the percentage of idle machines that no one uses and no one has reserved.

This is exactly the business model of running a hotel.

2 cents,

Miha


From: Greg Pfister <greg.p...@gmail.com>
To: Cloud Computing <cloud-c...@googlegroups.com>
Sent: Monday, March 30, 2009 9:24:55 AM

Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...

ademello

Apr 1, 2009, 11:50:21 PM
to Cloud Computing
Just wanted to mention that at the Data Center Efficiency Summit at
Google Mountain View earlier today, Google presented that the
utilization in their clusters ranged from 30% to 75%, which is
amazingly high if you consider it.

The power numbers are also absurd: they run a PUE of 1.2 in their most
advanced DCs.


Rao Dronamraju

Apr 2, 2009, 12:28:24 AM
to cloud-c...@googlegroups.com

Actually this is a very interesting discussion.

As Miha mentioned below, server utilization rates were close to 90% when Sun
Grid Engine was used on grids.

But isn't it a fact that the grids out there have been specifically
designed for scientific/HPC-centric workloads? Which means the workloads
have been inherently designed for high CPU utilization rates...

It will be interesting to see what the network and storage resource
utilization rates are on grids...

On the contrary, for the non-scientific/HPC/compute-intensive workloads
widely prevalent in the commercial world, especially the
internet/web/enterprise world, the very nature of the workload is
non-CPU-intensive: more I/O-centric and I/O-intensive.

So don't you think we would naturally find CPU utilization rates low and
I/O utilization rates high?

So should the servers of today have been designed in such a way that I/O
bandwidth takes precedence over CPU bandwidth as a design requirement and
criterion?...

So let us say Intel processors drive the network I/O and a Broadcom (the
networking chip folks) chip runs the CPU, assuming they are designed
architecturally and technologically to do that.

This way the system would have very high utilization rates in both the CPU
and I/O domains.

Just wondering if the servers of today are not designed for the (nature of
the) workloads of today...

Also, with regard to Google's utilization numbers: are the cluster
utilization rates that high because of load balancing? And also the nature
of the workload, search engines running? Just curious...

Chris Marino

Apr 2, 2009, 10:01:34 AM
to cloud-c...@googlegroups.com
Nice write up from CNET...

http://news.cnet.com/8301-1001_3-10209580-92.html

Their UPS strategy is quite innovative as well.
CM

dave corley

Apr 2, 2009, 10:11:47 AM
to cloud-c...@googlegroups.com
Diversify the application types according to usage cycles and increase the number of users, and the average-to-peak utilization ratio grows higher. Good financial counselors suggest diversification not only across a broad range of the risk spectrum, but also across a spectrum of financial instruments that are in sync, contra-synced, and un-synced with major financial benchmarks. The theory is that the volatility of the total portfolio is thereby reduced during periods of dynamic equilibrium. Similar logic applies to resource utilization: design infrastructure for peak utilization and some target probability of oversubscription, but distribute application types as widely as possible to optimize the average-to-peak utilization ratio.

As SaaS matures, application utilization will become more measurable. The ability of data center operators to efficiently pick and choose the distribution of application types across their virtualized networks will become a new science. Perhaps this is another reason why there are so many math geniuses at work at Google. ;-)

This logic does not account for the "punctuations" of punctuated equilibrium (as opposed to relatively statistically predictable dynamic equilibrium). The punctuations are catastrophic events that cannot be predicted by statistical models; the recent mega-financial crisis may be an example. Adherence to Black-Scholes-Merton models arguably fed a reliance on statistical models. Catastrophic events (the thousand-year flood happening twice in a decade) that cannot be predicted by statistical methods should also be included in large, public data center design analysis...but I am not sure what math can be used to predict "catastrophes". Chaos theory is advancing, but claims of 75% utilization efficiency are based upon measured results during dynamic equilibrium...I wonder whether they account for catastrophic events to their networks. My company built a multi-mega-million-dollar data center. Its physical design included the relative certainty that the building would take a direct hit from an F2 tornado in its lifetime and survive without service disruption...so there is risk on the table that an F5 could damage or eliminate local data. Risk happens.

Dave

ademello

Apr 2, 2009, 11:12:12 AM
to Cloud Computing
There are many good articles on this matter, see the various "disk is
the new tape, RAM is the new disk" posts out there.

The reality is that Moore's law hasn't yet dawned on the storage
vendors... if you look at disk I/O rates over the past 30 years and
compare them to, let's say, a basic x86 processor, we are still in the
stone age in the disk/storage world. Much of this is due to the
physical limitations of spinning magnetic disks stacked a few
microns from each other at 15,000+ RPM... ;-)

One great white hope is SSDs, which will bridge the gap for a little
while. That, and sustained cheap RAM prices.

It's interesting to note that the Google motherboards use standard non-
ECC PC desktop RAM. Why? It's so much cheaper. They take care of the
ECC in software.

I'd like to see Rackable or Dell license the Google server design.
From their talk earlier this week, the Googlers are clearly open to
the idea.


Greg Pfister

unread,
Apr 2, 2009, 1:28:11 PM4/2/09
to Cloud Computing
On Apr 2, 10:12 am, ademello <ademe...@gmail.com> wrote:
> There are many good articles on this matter, see the various "disk is
> the new tape, RAM is the new disk" posts out there.

Agree.

> Reality is that Moore's law hasn't yet dawned on the storage
> vendors... if you look at the disk IO rates over the past 30 years and
> compare that to let's say a basic x86 processor, we are still in the
> stone age in the disk / storage world. Much of this is due to the
> physical limitations of spinning magnetic disks in a stack a few
> microns away from each other at 15,000+ RPM... ;-)

Actually, Moore's law has dawned on them, and it's why they seem stone
age. Moore's law compensates: DRAM gets denser / cheaper per bit, buy
more DRAM, cache more, don't need faster disks, don't buy them, market
responds with slow data tubs.

> One great white hope are SSDs, which will bridge the gap for a little
> while. That, and sustained cheap RAM prices.

DRAM prices, yeah. Don't get me started. SSD helps, I agree, but I
believe it's only because Microsoft seems to be incompetent at memory
management.

Greg Pfister
http://perilsofparallel.blogspot.com/

Jim Starkey

unread,
Apr 2, 2009, 4:02:35 PM4/2/09
to cloud-c...@googlegroups.com
Greg Pfister wrote:
> On Apr 2, 10:12 am, ademello <ademe...@gmail.com> wrote:
>
>> There are many good articles on this matter, see the various "disk is
>> the new tape, RAM is the new disk" posts out there.
>>
>
> Agree.
>
>
>> Reality is that Moore's law hasn't yet dawned on the storage
>> vendors... if you look at the disk IO rates over the past 30 years and
>> compare that to let's say a basic x86 processor, we are still in the
>> stone age in the disk / storage world. Much of this is due to the
>> physical limitations of spinning magnetic disks in a stack a few
>> microns away from each other at 15,000+ RPM... ;-)
>>
>
> Actually, Moore's law has dawned on them, and it's why they seem stone
> age. Moore's law compensates: DRAM gets denser / cheaper per bit, buy
> more DRAM, cache more, don't need faster disks, don't buy them, market
> responds with slow data tubs.
>
>
>> One great white hope are SSDs, which will bridge the gap for a little
>> while. That, and sustained cheap RAM prices.
>>
>
> DRAM prices, yeah. Don't get me started. SSD helps, I agree, but I
> believe it's only because Microsoft seems to be incompetent at memory
> management.
>
>

I think it's more like the market for Airbuses and 747s vs. the SST.
Fast is all well and good, but bits per buck wins out in the end.

I like SSDs as much as the next fellow, but I don't think they will have
much effect on anything but portable devices in the long run. For as
long as there have been computers, there has been a memory hierarchy --
register, cache, main memory, virtual memory, disk. The speed of the
hierarchy is determined by the speed of the things at the top, not the
bottom. What the database world has learned about disks is to not make
database performance dependent on them. And we're getting smarter. The
more RAM you have in a system/cluster/cloud, the less you care about
disk speed. It's always faster to get data from another node than a
disk. Disks are useful for archival storage, but that's about it. For
everything else, RAM generously distributed around a very fast net is
a better way to spend money. If you want fast, use RAM. If you want
cheap, use disks. If you want both, pay for smart software.
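Jim's ordering can be made concrete with rough, order-of-magnitude latency figures; the numbers below are illustrative, not measured:

```python
# Illustrative access latencies in nanoseconds, order-of-magnitude only.
latency_ns = {
    "L1 cache": 1,
    "main memory (DRAM)": 100,
    "RAM on a peer node, gigabit LAN round trip": 500_000,
    "disk seek + rotation (15k RPM)": 5_000_000,
}

for tier, ns in sorted(latency_ns.items(), key=lambda kv: kv[1]):
    print(f"{tier:45s} ~{ns:>12,} ns")

# Even a full network round trip to another node's RAM is roughly an
# order of magnitude faster than a single random disk access, which is
# why "fetch from a peer's cache" beats "fetch from local disk".
```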

Peglar, Robert

unread,
Apr 2, 2009, 4:17:07 PM4/2/09
to cloud-c...@googlegroups.com
There is a somewhat (vicious?) circular pattern here. Yes, Moore's law, DRAM gets cheaper/denser, cache more, as you said. But, that data has to eventually rest somewhere else than DRAM, after all, and that somewhere else is spinning disk. I would opine that Moore's Law actually _drives_ 2nd level storage to increase at the same rate, as well as needing faster disk to keep those nice DRAM banks from being starved for data.

The market has told the disk vendors in no uncertain terms that it wants big, cheap, slow disk. So, guess what? Disk vendors have given them exactly what they asked for. The only way to break that cycle is to get CIOs to understand that buying disk is not like buying ground beef at the supermarket, $/pound. You hear the mantra today, "I want cheaper $ per GB"; then they complain about slow disk, so they buy more DRAM (denser/cheaper, cache more), through which they run more workload and eventually have to dump more data to disk... you get the circularity here.

Access density is a problem, to be sure, but hardly anyone wants to buy high access density disks today, since they are not "cheap". Put another way, this illustrates the essential difference between cost and value. Most, unfortunately, are choosing cost above all; thus, big data tubs are all the rage. High access density disks provide better value over time than low access density for plenty of real-world workloads, but hardly anyone seems to care.
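A back-of-the-envelope sketch makes the cost-versus-value distinction concrete; the drive specs and prices below are invented but era-plausible:

```python
# Hypothetical drives: (capacity in GB, random IOPS, street price in $).
drives = {
    "1 TB 7.2k SATA 'data tub'": (1000, 80, 100),
    "146 GB 15k FC": (146, 180, 250),
    "64 GB SLC SSD": (64, 5000, 800),
}

print(f"{'drive':28s} {'$/GB':>8s} {'IOPS/GB':>9s} {'$/IOPS':>8s}")
for name, (gb, iops, usd) in drives.items():
    # Access density is IOPS per GB: how hard each stored GB can be hit.
    print(f"{name:28s} {usd/gb:8.2f} {iops/gb:9.2f} {usd/iops:8.3f}")

# Bought by the pound ($/GB), the big SATA drive wins; bought by access
# density (IOPS/GB) or by $/IOPS, the ranking inverts completely.
```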

This is another reason why SSDs haven't 'taken off' as many would think. Too many CIOs and IT directors buy storage like ground beef, $ per pound. SSD is very expensive by that metric. SSD has much higher access density than HDD, of course, but again, hardly any CIOs notice or understand. They just look at the price tag.

Rob

Rao Dronamraju

unread,
Apr 2, 2009, 5:53:25 PM4/2/09
to cloud-c...@googlegroups.com


"For everything else, RAM generously distributed around a very faster net is
a better way to spend money."

So do you prefer RDMA over InfiniBand or RDMA over 10GigE for it?....

Also, do data warehousing applications have any problems because of the
large datasets?...


-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Jim Starkey
Sent: Thursday, April 02, 2009 3:03 PM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is
crystal clear...


Chris Marino

unread,
Apr 2, 2009, 6:53:22 PM4/2/09
to cloud-c...@googlegroups.com
Rob, I think you're painting this with too broad a brush....

For every DBA that knows enough *not* to purchase disk at $/GB, there are probably 1,000 PC users that need to store their family photos.  Along with Moore's law, there are experience curves and economies of scale that accrue (only) to the dense, slow technologies. 

Second, IMHO, the main reason SSD hasn't lived up to expectations is not the cost, but the R/W performance asymmetry.  I can turn the cost argument inside out and argue that SSD is the cheapest cache RAM you'll ever find. But for database applications, the write performance kills you.

CM

Jim Starkey

unread,
Apr 2, 2009, 8:08:23 PM4/2/09
to cloud-c...@googlegroups.com
Rao Dronamraju wrote:
>
> "For everything else, RAM generously distributed around a very faster net is
> a better way to spend money."
>
> So do you prefer RDMA over InfiniBand or RDMA over 10GigE for it?....
>
Right now, for database purposes, Gigabit Ethernet is far, far from the
bottleneck. I have no doubt that by the time we have figured out how
to beneficially consume a gigabit, 10 gigabit will be as cheap and
ubiquitous as 1 gigabit is now.

The goal of a database designer is to exhaust CPU, memory, disk
bandwidth, and (now) network bandwidth simultaneously. This is getting
to be a very daunting problem...

> Also, do data warehousing applications have any problems because of the
> large datasets?...
>

Only the financial solvency of their owners?

Seriously, I don't know yet.

Peglar, Robert

unread,
Apr 3, 2009, 6:13:10 AM4/3/09
to cloud-c...@googlegroups.com

Chris, no question, the consumer application(s) that may fit cloud compute and cloud storage are many, most of them aimed square at the ‘convenience’ factor.  There have been threads along those lines, with the realization that today’s cloud storage services are very expensive in terms of actual storage efficiency – you pay a big premium for the convenience of near-ubiquitous access. 

 

But most of this thread is talking about enterprise utilization, comparing hosting from the like of IBM Global Services to roll-your-own.  Still, I should have framed the context more clearly.  Plus, very few DBAs today actually purchase storage – most of the time, they “recommend” only, which means they don’t sign the checks.  At the end, in most datacenter cases, the buck stops with the CIO.

 

There aren’t too many folks out there that recognize the R/W performance asymmetry of SSD, yet.  Yes, the cognoscenti do, but most of the CIOs – who sign the checks – don’t.  They just see the price tag.  I hear this constantly, “we’d love to use SSD, but it is too expensive.” 

 

BTW, SSD includes more than just flash-based devices – so no, the write performance doesn’t “kill you”, if you use DRAM SSD.  Plus, some forms of flash are doing a nice job of reducing the ‘write penalty’ these days – but YMMV, no question.

 

 

We now return you to your regularly scheduled program on the definition of “cloud computing”.

 

Rob





Brad Hollingshead

unread,
Apr 3, 2009, 8:56:33 AM4/3/09
to cloud-c...@googlegroups.com
Chris,

Historically, you are correct. There are, however, new generations of SSDs coming on the market with MUCH better write performance, and even the slower SSDs are faster than writing across the SAN. When the "SAN" is fast, it's because the storage array has RAM in it providing the cache. The spindles are never as fast as memory or SSD.

So, the trick here, and the basis for next generation data architectures is to blend the capabilities and strengths of each of the components in the food chain (RAM, SSD, DISK) into an architecture that leverages the strengths of all three.

You can use RAM & SSD closer to the CPU to provide data caching with persistence. This makes the application run faster while utilizing those big, fat, lumbering JBOD arrays in the back to store the data most efficiently. When you combine those across intelligent file systems like ZFS, you now have a solution that gives you lots of capacity with minimal latency and persistence of data.
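The blending idea can be sketched as a toy two-tier store: a small fast tier (RAM/SSD) fronting a large slow capacity tier (JBOD). This is only an illustration; real implementations such as ZFS's caching are far more sophisticated:

```python
from collections import OrderedDict

class TieredStore:
    """Toy read cache: small fast tier (RAM/SSD) over a big slow tier (JBOD)."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()          # LRU cache standing in for RAM/SSD
        self.fast_capacity = fast_capacity
        self.slow = {}                     # dict standing in for the disk tier

    def write(self, key, value):
        self.slow[key] = value             # persist to the capacity tier
        self._promote(key, value)

    def read(self, key):
        if key in self.fast:               # hit: serve from the fast tier
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]             # miss: fetch from disk, then cache
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict least recently used

store = TieredStore(fast_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
print("a" in store.fast)   # "a" was evicted from the small fast tier...
print(store.read("a"))     # ...but is still served from the slow tier
```

The application sees one namespace; hot data is answered at fast-tier speed while the big, cheap tier holds everything.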

This is how the major cloud vendors will build out their architectures. They simply cannot use SAN for storage as it is far too expensive.

Thanks,
Brad

Peglar, Robert

unread,
Apr 3, 2009, 10:57:41 AM4/3/09
to cloud-c...@googlegroups.com

>This is how the major cloud vendors will build out their architectures. They simply cannot use SAN for storage as it is far too >expensive.

That is far too general a statement.  _Some_ SANs are very expensive.  Others are not.  Plus, certain SANs have greater value over 5 years, for example, than direct-attached JBOD or internal disk.  Higher acquisition cost, maybe; better value over time, absolutely.

 

DIF alone is reason to use SAN.  That is, unless the cloud providers believe data integrity is not of value. 

 

Plus, the entire reason behind SAN is to share storage; you can’t move virtual machines around without shared storage, for example, unless you replicate everything, and that destroys the entire concept of trying to save money on storage.

 

Rob


Dan Phillips

unread,
Apr 3, 2009, 11:14:20 AM4/3/09
to cloud-c...@googlegroups.com

Is that how Parascale is building out theirs?

Dan

 

Daniel P. Phillips

VP North American Sales

DS-Software, LTD.

404-915-8898

dan.ph...@asigra.com

 


                   It's all about the Recovery!

 

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Peglar, Robert
Sent: Friday, April 03, 2009 10:58 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...

 

>This is how the major cloud vendors will build out their architectures. They simply cannot use SAN for storage as it is far too >expensive.

Brad Hollingshead

unread,
Apr 3, 2009, 11:31:27 AM4/3/09
to cloud-c...@googlegroups.com
Robert,

Thanks for the comments, but I don't think so.

I didn't say there won't be shared data networks with information moving locally, regionally, and globally.

I do contend that legacy SAN's are not how this is being done today (or in the future) with cloud providers.

Providers simply can't get to the $/GB they need to be profitable on the net with traditional SAN infrastructure and expensive spinning media. They will use hybrid topologies that include a number of the media we discuss, including JBOD, SSD and RAM, to achieve it. You also mention an interesting point in that data integrity will always be a consideration for cloud services. This does not imply, or require, legacy SAN devices with large expensive topologies. You can absolutely achieve it using blended technologies like those mentioned below.

Thanks,
Brad

Richard Elling

unread,
Apr 3, 2009, 11:33:19 AM4/3/09
to cloud-c...@googlegroups.com
Peglar, Robert wrote:
>
> >This is how the major cloud vendors will build out their
> architectures. They simply cannot use SAN for storage as it is far too
> >expensive.
>
> That is far too generalistic. _/Some/_ SANs are very expensive. Others
> are not. Plus, certain SANs have greater value over 5 years, for
> example, than direct attached JBOD or internal disk. Higher
> acquisition cost, maybe; better value over time, absolutely.
>

I would like to see how you calculate this... :-)
>
> DIF alone is reason to use SAN. That is, unless the cloud providers
> believe data integrity is not of value.
>

DIF is at the wrong level. You really want end-to-end to be as close
to the application (end) as possible. Ideally it will be in the
application,
but we all know that developers can be lazy :-)

Nit: DIF also could work for DAS. SAN does not have an exclusive here.
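The end-to-end idea can be sketched as a checksum computed and verified at the application boundary, so corruption introduced anywhere below is caught. A minimal illustration (an application-level CRC, not T10 DIF itself):

```python
import zlib

def put(store, key, payload: bytes):
    # Compute the checksum at the application boundary, before the data
    # descends through filesystem, volume manager, HBA, and disk firmware.
    store[key] = (payload, zlib.crc32(payload))

def get(store, key) -> bytes:
    payload, expected = store[key]
    if zlib.crc32(payload) != expected:
        raise IOError(f"end-to-end checksum mismatch for {key!r}")
    return payload

store = {}
put(store, "record-1", b"hello cloud")
print(get(store, "record-1"))             # verified on the way back out

# Simulate silent corruption anywhere in the layers below the application:
payload, crc = store["record-1"]
store["record-1"] = (b"hellX cloud", crc)
try:
    get(store, "record-1")
except IOError as e:
    print(e)                              # corruption detected end to end
```

A device-level check (DIF on the drive, or a checksum inside the array) could not catch corruption introduced above it; the check at the "end" covers the whole path.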

> Plus, the entire reason behind SAN is to share storage; you can’t move
> virtual machines around without shared storage, for example, unless
> you replicate everything, and that destroys the entire concept of
> trying to save money on storage.
>

NAS tends to be far less expensive for this. Though there is some
traction beginning for SAS switches, which might play here. A SAS
switch could be very economical.
-- richard

Matthew Zito

unread,
Apr 3, 2009, 11:39:39 AM4/3/09
to cloud-c...@googlegroups.com

Don't forget here, too, that SAN != Fibre Channel.  I can have an iSCSI SAN as easily as I can Fibre Channel, and the infrastructure costs are significantly reduced due to not needing FC HBAs, switches, etc.

Matt

--
Matthew Zito
Chief Scientist
GridApp Systems
P: 646-452-4090
mz...@gridapp.com
http://www.gridapp.com





-----Original Message-----
From: cloud-c...@googlegroups.com on behalf of Richard Elling
Sent: Fri 4/3/2009 11:33 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...


ademello

unread,
Apr 3, 2009, 11:45:39 AM4/3/09
to Cloud Computing
Agreed. Let's not forget as well that many, many applications that are
early entrants into the cloud are write once / read many, or at least
their records are not changed that frequently. Also, you can hold the
latest version of a record in memory or cache it, and allow an eventual
consistency model to take care of persisting it to disk (or SSD) if
your write rates are that high or if the replication overhead from
redundancy introduces a lag time.
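That caching-plus-eventual-persistence pattern can be sketched as a toy write-behind cache; the structure below is illustrative only:

```python
import threading, queue, time

class WriteBehindCache:
    """Toy write-behind cache: reads hit memory; a background thread
    persists dirty records to the 'disk' later (eventual consistency)."""

    def __init__(self):
        self.memory = {}                  # authoritative latest version
        self.disk = {}                    # lags behind until flushed
        self.dirty = queue.Queue()
        threading.Thread(target=self._flusher, daemon=True).start()

    def write(self, key, value):
        self.memory[key] = value          # acknowledge immediately
        self.dirty.put(key)               # persist later, off the hot path

    def read(self, key):
        return self.memory[key]           # always the newest version

    def _flusher(self):
        while True:
            key = self.dirty.get()
            time.sleep(0.01)              # simulated slow disk/SSD write
            self.disk[key] = self.memory[key]
            self.dirty.task_done()

cache = WriteBehindCache()
cache.write("user:1", "v1")
print(cache.read("user:1"))   # served from memory at once
cache.dirty.join()            # wait for eventual persistence
print(cache.disk["user:1"])   # now durable on the slow tier
```

Writes are acknowledged at memory speed; the slow tier catches up eventually, which is exactly the trade the write-once / read-many workloads can afford.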

Also, re: read/write speed asymmetry take a look at the Fusion-IO Duos
and the RAMSAN from Texas Memory Systems. The performance numbers are
fantastic.

Today, we are architecting extremely high performance persistent data
storage nodes that are entirely free of magnetic disks. Yes, they are
expensive, but so are SANs!

On Apr 3, 6:13 am, "Peglar, Robert" <Robert_Peg...@xiotech.com> wrote:
> Chris, no question, the consumer application(s) that may fit cloud
> compute and cloud storage are many, most of them aimed square at the
> 'convenience' factor.  There have been threads along those lines, with
> the realization that today's cloud storage services are very expensive
> in terms of actual storage efficiency - you pay a big premium for the
> convenience of near-ubiquitous access.  
>
> But most of this thread is talking about enterprise utilization,
> comparing hosting from the like of IBM Global Services to roll-your-own.
> Still, I should have framed the context more clearly.  Plus, very few
> DBAs today actually purchase storage - most of the time, they
> "recommend" only, which means they don't sign the checks.  At the end,
> in most datacenter cases, the buck stops with the CIO.
>
> There aren't too many folks out there that recognize the R/W performance
> asymmetry of SSD, yet.  Yes, the cognoscenti do, but most of the CIOs -
> who sign the checks - don't.  They just see the price tag.  I hear this
> constantly, "we'd love to use SSD, but it is too expensive."  
>
> BTW, SSD includes more than just flash-based devices - so no, the write
> performance doesn't "kill you", if you use DRAM SSD.  Plus, some forms
> of flash are doing a nice job of reducing the 'write penalty' these days
> - but YMMV, no question.
>
> We now return you to your regularly scheduled program on the definition
> of "cloud computing".
>
> Rob
>
> From: cloud-c...@googlegroups.com
> [mailto:cloud-c...@googlegroups.com] On Behalf Of Chris Marino
> Sent: Thursday, April 02, 2009 5:53 PM
> To: cloud-c...@googlegroups.com
> Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing
> is crystal clear...
>
> Rob, I think you're painting this with too broad a brush....
>
> For every DBA that know enough *not* to purchase disk at $/GB, there are
> probably 1,000 PC users that need to store their family photos.  Along
> with Moores law, there are experience curves and economies of scale that
> accrue (only) to the dense, slow technologies.  
>
> Second, IMHO, the main reason SSD hasn't lived up to expectations is not
> the cost, but the R/W performance asymetry.  I can turn the cost
> argument inside out and argue that SSD is the cheapest cache RAM you'll
> every find. But for database applications, the write performance kills
> you.
>
> CM
>

ademello

unread,
Apr 3, 2009, 11:49:41 AM4/3/09
to Cloud Computing
I don't actually know what the architecture is for S3, but I assume
for the prices to be that low they're not using a commercial SAN. I
had assumed it was Lustre or some other clustered file system on top
of JBOD or Sun x4500 Thumper boxes.

Does anyone actually know what the architecture is for S3?


Miha Ahronovitz

unread,
Apr 3, 2009, 12:21:31 PM4/3/09
to Cloud Computing
Friends, we seem to have lost sight of the heading of this thread.
Kevin Apte started this thread and offered a definition of the value
proposition:

"Value proposition of Cloud computing is crystal clear"

Is it? Here are some notes from a Product Manager perspective:

Forget what product management was 7 or 8 years ago.
Here is what PMs are going to be from now on.

Strategy:

1. Any strategy that ignores competition is no longer a strategy.
Competition is everywhere.

2. Product managers live in a world of uncertainty. Even if we don't
fully understand it, we can still extract constant opportunities from it.

Assume we have two products.
Product A sells to the same customers we sell to, trying to solve the
same problems we solve.
Product B has the same features as our product, but is not sold to our
customers yet.

Who is our competitor? A or B? Answer: A.

So traditional Managed Hosting in many situations is the competitor.
A simpler Grid Cluster is a competitor.

In the Cloud Computing Ecosystem we have three different PERCEIVED
Value Propositions.

From the USER's perspective...

- VP1, for the user: how much is a user prepared to pay for a cloud
service to be better off than with any other alternative?

From the cloud owner's perspective...

- VP2, for the cloud owner: can I sell enough cloud services and
attract sufficient users to make a profit?

From the cloud tool providers' and resource wholesalers' perspective...

- VP3, for cloud tool providers: how much will cloud owners be
prepared to pay for our tools and virtual resources?

Note there is a relationship, in inverse order, among the synergetic
cloud VPs.

VP3 => VP2 => VP1

First conclusion: without defining who the customers are, it is
impossible to define a cloud's (or any other product's) value
proposition. There is no generic cloud value proposition.

Product managers will be transforming into customer managers.
PMs can't monetize market share; it is too elusive and framed from the
competitors' perspective. PMs will go after customer share.

How are we judged today? "Profits per product," right? NO.
You should report profits per customer. We should think in terms of
the lifetime value of a customer.

Forget about market size. We should name the customers by NAME.

Here is fundamental Equation of Business for our customers:

V = B – P

Value = Benefits – Price

As we live in a relative world, full of competitors, the equation
should be written as:

RV = RB – RP
Relative Value = Relative Benefit – Relative Price

The "relative" is relative to our competitors (many of which are
OUTSIDE the cloud computing arena).
The secret to winning a customer is the final value of the relative value:

RV > 0. This is the CUSTOMER-perceived value of our product.
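With invented numbers for one hypothetical customer, the relative-value test is a few lines:

```python
# Hypothetical per-year figures for one named customer (all numbers invented).
benefit_ours, price_ours = 500_000, 120_000   # our cloud offering
benefit_alt, price_alt = 420_000, 100_000     # best competing alternative

# RV = RB - RP, where RB and RP are relative to the competitor.
relative_value = (benefit_ours - benefit_alt) - (price_ours - price_alt)
print(f"RV = {relative_value:+,} per year")   # RV > 0: we win the PO
```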

For VP3, the customers are cloud owners.
For VP2, the customers are the USERS.

The value proposition is what we tell our customers so they take out
their wallets and give us a PO.

If we do the homework well, a product will sell itself. If we know the
business of our customers, we tell them, individually:

"Each day you don't buy or don't use our cloud product or service, you
lose $X per day, $365X per year."

Easy. All we need is to quantify "X" for each customer!

IMHO, these sound product management principles will determine who
will be the successful players in the budding cloud computing industry.

Simply saying "We increase the utilization from 0% to 99%" is not
enough.

My 2 cents,

miha

On Mar 26, 11:40 am, Kevin Apte <kevina...@gmail.com> wrote:
> The Wallstreet Journal article "Internet Industry is on a Cloud"
> <http://online.wsj.com/article/SB123802623665542725.html>
>
> does not do Cloud computing any justice at all.
>
> First: Value proposition of Cloud computing is crystal clear. Averaged over
> 24 hours, and 7 days a week , 52 weeks in a year most servers have a CPU
> utilization of 1% or less.

Rollie Schmidt

unread,
Apr 3, 2009, 11:54:13 AM4/3/09
to cloud-c...@googlegroups.com

Parascale users will frequently just use CPU/disk building blocks, for instance processor blades with 4 drives each, and build across CPU/DAS-like bricks like that. But they can work with anything in terms of storage (iSCSI, SAN, DAS) that is attached and visible to the Linux FS on any node they bring/add into the system.

 

rs

 

Rollie Schmidt

 

530-888-9690 (office)

530-613-2984 (mobile)

530-885-1151 (fax)

rollie....@att.net

Skype:  rollieschmidt

Peglar, Robert

unread,
Apr 3, 2009, 12:40:33 PM4/3/09
to cloud-c...@googlegroups.com

Brad,

 

Thanks for your comments too, but I don’t think so. 

 

You are correct for legacy SANs – but I didn’t say “legacy SAN”, did I?  I am referring to modern SAN technology, not that of yesteryear.

 

But it isn’t about $/GB at acquisition.  It’s about $/GB, utilization of those GB, usable space (not raw), effective usable space (i.e. without performance degradation), W/GB, $/IOP, $/MB/sec, $/repair event, MB/sec/U, IOPS/U, access density, $/human being required, a veritable plethora of things over time, including all things like failure rates, maintenance, inputs required (W, BTU, etc.) and most of all, humans.  Look at the recent study by Wikibon, for example. 

 

Once again, you shouldn’t buy storage like ground beef and expect good results.  Making money using storage is about value, not just one-time cost.

 

As for RAM, SSD, JBOD – of course, topologies will be built around those.  But if that JBOD or SSD, holding the data at rest, does not support DIF, you have no assurance that your many TB or PB of cloud data has integrity.  Why do you think DIF is a standard now, anyway? 

 

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Brad Hollingshead
Sent: Friday, April 03, 2009 10:31 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...

 

Robert,


 


Peglar, Robert

unread,
Apr 3, 2009, 12:51:37 PM4/3/09
to cloud-c...@googlegroups.com
Hmmm. S3 prices are not low; they are quite high for what you get. You pay a very large premium for convenience. $150 per TB per month ($1,800 per TB per year) is way high. Their "large" (over 500 TB) rate of $1,200 per TB per year is still very high.

Even with their recent promotion of $30 per TB of storage per month, for 3 months, that's still high.

Oh, and that's just storage. You want to read that TB? That's more... if you read that TB, it's another $170.

So, storing 1 TB and reading it once per month is $320, or $3,840 per year. Like I said, you pay a very large premium for convenience and access.
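Rob's arithmetic checks out under the headline rates he quotes ($150 per TB-month stored, roughly $170 to read a TB back out):

```python
storage_per_tb_month = 150   # 2009-era headline rate, $0.15/GB/month
read_per_tb = 170            # approximate transfer-out charge for 1 TB

monthly = storage_per_tb_month + read_per_tb
print(f"store 1 TB + read it once: ${monthly}/month, ${monthly * 12:,}/year")
# → store 1 TB + read it once: $320/month, $3,840/year
print(f"storage alone: ${storage_per_tb_month * 12:,}/TB/year")
```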

Rob

-----Original Message-----
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of ademello
Sent: Friday, April 03, 2009 10:50 AM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...


Rao Dronamraju

unread,
Apr 3, 2009, 2:02:54 PM4/3/09
to cloud-c...@googlegroups.com

I have a different perspective on this.

The scope of cloud computing is broad and deep like web/internet.

So one needs to look at it first from the over arching end-to-end view of
the markets into which cloud is going to provide services/products and
solutions.

Just like internet has touched every individual and business in some form of
the other in its scope of providing services (whether free or not is a
different matter), cloud has similar potential.

For instance, I heard there are 300+ million MS Office licenses out there.
Suppose MS tomorrow, from a licensing point of view, agrees to make these
products available in the cloud. Instantly you have created a market of 300+
million users of the cloud.

So I think first we need to take a step back and define what the hell the
cloud is all about and how it serves consumers across the globe.

In this respect the markets have been fairly well defined in terms of
CONSUMER MARKETS, BUSINESS MARKETS (Small, Medium and Enterprise)

So if we can start by defining what exactly a cloud provides, at least with
respect to these markets, we will make a beginning in PEGGING DOWN the
DEFINITION of CLOUDS.

Once you define clouds precisely for each market, you will know what
products and services the clouds provide in each of these market segments.
This will give us a better understanding of the sizes of these markets.

Once you know the size of the markets and the products and services that
will be sold in those markets, you will have an idea about the revenues,
costs, pricing, profits, etc.

Then one would know the value proposition in much more concrete terms.
Right now it is all Azure (the UNCLOUDED sky).

If you do not define the clouds and their markets very well, in concrete
terms, IT WILL BE A CLOUD BUST JUST LIKE THE DOTCOM BUST.

It is just like a business plan, except that it has global scope.

Just as none of us can write a good business plan when we are all confused
about what product or service we are going to offer, what markets, what
market sizes, what profits, etc., etc....

There is no business plan/value proposition for clouds at this time, until
the clouds and their markets are DEFINED PRECISELY once and for all.





-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Miha Ahronovitz
Sent: Friday, April 03, 2009 11:22 AM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is
crystal clear...


Rao Dronamraju

unread,
Apr 3, 2009, 4:52:12 PM4/3/09
to cloud-c...@googlegroups.com

Tarry Singh

unread,
Apr 3, 2009, 5:25:04 PM4/3/09
to cloud-c...@googlegroups.com
Netbooks, which I like for what they can do in terms of empowering the "undocumented" new audience, the next 5.5 billion. It will also be a trap where a lot of folks get stuck in 1/2 yr deals where they end up paying a lot more. AT&T charging $60 pm for a data connection!?! In Europe we have fiber for EUR 19.99 pm: 20 up / 20 down.
--
Kind Regards,

Tarry Singh
______________________________________________________________
Founder, Avastu Blog: Research-Analysis-Ideation
"Do something with your ideas!"
Business Cell: +31630617633
Private Cell: +31629159400
LinkedIn: http://www.linkedin.com/in/tarrysingh
Blogs: http://www.ideationcloud.com

Mark Mackaway

unread,
Apr 4, 2009, 7:19:03 AM4/4/09
to cloud-c...@googlegroups.com

I agree with Tarry; it is dangerous to confuse a marketing ploy designed to extract money with a strategy to empower the masses.  Also, in world terms, this has a limited ability to reach the majority of those masses, and the masses it does reach are already well-developed technology markets.

 

However, it worked with mobile phones; so if the marketing companies in that market can sell the concept of being always available and constantly contactable as a good thing, maybe it is not so hard to sell it to the home Facebook "must-have" crowd, who want something trendy that is mentioned on television even though they are unsure of its actual value.  Is this cloud computing, or just web-based computing, which has been around for years and has grown a new moniker?

 

 

Mark

 

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Tarry Singh
Sent: Saturday, 4 April 2009 8:25 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Millions more (if not billion) will be on the cloud very soon.....

 

Netbooks, which I like for what they can do in terms of empowering the "undocumented" new audience, the next 5.5 billion. It will also be a trap where a lot of folks get stuck in 1/2 yr deals where they end up paying a lot more. AT&T charging $60 pm for a data connection!?! In Europe we have fiber for EUR 19.99 pm: 20 up / 20 down.


On Fri, Apr 3, 2009 at 10:52 PM, Rao Dronamraju <rao.dro...@sbcglobal.net> wrote:



Microsoft is nervous!!
http://cosmos.bcst.yahoo.com/up/player/popup/?rn=3906861&cl=12802041&ch=4226721&src=news






Rao Dronamraju

Apr 4, 2009, 11:11:52 AM
to cloud-c...@googlegroups.com

Here is an interesting paper from HP on Cloud Computing....

http://www.hpl.hp.com/techreports/2009/HPL-2009-23.html?mtxs=rss-hpl-tr

Regards,
Rao


-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Rao Dronamraju
Sent: Friday, April 03, 2009 1:03 PM
To: cloud-c...@googlegroups.com

Rao Dronamraju

Apr 4, 2009, 12:26:30 PM
to cloud-c...@googlegroups.com

Yes, I agree with both of you….

 

If AT&T drops the price of the monthly subscription to, say, $25 or $30 and adds voice (VoIP and/or cellular), this could capture some of the growing/existing mobile markets.

 

About your question whether this is cloud computing or web-based computing: it depends on what exactly cloud computing is...

 

Based on my understanding and my research, I define cloud computing as follows:

 

Cloud Computing = Internet/Web + Grid Computing + Utility Computing + Managed Services (assuming that Grid and/or Utility Computing cover the dynamic autonomic resource elasticity and pay-per-use requirements of cloud)

 

 


From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Mark Mackaway
Sent: Saturday, April 04, 2009 6:19 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Millions more (if not billion) will be on the cloud very soon.....

Jim Starkey

Apr 4, 2009, 2:10:08 PM
to cloud-c...@googlegroups.com
Rao Dronamraju wrote:
>
> Yes, I agree with both of you….
>
> If AT&T drops the price of the monthly subscription to say $25 or $30
> and add Voice (VoIP and/or Cellular) this could capture some of the
> growing/existing mobile markets.
>
But it would also undercut their pricing for cell and data. Don't hold
your breath on this one.
>
> About your question whether this is cloud computing or web-based
> computing, it depends on what exactly is cloud computing?....
>
> My understanding based on my research, I define cloud computing as
> follows:
>
> Cloud Computing = Internet/Web + Grid Computing + Utility Computing +
> Managed Services (assuming that Grid and/or Utility Computing cover
> the dynamic autonomic resource elasticity and pay-per-use requirements
> of cloud)
>
That's going to take a very long elevator ride to explain...

ademello

Apr 5, 2009, 10:55:42 AM
to Cloud Computing
My $0.02 is that the cloud computing taxonomy / cloud computing
definition / what is a cloud? / why is the sky blue discussion(s)
should have their own thread or, better yet, go private and come back
when they have reached a unanimous decision on the definition.

It's distracting and pulls all the other, more tangible discussions
into the abstract world of Internet forum lollygagging.

ademello

Apr 5, 2009, 11:05:58 AM
to Cloud Computing
Hi Miha,

Great post. One thing that continues to shock me about so many of the
people running around frothy-mouthed at the revolution of public
clouds is that they seem not to realize one basic tenet, and it goes to
your very first question: who is our competitor?

With many public cloud services, the competitor is internal IT. Your
nemesis is the CIO. For this reason, I have a sinking feeling that
public clouds are going to find a place in the tech world that is
outside of traditional IT. It's not a good business position to be in
when your customer's decision maker is your competitor. The stranglehold
of IT even in medium-size companies is daunting.

Contrast this to private cloud offerings. Who is your competitor?
Traditional vendors of IT products and services! Much better landscape
to go to battle on.

Private clouds make cohesive a bunch of ideas that have been
percolating in IT for some time: managed services, virtualization to
increase utilization of resources, use of commodity hardware, an API
to access the heartbeat of the company (business, customer data, etc.)
that lowers the cost of development, elastic resource management to
deal with seasonal/unexpected peaks, etc. This vision is much easier
to sell than the public cloud vision. And frankly, it makes more
business sense for everyone involved.

Rao Dronamraju

Apr 5, 2009, 12:34:20 PM
to cloud-c...@googlegroups.com

Ademello,

Right in this forum, there was, and there is even today, a lot of discussion
about what the cloud is...

Even across the industry there is still a debate as to what cloud
computing is...

Haven't you seen that even Larry Ellison was asking what cloud computing is?...

You think Larry does not have enough staff under him to define for him what
the cloud is?... Everyone is watching everyone else go around in circles in
defining clouds.

If you cannot define it, how can you have a value proposition?...

So the definition of cloud computing is a legitimate discussion.

"go private and come back when they have reached a unanimous decision on the
definition."

Have you ever heard: if you don't like the program, change the channel?...
Or maybe cancel the subscription?...



-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of ademello
Sent: Sunday, April 05, 2009 9:56 AM
To: Cloud Computing

Matthew Zito

Apr 5, 2009, 12:57:28 PM
to Cloud Computing


I think it's a little odd to claim that with public clouds the CIO is the "competitor".  Every CIO I know is looking for the intersection of:

- business performance
- cost
- risk

It's worth noting that many companies have already outsourced things they don't consider core to third parties.  Note the success of Akamai as a content delivery network, arguably an earlier generation of cloud platform, or the success of Rackspace as a managed hosting provider.  If moving certain functions out of IT were a negative to them, no one would ever outsource anything.

Instead, most CIOs are going to look for areas where they can get quick, clear wins at low risk.  As they get more comfortable, they'll consider moving more things onto the cloud.  Look at VMware: it started out in sandbox lab environments, then people started porting their dev boxes, then began moving second-tier production boxes, and now we're starting to see a very small number of organizations consider moving databases onto VMware.  The same model will probably apply in cloud environments.

Thanks,
Matt




-----Original Message-----
From: cloud-c...@googlegroups.com on behalf of ademello
Sent: Sun 4/5/2009 11:05 AM
To: Cloud Computing

Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...


> V = B - P
>
> Value = Benefits - Price


>
> As we live in a relative world, full of competitors, the equation
> should be written as:
>

> RV = RB - RP
> Relative Value = Relative Benefit - Relative Price

ademello

Apr 5, 2009, 1:05:54 PM
to Cloud Computing
At this point, Jesus himself could not define what a cloud is. It's a
mission loaded with agenda and propelled by sophism, a mission you
seem to have chosen to accept.

If you think you have something concrete and actually valuable to add
to the endless debate over what cloud computing is, then I think you
should start a new thread or forum to engage in that fascinating
exploration. I just don't see the need to continually interrupt
threads with these deep thoughts.

Best,

Jack Handey's Bizarro Twin
aka Aaron

On Apr 5, 12:34 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:

ademello

Apr 5, 2009, 9:06:31 PM
to Cloud Computing
I agree, I was being dogmatic. I think any business decision involving
the transition of sensitive customer or business data to a third
party is going to take a whole lot of convincing / magic / voodoo to
get your average CIO to agree with. Perhaps indeed the managed
hosting providers will have a better shot at this, but thinking from
the PoV of the customers I have worked with now and in the past, I can
see a whole lot of hurdles for the public cloud.

Much easier to sell them on a platform to build, manage, maintain a
private cloud.

Just my feeling.

Kevin Apte

Apr 6, 2009, 1:54:02 PM
to cloud-c...@googlegroups.com
S3 prices, as well as Amazon EC2 prices, are quite high, as are EC2 reserved-instance prices.  This makes the case for in-house clouds quite compelling.

Kevin

Kevin Apte

Apr 6, 2009, 3:42:57 PM
to cloud-c...@googlegroups.com
If I need a server with far less power than a standard Amazon EC2 on-demand instance, how would I get it? In other words, I want a virtual server that is a "daemon" and needs only 0.001 of a standard core. How would you implement this in the current Amazon scheme?  What if I need 10% of a standard core?

Kevin
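One back-of-the-envelope way to look at this question, assuming EC2 only sells whole instances: pack many tiny daemons onto one instance and split its cost among them. The instance price, core count, and daemon sizes below are illustrative assumptions, not Amazon's actual rates.

```python
def cost_per_daemon(instance_cost_per_hour, cores_per_instance, core_fraction_needed):
    """Effective hourly cost of one daemon when an instance is packed
    with as many such daemons as its cores can hold."""
    daemons_per_instance = round(cores_per_instance / core_fraction_needed)
    return instance_cost_per_hour / daemons_per_instance

# A daemon needing 0.001 of one core on a hypothetical 1-core $0.10/hr instance:
# 1000 daemons share the instance, so each pays a tenth of a cent per hour.
tiny = cost_per_daemon(0.10, 1, 0.001)

# A service needing 10% of a core: 10 services share the instance.
small = cost_per_daemon(0.10, 1, 0.10)
print(tiny, small)
```

The catch, of course, is that you become your own scheduler: isolating and metering those 1000 daemons is exactly the management overhead Amazon avoids by selling whole instances.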

Ray Nugent

Apr 6, 2009, 4:02:49 PM
to cloud-c...@googlegroups.com
It depends on how you calculate the price. If you don't consider the costs external to the server in the in-house scenario (power, cooling, infrastructure, cost of money, extra labor to pull cable and install racks, etc.), then you're correct. Otherwise it's a much more even comparison.

Ray
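Ray's point can be sketched as a rough cost model. Every figure below (hardware price, overheads, hourly rate) is an illustrative assumption, not a quote from any vendor:

```python
def inhouse_tco_per_year(hardware_price, years_amortized, power_cooling,
                         rack_space, admin_labor, cost_of_money_rate):
    """Annual cost of one in-house server, including the overheads
    that the sticker price leaves out."""
    amortized_hw = hardware_price / years_amortized
    financing = hardware_price * cost_of_money_rate
    return amortized_hw + power_cooling + rack_space + admin_labor + financing

def cloud_tco_per_year(hourly_rate, hours_used_per_year):
    """Annual cost of an on-demand instance, paying only for hours used."""
    return hourly_rate * hours_used_per_year

# Assumed figures: $3,000 server amortized over 3 years, $600/yr power and
# cooling, $400/yr rack space, $1,000/yr of an admin's time, 8% cost of money.
server = inhouse_tco_per_year(3000, 3, 600, 400, 1000, 0.08)

# Assumed $0.40/hr instance left running around the clock.
always_on = cloud_tco_per_year(0.40, 24 * 365)
print(server, always_on)
```

With the overheads counted, the two annual figures land in the same ballpark, which is Ray's point; drop the overhead terms and the in-house box looks artificially cheap.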


From: Kevin Apte <kevi...@gmail.com>
To: cloud-c...@googlegroups.com
Sent: Monday, April 6, 2009 10:54:02 AM

Chris Richardson

Apr 6, 2009, 4:06:57 PM
to cloud-c...@googlegroups.com
And, what is the cost of the storage solution you are comparing S3 to?

Chris

Khazret Sapenov

Apr 6, 2009, 4:16:53 PM
to cloud-c...@googlegroups.com
Since you have only one application, you might want to look for a SaaS solution comparable to 'micro-instances'.
Perhaps slicing a small EC2 instance introduces management overhead or affects SLAs, so they decided to go with this configuration as the cutoff point.

Rao Dronamraju

Apr 5, 2009, 3:43:17 PM
to cloud-c...@googlegroups.com

For those of you interested in the security of clouds...

http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci1352540,00.html?track=sy160

Peglar, Robert

Apr 7, 2009, 6:26:25 AM
to cloud-c...@googlegroups.com

If you measure it as $/TB per year, the cost is well under $1,000 per TB per year, and that is full operational cost (acquisition price + full cost of ownership).  You can also read/write the data as much as you want :-)

 

Rob

 

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Chris Richardson
Sent: Monday, April 06, 2009 3:07 PM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] Re: Value proposition for Cloud computing is crystal clear...

 

And, what is the cost of the storage solution you are comparing S3 to?

 

Chris

 

On Fri, Apr 3, 2009 at 12:51 PM, Peglar, Robert <Robert...@xiotech.com> wrote:


Hmmm.  S3 prices are not low; they are quite high for what you get.  You pay a very large premium for convenience.  $150 per TB per month ($1,800 per TB per year) is way high.  Their "large" (over 500 TB) rate of $1,200 per TB per year is still very high.



Even with their recent promotion of $30 per TB of storage per month, for 3 months, that's still high.

Oh, and that's just storage.  You want to read that TB?  That's more: reading it back is another $170.

So, storing 1 TB and reading it once per month is $320 per month, or $3,840 per year.  Like I said, you pay a very large premium for convenience and access.

Rob
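Rob's arithmetic checks out against the list prices he cites (assumed here to be the 2009-era rates of $0.15/GB-month for storage and $0.17/GB for data transfer out):

```python
GB_PER_TB = 1000                   # S3 bills in decimal gigabytes
STORAGE_CENTS_PER_GB_MONTH = 15    # $0.15/GB-month (assumed list price)
TRANSFER_OUT_CENTS_PER_GB = 17     # $0.17/GB (assumed list price)

storage_per_tb_month = STORAGE_CENTS_PER_GB_MONTH * GB_PER_TB / 100  # $150
read_per_tb = TRANSFER_OUT_CENTS_PER_GB * GB_PER_TB / 100            # $170

monthly = storage_per_tb_month + read_per_tb  # store 1 TB, read it once
yearly = monthly * 12
print(monthly, yearly)  # 320.0 3840.0
```

Note this omits the per-request fees and the cost of the initial upload, so the real bill for this usage pattern would be slightly higher still.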

 

 


 


Jim Starkey

Apr 7, 2009, 10:31:30 AM
to cloud-c...@googlegroups.com
Well, that's not quite true. It costs as much to put it there as to
store it, and as much to get it back as to put it there. The storage
cost alone compares to putting the bits under your pillow (OK, putting a
hard drive in a safety deposit box). And do note that the failure rate
of drives in safety deposit boxes is very, very low. And the bandwidth
of walking out of a bank with a TB drive in your pocket is high, but at
the cost of latency.

I have nothing against S3. It's a good idea and a good service. It
just bugs me when folks (not Amazon, mind you) shamelessly misrepresent it.
