The Rise of The Dark Cloud


Reuven Cohen

Jul 26, 2008, 3:02:06 PM7/26/08
to cloud-computing
For nearly as long as the internet has been around there have been
private subnetworks called darknets. These covert and often secret
networks were typically formed by decentralized groups of people
sharing information, computing resources and communications, often
for illegal activities.

Recently there has been a resurgence of interest in the darknet,
ranging from the more unsavory uses such as P2P file sharing and
botnets to more mainstream ones such as inter-government information
sharing, bandwidth alliances or even offensive military botnets. All
of these activities point to a growing interest in a form of covert
computing I call "dark cloud computing", whereby a private computing
alliance is formed. In this alliance, members pool computing
resources to address the ever-expanding need for capacity.

According to my favorite source of quick disinformation, the term
"darknet" was originally coined in the 1970s to designate networks
that were isolated from ARPANET (which evolved into the Internet) for
security purposes. Some darknets could receive data from ARPANET but
had addresses that did not appear in the network lists and would not
answer pings or other inquiries. More recently the term has been
associated with dark fiber networks, private file sharing networks
and distributed criminal botnets.

The botnet is quickly becoming the tool of choice for governments
around the globe. Recently Col. Charles W. Williamson III, staff
judge advocate, Air Force Intelligence, Surveillance and
Reconnaissance Agency, wrote in Armed Forces Journal about the need
for botnets within the US DoD. In his report he writes: "The world
has abandoned a fortress mentality in the real world, and we need to
move beyond it in cyberspace. America needs a network that can
project power by building an af.mil robot network (botnet) that can
direct such massive amounts of traffic to target computers that they
can no longer communicate and become no more useful to our
adversaries than hunks of metal and plastic. America needs the
ability to carpet bomb in cyberspace to create the deterrent we lack."

I highly doubt the US is alone in this thinking. The world is more
than ever driven by information, and botnet usage is not limited to
governments; enterprises are interested as well. In our modern
information-driven economy the distinction between corporation and
governmental organization has become increasingly blurred. Corporate
entities are quickly realizing they need the same network
protections. By covertly pooling resources in the form of a dark
cloud or cloud alliance, members are able to counter or block network
threats in a private, anonymous and quarantined fashion. This type of
distributed network environment may act as an early warning and
threat avoidance system. An anonymous cloud computing alliance would
enable a network of decentralized nodes capable of neutralizing
potential threats through a series of countermeasures.

My question is: Are we on the brink of seeing the rise of private
corporate darknets, aka dark clouds? And if so, what are the legal
ramifications, and do they outweigh the need to protect ourselves
from criminals who can and will use these tactics against us?

(Original Post: http://elasticvapor.com/2008/07/rise-of-dark-cloud.html)

--
--

Reuven Cohen
Founder & Chief Technologist, Enomaly Inc.

blog > www.elasticvapor.com
-
Get Linked in> http://linkedin.com/pub/0/b72/7b4

Khazret Sapenov

Jul 26, 2008, 3:25:46 PM7/26/08
to cloud-c...@googlegroups.com
In my opinion, at this stage it would be useful to formulate the "3 laws of Cloud Computing",
borrowed from Isaac Asimov and adapted to botnet facilities:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
more at http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
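Taking the joke half-seriously, such laws could be encoded as an ordered policy check. A toy sketch in Python (all field and function names here are hypothetical, invented purely for illustration):

```python
# Toy sketch: Asimov-style ordered policy check for a cloud node.
# All names are hypothetical; this only illustrates the precedence idea.

def permitted(action):
    """Return True if an action passes the three 'laws' in priority order."""
    # First law: never harm a human (here: never attack a host).
    if action.get("harms_humans"):
        return False
    # Second law: obey operator orders unless they conflict with the first law.
    if action.get("ordered_by_operator"):
        return True
    # Third law: self-preservation is allowed only when nothing above forbids it.
    return action.get("self_preserving", False)

assert permitted({"ordered_by_operator": True}) is True
assert permitted({"ordered_by_operator": True, "harms_humans": True}) is False
```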

Reuven Cohen

Jul 26, 2008, 3:35:28 PM7/26/08
to cloud-c...@googlegroups.com
Khaz, that's awesome!

Now you know my side project: I call it Singularity (one cloud to
rule them all). The only problem is I keep expecting a Terminator robot
from the future to show up at my door.

r/c

--
--

www.enomaly.com :: 416 848 6036 x 1
skype: ruv.net // aol: ruv6

Sam Charrington

Jul 26, 2008, 3:50:34 PM7/26/08
to cloud-c...@googlegroups.com
A former Appistry colleague always believed that the self-managing & self-organizing behaviors of our product were the beginning of SkyNet.

If we as a group can define these three laws, I will try to get the implementation of them onto our product roadmap.

:-)

Sam

Khazret Sapenov

Jul 26, 2008, 4:00:04 PM7/26/08
to cloud-c...@googlegroups.com
Perhaps cloud computing solutions should incorporate the concept of autonomic computing to a certain degree.
 
quote:

A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.

In a self-managing (autonomic) system, the human operator takes on a new role: he does not control the system directly. Instead, he defines general policies and rules that serve as input for the self-management process. For this process, IBM has defined the following four functional areas:

  • Self-Configuration: Automatic configuration of components;
  • Self-Healing: Automatic discovery, and correction of faults;
  • Self-Optimization: Automatic monitoring and control of resources to ensure the optimal functioning with respect to the defined requirements;
  • Self-Protection: Proactive identification and protection from arbitrary attacks.

IBM defined five evolutionary levels, or the Autonomic deployment model, for its deployment: Level 1 is the basic level that presents the current situation where systems are essentially managed manually. Levels 2 - 4 introduce increasingly automated management functions, while level 5 represents the ultimate goal of autonomic, self-managing systems.

source: http://en.wikipedia.org/wiki/Autonomic_computing
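As a rough illustration of the Self-Healing area above, a minimal supervisor loop can detect failed components and restart them without human intervention. A sketch only, with invented names (not IBM's or any vendor's API):

```python
# Minimal sketch of an autonomic self-healing loop: detect faults and
# correct them without human intervention. All names are illustrative.

def self_heal(components, is_healthy, restart):
    """One pass of fault detection and correction; returns restarted names."""
    restarted = []
    for name in components:
        if not is_healthy(name):     # Self-Healing: automatic fault discovery
            restart(name)            # ...and automatic correction
            restarted.append(name)
    return restarted

# Example: component "b" is down and gets restarted.
down = {"b"}
actions = []
result = self_heal(["a", "b", "c"],
                   is_healthy=lambda n: n not in down,
                   restart=lambda n: actions.append(n))
assert result == ["b"] and actions == ["b"]
```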

Ray Nugent

Jul 26, 2008, 4:05:25 PM7/26/08
to cloud-c...@googlegroups.com
Khaz, that would be the "Holy Grail" folks have been looking for, for a couple of years now. Please let us know when you find it...:-)

Ray

Krishna Sankar (ksankar)

Jul 26, 2008, 4:39:53 PM7/26/08
to cloud-c...@googlegroups.com

Yep, interesting that you mention autonomics. They are very relevant, especially so in a cloud environment. Usually these kinds of systems do not get there in one shot, but go through stages – viz. connected, reactive, proactive and finally adaptive/autonomic. A long time ago, we worked on a paper on this topic: http://www.ibm.com/developerworks/autonomic/library/ac-summary/ac-cisco.html. I think cloud computing is in the "connected" stage. We have to build "reactive-ness" into protocols like the AWS gossip protocol and create baselines before even going to a proactive state. BTW, many of the network protocols and their state machines handle these situations very well.
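The staged progression just described (connected, reactive, proactive, adaptive/autonomic) could be sketched as an ordered maturity scale; a hypothetical illustration, not something from the paper:

```python
from enum import IntEnum

# Sketch of the maturity stages mentioned above, in order. A system
# moves up one stage at a time as capabilities are added.

class CloudMaturity(IntEnum):
    CONNECTED = 1   # components merely networked together
    REACTIVE = 2    # responds to faults after they occur
    PROACTIVE = 3   # anticipates problems using baselines
    ADAPTIVE = 4    # fully autonomic, self-managing

def next_stage(stage):
    """Advance one stage, capping at the adaptive/autonomic level."""
    return CloudMaturity(min(stage + 1, CloudMaturity.ADAPTIVE))

assert next_stage(CloudMaturity.CONNECTED) is CloudMaturity.REACTIVE
assert next_stage(CloudMaturity.ADAPTIVE) is CloudMaturity.ADAPTIVE
```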

 

Cheers

<k/>

 


Geva Perry

Jul 26, 2008, 7:45:57 PM7/26/08
to cloud-c...@googlegroups.com

Many of the discussions on this forum, including the one below, keep bringing to mind the “Eight Fallacies of Distributed Computing” (http://en.wikipedia.org/wiki/Fallacies_of_distributed_computing). Perhaps the eight “laws” of cloud computing should be the reverse of them:

 

 

Unlike traditional architecture, cloud applications will be designed with the following assumptions:

  1. The network is *not* reliable.
  2. Latency is *not* zero.
  3. Bandwidth is *not* infinite.
  4. The network is *not* secure.
  5. Topology *does* change.
  6. There is *not* one administrator.
  7. Transport cost is *not* zero.
  8. The network is *not* homogeneous.

Some platforms, such as GigaSpaces and others, have been designed in such a way. For example, I don’t know if it’s holy grail stuff or not, but we provide self-healing and self-optimizing capabilities via our SLA-Driven Container. It’s been particularly appealing to EC2 users who ask us “What happens if an MI fails?” Our answer is simple: “You don’t care.”
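Assumption 1 above ("the network is *not* reliable") is, for instance, why cloud clients typically wrap remote calls in retries with backoff rather than assuming a call succeeds. A generic sketch (function names are illustrative, not any particular product's API):

```python
import time

# Sketch: retry with exponential backoff, the standard response to
# assumption 1 above ("the network is *not* reliable"). Illustrative only.

def call_with_retries(remote_call, attempts=4, base_delay=0.01):
    for i in range(attempts):
        try:
            return remote_call()
        except ConnectionError:
            if i == attempts - 1:
                raise                          # give up after the last attempt
            time.sleep(base_delay * (2 ** i))  # exponential backoff

# Example: a flaky call that fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert call_with_retries(flaky) == "ok"
assert state["calls"] == 3
```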

 

Geva Perry

www.gigaspaces.com


Ray Nugent

Jul 27, 2008, 2:03:10 AM7/27/08
to cloud-c...@googlegroups.com
Hey, Geva, GigaSpaces is cool stuff, but at $1.60 an instance hour I'd say it's far from grail status (much less Holy). Given the number of instances one needs to make it work properly, it's priced just like enterprise software. I know you guys are entitled to a reasonable return on your investment, but...

The big potential draw of cloud computing is massive scalability at low cost. Doing the math, an instance year for a small but functioning GigaSpaces system is, at minimum, $63K a year (3 large instances at $0.80 plus $1.60 per instance-hour, times 8736 hours per year). This, of course, does not include other vendors' charges - I'm guessing Oracle will be somewhere in the $3-5 range. All of a sudden the stack is getting expensive...
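Spelling out the arithmetic above (reading the figures as $0.80/hour for an EC2 large instance plus $1.60/hour per instance for the software, across 3 instances):

```python
# Spelling out the cost estimate above: 3 large instances, each billed
# at $0.80/hr (EC2) plus $1.60/hr (software), running all year.
# 8736 hours = 52 weeks x 168 hours, as used in the original post.

instances = 3
ec2_rate = 0.80          # $/instance-hour
software_rate = 1.60     # $/instance-hour
hours_per_year = 8736    # 52 * 168

annual_cost = instances * (ec2_rate + software_rate) * hours_per_year
assert hours_per_year == 52 * 168
assert round(annual_cost) == 62899   # roughly the $63K/year quoted
```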

Ray

Dennis Reedy

Jul 27, 2008, 10:05:35 AM7/27/08
to cloud-c...@googlegroups.com

Well, you could use Rio (https://rio.dev.java.net) for free on EC2 [1]. AFAIK, Rio is one of the key enablers upon which the GigaSpaces "SLA-Driven Container" is built (of course without their space-based implementation and Open Spaces Spring support). Rio provides the dynamic deployment, built-in fault detection and handling, and policy-driven support here. With the next release of the Rio project you will also be able to dynamically deploy and manage almost any JEE application [2], not just applications written to use the dynamic application support that Rio provides. If this is of interest, please let me know.

Cheers

Dennis

1. Check out this entry for starting Rio on EC2: http://blog.elastic-grid.com/elastic-grid/how-to-start-rio-on-amazon-ec2/ 
2. The beginning of this had been written up a few years ago here: http://www.comp.lancs.ac.uk/computing/research/mpg/reflection/papers/rio-dynamic-sca.pdf

Geir Magnusson Jr.

Jul 27, 2008, 4:48:28 PM7/27/08
to cloud-c...@googlegroups.com

On Jul 26, 2008, at 7:45 PM, Geva Perry wrote:

> Many of the discussions on this forum, including the one below keep
> bringing to mind the “Eight Fallacies of Distributed Computing”http://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
> . Perhaps the eight “laws” of cloud computing should be the reverse
> of them:

Isn't that the point of calling them fallacies? That the reverse is
actually true? IOW, these "laws" are generally understood already by
anyone doing distributed computing?


Kevin L Jackson

Jul 28, 2008, 9:16:55 AM7/28/08
to Cloud Computing
While this may sound like an Internet United Nations Better Business
Bureau, the underlying questions point right at the importance of cloud
computing for national security. As the world embraces cloud computing
for its ubiquity, efficiency and cost savings, the world economic
engine will become ever more dependent on cloud security and the active
management of public-private cloud interfaces.

No wonder the US DoD is jumping on the bandwagon so quickly.

Moshref

Jul 28, 2008, 3:44:32 PM7/28/08
to cloud-c...@googlegroups.com, Geir Magnusson Jr.
All the self-healing, etc., has been implemented on SANs in enterprise
environments for mission-critical applications such as finance, oil and
gas, and black boxes at the DoD.
The storage side of cloud computing is in very good shape going forward.



Geva Perry

Jul 29, 2008, 1:06:53 PM7/29/08
to Cloud Computing
Hey, Ray. Thanks for bringing this up. It gives me an opportunity to
explain some things about GigaSpaces. My full response was a bit long
so I decided to post it as a blog: http://gevaperry.typepad.com/main/2008/07/gigaspaces-and.html

The short of that post is:
Although I appreciate your saying "GigaSpaces is cool stuff", it's a
bit more than that, in the sense that it brings hard cost savings
compared to the alternatives. I explain how we do that in the blog
post.

You talk about "massive scalability" on the cloud but then give an
example of 3 servers running 24/7/365. I would argue that you
shouldn't really use a cloud for such a scenario, but rather sign up
for the GigaSpaces Start-Up Program (http://www.gigaspaces.com/startup),
get the license for free and get three dedicated servers.
It'll be much cheaper. GigaSpaces (as well as Amazon EC2) shines when
it comes to scalability, and particularly scaling on-demand to handle
growing and fluctuating loads.

On Jul 26, 11:03 pm, Ray Nugent <rnug...@yahoo.com> wrote:
> Hey, Geva, Gigaspaces is cool stuff but at $1.60 an instance hour I'd say it's far from grail status (much less Holy.) Given the number of instances one needs to make it work properly it's priced just like enterprise software. I know you guys are entitled to a reasonable return on you're investment but...
>
> The big potential draw of cloud computing is massive scalability at low cost. Doing the math, an instance year for a small but functioning Gigaspaces system is, at the minimum, $63K a year (3 large instances @ .80 plus $1.60 times 8736 hours per year.) This, of course, does not include other vendors charges - I'm guessing Oracle will be somewhere in the $3-5 dollar range. All of the sudden the stack is getting expensive...
>
> Ray

Khazret Sapenov

Aug 1, 2008, 1:34:30 PM8/1/08
to cloud-c...@googlegroups.com
Interesting concept of using humans to do computation.

The author concludes that machines,
"rather than killing, they actually have to keep us around,
because there are problems that we can solve, that they cannot yet solve."
KS

Sassa NF

Aug 1, 2008, 3:16:46 PM8/1/08
to cloud-c...@googlegroups.com
But they don't need to solve some of the problems we have to solve (if
that's the captcha video)


Sassa

2008/8/1 Khazret Sapenov <sap...@gmail.com>:

Barr, Bill

Aug 11, 2008, 4:24:54 PM8/11/08
to cloud-c...@googlegroups.com

Brenda Michelson summarizes an interesting article in which a researcher found that the real cost of that $2,500 server runs about $8K-$15K.

 

http://blog.elementallinks.net/2008/08/what-does-that.html

 

 

Chris Marino

Aug 11, 2008, 5:46:08 PM8/11/08
to cloud-c...@googlegroups.com
Yes, that is very interesting.  After reading it, it occurred to me that there's a free rider in there somewhere.  Someone along the way isn't bearing their fully loaded costs.  I've got a hunch that the hosting providers are subsidizing a large part of this, which explains why that part of the business is really hard to compete in (see the Rackspace IPO thread...).
 
As for the free rider, I think it's me!  For me, putting a server out at Rackspace, ServerBeach, etc. is a no-brainer. Someone can make money at those rates, but it isn't me.
 
CM

Khazret Sapenov

Aug 11, 2008, 8:41:10 PM8/11/08
to cloud-c...@googlegroups.com
This is an argument in favour of server consolidation.

Jim Starkey

Aug 12, 2008, 8:30:44 AM8/12/08
to cloud-c...@googlegroups.com
OK, some quibbles.

Cheap servers don't cost $2,500.  They cost about $1,200 in quantity one (4 cores, 4GB ECC, 2x Gigabit ports, 1U height, 4 hot-swap SATA slots, 1 disk).

Second, 10,000 servers (2005 estimate) times $8.3K to $15.4K yields a national total of $830B to $1.54T in cost.  That does seem a little over the top.

Third, most of his costs are per acre (power and cooling, while non-trivial, are still minor) and for manpower, not for hardware.  This suggests that using cheap 1U servers is cheaper than more expensive 2U or 4U servers.

Fourth, if the cloud were organized as applications straddling servers rather than inefficient single virtual server instances, the issue would be aggregate computing power rather than the speed of individual machines; obsolescence would no longer be an issue and servers would have a much longer economic lifetime.

Fifth, running applications in a well-architected multi-server application platform centralizes security and administration policy rather than distributing it to thousands of individual virtual machine instances, reducing the manpower costs.

The ranking of costs is probably something like this:
  1. Personnel
  2. Real estate and environment
  3. Power and cooling
  4. Capital cost of hardware
The place to start looking for savings is at the top, not the bottom.  The cloud model of vast numbers of virtual machine instances requires a vast number of administrators.  The cloud model of arbitrarily large application platforms provides the opportunity to cut real costs.  I'm sorry that so many people are eager to define it out of existence.

Khazret Sapenov

Aug 12, 2008, 9:37:56 AM8/12/08
to cloud-c...@googlegroups.com


On Tue, Aug 12, 2008 at 8:30 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
...
...

 
 
 
I agree that having an application platform would be more compact; however, this scenario is rare for hosting providers.
 
Applications need some level of isolation, provided by virtual containers, even in the enterprise (you don't want people from department A to get access to applications of department B, or even a chance to poke a memory segment or observe network traffic).
 
Virtualization also doesn't require modification of the code and allows a wider range of applications to run (I would estimate the whole spectrum might run without problems).

Hall, Jacob

Aug 12, 2008, 10:34:08 AM8/12/08
to cloud-c...@googlegroups.com

* A resend was requested.

 

If everyone decoupled storage from compute, placed the cheap servers in a rack-sized chassis and called it a “processor array”, then the lower-cost 1U solution would truly be lower cost.  Smaller compute units are more readily recycled/reused and have the benefit of using smaller power supplies, which enables energy to be fully switched off when the compute is not in use.  Also… based on what I have learned, the energy required to cool a 1U is typically higher than a 2U/4U due to the smaller fans running at a higher RPM.  The “processor array” model eliminates this problem (how depends upon the vendor implementation).  Cooling is only part of the problem though… hence the Processor Array concept in full.

 

The attached PPT describes a “processor array”… which is more efficient and manageable than rack-mount or blade chassis in enterprise-class data centers.  Processor Arrays are the silver lining for clouds and fabric computing environments.  If you agree, tell your vendor.  If you disagree, then reply why… and let’s discuss.

 

Regards,

Jacob


From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Khazret Sapenov


Sent: Monday, August 11, 2008 8:41 PM
To: cloud-c...@googlegroups.com

Processor Array - Basic Idea v1.4.ppt

Jim Starkey

Aug 12, 2008, 10:55:23 AM8/12/08
to cloud-c...@googlegroups.com
Khazret Sapenov wrote:
>
>
> On Tue, Aug 12, 2008 at 8:30 AM, Jim Starkey <jsta...@nimbusdb.com
> <mailto:jsta...@nimbusdb.com>> wrote:
>
> ...
> ...
> Fourth, if the cloud were organized as applications straddling
> servers rather than inefficient single virtual server instances,
> the issue would be aggregate computing power rather the speed of
> individual machines, obsolescence would no longer be an issue and
> servers would have a much longer economic lifetime.
>
> Fifth, running applications in an well architected multi-server
> application platform centralizes security and administration
> policy rather than distributing it to thousands of individual
> virtual machine instances, reducing the manpower costs.
>
> The ranking of costs is probably something like this:
>
> 1. Personnel
> 2. Real estate and environment
> 3. Power and cooling
> 4. Capital cost of hardware

>
> The place to start looking for savings is at the top, not the
> bottom. The cloud model of vast number of virtual machine
> instances requires a vast number of administrators. The cloud
> model of arbitrarily large application platforms provides the
> opportunity to cut real costs. I'm sorry that so many people are
> eager to define it out of existence.
>
>
>
>
> I agree that having an application platform would be more compact;
> however, this scenario is rare for hosting providers.
Yup, the technology is evolving.  Google is certainly the pioneer, but
there are lots of other people working on this. There is no doubt in my
mind that this is where the industry has to and will go.

>
> Applications need some level of isolation, provided by virtual
> containers, even in the enterprise (you don't want people from department
> A to get access to the applications of department B, or even a chance to
> poke a memory segment or observe network traffic).
Absolutely. Applications have to live in a managed sandbox. This has
been known and understood for well over a decade. But that's the easy
part of the problem. Shared consistent data across the cloud is the
hard part. Appropriate database service is one solution, but there's no
agreement on what that means yet.

A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance. We can do better than this.


>
> Virtualization also doesn't require modification of the code and allows
> a wider range of applications to run (I would estimate the whole spectrum
> might run without problems).

Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift.  Effective use of a cloud requires a different
programming paradigm, just as GUIs require a different programming
paradigm than the command line.  The guys who tried to salvage their
command-line based technologies just up and died.  The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.

Running virtual machines in a cloud is the same as running DOS shells on
Windows.  Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...

Christopher Steel

Aug 12, 2008, 10:57:28 AM8/12/08
to cloud-c...@googlegroups.com

Applications do require a level of isolation, but that does not imply a virtual container. Virtual containers are ideal for hosting existing applications that were not designed for the cloud. They provide an excellent stop-gap measure, which Amazon and others are taking advantage of. What needs to happen now is that we need to start providing toolkits and best practices for developing cloud-based applications. Current virtual containers have an extremely high overhead; often they take up more memory, disk space, CPU, etc., than the applications they host. We need new lightweight containers that will host applications designed for cloud computing with minimal overhead, yet with the necessary level of isolation.
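To make "lightweight container" concrete, here is a rough POSIX-only sketch of my own (not any vendor's product, and the function name is mine): process-level isolation using plain OS resource limits. A real cloud container would of course also need filesystem, namespace and network isolation.

```python
import resource
import subprocess
import sys

def run_isolated(code, cpu_seconds=2, mem_bytes=1 << 30):
    """Run a snippet in a child process with CPU and address-space caps.

    Only a sketch of process-level isolation; a real container would
    also need filesystem, namespace and network isolation.
    """
    def limit():
        # Applied in the child between fork() and exec().
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit,          # apply the rlimits in the child only
        capture_output=True,
        text=True,
    )
    return proc.returncode, proc.stdout

rc, out = run_isolated("print('hello from the sandbox')")
```

The point is the overhead: the "container" here is one extra process, not an entire guest operating system.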

 

-Chris

 

From: Khazret Sapenov [mailto:sap...@gmail.com]
Sent: Tuesday, August 12, 2008 9:38 AM
To: cloud-c...@googlegroups.com
Subject: Re: Expensive, Cheap Servers

 

 

On Tue, Aug 12, 2008 at 8:30 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:

Khazret Sapenov

Aug 12, 2008, 11:35:52 AM8/12/08
to cloud-c...@googlegroups.com
On Tue, Aug 12, 2008 at 10:55 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
...

> Applications need some level of isolation, provided by virtual
> containers, even in the enterprise (you don't want people from department
> A to get access to the applications of department B, or even a chance to
> poke a memory segment or observe network traffic).
Absolutely.  Applications have to live in a managed sandbox.  This has
been known and understood for well over a decade.  But that's the easy
part of the problem.  Shared consistent data across the cloud is the
hard part.  Appropriate database service is one solution, but there's no
agreement on what that means yet.

A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance.  We can do better than this.
 
These statements are all true, but only within a very narrow range of platforms/applications.
Google AppEngine ignores existing Java, C++ and other applications. Companies have
spent oodles of money on their software, which has become woven into business processes.
It is not practical to rewrite everything (make it cloud-aware) for the sake of another 20% boost in application
performance, with other drawbacks (security etc.).
 

>
> Virtualization also doesn't require modification of the code and allows
> a wider range of applications to run (I would estimate the whole spectrum
> might run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift.  Effective use of a cloud requires a different
programming paradigm, just as GUIs require a different programming
paradigm than the command line.  The guys who tried to salvage their
command-line based technologies just up and died.  The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.

Running virtual machines in a cloud is the same as running DOS shells on
Windows.  Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...
 
Well, I personally like the command line :)
Perhaps Microsoft does too, since they've added
lots of command-line stuff to recent software.

I understand that there are other, more efficient technologies than virtual machine instances,
but at the moment there's no commercially available alternative that might satisfy the majority of customers.
Thus discussions of a tighter, more secure application container are rather hypothetical. Correct me if I'm wrong.
 

Barr, Bill

Aug 12, 2008, 11:45:45 AM8/12/08
to cloud-c...@googlegroups.com

Well, that's a correct perspective from a desktop or workstation viewpoint. Mainframes and mini-computers had/have extraordinarily efficient virtual machine mechanisms and management tools. In fact, one of the most effective and efficient ways to deploy Linux is on an IBM mainframe running VM. Moreover, it provides the ability to either clone Linux instances or share the same instance but give each process its own sandbox. I don't know what the state of the art is today, but that's what I was able to do 7 years ago.

 

 

From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Khazret Sapenov
Sent: Tuesday, August 12, 2008 8:36 AM
To: cloud-c...@googlegroups.com
Subject: Re: Expensive, Cheap Servers

Jim Starkey

Aug 12, 2008, 11:58:47 AM8/12/08
to cloud-c...@googlegroups.com
Khazret Sapenov wrote:


On Tue, Aug 12, 2008 at 10:55 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
...
> Applications need some level of isolation, provided by virtual
> containers, even in the enterprise (you don't want people from department
> A to get access to the applications of department B, or even a chance to
> poke a memory segment or observe network traffic).
Absolutely.  Applications have to live in a managed sandbox.  This has
been known and understood for well over a decade.  But that's the easy
part of the problem.  Shared consistent data across the cloud is the
hard part.  Appropriate database service is one solution, but there's no
agreement on what that means yet.

A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance.  We can do better than this.
 
These statements are all true, but only within a very narrow range of platforms/applications.
Google AppEngine ignores existing Java, C++ and other applications. Companies have
spent oodles of money on their software, which has become woven into business processes.
It is not practical to rewrite everything (make it cloud-aware) for the sake of another 20% boost in application
performance, with other drawbacks (security etc.).
You are missing the point.  This isn't about performance, it's about scalability.  A 20% improvement in an application with satisfactory performance is irrelevant.  A 20% improvement in the performance of an unsatisfactorily performing application is equally irrelevant if what is actually needed is to double the number of users.

Nobody wants to rewrite anything (actually, that's not true: Google schedules a rewrite every two years for everything in their stable).  But if there is a benefit, they will do so.  If this were not the case, everything interactive would still be running under CMS.

Case in point: Java.  Twelve years ago, there wasn't a single application written in Java.  Today, it's the language of choice for application servers, despite the fact that Sun initially pushed it solely for thin clients.  When technology changes, applications follow.

Bottom line: Companies will do what they need to do for scalable applications.  Sure, they'd prefer not to, but if meeting their goals requires changing technology, they will change.  (Really, this should be obvious by now.)

 

>
> Virtualization also doesn't require modification of the code and allows
> a wider range of applications to run (I would estimate the whole spectrum
> might run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift.  Effective use of a cloud requires a different
programming paradigm, just as GUIs require a different programming
paradigm than the command line.  The guys who tried to salvage their
command-line based technologies just up and died.  The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.

Running virtual machines in a cloud is the same as running DOS shells on
Windows.  Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...
 
Well, I personally like the command line :)
Perhaps Microsoft does too, since they've added
lots of command-line stuff to recent software.

I understand that there are other, more efficient technologies than virtual machine instances,
but at the moment there's no commercially available alternative that might satisfy the majority of customers.
Thus discussions of a tighter, more secure application container are rather hypothetical. Correct me if I'm wrong.
Uh, this IS a list about cloud computing, isn't it?  This is the sort of forum where ideas are cross-pollinated, examined, criticized, and the like.

And no, it isn't just hypothetical.  Cameron, Billy Newport, myself, and certainly many others are working on this every day (disclosure: I'm on vacation in Maine, sitting below on a sailboat, waiting for the rain to stop.  Go figure.).

However, if any of this is going to happen on this list, Reuven and his acolytes are going to have to lighten up on their definition of cloud computing.  Otherwise, those of us who are working to make it happen are going to go somewhere else.

Khazret Sapenov

Aug 12, 2008, 1:14:59 PM8/12/08
to cloud-c...@googlegroups.com
Virtualization doesn't prevent scalability; it just takes more resources, but it also brings benefits. Currently Amazon EC2 allows your application, running inside a virtual container, to scale horizontally and vertically (up to 20 cores on specific types of VM instances).
 
 
 

 

>
> Virtualization also doesn't require modification of the code and allows
> a wider range of applications to run (I would estimate the whole spectrum
> might run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift.  Effective use of a cloud requires a different
programming paradigm, just as GUIs require a different programming
paradigm than the command line.  The guys who tried to salvage their
command-line based technologies just up and died.  The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.

Running virtual machines in a cloud is the same as running DOS shells on
Windows.  Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...
 
Well, I personally like the command line :)
Perhaps Microsoft does too, since they've added
lots of command-line stuff to recent software.

I understand that there are other, more efficient technologies than virtual machine instances,
but at the moment there's no commercially available alternative that might satisfy the majority of customers.
Thus discussions of a tighter, more secure application container are rather hypothetical. Correct me if I'm wrong.
Uh, this IS a list about cloud computing, isn't it?  This is the sort of forum where ideas are cross-pollinated, examined, criticized, and the like.

And no, it isn't just hypothetical.  Cameron, Billy Newport, myself, and certainly many others are working on this every day (disclosure: I'm on vacation in Maine, sitting below on a sailboat, waiting for the rain to stop.  Go figure.).

However, if any of this is going to happen on this list, Reuven and his acolytes are going to have to lighten up on their definition of cloud computing.  Otherwise, those of us who are working to make it happen are going to go somewhere else.
 
 
Sorry, I don't understand this at all. Do you mean that you are working to make the cloud happen and all the others are just watching? :)

Alexis Richardson

Aug 12, 2008, 2:32:59 PM8/12/08
to cloud-c...@googlegroups.com
Jim's Java (JVM) analogy is spot on.

Until recently virtualization was in the equivalent of the 'applet era'.
Virtual desktops are a client tech on demand for a location
independent world, with a manageable and coherent back end where
performance is not a showstopper.

The next step is scalable serverside deployments for delivery of more
complex apps over the network. This is analogous to what cloud is
trying to do, with the kicker of a self-service pay-per-use business
model. Ditto the private cloud which is no more than shorthand for a
certain kind of dynamic data center with visible and auditable cost
attribution.

Later, hardware, container, and software advances make the old 'manage
resources yourself to get pedal to the metal' approach may lead to
deprecation of on-metal technologies (like C++ is to Java), at least
for greenfield projects. We are not there yet but as Jim says, it
does not matter.

This is known, at least in biz school, as a technology disruption.

What matters today, really, is how we navigate the middle stage and
what it means for Real People.

alexis
CohesiveFT

Alexis Richardson

Aug 12, 2008, 2:34:38 PM8/12/08
to cloud-c...@googlegroups.com
Oops, I meant to say:

Later, hardware, container, and software advances make the old 'manage
resources yourself to get pedal to the metal' approach somewhat
redundant, and may lead to deprecation of on-metal technologies (like
C++ is to Java), at least for greenfield projects.  We are not there
yet but, as Jim says, it does not matter.

Khazret Sapenov

Aug 12, 2008, 4:50:43 PM8/12/08
to cloud-c...@googlegroups.com
On Tue, Aug 12, 2008 at 2:32 PM, Alexis Richardson <alexis.r...@gmail.com> wrote:

Jim's Java (JVM) analogy is spot on.

Until recently virtualization was in the equivalent of the 'applet era'.
Virtual desktops are a client tech on demand for a location
independent world, with a manageable and coherent back end where
performance is not a showstopper.

The next step is scalable serverside deployments for delivery of more
complex apps over the network.  This is analogous to what cloud is
trying to do, with the kicker of a self-service pay-per-use business
model.  Ditto the private cloud which is no more than shorthand for a
certain kind of dynamic data center with visible and auditable cost
attribution.

Later, hardware, container, and software advances make the old 'manage
resources yourself to get pedal to the metal' approach may lead to
deprecation of on-metal technologies (like C++ is to Java), at least
for greenfield projects.  We are not there yet but as Jim says, it
does not matter.

This is known, at least in biz school, as a technology disruption.

What matters today, really, is how we navigate the middle stage and
what it means for Real People.

alexis
CohesiveFT
 
 
Not sure whether the Java comparison is a good one.

I'm aware of the coming changes in the application landscape and understand their potential influence on software.
I'd like to be forward-thinking and futuristic (and I am actually working on some CC concepts), but I prefer to stay in reality (until the 'middle stage' arrives as a commercial cloud offering).
You should walk before you run.
 
Going back to the topic, here is another point of view:
 
quote:

Energy Efficiency — Green z

When you look at your data center what do you see? Maybe racks full of servers and storage systems. Maybe some of the connections between them. But do you see BTUs and kilowatts? You should. Power and cooling demands are becoming more important as energy prices rise and utilities restrict the amount of power some customers can use. IBM System z offers extraordinary energy efficiency capabilities to help you address these issues.

According to one analyst, the IBM System z platform can be configured to require 1/12th the electricity as a distributed server farm with equivalent processor capability.  1

If you look at just one possible energy scenario, the numbers are spectacular. According to a Robert Frances Group study a company analyzed consolidation of hundreds of UNIX servers to one System z mainframe. The calculations showed monthly power costs of $30,165 for the UNIX servers versus $905 for System z. That company calculated they would save over $350,000 in power costs annually.  2

How does System z do this? It's about IBM mainframe technology and consolidation.

IBM System z servers can run at utilization rates as high as 100% for long periods of time. This means that power that is consumed is used for transaction processing, rather than just keeping the servers' lights on. So by taking advantage of System z virtualization capabilities, hundreds or even thousands of smaller servers can be replaced by a single System z mainframe. That single System z mainframe doesn't require external networking to communicate between virtual servers. All of the servers are in a single box with huge, internal I/O pathways. This may help performance of complex, interconnected applications and also save power by eliminating datacenter network infrastructure.

source:

http://www-03.ibm.com/systems/z/advantages/energy/index.html
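For what it's worth, the quoted figures are internally consistent; a quick back-of-the-envelope check (my own arithmetic, using only the numbers quoted above):

```python
# Back-of-the-envelope check of the Robert Frances Group figures quoted above.
unix_monthly = 30165  # monthly power cost of the UNIX server farm, USD
z_monthly = 905       # monthly power cost of one System z, USD

# Matches the claim of saving "over $350,000 in power costs annually".
annual_savings = (unix_monthly - z_monthly) * 12
print(annual_savings)

# The same study implies roughly a 33x reduction in monthly power cost.
power_ratio = unix_monthly / z_monthly
print(round(power_ratio))
```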

 
 

Cameron

Aug 13, 2008, 9:51:48 AM8/13/08
to Cloud Computing
Hi Jim -

> And no, it isn't just hypothetical.  Cameron, Billy Newport,
> myself, and certainly many others are working on this every day ..

Despite the differences in opinion, Bill is deep into this stuff too.
When I first met him, he was in charge of the scale-out architecture
for a "little" site called Expedia ;-)

Peace,

Cameron Purdy | Oracle
http://www.oracle.com/technology/products/coherence/index.html


On Aug 12, 11:58 am, Jim Starkey <jstar...@nimbusdb.com> wrote:
> Khazret Sapenov wrote:
>
> > (see http://technet.microsoft.com/en-us/library/cc778084.aspx) or try

Sam Johnston

Aug 13, 2008, 12:10:05 PM8/13/08
to cloud-c...@googlegroups.com
On Tue, Aug 12, 2008 at 4:57 PM, Christopher Steel <cst...@fortmoon.com> wrote:

Applications do require a level of isolation, but that does not imply a virtual container. Virtual containers are ideal for hosting existing applications that were not designed for the cloud. They provide an excellent stop-gap measure, which Amazon and others are taking advantage of. What needs to happen now is that we need to start providing toolkits and best practices for developing cloud-based applications. Current virtual containers have an extremely high overhead; often they take up more memory, disk space, CPU, etc., than the applications they host. We need new lightweight containers that will host applications designed for cloud computing with minimal overhead, yet with the necessary level of isolation.

It's encouraging to see people coming to terms with the idea that there is more to cloud computing than virtualisation (the latter being a very narrow view). Yes there is a need for isolation in a multi-tenant environment, and yes this isolation needs to be a lot more granular than the virtual machine.

As a concrete example, if I write a Python web app (say, Django) I have a number of deployment options, ranging from bare-metal hardware to virtual machines (which provide the OS but not much else) to shared hosting (which provides the OS and most of the software stack: Apache, MySQL, etc.), but all of these options require some amount of maintenance (e.g. OS or app patching, library installation, etc.). On the other hand, I could go for a true 'cloud' solution like Google AppEngine, and then I simply don't care whether it's a grid, a mainframe or an army of monkeys; my isolation is provided by way of neutered instances of the Python interpreter and a bunch of standard libraries (including Django).
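To illustrate what I actually have to write in that world, here is a toy WSGI handler built only from the standard library (a stand-in of my own, not AppEngine's real SDK): the whole "deployment" is one function, and everything underneath it is somebody else's problem.

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # The entire app: no OS, web server, firewall or patching in sight.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, cloud\n"]

# Exercise the handler in-process with a synthetic request;
# no server needs to be installed or configured.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(application(environ, start_response))
```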

Occasionally I still have to crank up EC2 instances (eg for software like Alfresco) but for the most part this world is history for me, and I don't miss it one bit.

Sam

Khazret Sapenov

Aug 13, 2008, 1:21:01 PM8/13/08
to cloud-c...@googlegroups.com
Where would you go for Java? :)
(I've already tried Nikita's GridGain and Sun's Project Caroline so far, but that's for a separate thread)

Sam Johnston

Aug 13, 2008, 2:08:43 PM8/13/08
to cloud-c...@googlegroups.com
On Wed, Aug 13, 2008 at 7:21 PM, Khazret Sapenov <sap...@gmail.com> wrote:

Where would you go for Java? :)
(I've already tried Nikita's GridGain and Sun's Project Caroline so far, but that's for a separate thread)

For now, EC2 (Alfresco is Java), knowing that in doing so I need to install and configure the operating system, java app server, web server, firewalls, etc. and then keep everything up to date and patched.

Java support is the #1 issue for AppEngine (with almost 2000 votes), and Google admit to using Java internally, so you may well see support for it at some point in the future. It's also possible that other providers will pop up offering Java platforms.

Sam

Khazret Sapenov

Aug 13, 2008, 2:26:43 PM8/13/08
to cloud-c...@googlegroups.com
I know Alfresco a bit :)
It was project of the month in 2006: http://www.alfresco.com/community/newsletters/2006/08/

Also, I have created automated Alfresco cluster management in Amazon EC2 (details at http://ihatecubicle.blogspot.com/2008/05/alfresco-cluster-in-compute-cloud.html).
 
It would be good to try it on Google's AppEngine as soon as they bring Java in.

Jim Starkey

Aug 13, 2008, 5:57:54 PM8/13/08
to cloud-c...@googlegroups.com
Khazret Sapenov wrote:
>
> Bottom line: Companies will do what they need to do for scalable
> applications. Sure, they'd prefer not to, but if meeting their
> goals requires changing technology, they will change. (Really,
> this should be obvious by now.)
>
>
>
>
>
> Virtualization *doesn't prevent scalability*, it just takes more
> resources, but it also brings benefits. Currently Amazon EC2 allows your
> application, running inside a virtual container, to scale horizontally
> and vertically (up to 20 cores on specific types of VM instances).
That's not nearly enough.  Hundreds of machines/cores are what is
needed.  Also fault tolerance.  A single virtual machine is limited to a
single physical machine, and when the physical machine goes kaput, the
game is over.

Scalability needs to go way beyond the limitations of a single server,
no matter how big and expensive that server might be.


>
>
>
> However, if any of this is going to happen on this list, Reuven and
> his acolytes are going to have to lighten up on their definition
> of cloud computing.  Otherwise, those of us who are working to
> make it happen are going to go somewhere else.
>
>
>
> Sorry, I don't understand this at all. Do you mean that you are
> working to make the cloud happen and all the others are just watching? :)
>

Sort of. What I would really like is recognition that cloud computing
doesn't end with virtualization.

(And it stopped raining. Whew!)

Michael Fehse

Aug 14, 2008, 5:02:48 AM8/14/08
to cloud-c...@googlegroups.com
@Jim:

On Aug 13, 2008, at 11:57 PM, Jim Starkey wrote:


Virtualization *doesn't prevent scalability*, it just takes more
resources, but it also brings benefits. Currently Amazon EC2 allows your
application, running inside a virtual container, to scale horizontally
and vertically (up to 20 cores on specific types of VM instances).
That's not nearly enough.  Hundreds of machines/cores are what is
needed.  Also fault tolerance.  A single virtual machine is limited to a
single physical machine, and when the physical machine goes kaput, the
game is over.

Scalability needs to go way beyond the limitations of a single server,
no matter how big and expensive that server might be.


Well, you are totally correct.
I think we have three layers here (at least we should have):
1. Box level, combine and split:
combine builds a single-OS SMP machine out of individual boxes (think sysplex on z/OS);
split builds multiple single-OS SMP machines out of a single box (think VMware).
2. OS level:
virtualization of OSes like VMware/Xen/Hyper-V etc. (some);
a grid of SMP machines to build an MPP (many).
3. Service level:
a cloud of services that doesn't make me deal with the levels below.

What do you think ?

Cheers,
Mike

Utpal Datta

Aug 14, 2008, 9:49:44 AM8/14/08
to cloud-c...@googlegroups.com
I think most of the people who are trying to provide business
applications (HR applications, Tax applications, complex financial
analysis applications) as a service would like to start at your
level-3 or above.

They all will want scalability, reliability and security from the
platform on which they run their applications, but they do not necessarily
have the time, money or expertise to create the platform too (they are
too busy creating their applications).

All SaaS platforms (Oracle, force.com) claim to provide these.

So if the cloud is going to take the place of these "non-cloud" SaaS
platforms, I do not see how these services will not be considered a
basic necessity.

Yes, EC2 does not provide these today, but I do not think the next big
cloud provider (IBM or HP or Dell) will come without these basic
facilities built in.

As always, the bottom of the "Cloud Enablers" will bleed into "Cloud
Providers", while the "Cloud Enablers" will have to add more
functionality to differentiate themselves and stay in business.

Even Motel-6 provides pillows, towels and soap with their rooms :-)

--utpal
