Recently there has been a resurgence of interest in the darknet, ranging
from the more unsavory uses, such as P2P file sharing and botnets, to
more mainstream ones, such as inter-government information sharing,
bandwidth alliances, or even offensive military botnets. All of these
activities point to a growing interest in a form of covert computing I
call "dark cloud computing," in which a private computing alliance is
formed. Within this alliance, members pool computing resources to
address the ever-expanding need for capacity.
According to my favorite source of quick disinformation, the term
darknet was originally coined in the 1970s to designate networks that
were isolated from ARPANET (which evolved into the Internet) for
security purposes. Some darknets could receive data from ARPANET but
had addresses which did not appear in the network lists and would not
answer pings or other inquiries. More recently the term has been
associated with dark fiber networks, private file-sharing networks and
distributed criminal botnets.
The botnet is quickly becoming the tool of choice for governments
around the globe. Recently, Col. Charles W. Williamson III, staff
judge advocate, Air Force Intelligence, Surveillance and
Reconnaissance Agency, wrote in Armed Forces Journal about the need
for botnets within the US DoD. In his article he writes: "The world has
abandoned a fortress mentality in the real world, and we need to move
beyond it in cyberspace. America needs a network that can project
power by building an af.mil robot network (botnet) that can direct
such massive amounts of traffic to target computers that they can no
longer communicate and become no more useful to our adversaries than
hunks of metal and plastic. America needs the ability to carpet bomb
in cyberspace to create the deterrent we lack."
I highly doubt the US is alone in this thinking. The world is more
than ever driven by information, and botnet usage is not limited to
governments but extends to enterprises as well. In our modern
information-driven economy the distinction between corporation and
governmental organization has been increasingly blurred, and corporate
entities are quickly realizing they need the same network protections.
By covertly pooling resources in the form of a dark cloud or cloud
alliance, members are able to counter or block network threats in a
private, anonymous and quarantined fashion. This type of distributed
network environment may act as an early-warning and threat-avoidance
system: an anonymous cloud computing alliance would enable a network of
decentralized nodes capable of neutralizing potential threats through
a series of countermeasures.
My question is: are we on the brink of seeing the rise of private
corporate darknets, aka dark clouds? And if so, what are the legal
ramifications, and do they outweigh the need to protect ourselves
from criminals who can and will use these tactics against us?
(Original Post: http://elasticvapor.com/2008/07/rise-of-dark-cloud.html)
--
Reuven Cohen
Founder & Chief Technologist, Enomaly Inc.
blog > www.elasticvapor.com
-
Get Linked in> http://linkedin.com/pub/0/b72/7b4
Now you know my side project; I call it Singularity (one cloud to
rule them all). The only problem is I keep expecting a Terminator robot
from the future to show up at my door.
r/c
--
www.enomaly.com :: 416 848 6036 x 1
skype: ruv.net // aol: ruv6
A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.
In a self-managing (autonomic) system, the human operator takes on a new role: he does not control the system directly. Instead, he defines general policies and rules that serve as input for the self-management process. For this process, IBM has defined the following four functional areas:
. Self-Configuration: automatic configuration of components;
. Self-Healing: automatic discovery and correction of faults;
. Self-Optimization: automatic monitoring and control of resources to ensure optimal functioning with respect to the defined requirements;
. Self-Protection: proactive identification of, and protection from, arbitrary attacks.
IBM has also defined five evolutionary levels, known as the autonomic deployment model: level 1 is the basic level, representing the current situation in which systems are essentially managed manually; levels 2-4 introduce increasingly automated management functions; and level 5 represents the ultimate goal of autonomic, self-managing systems.
source: http://en.wikipedia.org/wiki/Autonomic_computing

Yep, interesting that you mention autonomics. They are very relevant, even more so in a cloud environment. Usually these kinds of systems do not get there in one shot, but go through stages: connected, reactive, proactive and finally adaptive/autonomic. A long time ago we worked on a paper on this topic: http://www.ibm.com/developerworks/autonomic/library/ac-summary/ac-cisco.html. I think cloud computing is in the "connected" stage. We have to build "reactive-ness" into protocols like the AWS gossip protocol and create baselines before even going to a proactive state. BTW, many of the network protocols, and the state machines thereof, handle these situations very well.
Cheers
<k/>
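For readers who want something more concrete than the Wikipedia summary, here is a minimal sketch of the kind of policy-driven monitor/analyze/plan/execute loop an autonomic manager runs. The policy thresholds, metric names and actions are invented for illustration only; nothing here comes from IBM's actual toolkits.

import random
import time

# Hypothetical policy supplied by the operator instead of direct control.
POLICY = {
    "max_error_rate": 0.05,   # self-healing trigger
    "max_cpu_load": 0.80,     # self-optimization trigger
}

def monitor():
    """Stand-in for real telemetry collection."""
    return {"error_rate": random.uniform(0, 0.1),
            "cpu_load": random.uniform(0, 1.0)}

def execute(action):
    """Stand-in for real remediation (restart, scale out, etc.)."""
    print("executing:", action)

def autonomic_loop(cycles=3):
    for _ in range(cycles):
        metrics = monitor()                       # Monitor
        plan = []                                 # Analyze and plan against policy
        if metrics["error_rate"] > POLICY["max_error_rate"]:
            plan.append("restart unhealthy instance")   # self-healing
        if metrics["cpu_load"] > POLICY["max_cpu_load"]:
            plan.append("add capacity")                 # self-optimization
        for action in plan:                       # Execute
            execute(action)
        time.sleep(1)

if __name__ == "__main__":
    autonomic_loop()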
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Khazret Sapenov
Sent: Saturday, July 26, 2008 1:00 PM
To: cloud-c...@googlegroups.com
Subject: Re: The Rise of The Dark Cloud
Perhaps cloud computing solutions should incorporate the concept of autonomic computing to a certain degree.
Many of the discussions on this forum, including the one below, keep bringing to mind the "Eight Fallacies of Distributed Computing" (http://en.wikipedia.org/wiki/Fallacies_of_distributed_computing). Perhaps the eight "laws" of cloud computing should be the reverse of them:
Unlike traditional architectures, cloud applications will be designed with the following assumptions:
. The network is *not* reliable.
. Latency is *not* zero.
. Bandwidth is *not* infinite.
. The network is *not* secure.
. Topology *does* change.
. There is *not* one administrator.
. Transport cost is *not* zero.
. The network is *not* homogeneous.
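As one concrete illustration of designing for the first two assumptions (an unreliable network, non-zero latency), a cloud application might bound every remote call with a timeout and a few retries with backoff. A minimal Python sketch, with a placeholder URL rather than any real endpoint:

import time
import urllib.error
import urllib.request

def call_with_retries(url, attempts=3, timeout=2.0, backoff=1.5):
    # Assume the network is not reliable and latency is not zero:
    # bound every call with a timeout and retry a few times with backoff.
    delay = 0.5
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:          # covers URLError, socket timeouts, resets
            if attempt == attempts:
                raise            # give up; the caller must handle failure explicitly
            time.sleep(delay)
            delay *= backoff

# Example usage (placeholder URL, not a real service):
# data = call_with_retries("http://example.invalid/status")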
Some platforms, such as GigaSpaces, have been designed in exactly this way. For example, I don't know if it's holy-grail stuff or not, but we provide self-healing and self-optimizing capabilities via our SLA-Driven Container. It's been particularly appealing to EC2 users who ask us "What happens if an MI fails?" Our answer is simple: "You don't care."
Geva Perry
Hey, Geva, GigaSpaces is cool stuff, but at $1.60 an instance-hour I'd say it's far from grail status (much less holy). Given the number of instances one needs to make it work properly, it's priced just like enterprise software. I know you guys are entitled to a reasonable return on your investment, but...
The big potential draw of cloud computing is massive scalability at low cost. Doing the math, an instance-year for a small but functioning GigaSpaces system is, at minimum, about $63K a year (3 large instances at $0.80/hour for EC2 plus $1.60/hour for GigaSpaces, times 8,736 hours per year). This, of course, does not include other vendors' charges - I'm guessing Oracle will be somewhere in the $3-5 range. All of a sudden the stack is getting expensive...
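Spelling that arithmetic out (a rough sketch in Python; it simply assumes the $0.80 EC2 large-instance rate and the $1.60 GigaSpaces rate quoted above are both per instance-hour):

# Rough annual cost for a minimal 3-node GigaSpaces deployment on EC2,
# using the per-instance-hour rates quoted in the post above.
instances = 3
ec2_large_per_hour = 0.80      # $/instance-hour, 2008 EC2 large pricing
gigaspaces_per_hour = 1.60     # $/instance-hour, quoted licence rate
hours_per_year = 8_736         # 364 days x 24 hours, as used in the post

annual_cost = instances * (ec2_large_per_hour + gigaspaces_per_hour) * hours_per_year
print(f"${annual_cost:,.0f} per year")   # -> $62,899 per year, roughly the $63K cited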
-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Geir Magnusson Jr.
Sent: Sunday, July 27, 2008 1:48 PM
To: cloud-c...@googlegroups.com
Subject: Re: The Rise of The Dark Cloud
On Jul 26, 2008, at 7:45 PM, Geva Perry wrote:
> Many of the discussions on this forum, including the one below keep
> bringing to mind the "Eight Fallacies of Distributed Computing"
> (http://en.wikipedia.org/wiki/Fallacies_of_distributed_computing).
> Perhaps the eight "laws" of cloud computing should be the reverse
> of them:
Isn't that the point of calling them fallacies? That the reverse is
actually true? IOW, aren't these "laws" already generally understood by
anyone doing distributed computing?
>
>
> Unlike traditional architecture, cloud applications will be designed
> with the following assumptions:
> . The network is *not* reliable.
> . Latency is *not* zero.
> . Bandwidth is *not* infinite.
> . The network is *not* secure.
> . Topology doesn't change. [Does as opposed to Doesn't]
> . There is *not* one administrator.
> . Transport cost is *not* zero.
> . The network is *not* homogeneous.
> . Self-Configuration: Automatic configuration of components;
> . Self-Healing: Automatic discovery, and correction of faults;
> . Self-Optimization: Automatic monitoring and control of resources
> to ensure the optimal functioning with respect to the defined
> requirements;
> . Self-Protection: Proactive identification and protection from
> arbitrary attacks.
> IBM defined five evolutionary levels, or the Autonomic deployment
> model, for its deployment: Level 1 is the basic level that presents
> the current situation where systems are essentially managed
> manually. Levels 2 - 4 introduce increasingly automated management
> functions, while level 5 represents the ultimate goal of autonomic,
> self-managing systems.
>
> source: http://en.wikipedia.org/wiki/Autonomic_computing
> On Sat, Jul 26, 2008 at 3:50 PM, Sam Charrington
> <s...@charrington.com> wrote:
> A former Appistry colleague always believed that the self-managing &
> self-organizing behaviors of our product were the beginning of SkyNet.
>
> If we as a group can define these three laws, I will try to get the
> implementation of them onto our product roadmap.
>
> :-)
>
> Sam
>
> On Sat, Jul 26, 2008 at 2:25 PM, Khazret Sapenov <sap...@gmail.com>
> wrote:
> In my opinion at this stage it would be useful to formulate "3 laws
> of Cloud Computing"
> borrowed from Isaac Asimov and adapted to botnet facilities:
> . A robot may not injure a human being or, through inaction, allow
> a human being to come to harm.
> . A robot must obey orders given to it by human beings, except
> where such orders would conflict with the First Law.
> . A robot must protect its own existence as long as such protection
> does not conflict with the First or Second Law.
Sassa
2008/8/1 Khazret Sapenov <sap...@gmail.com>:
Brenda Michelson summarizes an interesting article where a researcher found that the real cost of that $2500 server runs about $8K-15K.
http://blog.elementallinks.net/2008/08/what-does-that.html
Fourth, if the cloud were organized as applications straddling servers rather than as inefficient single virtual server instances, the issue would be aggregate computing power rather than the speed of individual machines, obsolescence would no longer be an issue, and servers would have a much longer economic lifetime.
Fifth, running applications in a well-architected multi-server application platform centralizes security and administration policy rather than distributing it to thousands of individual virtual machine instances, reducing manpower costs.
The ranking of costs is probably something like this:
- Personnel
- Real estate and environment
- Power and cooling
- Capital cost of hardware
The place to start looking for savings is at the top, not the bottom. The cloud model of a vast number of virtual machine instances requires a vast number of administrators. The cloud model of arbitrarily large application platforms provides the opportunity to cut real costs. I'm sorry that so many people are eager to define it out of existence.
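To make the ranking concrete, here is a purely illustrative back-of-the-envelope sketch of how a $2,500 server can end up in the $8K-15K lifetime-cost range mentioned earlier; every figure other than the purchase price is an assumption for illustration, not data from the cited article:

# Illustrative 3-year total cost of ownership for one $2,500 server.
# All annual figures are assumptions, ordered per the ranking above.
purchase_price = 2_500
annual_costs = {
    "personnel (admin share)":     2_000,
    "real estate and environment":   500,
    "power and cooling":             600,
}
years = 3

tco = purchase_price + years * sum(annual_costs.values())
print(f"3-year TCO: ${tco:,}")   # -> $11,800, inside the $8K-15K range cited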
If everyone decoupled storage from compute, placed the cheap servers in a rack-sized chassis and called it a "processor array", then the lower-cost 1U solution would truly be lower cost. Smaller compute units are more readily recycled/reused and have the benefit of using smaller power supplies, which enables energy to be fully switched off when the compute is not in use. Also, based on what I have learned, the energy required to cool a 1U is typically higher than a 2U/4U due to the smaller fans running at a higher RPM. The "processor array" model eliminates this problem (how depends on the vendor implementation). Cooling is only part of the problem, though; hence the processor array concept in full.
The attached PPT describes a "processor array", which is more efficient and manageable than rack-mount or blade chassis in enterprise-class data centers. Processor arrays are the silver lining for clouds and fabric computing environments. If you agree, tell your vendor. If you disagree, then reply with why, and let's discuss.
Regards,
Jacob
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Khazret Sapenov
Sent: Monday, August 11, 2008 8:41 PM
To: cloud-c...@googlegroups.com
A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance. We can do better than this.
>
> Virtualization also doesn't require modification of the code and allow
> wider range of applications to run (I would estimate all spectre might
> run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift. Effective use of a cloud requires a different
programming paradigm, just as GUIs required a different programming
paradigm than the command line. The guys who tried to salvage their command
line based technologies just up and died. The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.
Running virtual machines in a cloud is the same as running DOS shells on
Windows. Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...
Applications do require a level of isolation, but that does not necessarily imply a heavyweight virtual container. Virtual containers are ideal for hosting existing applications that were not designed for the cloud; they provide an excellent stop-gap measure, as Amazon and others are demonstrating. What needs to happen now is that we start providing toolkits and best practices for developing cloud-based applications. Current virtual containers have extremely high overhead; often they take up more memory, disk space, CPU, etc., than the applications they host. We need new lightweight containers that will host applications designed for cloud computing with minimal overhead, yet the necessary level of isolation.
-Chris
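As a toy illustration of "minimal overhead, yet the necessary level of isolation", an application can be run as an ordinary OS process with resource caps instead of inside a full virtual machine. This Unix-only sketch uses POSIX rlimits from Python's standard library; it is nowhere near a real container, just a hint at how lightweight the mechanism can be:

import resource
import subprocess

def run_limited(cmd, cpu_seconds=5, max_bytes=256 * 1024 * 1024):
    """Run a command as a plain OS process with CPU and memory caps
    (Unix only). A far lighter-weight 'container' than a whole VM."""
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

    return subprocess.run(cmd, preexec_fn=apply_limits, check=False)

if __name__ == "__main__":
    run_limited(["echo", "hello from a resource-limited process"])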
From: Khazret Sapenov [mailto:sap...@gmail.com]
Sent: Tuesday, August 12, 2008 9:38 AM
To: cloud-c...@googlegroups.com
Subject: Re: Expensive, Cheap Servers
On Tue, Aug 12, 2008 at 8:30 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
...
> Applications need some level of isolation, provided by virtual
> containers, even in enterprise (you don't want people from department
> A get access to applications of department B, even a chance to poke a
> memory segment or observe network traffic).
Absolutely. Applications have to live in a managed sandbox. This has
been known and understood for well over a decade. But that's the easy
part of the problem. Shared consistent data across the cloud is the
hard part. Appropriate database service is one solution, but there's no
agreement on what that means yet.
A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance. We can do better than this.
> Virtualization also doesn't require modification of the code and allow
> wider range of applications to run (I would estimate all spectre might
> run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift. Effective use of a cloud requires a different
programming paradigm, just as GUIs required a different programming
paradigm than the command line. The guys who tried to salvage their command
line based technologies just up and died. The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.
Running virtual machines in a cloud is the same as running DOS shells on
Windows. Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...
Well, that's a correct perspective from a desktop or workstation viewpoint. Mainframes and minicomputers had (and have) extraordinarily efficient virtual machine mechanisms and management tools. In fact, one of the most effective and efficient ways to deploy Linux is on an IBM mainframe running VM. Moreover, it provides the ability either to clone Linux instances or to share the same instance and give each process its own sandbox. I don't know what the state of the art is today, but that's what I was able to do 7 years ago.
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Khazret Sapenov
Sent: Tuesday, August 12, 2008 8:36 AM
To: cloud-c...@googlegroups.com
Subject: Re: Expensive, Cheap Servers
On Tue, Aug 12, 2008 at 10:55 AM, Jim Starkey <jsta...@nimbusdb.com> wrote:
> Applications need some level of isolation, provided by virtual
> containers, even in enterprise (you don't want people from department
> A get access to applications of department B, even a chance to poke a
> memory segment or observe network traffic).
Absolutely. Applications have to live in a managed sandbox. This has
been known and understood for well over a decade. But that's the easy
part of the problem. Shared consistent data across the cloud is the
hard part. Appropriate database service is one solution, but there's no
agreement on what that means yet.
A virtual machine is nothing more than a huge inefficient sandbox with
an operating system and a hundred separate components, each of which
requires administration and maintenance. We can do better than this.

These statements are all true, but only within a very narrow range of platforms/applications. Google App Engine ignores existing Java, C++ and other applications. Companies have spent oodles of money on their software, which has become woven into their business processes. It is not practical to rewrite everything (make it cloud-aware) for the sake of getting another 20% boost in application performance, with other drawbacks (security etc.).
> Virtualization also doesn't require modification of the code and allow
> wider range of applications to run (I would estimate all spectre might
> run without problems).
Sorry, but trying to preserve a dated investment is the best way to die
during a platform shift. Effective use of a cloud requires a different
programming paradigm, just as GUIs required a different programming
paradigm than the command line. The guys who tried to salvage their command
line based technologies just up and died. The cloud will be the same.
Pretending that the rules haven't changed is planning for extinction.
Running virtual machines in a cloud is the same as running DOS shells on
Windows. Many people argued that a DOS shell was a window, just a GUI
of another type, but those people aren't around anymore...

Well, I personally like the command line :) Perhaps Microsoft does too, since they've added lots of command line stuff to recent software (see http://technet.microsoft.com/en-us/library/cc778084.aspx) or try to use EFI boot. I understand that there are other, more efficient technologies than virtual machine instances, but at the moment there is no commercially available, viable alternative that would satisfy the majority of customers. Thus discussions of a tighter, more secure application container are rather hypothetical. Correct me if I'm wrong.
Uh, this IS a list about cloud computing, isn't it? This is the sort of forum where ideas are cross pollinated, examined, criticized, and the like.
And no, it isn't just hypothetical. Cameron, Billy Newport, myself, and certainly many others are working on this every day (disclosure: I'm on vacation in Maine, sitting below on a sailboat, waiting for the rain to stop. Go figure.).
However, if any of this is going to happen on this list, Reuven and his acolytes are going to have to lighten up on their definition of cloud computing. Otherwise, those of us who are working to make it happen are going to go somewhere else.
Until recently, virtualization was in the equivalent of the 'applet era'.
Virtual desktops are a client tech on demand for a location-independent
world, with a manageable and coherent back end where performance is not
a showstopper.
The next step is scalable server-side deployments for delivery of more
complex apps over the network. This is analogous to what cloud is
trying to do, with the kicker of a self-service, pay-per-use business
model. Ditto the private cloud, which is no more than shorthand for a
certain kind of dynamic data center with visible and auditable cost
attribution.
Later, hardware, container, and software advances make the old 'manage
resources yourself to get pedal to the metal' approach somewhat
redundant, and may lead to deprecation of on-metal technologies
(like C++ is to Java), at least for greenfield projects. We are not
there yet but, as Jim says, it does not matter.
This is known, at least in biz school, as a technology disruption.
What matters today, really, is how we navigate the middle stage and
what it means for Real People.
alexis
CohesiveFT
Jim's Java (JVM) analogy is spot on.
When you look at your data center what do you see? Maybe racks full of servers and storage systems. Maybe some of the connections between them. But do you see BTUs and kilowatts? You should. Power and cooling demands are becoming more important as energy prices rise and utilities restrict the amount of power some customers can use. IBM System z offers extraordinary energy efficiency capabilities to help you address these issues.
According to one analyst, the IBM System z platform can be configured to require 1/12th the electricity of a distributed server farm with equivalent processor capability. 1
If you look at just one possible energy scenario, the numbers are spectacular. According to a Robert Frances Group study, a company analyzed consolidating hundreds of UNIX servers onto one System z mainframe. The calculations showed monthly power costs of $30,165 for the UNIX servers versus $905 for System z. That company calculated it would save over $350,000 in power costs annually. 2
How does System z do this? It's about IBM mainframe technology and consolidation.
IBM System z servers can run at utilization rates as high as 100% for long periods of time. This means that power that is consumed is used for transaction processing, rather than just keeping the servers' lights on. So by taking advantage of System z virtualization capabilities, hundreds or even thousands of smaller servers can be replaced by a single System z mainframe. That single System z mainframe doesn't require external networking to communicate between virtual servers. All of the servers are in a single box with huge, internal I/O pathways. This may help performance of complex, interconnected applications and also save power by eliminating datacenter network infrastructure.
source: http://www-03.ibm.com/systems/z/advantages/energy/index.html
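Taking the quoted Robert Frances Group figures at face value, the "over $350,000" annual-savings claim is easy to check:

# Power-cost figures quoted in the IBM System z page above.
unix_farm_monthly = 30_165   # $/month for the distributed UNIX servers
system_z_monthly = 905       # $/month for the consolidated System z

annual_savings = (unix_farm_monthly - system_z_monthly) * 12
print(f"${annual_savings:,} per year")   # -> $351,120, i.e. "over $350,000"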
Where would you go for Java? :) (I've already tried Nikita's GridGain and Sun's Project Caroline so far, but that's for a separate thread.)
Scalability needs to go way beyond the limitations of a single server,
no matter how big and expensive that server might be.
>
>
>
> However, any of this is going to happen on this list, Reuven and
> his acolytes are going to have to lighten up on their definition
> of cloud computing. Otherwise, those of us who are working to
> make it happen are going to somewhere else.
>
>
>
> Sorry don't understand this at all. Do you wanna say, that you are
> working to make cloud happen and all the others just watching it? :)
>
Sort of. What I would really like is recognition that cloud computing
doesn't end with virtualization.
(And it stopped raining. Whew!)
Virtualization *doesn't prevent scalability*, it just takes more resources, but it also brings benefits. Currently Amazon EC2 allows your application, running inside a virtual container, to scale horizontally and vertically (up to 20 cores on specific types of VM instances).

That's not nearly enough. Hundreds of machines/cores are what is
needed. Also fault tolerance: a single virtual machine is limited to a
single physical machine, and when the physical machine goes kaput, the
game is over.
Scalability needs to go way beyond the limitations of a single server,
no matter how big and expensive that server might be.
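A minimal sketch of what "beyond a single server" looks like in code: fan work out over a pool of nodes and treat any single machine failure as routine. The send_to_node function is a stand-in for a real remote call; the node names and failure rate are invented for illustration:

import random

def send_to_node(node, task):
    """Placeholder for a real remote call; fails randomly to simulate
    a physical machine going kaput."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node} is unreachable")
    return f"{task} done on {node}"

def run_with_failover(task, nodes):
    """Try each node in turn; a single machine dying must not end the game."""
    for node in nodes:
        try:
            return send_to_node(node, task)
        except ConnectionError:
            continue  # route around the failure
    raise RuntimeError("all nodes failed")

if __name__ == "__main__":
    pool = [f"node-{i}" for i in range(5)]
    print(run_with_failover("resize-image-42", pool))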
They will all want scalability, reliability and security from the
platform on which they run their application, but will not necessarily
have the time, money or expertise to create that platform themselves
(they are too busy creating their application).
All SaaS platforms (Oracle, force.com) claim to provide these.
So if the cloud is going to take the place of these "non-cloud" SaaS
platforms, I do not see how these services will not be considered a
basic necessity.
Yes, EC2 does not provide these today, but I do not think the next big
cloud provider (IBM or HP or Dell) will come without these basic
facilities built in.
As always, the bottom tier of "Cloud Enablers" will bleed into "Cloud
Providers", while "Cloud Enablers" will have to add more functionality
to differentiate themselves and stay in business.
Even Motel-6 provides pillows, towels and soap with their rooms :-)
--utpal