The Standardized Cloud


Reuven Cohen

Aug 21, 2008, 2:26:42 PM
to cloud-computing
Over the last few weeks I've been engaged in several conversations
about the need for a common, interoperable and open set of cloud
computing standards. During these conversations a recurring theme has
started to emerge: a need for cloud interoperability, the ability
for diverse cloud systems and organizations to work together in a
common way. In my discussion yesterday with Rich Wolski of the
Eucalyptus project, he described the need for what he called
"CloudVirt", similar to the Libvirt project for
virtualization. For those of you who don't know libvirt, it's
an open source toolkit which enables a common API interaction with the
virtualization capabilities of recent versions of Linux (and other
OSes).

I would like to take this opportunity to share my ideas as well as get
some feedback on some of the key pain points I see in the creation of
a common cloud computing reference API or standard.

* Cloud Resource Description
The ability to describe resources is (in my opinion) the most
important aspect of any standardization effort. One potential avenue
might be to use the Resource Description Framework proposed by the
W3C. The Resource Description Framework (RDF) is a family of
specifications, originally designed as a metadata data model, which
has come to be used as a general method of modeling information
through a variety of syntax formats. The RDF metadata model is based
upon the idea of making statements about Web resources (or Cloud
Resources) in the form of subject-predicate-object expressions, called
triples in RDF lingo. This standardized approach could be adapted as
a primary mechanism for describing cloud resources both locally and
remotely.
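To make the triple model concrete, here is a minimal Python sketch of describing a cloud resource as subject-predicate-object triples. The URIs and predicate names are invented for illustration and are not drawn from any actual spec.

```python
# Minimal sketch: a cloud resource described as RDF-style
# subject-predicate-object triples. The URIs and predicate
# names below are invented for illustration only.

triples = [
    ("cloud:instance-42", "rdf:type", "cloud:ComputeResource"),
    ("cloud:instance-42", "cloud:memoryMB", "1024"),
    ("cloud:instance-42", "cloud:region", "us-east-1"),
]

def describe(subject, triples):
    """Collect every predicate/object pair asserted about a subject."""
    return {p: o for s, p, o in triples if s == subject}

print(describe("cloud:instance-42", triples))
```

In a real standardization effort these statements would be serialized in one of RDF's standard syntaxes (RDF/XML, N-Triples, Turtle) rather than Python tuples.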

* Cloud Federation (Cloud 2 Cloud)
The holy grail of cloud computing may very well be the ability to
seamlessly bridge both private clouds (datacenters) and remote cloud
resources such as EC2 in a secure and efficient manner. To accomplish
this, a federation standard must be established. One of the biggest hurdles
to overcome in federation is the lack of a clear definition of what
federation is.

So let me take a stab at defining it.

Cloud federation manages consistency and access controls when two or
more independent geographically distinct clouds share either
authentication, files, computing resources, command and control or
access to storage resources. Cloud federations can be classified into
three categories: peer-to-peer, replication, and hierarchical.
Peer-to-peer seems to be the most logical first step in creating a federation
spec. Protocols like XMPP, P4P and Virtual Distributed Ethernet may
make for good starting points.

* Distributed Network Management
The need for a distributed and optimized virtual network is an
important aspect in any multi-cloud deployment. One potential
direction could be to explore the use of VPN or VDE technologies. My
preference would be to use VDE (Virtual Distributed Ethernet). As a
quick refresher, a VPN is a way to connect one or more remote
computers to a protected network, generally tunnelling the traffic
through another network. VDE implements a virtual ethernet in all its
aspects, virtual switches, virtual cables. A VDE can also be used to
create a VPN.

VDE interconnects real computers (through a tap interface), virtual
machines, and other networking interfaces through a
common open framework. VDE supports heterogeneous virtual machines
running on different hosting computers and could be the ideal starting
point. Network shaping and optimization may also play an important
role in the ability to bridge two or more cloud resources.

Some network optimization aspects may include:

* Compression - Relies on data patterns that can be represented
more efficiently.
    * Caching/Proxy - Relies on human behavior, accessing the same
data over and over.
* Protocol Spoofing - Bundles multiple requests from chatty
applications into one.
* Application Shaping - Controls data usage based on spotting
specific patterns in the data and allowing or disallowing specific
traffic.
* Equalizing - Makes assumptions on what needs immediate priority
based on the data usage.
    * Connection Limits - Prevents access gridlock in routers and
access points due to denial-of-service attacks or peer-to-peer traffic.
* Simple Rate Limits - Prevents one user from getting more than a
fixed amount of data.
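As a sketch of that last point, a token bucket is one common way to enforce a simple rate limit. The class below is illustrative, not part of any proposed cloud API.

```python
import time

class TokenBucket:
    """Simple rate limiter: at most `rate` units of data per second,
    with bursts up to `capacity`. A sketch of the 'Simple Rate Limits'
    idea above; the parameter names are illustrative."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, amount):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False
```

A user who tries to pull more than their fixed allowance simply gets refused until the bucket refills.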

* Memory Management
When looking at the creation of a compute cloud, memory tends to be a
major factor in the performance of a given virtual environment,
whether a virtual machine or some other application component. Cloud
memory management will need to involve ways to allocate portions of
virtual memory to programs at their request and free them for reuse
when no longer needed. This is particularly important in "platform as
a service" cloud deployments.

Several key memory management aspects may include:

* Provide memory space to enable several processes to be executed
at the same time
* Provide a satisfactory level of performance for the system users
* Protect each program's resources
* Share (if desired) memory space between processes
* Make the addressing of memory space as transparent as possible
for the programmer.
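A toy first-fit allocator illustrates several of these points at once: allocation on request, freeing for reuse, and per-process protection. Everything here is illustrative; freed holes are not even coalesced.

```python
class SimpleAllocator:
    """First-fit allocator over a fixed pool, sketching the
    memory-management points above. Purely illustrative: freed
    holes are appended to the free list without coalescing."""

    def __init__(self, size):
        self.free = [(0, size)]   # list of (offset, length) holes
        self.used = {}            # block id -> (offset, length, owner)
        self.next_id = 0

    def alloc(self, length, owner):
        for i, (off, ln) in enumerate(self.free):
            if ln >= length:
                # Carve the allocation out of the first hole that fits.
                if ln == length:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + length, ln - length)
                bid = self.next_id
                self.next_id += 1
                self.used[bid] = (off, length, owner)
                return bid
        return None  # out of memory

    def free_block(self, bid, owner):
        off, length, block_owner = self.used[bid]
        # Protection: only the owning process may free its block.
        if block_owner != owner:
            raise PermissionError("not the owner")
        del self.used[bid]
        self.free.append((off, length))
```

Several processes can allocate from the same pool concurrently, each addressing its blocks by an opaque id rather than a raw offset.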


* Distributed Storage
I've been working on creating a cloud abstraction layer called "cloud
raid" as part of our ElasticDrive platform and have been looking at
different approaches for our implementation. My initial idea is to
connect multiple remote cloud storage services (S3, Nirvanix, CloudFS)
for a variety of purposes. During my research the XAM specification
began to look like the most suitable candidate. XAM addresses storage
interoperability, information assurance (security), storage
transparency, long-term records retention and automation for
Information Lifecycle Management (ILM)-based practices.

XAM looks to solve key cloud storage problem spots, including:

* Interoperability: Applications can work with any XAM conformant
storage system; information can be migrated and shared
* Compliance: Integrated record retention and disposition metadata
* ILM Practices: Framework for classification, policy, and implementation
* Migration: Ability to automate migration process to maintain
long-term readability
* Discovery: Application-independent structured discovery avoids
application obsolescence


Potential Future Additions to the API

* I/O
The virtualization of I/O resources is a critical part of enabling a
set of emerging cloud deployment models. In large-scale cloud
deployments a recurring issue has been the ability to effectively
manage I/O resources, whether at the machine or network level. One of
the problems a lot of users are encountering is that of the "nasty
neighbor", a user who has taken all available system I/O resources.

A common I/O API for sharing, security, performance, and scalability
will need to be addressed to help resolve these issues. I've been
speaking with several hardware vendors on how we might be able to
address this problem. This will most likely have to be done at a later
point after a first draft has been released.

* Monitoring and System Metrics
One of the best aspects of using cloud technology is the ability to
scale applications in tandem with the underlying infrastructure and the
demands placed on it. Rather than just scaling on system load, users
should have the ability to selectively scale on other metrics such as
response time, network throughput or other metrics made available.
Having a uniform way to interact with system metrics will give cloud
providers and consumers a common way to scale applications.
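A sketch of what scaling on arbitrary metrics might look like; the metric names, thresholds and actions are all made up for illustration.

```python
def scale_decision(metrics, rules):
    """Pick a scaling action from arbitrary metrics, not load alone.
    `metrics` maps metric name -> current value; each rule is
    (name, comparator, threshold, action). All names illustrative."""
    for name, op, threshold, action in rules:
        value = metrics.get(name)
        if value is None:
            continue
        if op == ">" and value > threshold:
            return action
        if op == "<" and value < threshold:
            return action
    return "hold"

rules = [
    ("response_time_ms", ">", 500, "scale_out"),  # too slow: add capacity
    ("network_mbps", ">", 900, "scale_out"),      # link nearly saturated
    ("cpu_load", "<", 0.10, "scale_in"),          # mostly idle: shrink
]

print(scale_decision({"response_time_ms": 620, "cpu_load": 0.4}, rules))
```

The point is that the rule set, not the provider, decides which metric drives scaling; a uniform metrics interface is what makes such rules portable across clouds.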

* Security & Auditability
In my conversations with several Wall Street CIOs, the questions of
both security and cloud transparency with regard to external audits
have come up frequently.
---
My list of requirements is by no means complete. Cloud
computing encompasses a wide variety of technologies, architectures
and deployment models. What I am attempting to do is address the
initial pain points, whether you are deploying a cloud or just using
it. A lot of what I've outlined may be better suited to a reference
implementation than a standard, but nonetheless I thought I'd put
these ideas out for discussion.

Comments Welcome.

(Original Post: http://elasticvapor.com/2008/08/standardized-cloud.html)

--

Reuven Cohen
Founder & Chief Technologist, Enomaly Inc.
blog > www.elasticvapor.com
-
Get Linked in> http://linkedin.com/pub/0/b72/7b4

Ameed Taylor

Aug 21, 2008, 5:03:21 PM
to Cloud Computing
Reuven,

This is a great start with a fantastic amount of high-level standards
and discussion points fleshed out.

The one body that has been very instrumental in terms of setting
general internet standards has been the W3C (www.w3.org). For example,
many of us with On Demand Software Companies use the standards out
of the W3C's Web Services Policy Working Group when architecting our
solutions to make sure that they are standards-based.

Thus maybe it would be a great idea to approach the W3C to see if they
would have an interest in taking on Cloud Computing as a new working
group? That way a lot more assistance can be garnered for the effort
of creating Cloud Computing standards that could quickly become the
worldwide standard.

I have a couple of contacts within the W3C whom I can refer you to if
you think exploring the W3C might be a good idea.


Ameed Taylor
President
Applation LLC
www.applation.com
blog >www.ondemandbeat.com
Get Linked in>http://www.linkedin.com/in/ameedtaylor

Paul Renaud

Aug 21, 2008, 11:02:51 PM
to cloud-c...@googlegroups.com
Overall I think this is a great suggestion. Good enough in fact to warrant
some quibbling on the details! :)

- Most of the resources requiring description via Cloud Resource Description
are computer system resources that are already well described via CIM. Any
standards proposal that does not base itself on such a widely accepted
resource model would need significant technical justification.

- The W3C appears to be pretty committed to WS-Federation. Again, why not
leverage an existing standard to build a Cloud Federation standard on?

- Your use of terminology for "Distributed Network Management" is more about
implementation than management. The established standard for network
management is SNMP and the most widely accepted management standard for
managing remote systems is WBEM which is easily extended to a cloud via
WS-Man. Using a VPN to implement a secure network makes perfect sense,
however, there is a wide variety of alternatives within the VPN umbrella. A
key starting point would be to establish whether to use Layer 2 or Layer 3
tunnels. Layer 2 (e.g. L2TP) is more complex but has the advantage of
creating a clear channel for IP. Layer 3 (e.g. TLS) is trendy these days
but is problematic when tunnelling TCP unless UDP is used to create the
Layer 3 tunnel (as in OpenVPN).

VDE is indeed appealing because it presents a virtual network to virtual
machines and uses a Layer 2 approach (e.g. Slirp is based on PPP) without
security to interconnect physical servers. That would work well in a cloud
that is not distributed across the Internet; e.g. a data-center resident
facility that offers cloud services to the internet. But federation implies
that a cloud standard should support distributed members, and the virtual
channels between those members would need to be secure since relying on
security at the virtual network level would not guarantee the integrity of
the interconnecting channels. There is no reason why VDE could not be mapped
onto another Layer 2 implementation, but that work has not yet been done by
the open source team. A good choice would be L2TP/IPSec tunnel mode for
maximum interoperability.

- Your list of API elements should also include resource scheduling and
reservation. There are well-established grid standards here to leverage.

- As per my previous posts on the topic, cloud security/auditability is
critical to mainstream acceptance by customers. Closely tied to this is a
management interface standard that can be used by external customers to
obtain metrics and audit information. Again, WS-Man is a well established
candidate for this purpose.

In summary, I think your idea will gain the most support if it is anchored
in a set of existing and well accepted standards.

Reuven Cohen

Aug 21, 2008, 7:15:48 PM
to cloud-computing
Looks like I've forgotten an obvious yet important aspect of my cloud
standards: authentication. Maybe something like OAuth or OpenID could
form the basis for this as well. I'll need to do some more thinking on
this one.

ruv

Krishna Sankar (ksankar)

Aug 21, 2008, 8:23:27 PM
to cloud-c...@googlegroups.com
Ameed,
Good thoughts. But I am not sure we are ready yet for a W3C-level working group. Moreover, we also have other avenues like the IETF, OASIS or, as Rich/Reuven suggested, a "CloudVirt". We need the classic "running code and rough consensus".

Cheers
<k/>


Reuven Cohen

Aug 21, 2008, 6:27:40 PM
to cloud-c...@googlegroups.com

I would love an introduction to the folks at the W3C. I've also been
speaking to some of the guys at IEEE about similar cloud
standardization efforts. As part of this, I've been asked to join the
Program Committee for International Workshop on Cloud Computing being
held with the IEEE International Symposium on Cluster Computing and
the Grid (CCGRID 2009) during May 18-21, 2009, in Shanghai, China. I
hope to unveil our efforts at this event. >
http://www.gridbus.org/cloud2009

I've also got some other standard related activities in the works, but
I'm not ready to publicly say what, just yet.

ruv

Sam Johnston

Aug 22, 2008, 6:45:50 AM
to cloud-c...@googlegroups.com
Afternoon all,

I'm glad I'm not alone in thinking standardisation efforts are premature. I have grave concerns about trying to toss a standardisation blanket over all the innovation currently taking place at a furious rate and believe that doing so now would be at best unnecessary and at worst dangerous (we all know what happened to grid computing when they started wandering down this path - see the WS-* thread if you need an example that's closer to home).

This is not to say I'm anti-standards; quite the contrary (I'm currently serving as an 'invited' expert for W3C, have contributed to OASIS most recently on ODF, etc.), it's just that we don't yet have enough cloud users, enablers & providers to churn out sensible ones, and we're doing just fine without them (in fact the absence of such standards creates opportunities for companies like Enomaly to act as translators in the interim).

As a concrete example, take data stores (SimpleDB, AppEngine's datastore, MS SSDS, etc.) - we're just now getting to the point where patterns are starting to emerge which could result in some form of standard interface, but this will best be done by putting the users, enablers and providers in a room together and seeing what comes out.

Compute cloud interfaces are also quite similar, and have common functions with legacy virtualisation that should be addressed. Ditto for storage. But in each case it is the people directly involved in these areas who should churn out standards, when a statistically significant quorum has been achieved.

Sam

Reuven Cohen

Aug 22, 2008, 11:17:46 AM
to cloud-c...@googlegroups.com

I'm also not totally sold on whether or not cloud computing is
ready for a cloud standard just yet. What I do think we need is a
reference implementation (Platform & Infrastructure) and a common
extensible API, "CloudVirt". This API may someday form the basis for a
standard, but in the meantime it gives us a uniform API to work
against. So whether you're using Google App Engine or Force.com,
GoGrid or EC2, Nirvanix or S3, you'll have a central point of
programmatic contact. I personally don't want to have to rewrite my
platform for every new cloud provider's API, which is exactly what
we're doing now.
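A uniform API of this kind usually ends up looking like an adapter layer. The sketch below is one possible shape; the method names and adapters are made up, standing in for real provider bindings rather than any actual "CloudVirt" spec.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Uniform programmatic contact point. Method names and the
    adapters below are illustrative; real bindings would call the
    respective provider's own API."""

    @abstractmethod
    def launch(self, image):
        ...

    @abstractmethod
    def terminate(self, instance_id):
        ...

class EC2Adapter(CloudProvider):
    def launch(self, image):
        return f"ec2:{image}"        # would call the EC2 API here

    def terminate(self, instance_id):
        return True

class GoGridAdapter(CloudProvider):
    def launch(self, image):
        return f"gogrid:{image}"     # would call the GoGrid API here

    def terminate(self, instance_id):
        return True

def deploy(provider, image):
    # Application code targets one API regardless of the cloud beneath.
    return provider.launch(image)
```

Swapping clouds then means swapping the adapter passed to `deploy`, not rewriting the platform.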

ruv

Ray Nugent

Aug 22, 2008, 12:15:54 PM
to cloud-c...@googlegroups.com
Hey Ruv,

I just don't see the big clouds being motivated towards a standard at this point (unless, of course, it's their own standard). It makes it too easy for customers to escape. Someone will make a killing tying disparate clouds together, and this will probably morph into the "standard".

Ray


Wayne

Aug 22, 2008, 12:28:04 PM
to Cloud Computing

>
> I'm also not totally sold on the whether or not cloud computing is
> ready for a cloud standard just yet. What I do think we need is a
> reference implementation (Platform & Infrastructure) and common
> extensible API. "CloudVirt"  This API may someday form the basis for a
> standard, but in the mean times gives us a uniform API to work
> against., so whether you're using Google App Engine or Force.com,
> GoGrid or EC2, Nirvanix or S3, you'll have a central point of
> programmatic contact.  I personally don't want to have to rewrite my
> platform for every new cloud providers API., which is exactly what
> we're doing now.

I like the idea of standardization for the very reason you suggest -
being able to migrate from one cloud (vendor) to another, federating
a standard internal cloud with provider clouds, etc.

Some other thoughts to add:

- Another area that seems to have been deprecated from the earlier
work in Globus and WS standards is privacy. Cloud adoption is
going to be impacted by privacy/trust concerns as well. This includes a
host of issues that are not well instrumented as yet in code, and
jurisdictional problems are particularly challenging. What happens
when a provider goes belly-up (http://www.npr.org/templates/story/
story.php?storyId=93841182)?
- The assessgrid.org work that Dr. Odej Kao is doing with regard to
SLAs and assessments of (grid) Clouds should be looked at, maybe as
an adjunct to ITIL.
- What about tiering strategies? Storage is commonly tiered in the
datacenter today, but what if you extend that to include a more
holistic yet modular tiering capability, such as tiered compute,
network, storage, and application?
- Promoting/deprecating clouds - for example, the internal datacenter
is the current primary or daytime resource; the cloud is failover or nighttime.
- Cloud neutrality? Will it be an issue for the enterprise – e.g. a
non-negotiable requirement?

-w

Reuven Cohen

Aug 22, 2008, 1:51:38 PM
to cloud-c...@googlegroups.com
On Fri, Aug 22, 2008 at 12:15 PM, Ray Nugent <rnu...@yahoo.com> wrote:
> Hey Ruv,
>
> I just don't see the big clouds being motivated towards a standard at this point (unless, of course, it's their own standard). makes it too easy for customers to escape. Someone will make a killing tying disparate clouds together and this will probably morph into the "standard".
>
> Ray

We're involved in a few large-scale cloud projects, and all the ones we're involved with are talking about standardization, Amazon being the obvious holdout. Although I have invited them to participate, they have not responded. I don't think anyone is foolish enough to think that large organizations are simply going to start outsourcing all their infrastructure to Amazon or any other cloud. It's going to be a hybrid model combining local and remote resources as needed. Those who realize this are going to do much better in the long term.

API standardization represents the opportunity to take cloud computing from a fringe, hype-driven technology to a mainstream, multi-billion-dollar market segment.

ruv

Reuven Cohen

Aug 22, 2008, 1:55:11 PM
to cloud-c...@googlegroups.com

My goal initially is to focus on the common management touch points. Things like SLAs, neutrality and privacy will need to be addressed separately.

ruv

Wayne

Aug 22, 2008, 1:57:58 PM
to Cloud Computing
I just had another thought regarding a standardized protection
mechanism for defunct cloud providers: perhaps an escrow requirement
that holds VMs and data in escrow.

-w

Utpal Datta

Aug 22, 2008, 2:21:35 PM
to cloud-c...@googlegroups.com
I think keeping at least the north-bound interface of this mid-layer
API constant will be a very important goal, so that all applications
written by cloud users will continue to work unchanged.

The south-bound interface of this mid-layer API will keep changing
for some time to come, as more "serious and substantial" cloud
providers (anything comparable to AWS) appear on the horizon.

But the large majority of cloud users can be protected from these
changes by adding cloud-provider-specific plug-ins at the bottom of
this normalized mid-layer,

i.e. creating a provider for each distinct cloud-provider.

An important hurdle will remain until there is convergence on a
single data-model paradigm. If the cloud providers
each use their own data model, providing a uniform north-bound API
will still be a big hassle and will not be worth doing.

For want of anything better, I would throw in my vote for CIM for this
uniform data model (if anyone will listen :-))

--utpal

Krishna Sankar (ksankar)

Aug 22, 2008, 2:27:27 PM
to cloud-c...@googlegroups.com

Ruv et al,

We need standardization of some sort. We can draw parallels between the current state of cloud computing and the early days of networking - independent islands of clouds with little interoperability, no standards to speak of and proprietary management interfaces. Naturally, as the domain matures (or in other words, for the domain to mature), it will follow a path that will unify the control and management plane.

 

In most of these cases, what we need is a declarative deployment and programmable model. How the underlying infrastructure implements them is out of the cloud consumers’ hands. For example we might say I need round robin between these instances and the load balancer figures out how. And when we add more instances or delete an instance, the load balancer should know what to do, without doing a CLI for every change. Most probably this is what we mean by Cloud standards.
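A sketch of that declarative idea: the consumer states the desired state, and a reconcile step computes what the infrastructure must do. The schema below is invented for illustration.

```python
# Declarative sketch: the consumer states *what* is wanted;
# the provider decides *how*. The schema here is invented.

desired = {
    "instances": {"image": "web-app", "count": 3},
    "load_balancer": {"policy": "round_robin"},
}

def reconcile(desired, running):
    """Return the actions needed to move `running` toward `desired`."""
    want = desired["instances"]["count"]
    have = len(running)
    if have < want:
        return [("launch", desired["instances"]["image"])] * (want - have)
    if have > want:
        return [("terminate", iid) for iid in running[want:]]
    return []
```

Adding or deleting an instance then just changes the declared count; the load balancer and the reconcile loop figure out the rest, with no per-change CLI work.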

 

Would the locus and trajectory of cloud computing take us through things like inter-cloud IM (XMPP, anyone?), peering standards, gossip and P2P protocols, facilities for classless VM migration across a hierarchy consisting of VMs-Clouds-Cloud Federation and so forth? Time will tell.

 

Cheers

<k/>

From my blog http://doubleclix.wordpress.com/2008/08/21/the-standard-cloud/. For some reason a few of my e-mails got eaten by Google. Maybe they will show up along with the single socks that I lost in my washing machine ;o)

 


Laurent Therond

Aug 22, 2008, 2:39:00 PM
to cloud-c...@googlegroups.com

Alas, most recent standards committees have turned good efforts into mush.
That is because most players participating in those standards committees
have only one objective: preserve their own competitive advantage and
posture as leaders.

Take WS-I Basic Profile: how many years were wasted to achieve what?
Interoperable request/reply over HTTP, with XML as a payload?

When pundits were busy shoving SOAP and the rest of the alphabet soup down
Amazon's throat, Amazon decided to go with REST and other less politicized
approaches. It served them well.
I believe they'll welcome standards when said standards prove to be
more than crippling tools "designed" by wanting latecomers.

(a bit too harsh and vitriolic...sorry)

Tross

Aug 22, 2008, 4:34:42 PM
to Cloud Computing
Virtual datacenter resources are a critical first step towards
standardization. While standards exist like DMTF's CIM, we're left
with a lot of holes to fill. IMHO, this is partially due to the
standards process (read SLOOOOOOW). The last thing I'd want to see in
this hype driven space is something that slows it down. Open Source
software on the other hand is a great way to move quickly and allows
for the emergence of de facto standards.

I firmly believe libcloud (or cloudvirt) is the right approach. It
has to be separate from all the other implementations, but seeded with
integration to cloud implementations like eucalyptus, enomalism, etc.
Perhaps you need to refactor Enomalism to separate the core cloud
implementation out and insert libcloud in the middle. Perhaps you're
already nearly there.

IMHO, the base API needs to start with virtual datacenter resources
covering virtual servers, virtual networks, and virtual storage. It's
easy to go too deep and too broad. I would resist getting into
anything running on the guests. I'd like to see an API that allows a
user to create a "Virtual Private Datacenter" including multiple
network zones, vpns, virtual NAS & SAN interconnected with virtual
hosts of various capabilities.

Some day we can come back and create a CIM profile that somewhat
matches the API created in libcloud. Who knows, someone may even want
to author a CML library in SML, but open source software is the
place to start IMHO ;-)
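To make the "Virtual Private Datacenter" idea concrete, here is a sketch of what such an API surface might look like. Every class and field here is hypothetical, not drawn from libcloud or any real project.

```python
# Hypothetical sketch of a 'Virtual Private Datacenter' API:
# network zones, VPNs, virtual storage and virtual hosts, as
# described above. All names and fields are invented.

from dataclasses import dataclass, field

@dataclass
class VirtualDatacenter:
    name: str
    zones: list = field(default_factory=list)
    vpns: list = field(default_factory=list)
    hosts: list = field(default_factory=list)
    storage: list = field(default_factory=list)

    def add_zone(self, zone_name):
        self.zones.append(zone_name)

    def add_host(self, zone_name, cpus, ram_gb):
        # A host must land in a zone that has been declared.
        if zone_name not in self.zones:
            raise ValueError("unknown zone")
        self.hosts.append({"zone": zone_name, "cpus": cpus,
                           "ram_gb": ram_gb})

vdc = VirtualDatacenter("demo")
vdc.add_zone("dmz")
vdc.add_host("dmz", cpus=2, ram_gb=4)
```

The API stays at the level of virtual datacenter resources and never reaches into the guests, as suggested above.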

Randy Bias

Aug 23, 2008, 8:40:03 AM
to cloud-c...@googlegroups.com
If by 'early days' you mean pre-1990 networking, I agree.  If by 'early days' you mean after that, I don't agree.

From my perspective, I'd say that we're well past the 1990 equivalent here.  The pre-1990 equivalent were the webhosting and co-location providers, true islands, with whom you had to deal individually, sign long-term contracts, etc.
By comparison, barriers to moving between current cloud providers are clearly lower and have less to do with technology and more to do with will/money/time/resource.  Anyone who built their infrastructure 'properly' on Amazon's EC2 to begin with should have little problem moving it to another cloud provider.  Anyone who used the AWS toolset, which encourages lock-in, likely hitched their wagon too tightly to Amazon and will pay the price.

Personally, that looks like an architecture and planning problem to me.

It's not like trying to take an application and port it between IP and IPX or Ethernet and Token Ring, or something else where the fundamental difference in the technology is huge.

Let me give a more concrete example.  Larger players who take down datacenter space will install, configure, and bring up 1,000+ servers in a few weeks (20-50 racks) in a new facility.  I've seen smaller players who took the time to build out their ops frameworks do 100 servers in a weekend.  They do this through proper tooling.  If you used cloud independent tooling and automation to begin with in your deployment to Amazon with tools like Puppet, cfEngine, and the like, you won't have any problems moving to another provider.

The problem isn't a lack of standards.  We have all kinds of 'standards' (e.g. the OS, the monitoring systems, DNS, etc. etc.).  The problem is that people are using vendor-provided tools that create lock-in.  If you build your own tooling, or use someone independent like CloudScale or HJK Solutions, then you won't be locked in.

Right now I can build a full multi-server stack and deploy it to EC2 or GoGrid and have it do the same thing.  That's not lock-in.  That's proper engineering.


--Randy

Randy Bias

Aug 23, 2008, 8:49:36 AM
to cloud-c...@googlegroups.com

On Aug 22, 2008, at 1:34 PM, Tross wrote:
> Virtual datacenter resources are a critical first step towards
> standardization.

Taking the datacenter paradigm to the cloud is a fundamental error in
my opinion. The cloud isn't a datacenter and I'm not sure you want it
to be. Besides which, if you look at where the datacenter seems to be
headed, it's more towards a cloud model. Large pools of commodity
hardware, largely throwaway. The datacenter is in the same position
as Big Iron in the late 90s. It's about to expire as a model for
building infrastructure.

In the future we'll see a clean schism between the physical datacenter
(run by completely different folks) and the 'cloud' (internal or
external) where virtualization is the enabling technology (server,
storage, and network virtualization) to completely disassociate the
folks who run the cloud from the hardware and the datacenter itself.
They will just run pools of resources. You can see this happening
everywhere right now.

> IMHO, the base API needs to start with virtual datacenter resources
> covering virtual servers, virtual networks, and virtual storage. It's
> easy to go too deep and too broad. I would resist getting into
> anything running on the guests. I'd like to see an API that allows a
> user to create a "Virtual Private Datacenter" including multiple
> network zones, vpns, virtual NAS & SAN interconnected with virtual
> hosts of various capabilities.

I think this brings a lot of older datacenter-centric ideas to the
table. Network zones and VLANs are out (too limiting). VDE and
similar tech will be in. If you want scale, then old-school polling-
based monitoring is out and distributed instrumentation is in. If you
want storage, NAS/SAN are out, and block devices on tap are in.

Let me give you an example for why this matters. NAS failover (NFS or
iSCSI) is painful or impossible to do properly. But once virtualized
and presented as a block device on demand, a la Amazon EBS, the
integration issues and management for the individual servers go away.
Now it's just a reliable piece of hardware that shows up when you want
it as a 'real' device.

> Some day we can come back and create a CIM profile that somewhat
> matches the API created in libcloud. Who knows, someone may even want
> to author a CML library in SML, but open source software is the
> place to start IMHO ;-)

I agree. Open source is the place to start and de facto standards
will emerge. My big thing is that I think datacenter-centric notions
are fundamentally flawed. We need new paradigms for the cloud.


--Randy

Paul Renaud

Aug 23, 2008, 10:49:51 AM
to cloud-c...@googlegroups.com
Sam's caution is well advised.  One of the important lessons learned from the success of the IETF and failure of ISO communication standards initiatives is the importance of basing a standard on a proven implementation as opposed to innovating via standards.
 
However, I do believe that it is possible to make some standardization progress in some areas.  One of these is the customer management interface of whatever cloud you offer.  This interface should be distinct from the internal management mechanism of the cloud (which ultimately is part of the mgmt provider fabric behind the customer mgmt interface) as well as distinct from the cloud's service interface(s) to actually do the business of the cloud (e.g. storage interface, etc.).


From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Sam Johnston
Sent: August 22, 2008 6:46 AM
To: cloud-c...@googlegroups.com

Utpal Datta

Aug 23, 2008, 12:55:29 PM
to cloud-c...@googlegroups.com
CIM is a technology for data modeling. How does that make any
standardization process any faster or slower?

Let's not mix up a technology concept (CIM) with a process concept
(Open Source).

We have to accept that the resources in the cloud and resources in the
user's local data center will coexist for a long time (most likely
forever).

So far CIM has done well in standardizing storage management (SMI-S). CIM is
also being used in SMASH (for both physical and virtual servers).
It will help speed up the process (and will not be SLOOOOW) if we do
not spend time in devising a new modeling paradigm for Cloud resources
(just because we can) which will be different from the modeling
paradigm for the local resources and then spend even more time trying
to integrate the two.
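Utpal's point, that one modeling paradigm should cover both cloud and local resources, can be illustrated with a small sketch. This is not CIM itself, just a hypothetical example in its spirit (all names invented here): a single resource class describes a server or volume regardless of where it lives, so one inventory routine handles both without integration glue.

```python
# Hypothetical sketch: one model for local and cloud resources alike,
# in the spirit of CIM's uniform classes. Names are illustrative.

from dataclasses import dataclass, field


@dataclass
class ManagedResource:
    name: str
    resource_type: str   # e.g. "server", "storage-volume"
    location: str        # "local-datacenter" or a cloud provider
    properties: dict = field(default_factory=dict)


def inventory(resources):
    """Group resources by type regardless of where they live, so one
    management path serves both local and cloud resources."""
    by_type = {}
    for r in resources:
        by_type.setdefault(r.resource_type, []).append(r.location)
    return by_type
```

With a shared model like this, no second paradigm (and no later integration effort) is needed when the same resource type appears on both sides.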

--utpal

Krishna Sankar (ksankar)

Aug 23, 2008, 5:31:38 PM
to cloud-c...@googlegroups.com
|My big thing is that I think datacenter-centric notions
|are fundamentally flawed. We need new paradigms for the cloud.
|
<KS>
Randy, agreed! I am of the opinion that we do need a slightly
different architectural, programming and deployment model for a cloud
infrastructure. Granted, it is not far off from current application
architectures, but not the same. I do not think we really can seamlessly
move current applications. Most probably the cloud migrations will be
staged.
</KS>

Cheers
<k/>

Andrew Rogers

Aug 24, 2008, 2:21:36 AM
to cloud-c...@googlegroups.com
--- On Fri, 8/22/08, Sam Johnston <sa...@samj.net> wrote:
> As a concrete example, take data stores (SimpleDB, AppEngine's
> datastore, MS SSDS, etc.) - we're just now getting to the point
> where patterns are starting to emerge which could result in
> some form of standard interface, but this will best be done by
> putting the users, enablers and providers in a room together
> and seeing what comes out.


To take your example further, the SimpleDB/AppEngine models are unnecessarily simple and significantly limited, so even if they converged I would question the value of standardizing on primitive patterns. For example, I am working with some new algorithm technologies at the moment that can index-organize arbitrarily high-dimensional spaces on massive clusters, which translated means that something resembling conventional relational models can be supported on Google-scale clusters plus some additional features that are currently lacking in conventional systems as well (like scaling of geospatial datasets).

While you could standardize on the trivial "distributed B-tree" model the above data stores use, I think it would be premature given how fast many of these models are being obviated by better technology.
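The "distributed B-tree" pattern Andrew describes can be made concrete with a toy sketch (purely illustrative, not any real datastore's API): items are addressed by key, and the only efficient query is a range scan over a single sorted index, which is exactly what makes the model simple but limited.

```python
# Hypothetical sketch of the primitive pattern SimpleDB/AppEngine-style
# stores share: a single sorted index over keys. Illustrative only.

import bisect


class SortedKVStore:
    def __init__(self):
        self._keys = []    # kept sorted, standing in for a B-tree
        self._items = {}

    def put(self, key: str, item: dict) -> None:
        if key not in self._items:
            bisect.insort(self._keys, key)
        self._items[key] = item

    def get(self, key: str):
        return self._items.get(key)

    def scan(self, start: str, end: str):
        """Range scans over key order are the only efficient query,
        which is what keeps the model simple but limited."""
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_right(self._keys, end)
        return [(k, self._items[k]) for k in self._keys[lo:hi]]
```

Anything beyond this (joins, secondary indexes, high-dimensional queries) has to be simulated on top, which is the limitation Andrew is pointing at.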

A reality is that existing cluster technology sucks pretty badly in many cases but it is being rapidly improved, so standardizing on patterns predicated on medieval and poorly suited technology will serve us poorly over the long-term. At the very minimum these standards will get subverted by de facto standards that reflect what people are actually doing, which has killed more than a couple standards.


In selecting standardization targets, you generally want a target that will not significantly change or become obsolete in the near term. I think some cloud computing aspects *are* pretty static, but like your apparent impression of cloud databases, that may reflect my naive understanding of the state of the technology. While it would be convenient if all of this was standardized, that convenience will not return value if the target is constantly moving. At this stage, I suspect just about every aspect of cloud technology is a moving target.

Cheers,

Andrew


Sassa NF

Aug 25, 2008, 8:34:38 AM
to cloud-c...@googlegroups.com
Shouldn't we start with use-cases we are trying to solve?

Developing a standard is not only about getting it ratified, but about
realizing what you've got to do.

What is the target audience? The cloud users? Or providers?


2008/8/21 Reuven Cohen <r...@enomaly.com>:

What is the competency of a federation? Why would a cloud user care if
there is a federation or a single provider?


> So let me take a stab at defining it.
>
> Cloud federation manages consistency and access controls when two or
> more independent geographically distinct clouds share either
> authentication, files, computing resources, command and control or
> access to storage resources. Cloud federations can be classified into
> three categories: peer-to-peer, replication, and hierarchical. Peer-to-peer
> seems to be the most logical first step in creating a federation
> spec. Protocols like XMPP, P4P and Virtual Distributed Ethernet may
> make for good starting points.
>
> * Distributed Network Management
> The need for a distributed and optimized virtual network is an
> important aspect in any multi-cloud deployment. One potential
> direction could be to explore the use of VPN or VDE technologies. My
> preference would be to use VDE (Virtual Distributed Ethernet). A
> quick refresher: a VPN is a way to connect one or more remote
> computers to a protected network, generally tunnelling the traffic
> through another network. VDE implements a virtual ethernet in all its
> aspects: virtual switches, virtual cables. A VDE can also be used to
> create a VPN.

Is this for the cloud user to use to shape their own network? Should
they be aware of the mapping of virtual hardware to physical hardware?
Will they be able to define a network without knowing this mapping?
Otherwise, looks like you are "standardising" the wrong thing about
the cloud, i.e. the implementation.

Isn't this addressed by hardware and VMs? Why should a cloud user be
bothered about this?

Sassa

Vinayak Hegde

Aug 24, 2008, 1:37:58 AM
to cloud-c...@googlegroups.com
On Fri, Aug 22, 2008 at 11:21 PM, Reuven Cohen <r...@enomaly.com> wrote:
> We're involved in a few large scale cloud projects, and all the ones we're
> involved with are talking about standardization. Amazon being the
> obvious holdout. Although I have invited them to participate, they have not
> responded. I don't think anyone is foolish enough to think that large
> organizations are simply going to start outsourcing all their
> infrastructure to Amazon or any other cloud. It's going to be a hybrid model
> combining local and remote resources as needed. Those who realise this are
> going to do much better in the long term.
>
> API standardization represents the opportunity to take cloud computing from
> a fringe, hype-driven technology to a mainstream multi-billion dollar
> market segment.

I would tend to agree with you. All the pieces seem to be coming
together for cloud computing.
I would say we are in the trough of two elephants.
(http://students.depaul.edu/~jabsher/apoc_eleph/apoc_eleph.html)

-- Vinayak
--
http://www.linkedin.com/in/VinayakH
