Examining Cloud Compatibility, Portability and Interoperability


Reuven Cohen

Feb 27, 2009, 5:16:54 PM
to cloud...@googlegroups.com
Over the last few months there has been growing momentum around cloud
interoperability. Lately it seems that a few in the industry have
gotten a little confused about the differences between Cloud
Compatibility, Portability and Interoperability. So I thought it might
be time to take a closer look at these terms, how they relate, and how
they differ.

First, let's start with Cloud Interoperability. As I've described
before, Cloud Interoperability refers to the ability of multiple
cloud platforms to work together, or inter-operate. A key driver of an
interoperable cloud computing ecosystem is eliminating what I call
proprietary "API propagation", whereby each new cloud service provides
its own unique set of web services and application programming
interfaces. Simply put, the goal of Cloud Interoperability is to make
it easier to use multiple cloud providers that share a common set of
application interfaces as well as a consensus on the terminology /
taxonomies that describe them.
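
To make "a common set of application interfaces" concrete, here is a
minimal sketch in Python; the class and method names are hypothetical,
not any real provider's API:

    from abc import ABC, abstractmethod

    class CloudProvider(ABC):
        """One shared programmatic surface instead of one API per provider."""

        @abstractmethod
        def launch_instance(self, image_id, size):
            """Start a compute instance and return its id."""

        @abstractmethod
        def terminate_instance(self, instance_id):
            """Stop and release a compute instance."""

    # Tooling written against CloudProvider works with any provider
    # that implements it -- no per-provider rewrite.
    def scale_out(provider, image_id, count):
        return [provider.launch_instance(image_id, size="small")
                for _ in range(count)]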

The next logical question is "how?" which brings us to Cloud
Compatibility & Portability, something I would describe as a subset of
Interoperability. What I find interesting about Cloud Computing is
that, unlike traditional CPU-centric software development, in a cloud
computing infrastructure the underlying hardware is typically
abstracted to the point that it no longer matters what type of hardware
is powering your cloud. Cloud Computing is about uniformity -- all
your systems acting as one. Within this vision for the cloud we now
have the opportunity to uniformly interact with a virtual
representation of the infrastructure stack, one that can look and act
any way we choose (see my Multi & Metaverse posts). Whether you're
using a component-centric virtual environment (IaaS) or an application
fabric (PaaS) is completely secondary. What matters now is ensuring
that an application has a common method of programmatic interaction
with the underlying resources & services. More simply, Cloud Compatibility
means your application and data will always work the same way
regardless of the cloud provider or platform, internally or
externally, open or closed.

Lastly there is "Cloud Portability": the ability of application
components to be easily moved and reused regardless of the provider,
location, operating system, storage, format or API. The prerequisite
for portability is a generalized abstraction between the application
logic, the data and the system interfaces. When you're targeting
several cloud platforms with the same application, portability is the
key issue for development & operational cost reduction, as well as a
critical requirement when trying to avoid cloud lock-in.

One such example that addresses several of these concepts is the Open
Virtualization Format (OVF). The format is an "open" standard proposed
by the DMTF for packaging and distributing virtual appliances or more
generally software to be run in virtual machines. The proposed
specification describes an "open, secure, portable, efficient and
extensible format for the packaging and distribution of software to be
run in virtual machines". The OVF standard is not tied to any
particular hypervisor or processor architecture. Like most standards,
though, it is only useful if all platforms / clouds actually support
it. That's where a "semantic abstraction layer" comes in handy: with
semantic cloud abstraction it doesn't matter whether every provider
implements the standard, because the layer acts as a unifying agent,
adapting to the similarities all cloud APIs share while ignoring their
unique aspects.
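
A rough sketch, in Python, of how such a layer might work -- the
provider names and verb mappings below are invented for illustration,
not real APIs:

    # Semantic abstraction layer (sketch) -- provider names and verb
    # mappings are invented for illustration.
    COMMON_VERBS = {
        "provider_a": {"start": "run_instances", "stop": "terminate_instances"},
        "provider_b": {"start": "deploy_vm", "stop": "destroy_vm"},
    }

    def call(provider_name, client, verb, **kwargs):
        """Translate a common verb into the provider's native call."""
        try:
            native = COMMON_VERBS[provider_name][verb]
        except KeyError:
            # Unique, provider-specific operations simply have no
            # mapping and are ignored by the abstraction layer.
            raise NotImplementedError("%s not mapped for %s" % (verb, provider_name))
        return getattr(client, native)(**kwargs)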

I believe that without a common consensus first, and eventually a set
of cloud standards, the cloud ecosystem will likely devolve into a
series of proprietary clouds that give users little hope of ever
leaving. Cloud customers will be locked in to a particular cloud
vendor, unable to use another cloud without substantial switching
costs. At the end of the day, this is the problem we're trying to
solve at the CCIF.

I'm interested in hearing others' opinions on Cloud Compatibility,
Portability and Interoperability and how they differ.

Original Post >
http://www.elasticvapor.com/2009/02/examining-cloud-compatibility.html
--

Reuven Cohen
CCIF Instigator
www.cloudforum.org

Gabriel Kent

Feb 28, 2009, 9:45:11 AM
to Cloud Computing Interoperability Forum (CCIF)
Nice post Reuven....

I am for whatever puts pressure toward increasingly generic grids... we
need processing to look more like electricity than not.

Right now I see a killer combo using 3Tera's ADL:

ADL to applogic (theirs)
ADL to ec2
ADL to appengine

Of course their UI is useful... more so if they open it ;)

If the transcoding was graceful enough, the above could really feel
like deploying over a generic grid.

Then eventually you get to create that fun script that expands and
contracts based on real-time market GPU prices.

-----
OT funny thought: imagine GPU-backed currency.

Greg Pfister

Feb 28, 2009, 11:30:22 AM
to Cloud Computing Interoperability Forum (CCIF)
Let's back up to interoperability. Do you mean:

1) App 1 on cloud 1 can talk to app 2 on cloud 2, using normal
standard techniques, so together they accomplish something?

or

2) App 1 on cloud 1 can expand into cloud 2, so it's effectively one
app running across two clouds? -- perhaps with different compiles /
libraries / whatever for each cloud.

Or is it something I haven't thought of? If it's one of those two, I
think it backs into a corner where interoperability is either trivial
or impractical.

Type 1 is basically trivial, since nobody sane would sabotage the
ability to talk to other sites in a standard way. The way to talk
within a cloud may not be that standard way, but at least the standard
cross-Internet communications methods aren't going to be prohibited.

Type 2 requires compatible inter-node communication in both clouds,
plus the ability for that communication to go outside each cloud, plus
the ability to access cloud 1's storage from cloud 2, plus maintaining
security, etc. I'm not sure type 2 is practical short of standardizing
a large chunk of everything everywhere.

Greg Pfister
http://perilsofparallel.blogspot.com/

JP Morgenthal

Feb 28, 2009, 11:59:37 AM
to cloud...@googlegroups.com
Is everyone that is participating in the CCIF aware of Amoeba?

http://74.125.47.132/search?q=cache:xa8oZxL8U88J:www.cs.vu.nl/pub/amoeba/Intro.pdf+amoeba+distributed+operating+system&hl=en&ct=clnk&cd=2&gl=us

http://en.wikipedia.org/wiki/Amoeba_distributed_operating_system

I have a concern that if this project goes off the rails in any one
particular direction, then it will be re-inventing Amoeba.

My belief is that initiatives like this will go right up to the
horizon, see an Amoeba-like outcome, and dissipate. Why? Because once
you see the inevitable outcome for a project of this ilk, it loses its
steam.

That said, I believe there's some good work that can occur that
provides some benefit for building cross-vendor cloud solutions (e.g.
database-as-a-service + platform-as-a-service from 2 different
providers), although, for this to occur, the DB service would have to
offer some really unique capabilities not available elsewhere, like
ultra-fast microcubes for biz intelligence on the fly. Still, I can
envision this being handled at the application layer, and thus simply
being a matter of application design, not cloud interop.

Of note, if we did embed Amoeba interfaces into the Hyper-V layer, we
would have the basis for moving processes around between clouds regardless
of the provider. I'm not recommending this, but I am just tracing the
line of thinking out beyond the visible horizon to illustrate where
this could lead.

JP

-----------------------------------------------
JP Morgenthal
cell : 703-554-5301
email: jpmorg...@gmail.com
email: m...@jpmorgenthal.com
twitter: www.twitter.com/jpmorgenthal
blog: www.jpmorgenthal.com/morgenthal

C Wegrzyn

Feb 28, 2009, 12:07:42 PM
to cloud...@googlegroups.com
I am. There was also a program at MIT called "Trix" that was a network
operating system.

Chuck Wegrzyn

C Wegrzyn

Feb 28, 2009, 12:13:16 PM
to cloud...@googlegroups.com
My personal view is that thinking in terms of threads and processes in
a cloud is the wrong model. What I write might need threads and
processes, but it runs within the framework of a VM. What I want is a
VM that can move from cloud to cloud.

Chuck Wegrzyn


JP Morgenthal

Feb 28, 2009, 12:33:44 PM
to cloud...@googlegroups.com
Chuck,

That is a good middle-of-the-road model, along the lines of what I was
alluding to when I wrote "going off the rails" in my original email.
Obviously, ESX can do this today across ESX hypervisors. But that's
extremely resource-intensive. I would prefer to eat up only the
resources I need if I'm paying for what I use.

The bigger issue for me is: what are the reasons for cloud interop
from a business perspective? Perhaps there's a glitch in my thinking,
so I'll throw it out here for you all to beat up on:

1) I want to scale my application to support peak usage periods, e.g.
end of month processing. Why would I go to another cloud vendor for
that extra scale?
2) I want the ability to change cloud vendors on the fly to foster
competition and keep prices low. Easily enough done by acquiring IaaS
services and acquiring hardware with specific OS characteristics.
[Note: I don't see specialty hardware requirements being pushed out to
the cloud, so we're talking commoditized hardware / software usage for
99% of cloud usage]
3) I am acquiring a particular service because it is unique. You
will be locked into this vendor because the offering is unique, so
other cloud offerings are relatively useless in this scenario.
4) I want to join a unique service offering (SaaS/PaaS) with my
already existing commodity cloud application. For example, I may
acquire a unique security service to front-end my e-commerce
application that is in the cloud. This is a configuration issue that
may require my e-commerce platform to connect to alternate
authentication services--a common multi-tiered design that we see in
many data centers today, except now it's billed based on usage instead
of owning it.

So, am I missing a business use case that indicates that cloud lock-in
would occur unnaturally (e.g. #3 above)?


JP

C Wegrzyn

Feb 28, 2009, 1:02:22 PM
to cloud...@googlegroups.com
JP,

You are right that you don't want cloud vendor lock-in. Avoiding it is
the only way to apply pressure to keep things competitive.

Your question about needing to scale cloud resources by moving to
another provider is interesting. It got me thinking about SLAs and
performance. Cloud providers will only ever have finite resources,
even if "big". It could be the case that your provider just doesn't
have the instantaneous resources you need. At that point you want to
be able to move part of your operation to another provider.

Krishna Sankar (ksankar)

Feb 28, 2009, 1:00:17 PM
to cloud...@googlegroups.com
Good discussions, JP. One of those rare occasions where I agree with you ;o)

a) Yep, we should look for business cases.
b) One use case would be DR and redundancy. To reduce the risk of a single source, companies need at least two sources. For example, if an outage happens that is specific to a cloud provider - like what happened to AMZ or Gmail - businesses need to continue.
c) You are right, we are tending towards a cloud OS, and that is not practical. That is my major peeve with Reuven's blog. I will jot down a few points by this weekend.
d) As Chuck says, if we start talking about threads we are way off the mark.

Cheers
<k/>


JP Morgenthal

Feb 28, 2009, 1:10:35 PM
to cloud...@googlegroups.com
See below:


On Sat, Feb 28, 2009 at 1:00 PM, Krishna Sankar (ksankar)
<ksa...@cisco.com> wrote:
>
> Good discussions, JP. One of those rare occasions where I agree with you ;o)

Ugh! You're killing me, Bro!

>
> a)      Yep we should look for business cases.
> b)      One use case would be : DR and redundancy. To reduce risk of single source, companies need at least two sources. For example if an outage happens - like what happened to AMZ or Gmail -, which is specific to a cloud provider, businesses need to continue.

Yes, Yes & Yes. I missed this one completely and boy does it make
sense. Although, my experience has been that outsourced data centers
go to facilities with DR built in. This is a bit dangerous since
there's no "auto failover" as you would expect within a single cloud
provider. You would also be responsible for spinning up the DR site
if there was not an active/active relationship; and if there was an
active/active relationship, chances are you've taken the steps
necessary to bring your app up in each cloud provider's environment.
Portability may have been helpful with this last task, but interop may
be helpful for indicating a DR event.

> c)      You are right, we are tending towards a cloud OS, and that is not practical. That my major peeve against Reuven's blog. I will jot down a few points by this weekend.
> d)      As Chuck says, if we start talk about threads we are way off the mark

Chuck's point about getting additional juice when your cloud provider
is out of steam may be a bit far-fetched: how would you know that your
cloud provider is out of steam? It's not like they're going to
advertise it, and if they're a good cloud provider, this should be
transparent to you.

C Wegrzyn

Feb 28, 2009, 1:12:31 PM
to cloud...@googlegroups.com
What I want to see is a VM that is movable. That means I need to
connect to a real network, and perhaps a "virtual network" that spans
hosts and cloud providers (my virtual network should be able to go
anywhere I want it).

Outside of the network, I don't think it really needs anything else,
other than quota-like things. After all, the "local" disk is nothing
but some file, and all the other disks can be attached via mounts of
remote file systems.

That is a pretty simple system to build out.

Chuck

C Wegrzyn

Feb 28, 2009, 1:17:52 PM
to cloud...@googlegroups.com
JP,

I think as part of every contract to use a cloud provider there has to
be an SLA in place (if you don't have one, you are really at their
mercy!). So, while the VM won't know (maybe), there has to be some
meta-management support to monitor the SLA and invoke the movement.
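
A rough sketch of what that meta-management loop might look like -- the
metric names, thresholds and migrate() hook below are all stand-ins,
not any vendor's real interface:

    import time

    def meets_sla(uptime, latency_ms, sla):
        return (uptime >= sla["min_uptime"]
                and latency_ms <= sla["max_latency_ms"])

    def watchdog(get_metrics, sla, migrate, max_violations=3, interval_s=60):
        """Monitor the SLA and invoke the movement after repeated misses."""
        violations = 0
        while True:
            uptime, latency_ms = get_metrics()
            violations = 0 if meets_sla(uptime, latency_ms, sla) else violations + 1
            if violations >= max_violations:
                migrate()  # e.g. move the VM to the backup provider
                violations = 0
            time.sleep(interval_s)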

Of course, it might be possible for cloud providers to have an
agreement amongst themselves to provide backup and movement, but I
wouldn't count on it. And it is still an SLA agreement.

But I don't know if I want my cloud provider doing the movement when
they detect over-scheduling. It would be more to my benefit to know when
it happens and deal with it. (Having been an ISP I can tell you that a
great many of them would add users without adding extra backbone
connections!).

Besides disaster recovery, I can also imagine you might want your
application to follow the sun.

Krishna Sankar (ksankar)

Feb 28, 2009, 1:23:14 PM
to cloud...@googlegroups.com
JP,
It is not that they are running out of steam, but that they cannot meet the set of declarative policies that one requires at that instant.
For example, if the Government of CA has a policy which says processing should happen in the US, and AMZ does not have the required resources, they have to say so. Remember there might also be other policies, like data and compute adjacency, network load-balancing resource requirements, or even VLAN and other security stuff.
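
A minimal sketch of that kind of declarative check, with invented
policy fields and provider capabilities:

    policy = {"jurisdiction": "US", "data_compute_adjacency": True}

    providers = [
        {"name": "cloud_a", "jurisdiction": "US", "data_compute_adjacency": True},
        {"name": "cloud_b", "jurisdiction": "EU", "data_compute_adjacency": True},
    ]

    def eligible(provider, policy):
        """A provider qualifies only if it meets every declared policy."""
        return all(provider.get(k) == v for k, v in policy.items())

    candidates = [p["name"] for p in providers if eligible(p, policy)]
    # -> ['cloud_a']; cloud_b "has to say so" by failing the check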

Bert Armijo

Feb 28, 2009, 2:38:40 PM
to cloud...@googlegroups.com
IMHO, ready portability of workloads changes the nature of SLAs. Most SLAs today are mere credits, but that does nothing for application availability. Cloud SLAs, on the other hand, can involve definitive actions.

For example, our users already have the ability to script the movement of applications between providers with a few lines of PHP, and the trigger can be any data the script has access to. At the moment, this requires accounts with both providers, but in the future that may change.
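
In spirit the trigger script really is tiny. A Python rendering of the
idea (the functions below are stand-ins, not 3Tera's actual API -- the
trigger can be any data the script can read):

    def maybe_move(app, primary, backup, healthy, start_on, stop_on):
        """Shift an application to a backup provider when the trigger fires."""
        if healthy(primary):
            return primary
        start_on(backup, app)   # assumes an account with the backup provider
        stop_on(primary, app)
        return backup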

JP Morgenthal

Feb 28, 2009, 2:52:06 PM
to cloud...@googlegroups.com
This is a good use case that needs to be captured for sure. You didn't
clarify whether all the demand would get shifted to another provider
or only a portion of it; the latter would certainly make the use case
for interop more attractive.

If solely for DR purposes, however, my experience is that DR sites are
usually minimally supported by data replication to a cold site and
then activation of the cold site if the hot site goes down.

Just because there's cloud interop in place doesn't mean that this
architecture would not continue, but it would simplify the
configuration of a DR site. That being said, the biggest issue with
the DR site is still going to be a reliable connection between the two
sites and replication software that keeps the time lag minimal.
Thus, I don't see this being a dynamic event but a planned one, which
brings us back to the use case where the ability to configure and set
up a DR site keeps prices low and increases competition.


-----------------------------------------------
JP Morgenthal
cell : 703-554-5301
email: jpmorg...@gmail.com
email: m...@jpmorgenthal.com
twitter: www.twitter.com/jpmorgenthal
blog: www.jpmorgenthal.com/morgenthal




tluk...@exnihilum.com

Feb 28, 2009, 6:46:43 PM
to cloud...@googlegroups.com
Has CCIF developed and published working definitions of Cloud "Compatibility", "Portability" and "Interoperability"?

On one hand it appears that collectively we share a belief that Cloud Portability, Interoperability and Compatibility are necessary to mitigate risks and minimize financial losses to organizations moving to and from the Cloud, and to help protect and ensure the continuation of their business operations when (for any reason) a Cloud-hosted solution must be relocated.

But on the other hand, I have a growing sense that as individuals we all have very different ideas about how we should execute on this shared vision. I finally had to make this post after reading Reuven's earlier post, which included the following comment:


> What I find interesting about Cloud Computing is that
> unlike traditional CPU centric software development, in
> a cloud computing infrastructure the underlying hardware
> is typically abstracted to a point that it no longer
> matters what type of hardware is powering your cloud.

I haven't had to think about the processor since the early 80's, when I wrote code in assembly language for (what was then) Advanced Netware. And I don't see that it is in any way an exclusive property of the Cloud that "the hardware no longer matters". That's been true for almost every compiled language for decades; it was certainly the whole idea behind the C programming language, created way back when by Dennis Ritchie and popularized by Kernighan and Ritchie's book.

Please note this is not a criticism of Reuven's post; I am only using it as an illustration of what I see as an underlap of our individual visions of and perspectives on the very things that we're discussing in this group, and attempting to define in this thread. And since I realize that everyone in this group is hyper-busy, I'm going to try to take just a bit of your valuable attention by looking at only one of the three: Portability.

Let's say that I write one application for a client using Salesforce's APEX language and Multitenant Data API, and another for the same client using Intuit's development platform on Quicken. Since both solutions were developed for a single client using a common business and data model, and because communication between them relies on standard XML and HTTP messaging, from a logical and practical standpoint they've been made Interoperable and Compatible.

Portability however is a different story. There's no way to move the Intuit application to Salesforce's Cloud or the Salesforce application to Intuit's Cloud without a complete rewrite. And this was my motivation for opening the discussion "How standards could have reduced the challenges that Coghead's clients faced.." (Thanks to Simon for his thoughtful response.. ;)

Now, if we took a minimalist approach and defined "Portability" as simply "the ability to 'port'" then I dare say that almost everything is already Portable. Given enough time and money I can "port" that APEX application to any other Cloud provider by completely rewriting it. Clearly not a good working definition of Portability.

So while Einstein would agree that we should not define Portability "too simply", we also have to avoid being too narrow and detailed. For example, I'll have to admit to everyone here that, while it is often used in CCIF conversations, I can't really imagine the word "hypervisor" ever being uttered by a group of software architects and developers discussing Cloud Portability in the context of how to avoid a total rewrite when moving an application between Coghead, Salesforce and Intuit.

But is that not a story about, and use case for, Portability?

So, getting back to the point and the question: Does the CCIF group have available clear, documented working definitions of Portability, Interoperability and Compatibility? If the answer is "Yes", I would really appreciate directions to these resources so that I can post more productively in the future, and if the answer comes back "No" or even "Not really.." then I wonder how we can really be effective or successful without these tools?

And please accept my sincere appreciation for your sticking with me to the end of this post. ;)

Cheers,
Tom






Gabriel Kent

Feb 28, 2009, 7:50:44 PM
to Cloud Computing Interoperability Forum (CCIF)
"Like most exurb data-commuters, Pollack rented the standard optical
links: Bell, Boeing, Nippon Electric. Those, together with the local
West Coast data companies, gave him more than enough paths to proceed
with little chance of detection to any acceptable processor on Earth.
In minutes, he had traced through three changes of carrier and found a
place to do his intermediate computing. The comsats rented processor
time almost as cheaply as ground stations, and an automatic payment
transaction (through several dummy accounts set up over the last
several years) gave him sole control of a large data space within
milliseconds of his request." (True Names - http://books.google.com/books?id=p_7DwC8Rwk4C)

The generic grid trend is important because computation is
increasingly personal. Large gov, edu and com nets will benefit, to be
sure, but individual usage in aggregate will outpace that kind of
usage... as it is today, the vast majority of computation lives with
end-users.

Again, we need processing to work more like an electricity grid than
not, and app developers need the equivalent of a 3-core power cord to
make life easy.

Greg Pfister

Mar 1, 2009, 10:35:11 PM
to Cloud Computing Interoperability Forum (CCIF)
Holy History, Batman! I haven't heard Amoeba referred to in at least
10 years.

I used to really believe in systems like that. In fact, there's a
whole chapter in my book (published 1997) explaining why single system
image (at the level of the kernel API) is the best, right, clearly
obvious thing to do.

But time passed, and it didn't take over the world, so something's
wrong -- there are obviously flaws in the concept -- which I finally
figured out, and published in a multi-post series on my blog, starting
here:
http://perilsofparallel.blogspot.com/2009/01/multi-multicore-single-system-image.html

It doesn't specifically mention Amoeba -- thanks for reminding me --
but it does mention other similar systems, particularly Locus, which
I had the most involvement with.

The capsule summary of "why not," quoting from the end of the
concluding entry: (a) you can't sell it; (b) it's at right angles to
the vastly dominant application programming model; (c) it scales less
well than many applications you would like to run on it.

("You can't sell it" is backed by focus group research. Take a look.)

Oh, and the second post in the series, describing what this notion is
all about, is I think enough to indicate that none of the kinds of
discussion going on here is likely to turn into something like that.
It really is a completely different concept.

Greg Pfister
http://perilsofparallel.blogspot.com/


Stuart Charlton

Mar 2, 2009, 1:33:36 AM
to cloud...@googlegroups.com
Hi Greg,

You've inspired a tl;dr post from me. ;-) I've blogged this here:
http://www.stucharlton.com/blog/archives/000582.html

Contents follow:

I tend to think of interoperability as a gradient.

The old industry stalwart from the 1990's is what I'd call "runtime
interoperability", wherein you could write a Java EE application,
deploy it on a Java EE application server, and (with a questionable
amount of tweaking), get it to operate. SQL was another attempt at
this, with mixed success. The later CORBA standards tried too, with
the Portable Object Adapter (POA). And clearly, the ISO/ANSI C
runtime libraries have been successful, as have many other programming
libraries.

The other angle of interoperability, which grew up in the 2000s, is
what I'd call "protocol interoperability" -- an approach that, at
first anyway, only a network engineer could love. Most of the *TP's on
the internet take this approach, where the "network" comes first and
dictates the pattern of interaction; the "developer" and their
desires or productivity are secondary.

With cloud computing, we're currently going through the age-old
discovery of "what form of interoperability makes sense?" --
especially given that we're dealing with networked applications
(indicating a need for "protocol interoperability") but also with
complex configuration & resource dependencies for security,
scalability, etc. (an area where "runtime interoperability" usually
plays).

Starting Observation: Microsoft Has A Clue.

Windows Azure is trying to balance these approaches to
interoperability. For example, .NET Access Control Services allow
you to federate identity between your own Active Directory and
Azure. This is all just Active Directory Federation Services (ADFS)
and using the WS-Federation "standard"; something you could do with
OpenSSO too, for example, for over a year. But they'll probably make
it easier if you stick within the .NET / Windows world.

A similar case could be made with their .NET Service Bus, as a way
of enabling Windows Communication Foundation and Biztalk applications
span Windows Azure and private deployment(s). This isn't just a pipe
dream, either, they're actively doing this with the early Azure
releases.


What makes this work is that .NET is already a widely used platform in
private data centers, and that .NET is a single-source runtime.
Now, an astute observer may exclaim, "but that's not interoperable!
Where is the multi-vendor ecosystem!?" At which point we have to ask
ourselves, what's the scope of desired interoperability?

Is it:

- A vendor ecosystem of interoperable runtimes? Ponder the success
and market results of SQL, Java EE, etc. before wishing for this.
Where did they make a difference? (They did make a difference, but
perhaps not where one would intuitively think.)

and/or

- The ability to enable multiple providers to host a runtime and
enable interoperable "services" (e.g. identity, data, process, etc.)
across them?

I suspect the latter is more readily attainable, and likely higher
value, than the former.

So.... What are the alternatives for a "hybrid cloud" platform to .NET
and Azure?

- Force.com APEX might work if they invested in private deployments --
not likely.
- There's Java, though Sun, IBM and Oracle haven't been doing much
there yet.
- There's EngineYard starting down the Enterprise Ruby on Rails path.
- Google perhaps heading down the Enterprise Python path (also not
likely)

and of course, everyone's favorite...
- Infrastructure as a Service, where you could write your
infrastructure in Erlang and OCaml for all your cloud provider cares
(so long as you don't use multicast ;-)


In this last case, runtime interoperability would require a lot of
"roll your own" configuration management, integration, and
interoperability. Or you could rely on...

- So-called "Cloud Servers" (e.g. CloudSwitch, 3Tera, Elastra, etc.)

.... which give you ways to help craft models & designs &
orchestrations that help you with configuration management,
integration, policy, interoperability, and governance. In essence this
is just what the hybrid PaaS guys above are doing: constraining the
problem space to gain some level of deployment flexibility. The
difference is that cloud servers boil the problem down to a (hard)
configuration management problem, instead of building "a standard
runtime to rule them all".

Naturally, because I work at one of those "cloud server" vendors,
you'd think this is my preferred model. But honestly, I'd be pretty
happy for the industry if they agreed on either model. Time will tell.

Given all this,

a) I have serious doubts about a "new" cloud runtime portability
standard. The battle lines were drawn long ago, and while they'll
blur, it likely will continue to look like " .NET vs. Java vs.
everything else" for at least 2+ years.

b) One could argue it would be nice to build a standard protocol
(using an architecture that fits how early adopters think) for
Infrastructure as a Service to provision "Obvious Stuff" like storage,
CPU, and network. The DMTF CIM stuff is great but probably too low-
level, and too WS-* focused to be palatable to early adopters. The
DMTF OVF stuff is likewise great, but isn't focused on "lifecycle",
i.e. what the heck happens to this deployment over time? It's (thus
far) focused on creating virtual appliance bundles.

Something RESTful would be nice to enhance our serendipity, but
frankly the EC2 API isn't all that great of a starting point (for
several reasons; different discussion though).
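
As a strawman for the client side of such a protocol -- the endpoint,
resource name and fields below are entirely invented:

    import json
    import urllib.request

    def provision(base_url, cpu_cores, storage_gb, network="default"):
        """POST a machine description; return the new resource's URL."""
        body = json.dumps({"cpu": cpu_cores, "storage_gb": storage_gb,
                           "network": network}).encode("utf-8")
        req = urllib.request.Request(base_url + "/machines", data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:  # data set, so this is a POST
            return resp.headers["Location"]  # lifecycle ops act on this URL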

Regardless of the architecture, the big win here would be that it
would reduce the need for a "Cloud Service Bus" that mediates among
different APIs. I think this kind of standard will happen, but it
will take 2+ years, thus being ratified just as early adopters have
bought their shiny new Cloud Service Bus..... ;-)

c) A wild card is where the "massively parallel processing uber alles"
crowd will flock. From what I can tell, three options:
(i) Hadoop (i.e., Java; though I bet there's a .NET port coming),
(ii) Open Grid Forum / Globus,
(iii) Parallel SQL Database (e.g. Vertica, ParAccel, Greenplum, etc.)

And those cases are very clearly NOT going to be a likely candidate
for hybrid-cloud interoperability below the service-API level, given
the latency requirements and tight coupling inside those services.

d) Even in a world of a handful of interoperable Cloud Platforms, I
expect there's still going to be a big configuration management and
governance problem.



Cheers
Stu

JP Morgenthal

Mar 2, 2009, 10:45:57 AM
to cloud...@googlegroups.com
Greg,

I disagree with your point below

> Oh, and the second post in the series, describing what this notion is
> all about, is I think enough to indicate that none of the kinds of
> discussion going on here is likely to turn into something like that.
> It really is a completely different concept.

Clearly, some level of homogeneity will greatly simplify this
problem. The possible paths to homogeneity are: a) enough momentum to
forge agreement between cloud providers, b) a new market emerging
that creates a homogeneity layer on top of disparate clouds, or c) a
deflationary economy leading one major leader to force direction.

I agree that process migration goes too far, but virtual machine
movement is too coarse-grained and is abusive of resource allocations.
So, that indicates a proclivity toward managing applications across
platforms. Moreover, the components of an application can span
specialty and generic functions, e.g. security, load balancing,
database, etc.

Ultimately, the end picture looks like the ability to package,
move, unpack, and execute an application across diverse cloud
architectures. Definitely not Amoeba, but definitely in line
with some of its core concepts.

JP

Greg Pfister

Mar 2, 2009, 11:56:40 AM
to Cloud Computing Interoperability Forum (CCIF)
Hi, Stu.

On Mar 2, 12:33 am, Stuart Charlton <stuartcharl...@gmail.com> wrote:
> Hi Greg,
>
> You've inspired a tl;dr post from me.    ;-)   I've blogged this here:http://www.stucharlton.com/blog/archives/000582.html
>
> Contents follow:
>
> I tend to think of interoperability as a gradient.
>
> The old industry stalwart from the 1990's is what I'd call "runtime  
> interoperability", wherein you could write a Java EE application,  
> deploy it on a Java EE application server, and (with a questionable  
> amount of tweaking), get it to operate.    SQL was another attempt at  
> this, with mixed success.    The later CORBA standards tried too, with  
> the Portable Object Adapter (POA).   And clearly, the ISO/ANSI C  
> runtime libraries have been successful, as have many other programming  
> libraries.

What you call "interoperability" above is what I'd call "portability"
-- the ability to host an application (or other piece of software) on
multiple platforms with little, or at least manageable, alteration.

> The other angle of interoperability  grew in the 2000's is what I'd  
> call "protocol interoperability", an approach that, at first anyway,  
> only a network engineer could love.   Most of the *TP's on the  
> internet take this approach, where the "network" is first, and  
> dictates the pattern of interaction -- the "developer"  and their  
> desires or productivity is secondary.

This is more what I'd call interoperability -- it necessarily involves
communication, so communication providers naturally focus on it.

One key to me is that interoperating applications (or other stuff) do
not move from one platform to another; they just communicate.

If you type "define interoperate" at Google, you get a pile of
definitions that come down to just that -- the ability to communicate
and exchange information.

Like, old mainframes and PCs did not interoperate easily because of
the need to change data formats (EBCDIC/ASCII, endian issues, floating-
point formats). They could communicate, but meaningfully exchanging
information was harder.

> With cloud computing, we're currently going through the age old  
> discovery of "what form of interoperability makes sense?".    
> Especially given that we're dealing with networked applications  
> (indicating a need for "protocol interoperability") but also with  
> complex configuration & resource dependencies for security,  
> scalability, etc. (an area where "runtime interoperability" usually  
> plays).
>
> Starting Observation: Microsoft Has A Clue.

Is that a typo? Did you mean "Startling"? :-)

Greg Pfister
http://perilsofparallel.blogspot.com/

Greg Pfister

Mar 2, 2009, 12:04:10 PM
to Cloud Computing Interoperability Forum (CCIF)
JP,

I agree that some degree of homogeneity does help, and there's a
spectrum.

Here's a different thought, though, which just occurred to me (so it's
not in those blog entries):

Approaches like Amoeba, Locus, VMScluster (HP still sells it!), etc.,
have another disadvantage: They're Big Bang approaches. You do all
this work, and when you're finally done everything is wonderful. The
problem is the delay through all the work.

Approaches layering on top of existing tools, like virtualization,
offer a more incremental approach that gets some value sooner.

The final result usually isn't pretty -- often it's layers upon layers
of crud and overhead -- but at each step it does some good and
therefore sells.

That's what I think the current attempts are more like.

Greg Pfister
http://perilsofparallel.blogspot.com/


Stuart Charlton

Mar 2, 2009, 12:40:56 PM
to cloud...@googlegroups.com
On Mon, Mar 2, 2009 at 8:56 AM, Greg Pfister <greg.p...@gmail.com> wrote:


> What you call "interoperability" above is what I'd call "portability"
> -- the ability to host an application (or other piece of software) on
> multiple platforms with little, or at least manageable, alteration.

Sure, though there is some nuance here.

The "Type #2" interoperability you listed earlier seemed to me to be what one traditionally thinks of as requiring a "standard runtime". One can achieve portability without a standard runtime (you could just have recompiled libraries you bind to, for example), but a standard runtime is usually what's required to enable portability while ensuring certain "ilities", such as scalability, reliability, etc. It's hard to do with a library. The runtime will often take the "Hollywood" approach to userland code (don't call us, we'll call you). That's quite different from, say, recompiling an ANSI C library for portability, so I decided to call it "runtime interoperability".


> Starting Observation: Microsoft Has A Clue.

> Is that a typo? Did you mean "Startling"? :-)

I wish I had, in retrospect ;-)

Cheers
Stu

Krishna Sankar (ksankar)

Mar 2, 2009, 10:01:43 PM
to cloud...@googlegroups.com

My thoughts at http://doubleclix.wordpress.com/2009/03/02/cloud-interoperability-of-the-5th-kind/.

 

 

In the last couple of days there have been a few good posts on Cloud Interoperability from Reuven, Greg and Stu. I had also written a blog about standard clouds a couple of months ago. To understand cloud interoperability, we do not have to go that far - from the clouds, I mean - just taking a cue from the world of UFOs is enough!

 

   1. Cloud Interoperability of the 1st Kind => Clouds exist and can be observed. This is the state we are in

   2. Cloud Interoperability of the 2nd Kind => Cloud services can talk to each other

   3. Cloud Interoperability of the 3rd Kind => Clouds can observe each other and some form of intelligent communication can happen

   4. Cloud Interoperability of the 4th Kind => Clouds are abducted by each other - just VM movement from one cloud to another

   5. Cloud Interoperability of the 5th Kind => This is where “joint, bilateral contact events produced through the conscious, voluntary and proactive … cooperative communication” happens. In essence, the clouds can work together

With this enlightenment, we can examine the POV of the cloud luminaries …

... <more on line>

As I have said earlier, we have to look at this from the four planes, viz. policy, control, management & data. For Cloud Interoperability of the fifth kind, we need the following (a rough sketch follows the list):

 

    * A declarative policy plane that can be interpreted in a common way but implemented in proprietary ways as the platform sees fit

    * A control plane that can be dynamic with rich attributes that are semantically equivalent across clouds (I do not care if they talk the same words so long as I know how to map them and do equivalence)

    * A management plane - again built on semantically equivalent attributes so that I can make (collective and individual) inferences as and when needed

    * And a data path, which is the OS/technology of my choice - I do not want the cloud to change my data path characteristics.
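
To make "semantically equivalent attributes" concrete, a small sketch in Python; the attribute names and provider vocabularies are invented:

    EQUIVALENT = {
        "cpu_count": {"cloud_a": "vcpus", "cloud_b": "cores"},
        "memory_mb": {"cloud_a": "ram_mb", "cloud_b": "memory"},
    }

    def normalize(provider, raw):
        """Map a provider's native attribute names onto the common vocabulary."""
        common = {}
        for name, natives in EQUIVALENT.items():
            native = natives.get(provider)
            if native in raw:
                common[name] = raw[native]
        return common

    # normalize("cloud_b", {"cores": 4, "memory": 2048})
    # -> {'cpu_count': 4, 'memory_mb': 2048}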

 

Cheers

<k/>

 


Greg Pfister

Mar 3, 2009, 6:19:32 PM
to Cloud Computing Interoperability Forum (CCIF)
I like it! A good analogy-based breakdown of the possibilities.

One question:

What is the difference between 2 and 3? Or, said another way: If the
communication isn't "intelligent" (that's #3), what kind of
communication is it? DDoS attacks? I think we have, or could have,
that level of communication now.

Greg Pfister
http://perilsofparallel.blogspot.com/


Reuven Cohen

Mar 3, 2009, 6:39:38 PM
to cloud...@googlegroups.com


I also really like the analogy - could be because I'm a geek who grew
up watching Star Trek... the original series :)

Also, would it be ok if I post this to the Cloud Interop Magazine
site? http://cloudinterop.ulitzer.com/

FYI, the site gets a ridiculous amount of traffic: 56,267+ viewers so
far this month, and it's only the 3rd.

r/c

Krishna Sankar (ksankar)

Mar 3, 2009, 7:32:30 PM
to cloud...@googlegroups.com
I still am a fan of the original Star Trek series.

And yep, no problem posting it in Cloud Interop Magazine.

Cheers
<k/>


Krishna Sankar (ksankar)

Mar 3, 2009, 7:32:30 PM
to cloud...@googlegroups.com
Good point. #2 is connectivity and #3 is communication with context.
Even connectivity requires some notion of clouds, their posture, et
al. I am talking about connectivity at the cloud layer, not at the
application layer. Also, connectivity is very relevant in the hybrid
world - now usually done through a VPN or leased-line connection.

Cheers
<k/>
