What role do virtual machines play in Cloud Computing?


smith jack

Dec 27, 2009, 21:39:09
to cloud-c...@googlegroups.com
I think this is quite an open question.
We can certainly implement Cloud Computing without VMs, and I am sure a lot of Cloud systems are implemented without them.
So what is the advantage of VMs in Cloud Computing?
(Easy deployment?)
Any reply is appreciated.

Gabor Fulop

Dec 28, 2009, 01:46:24
to cloud-c...@googlegroups.com

I agree that we can implement a Cloud without VMs.  If you are familiar with SalesForce.com (a SaaS example of Cloud Computing), you may also know Force.com (by the same company), which I consider a PaaS or BPaaS (Business Platform-as-a-Service) because it is a platform for creating applications.  I wonder whether anyone would agree that the VM is the obsolete version of the cloud: there is so much power available directly from shared service platforms that virtual sharing may soon be a thing of the past.

 

Yours inquisitively,

Gabor


Jan Klincewicz

Dec 28, 2009, 17:57:11
to cloud-c...@googlegroups.com
@Gabor:

Do you think SalesForce.com provides a PHYSICAL machine for every instance they offer customers?  Or do you mean end customers may give up running their OWN VMs (and calling them "Private Clouds")?

In any event, I don't think server virtualization is going anywhere soon ...





--
Cheers,
Jan

Gabor Fulop

Dec 29, 2009, 02:47:40
to cloud-c...@googlegroups.com

Hi Jan,

 

Keeping in mind that I'm on the business, not technical, side of Cloud Computing, I'm saying that virtualization is the lowest common denominator: a way to split a machine to serve multiple very different purposes, rather than having many users or organizations use the same server for a similar purpose (e.g. CRM) in a secure way, without unauthorized access, as SalesForce does.

 

You are probably right that server virtualization won't go anywhere "soon", but in the long term, isn't it possible that a majority of organizations won't see any benefit to having their own servers (remember Scott McNealy's Network Computer concept?), and that eventually, as PaaS reaches its tipping point and all custom applications can be developed in the cloud, private data centers will only be cost-effective for the very largest organizations that need at least one server per major application?

 

Scott McNealy had the right concept, but the platforms and applications weren't there to support it.  Now that there are platforms for building apps in the cloud, like Force.com, CloudHarbor.com, Cordys Process Factory, MS Azure, and many more, organizations can integrate, customize, and mash up almost anything to create powerful apps without a single server on their books.

 

Thoughts?

 

Gabor

Jeanne Morain

Dec 29, 2009, 03:44:25
to cloud-c...@googlegroups.com
VMs provide ease of standardization and repurposing of the underlying platform based on requirements.  This is critical for complying with vendor minimum-spec requirements when deploying a specific application, and for enabling applications that were not originally developed for Web 2.0 to live both on and off the cloud.  VMs also enable critical functionality such as Disaster Recovery by providing a completely replicated environment for packaging, test, production and backup.

Some of this capability can also be achieved with other types of virtualization, such as Application Virtualization (the ability to segment applications from the OS and from each other), which can enable multiple users to access legacy applications without having to rewrite them for Web 2.0.  In addition, IT can provide ubiquitous access to applications, either centrally from the datacenter or checked out to the local system (for those areas of the world that still have low-bandwidth links).

Virtualization in general plays a critical part in the overall success of the Cloud because it provides continuous operations across the underlying hardware and/or operating system.  Commoditization of hardware, OS, etc. is critical for the Cloud to expand and stay relevant to the overall market.  The same pattern can be seen throughout history with the evolution of the PC, word processing, the explosion of Java-enabled applications, etc.

Although virtualization is not the ONLY critical ingredient required for a successful cloud, it is one of the top five.
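As an illustration of the replicated-environment point above, here is a minimal sketch using the libvirt Python bindings. The domain name "crm-template" and the DR host URI are made up for the example, and a real replication would also copy the disk image and rewrite the domain name and UUID before defining it elsewhere.

    import libvirt

    # Connect to the local hypervisor and grab the standardized "golden" template.
    conn = libvirt.open("qemu:///system")
    template = conn.lookupByName("crm-template")   # hypothetical template name
    xml = template.XMLDesc(0)                      # full definition of the packaged environment

    # Register the same definition on a (hypothetical) DR host and boot it.
    # In practice the disk image would be copied over and the name/UUID rewritten first.
    dr = libvirt.open("qemu+ssh://dr-host/system")
    restored = dr.defineXML(xml)
    restored.create()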



Jan Klincewicz

Dec 29, 2009, 08:55:22
to cloud-c...@googlegroups.com
@Gabor:

Being on the business end of a technical discipline is a tough place to be.

Yes, what you discuss all sounds good in concept, but as many on this board have realized, end users are not giving up their own servers in droves just yet. I don't know if you can look at some of the archived threads, but issues such as privacy, portability, and a lack of sufficient SLAs are a few of the reasons the world is not switching wholesale to Public Clouds right away.

But we are getting way off topic (which was the value of virtualization in ANY kind of Cloud, which I find undeniably high except in rare cases).  It seems you are now addressing the viability of stand-alone data centers (or of customers who are NOT adopting the Public Cloud).





--
Cheers,
Jan

Ray Nugent

Dec 29, 2009, 10:56:01
to cloud-c...@googlegroups.com
Salesforce does not provide a server per customer, nor do they use virtualization as the basis for their "cloud". I think the problem here is discussing Salesforce under the topic of Cloud. They are a SaaS platform for CRM. One of the main distinctions between SaaS and Cloud is that SaaS tends to be siloed functionality (i.e. it supports one type of application or functionality), whereas Cloud tends to be more platform-oriented (i.e. it supports a broad range of applications). SaaS can hard-code its multi-tenant schema because of this (which is why all of Salesforce's DB back end runs on just a few physical servers); Cloud cannot. Cloud in the context of IaaS relies heavily on virtualization. Cloud in the context of PaaS or SaaS is less dependent on a VM layer, and in fact the biggest "cloud" of them all, Google, does not use virtualization.

So while virtualization is indeed an important technology, and is destined to remain so for a while, it is not required in a SaaS/PaaS architecture.

If you think in terms of what Jan hates to call an "Enterprise Cloud", virtualization provides more efficient use of the underlying hardware and thus saves money on hardware purchases. It also brings a whole new level of management challenges and opportunities, which is why the concept of Cloud in the enterprise is appealing. The key qualities that virtualization brings to a true cloud are lowered hardware costs (more efficient use of existing hardware), more rapid deployment of workloads (applications, virtual machines and so forth), and the potential for automated management of workloads and VMs.

Ray
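To make the "rapid deployment and automation" point above concrete, here is a minimal sketch using the boto EC2 bindings of that era; the region, AMI ID and instance counts are placeholders, not a recommendation of any particular provider or sizing.

    import boto.ec2

    # Placeholder region; credentials come from the environment or boto config.
    conn = boto.ec2.connect_to_region("us-east-1")

    # Launch three identical worker VMs from a prebuilt image: the "rapid
    # deployment of workloads" that virtualization-backed IaaS makes scriptable.
    reservation = conn.run_instances(
        "ami-00000000",            # hypothetical image ID
        min_count=3, max_count=3,
        instance_type="m1.small",
    )

    for instance in reservation.instances:
        print(instance.id, instance.state)

    # The same API can tear the instances down when the workload drops,
    # which is where automated management of VMs comes in.
    conn.terminate_instances([i.id for i in reservation.instances])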


ro...@embeddedcomponents.com

Dec 29, 2009, 12:21:45
to cloud-c...@googlegroups.com
This conversation could use a few more references, perhaps.

Salesforce has positioned itself as a Platform as a Service for some time now, not SaaS at all. They also see themselves as using cloud features:
http://www.salesforce.com/paas/


Best regards,

Ron Fredericks
Videographer
Video studio director
408-390-1895
www.LectureMaker.com
"Where lectures are crafted for impact distributed on the Internet with cloud support "




Rao Dronamraju

Dec 29, 2009, 12:38:38
to cloud-c...@googlegroups.com

“Keeping in mind I’m on the business not technical side of Cloud Computing, I’m saying that virtualization is the lowest common denominator i.e. a way to split a machine to serve multiple very different purposes rather than many users or organizations using the same server for a similar purpose e.g. CRM, but in a secure way without unauthorized access like SalesForce.”

 

If you are from the business side, why bother about virtualization at all? If you are from the business side, your primary focus would be CORE (CapEx, OpEx, Revenues and Earnings) in addition to Security and Risk to business assets. It should not matter how CC is accomplished, with virtualization or with bare-metal provisioning.  Virtualization is not necessarily a way to split a machine to serve multiple very different purposes. If you have multiple tenants who all use CRM, then all of them can be hosted on the same physical hardware in multiple CRM VMs, isolated and separated for security reasons. In specialty clouds (clouds specializing in vertical markets), many clients will be running similar applications in different virtual machines.

 

“You are probably right that server virtualization won’t go anywhere ‘soon’, but in the long-term, isn’t it possible that a majority of organizations won’t see any benefit to having their own servers (remember Scott McNeely’s Network Computer concept?) and eventually as PaaS reaches its tipping point and all custom applications can be developed in the cloud, private data centers will only be cost-effective for the very largest of organizations that need at least one server per major application?”

 

Without taking credit away from Scott McNealy, his concept/vision was nothing new. The Telecom Cloud has been around for 50+ years, and so have the utility clouds. Why would private data centers host one server per major application? Isn't server consolidation in the enterprise all about NOT having one server per major application? In fact, a private cloud is nothing but virtualization/server consolidation, plus live migration of virtual machines for efficient real-time resource utilization and optimization, plus chargeback models.

 

“Scott McNeely had the right concept, but the platforms and applications weren’t there to support it.  Now that there are platforms for building apps in the cloud like Force.com, CloudHarbor.com, Cordys Process Factory, MS Azure, and many more, organizations can integrate, customize and mash-up almost anything to create powerful apps without a single server on their books.”

 

The most fundamental problem/roadblock with (public) CC today is Security, Security, Security. That is why, as forecast by Gartner and others, the near future is going to belong to private clouds. This does not mean hybrid and public clouds will not have a market; non-critical applications and SMBs will be the primary markets. You may want to read the following excellent article recently published by MIT Technology Review.

 

http://www.technologyreview.com/web/24166/

 

 

 


Ray Nugent

Dec 29, 2009, 12:54:38
to cloud-c...@googlegroups.com
Ron, you are correct, they have positioned themselves in that genre. However, I think the proof is in how they advertise their offerings. Look them up on Google and what do you see?

CRM - salesforce.com

"CRM software solutions and enterprise cloud computing from salesforce.com, the leader in CRM and platform as a service. Free 30 day trial."

So, aside from the technology, which is clearly not cloud, they seem to promote CRM as their core competence. Indeed, it would probably be tough to host an enterprise ERP system or a gene-folding application on their platform (disclaimer: I have not personally tried either of these).

Ray

Adwait Ullal

Dec 29, 2009, 13:14:58
to cloud-c...@googlegroups.com
 
Isn't www.salesforce.com the SaaS face & www.force.com the PaaS one?
 
Happy Holidays!

- Adwait
--
Adwait Ullal

w: http://tr.im/adwait
p: (408) 898-2581


Tim M. Crawford

Dec 29, 2009, 14:00:22
to cloud-c...@googlegroups.com
That's a better way to put it.

SaaS: salesforce.com (requires configuration, but no application development to use...runs out of the box).
PaaS: force.com (requires development of an application to use)

-t
__________________________________
Tim M. Crawford

Ray Nugent

Dec 29, 2009, 13:59:33
to cloud-c...@googlegroups.com
Yes, and they have a new platform called Chatter that is focused on social networking applications. I would be interested to hear from anyone with experience building an enterprise application on Force.com or Chatter, to see how they stack up as PaaS.

Ray




Jeanne Morain

Dec 29, 2009, 14:02:26
to cloud-c...@googlegroups.com
These are interesting posts, with valid points on both sides, but some critical pieces are being left out of the debate that would have the business think beyond CORE to critical aspects like Business Continuity (impact on Customer Service) and Compliance (regulatory, security, and business directives).

Part of the confusion is the looseness with which the term Cloud is being used.  Is it Web 2.0 (a single application serving multiple users), or is it Utility Computing (virtualized desktops, servers, etc. that reduce the cost of, and dependency on, hardware)?  Those are two very different technology paradigms and concepts, and they take very different skills to manage, deploy, and architect.

Other key elements should be taken into account as well (from both a business and a technology perspective), like compliance (the Cyber Security Act just passed Congress in September), connectivity, and end-user trends (what is happening in the consumer space and how it will impact business).  Rao is spot on about Security, but there are ways security can be achieved in the public cloud through encryption, limiting access, and the combination of technologies around applications, OS, and hardware.

The hybrid approach is very viable for SMBs and even for the larger enterprises I have worked with, because of the one fundamental fact that pushed companies to adopt virtualization to begin with: power consumption, and the lack of it in the datacenter.  As customers move to more centralized approaches around the desktop, we will see a larger pull for hybrid models to deal not only with the power consumption question but also with the two factors that make desktop computing very different from your typical datacenter application: increased mobility (working from remote locations, a home office, a virtual cafe) and limited bandwidth (yes, we still have limited bandwidth, not just in areas like India, China or third-world countries, but also in this country, in areas like Childress, Texas, which has a single T1 to support the entire town).

The latter is why many moved from the mainframe model to the distributed computing model.  There will be times when the network goes down; does that mean all work stops?  CRM applications like Salesforce have a huge following in the SME market, providing lead generation, bookings and other data.  Most critical tasks, like delivering a quote to the customer, won't be impacted for SME customers, because they will have some offline tool or way to generate the quote, and any sales rep worth their salt also keeps their key closing contacts in their offline Outlook.  SaaS and PaaS models work for non-mission-critical applications such as expense reporting (Concur) or CRM tools (many of which have an offline product that connects to the online one).  Please note: prior to moving into systems management, I spent some time as a product manager for an "offline" CRM application used by Cisco, Dell and other large enterprises.

Customers are looking for Nirvana for the Desktop, or what many have coined the Universal Client (I was one of the primary authors of this concept while at VMware).  The concept came out of quite a few customer interviews and the gap between where we are and where the industry would like us to be.  In a nutshell, end users want their applications and data to follow them within the confines of how they work (offline, online, virtual cafe), regardless of hardware, OS, or VM, and whether they have connectivity or not, particularly the up-and-coming generations that already have this capability with online gaming systems like World of Warcraft, the PS3, smartphones, and other devices that put entertainment at their fingertips.

Business should care beyond the accounting principles below, because it is with revolutionary technology that money is saved (reducing downtime, reducing the impact of compliance on time to value, etc.), money is earned (through new offerings), or in some cases money is wasted (when the implications are not thought through beyond the technology to the business).  Vista is a PERFECT example: many customers I worked with did not adopt it after spending tens of thousands on rollout plans, because their end users complained about performance and the impact on business continuity.  Although hard to measure in dollars and cents for many companies, it has definitely been a lesson for Microsoft, as is shown by the advances in the Windows 7 platform.

Again, good points on the various sides and a nice spirited debate.  Happy New Year!

Cheers,
Jeanne

www.universalclient.blogspot.com








Jan Klincewicz

Dec 29, 2009, 14:04:25
to cloud-c...@googlegroups.com
http://googleenterprise.blogspot.com/2009/04/what-we-talk-about-when-we-talk-about.html

Here is an interesting response to VMware from Google.  I think I have prefaced most of my posts with a disclaimer that virtualization  is a good solution for MOST (not all) Cloud Platforms.

@Ray:

I believe that by most definitions SaaS falls under the "Cloud" umbrella. Certainly there are those who can provide multi-tenant web-based apps without hypervisors.  Especially where a provider has a single platform and has no need to "genericize", CC can work well without what we traditionally think of as "server virtualization."

Ray Nugent

Dec 29, 2009, 14:11:17
to cloud-c...@googlegroups.com
As tempting as it is, I won't debate the definition of cloud computing again ;-) Other than that, I would say we are in violent agreement.

Ray


Gabor Fulop

Dec 29, 2009, 14:40:07
to cloud-c...@googlegroups.com

Thanks for all the comments.  Perhaps I should stick to CORE, but I’m on the business side of IT and this is such an interesting topic. 

 

Great article Rao, thanks!  I thought we were farther along in Security, with all the financial transactions conducted daily, but the article clearly points out the new threats related to the Cloud.

 

Jeanne, thanks for your insight too.  Great discussion =)

 

@Rao Yes, lots of things "can" be done, such as "If you have multiple tenants who all use CRM, then all of them can be hosted on the same physical hardware in multiple CRM VMs, isolated and separated for security reasons."  But are we truly maximizing the efficiency of that equipment?  Isn't it more efficient to use the same code, with configurations and customizations for each user or organization, sitting on the same instance and separated for security purposes through identity and organization management, as opposed to having a completely separate virtual machine?  Doesn't each VM need its own OS, etc.?

 

Best regards,
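A minimal sketch of the shared-instance approach Gabor is asking about: one application instance where tenants are separated by an organization identifier in the data layer rather than by separate VMs. The table, column and organization names are made up for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE contacts (org_id TEXT, name TEXT, phone TEXT)")
    conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)", [
        ("acme", "Alice", "555-0100"),
        ("acme", "Bob", "555-0101"),
        ("globex", "Carol", "555-0200"),
    ])

    def contacts_for(org_id):
        # Every query is scoped by the caller's organization; isolation is
        # enforced in the shared application code rather than by separate VMs.
        return conn.execute(
            "SELECT name, phone FROM contacts WHERE org_id = ?", (org_id,)
        ).fetchall()

    print(contacts_for("acme"))    # Alice and Bob only
    print(contacts_for("globex"))  # Carol only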

Jan Klincewicz

Dec 29, 2009, 22:19:45
to cloud-c...@googlegroups.com
@Gabor:

Yes, each VM needs its own OS etc., but the fact is that there are very few examples of commercial software packages that can operate in a multi-tenant environment natively.  Very few commonly used business software packages can utilize all the resources of a two-hundred-dollar PC.  It is precisely because of the disparity between hardware and software maturity that virtualization becomes an easy "shim" to bridge the gap.

There are virtualization techniques aside from hypervisors (for example Solaris Containers) that allow multiple instances of certain applications to run simultaneously.   Try running two instances of Exchange on Windows 2008, though, and it's a different story.

Partially, we have a roadblock in 32-bit software bottlenecking at 4GB of RAM (without PAE orthodontia), but I am getting beyond the typical "business person's" technical realm here.  64-bit software could mitigate some of the restrictions faced today, but it has been slow in coming.

The simple fact is that, given the way things are right now, the easiest and most efficient way to maximize hardware resources for MOST common off-the-shelf applications is to run them on VMs using commonly known hypervisors.   It is proven, can be cheap (or free), and there is a huge body of moderately trained people who know how to do it.  Commodity solutions are often the best choice given all the parameters, despite more optimal solutions being "theoretically" available.






--
Cheers,
Jan

Frank D. Greco

Dec 30, 2009, 12:08:08
to cloud-c...@googlegroups.com
At 10:19 PM 12/29/2009, Jan Klincewicz wrote:

Some great points as usual Jan.

>The simple fact is, that given the way things are right now, the
>easiest and most efficient way to maximize hardware resources for
>MOST common-off-the-shelf applications is to run them on VMs using
>commonly known hypervisors. It is proven, can be cheap (or free)
>and there is a huge body of moderately-trained people who know how
>to do it. Commodity solutions are often the best choice given all
>the parameters, despite their being more optimal solutions
>"theoretically" available.

Using virtualization is a *lot* easier than figuring out how the industry can truly take advantage of multi-core. It's just a stepping stone, imho, until we all figure out how to program all these cores well.

I've asked very well-known computer scientists about this problem over the past few years, and the consensus is that programming multicore hardware effectively is a very, very hard problem (to solve generically). Virtualization is just a simpler way to use the cores, since we haven't gotten that problem licked yet. Probably a remnant of the single-user/single-app mentality that the PC bequeathed to us a while ago.

Imo, if you have mediocre software, you don't multiply it by N (cores) to try to improve it...

Btw, there's a reason that Google doesn't use virtualization to solve search.

Frank G.
- Happy New Decade everyone! [gotta be better than the last one...]
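For contrast with Frank's point: the easy case of using many cores is the embarrassingly parallel one, as in this small multiprocessing sketch. The hard, unsolved problem he refers to is doing this for general-purpose, stateful software; the workload function here is just a stand-in.

    from multiprocessing import Pool

    def simulate(seed):
        # A stand-in for an independent unit of work (no shared state, no locks).
        total = 0
        for i in range(100000):
            total += (seed * i) % 7
        return total

    if __name__ == "__main__":
        # Trivially spreads independent tasks across cores; this is the easy case.
        # Parallelizing code whose steps share and mutate state is the hard part.
        with Pool() as pool:
            results = pool.map(simulate, range(16))
        print(sum(results))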


Rao Dronamraju

Dec 30, 2009, 12:30:10
to cloud-c...@googlegroups.com

“Perhaps I should stick to CORE”

 

I wasn't suggesting that you should stick to the CORE principles. Your statement about virtualization came across as a bit negative on the value of virtualization to CC, so I suggested that you focus on the business aspects of CC. As far as I am concerned, it doesn't matter what fuel (petrol, ethanol, battery) I use in my car, as long as it gets me 100+ mpg and does not screw up the environment.

 

“I thought we were farther along in Security, with all the financial transactions conducted daily, but the article clearly points out the new threats related to the Cloud.”

 

Yes and no. IMO, security in CC is 50% FUD and 50% real threats. CIOs and businesses have to make a major leap in their psyche with respect to TRUST in the cloud. In addition, there may be new, larger threats arising from clouds being a single point of concentration of business value due to multi-tenancy: an attacker (more likely a rogue organization or a country) can focus on this one entity to cause major damage. It is the cumulative, aggregated threat of multi-tenancy at large scale.

 

“But are we truly maximizing the efficiency of that equipment?  Isn’t it more efficient to use the same code with configurations and customizations for each user or organization sitting on the same instance separated for security purposes through identity and organization management as opposed to having a completely separate virtual machine?  Doesn’t each VM need its own OS, etc?”

 

It is a lot more insecure and risky to use the same code to host multiple tenants. In separate virtual machines, you are isolated and more secure than when sharing the same code with others. Yes, security and performance/efficiency are inversely proportional. By automating security, you can make it more efficient.

 

Here is an interesting article on how the Xen hypervisor was breached through simple DMA and a backdoor was opened to gain control. The author has done a pretty creative hack.

 

http://invisiblethingslab.com/bh08/papers/part1-subverting_xen.pdf

 

This might have been fixed by Intel's VT-d technology?

anthony...@gmail.com

Dec 30, 2009, 12:34:12
to cloud-c...@googlegroups.com
Frank et al

I agree that you have addressed half of the equation. COTS apps are inherently not built for optimal performance or efficiency; the focus is usually on the logic or automation they provide.

Programming for multicore, parallel processing, or even high concurrency is a strict discipline with few master craftsmen available.

What I would challenge you and Jan on is that you're only providing answers on the supply management side, i.e. abstracting and stacking apps by VM.

What is missing from your solution formula is the demand management side of the equation. To ensure both performance and maximum efficiency, you must pair the VMs with runtime managers that drive the workload to the core/VM.
There are mature managers from Appistry, Tibco and IBM that manage the runtime demand containers (app servers, web servers, event managers, rules engines, message queues, etc.) through which COTS and custom apps can be controlled without code changes. By inserting such control you can optimally ensure QoS in terms of performance, cost and efficiency AND exploit the VM/core strategy.

Now perhaps you got stuck in the batch management days at Lehman and forgot to look outside the HPC/job space to general purpose computing! :)




Jim Starkey

Dec 30, 2009, 13:20:40
to cloud-c...@googlegroups.com
Rao Dronamraju wrote:
> Yes and no. IMO, security in CC is 50% FUD and 50% real threats. CIOs and
> businesses have to make a major leap in their psyche with respect to TRUST
> in the cloud.
A couple of things. First, security with regard to vendor/client relationships is not about trust, it's about consequences. All promises are hollow. Contractual obligations to pay damages, particularly consequential damages, have teeth. Anyone who accepts a promise without consequences either isn't dealing with valuable stuff or is a fool. I haven't made a study of SLAs, but the ones I have read shouldn't make an enterprise user very happy.

Second, much of security is about sandboxes. A VM is a pretty good sandbox, but it is quite expensive with regard to wasted memory, wasted cycles, and hypervisor overhead. Non-VM sandboxes can be just as good, or better, and they are more efficient and easier to administer. The cost, as it were, is forgoing the ability to introduce machine-level code. A VM, on the other hand, carries a full operating system that must be maintained like any other operating system. If you've ever run a server of any ilk, you will understand the number of security patches issued per month, and should understand that an unmaintained OS, on a hard server or in a cloud, is a very insecure beast indeed.

The computing world has not yet agreed on an application platform sufficiently good to standardize on. Sooner or later, it will. At that point (or maybe earlier), VMs will in all likelihood be regarded as hopelessly inefficient, insecure anachronisms.

> It is lot more insecure/risky if you use the same code to host multiple
> tenants. In separate virtual machines, you are isolated and more secure
> than sharing the same code with others. Yes, security and
> performance/efficiency are inversely proportional. By automating security,
> you can make it more efficient.

Inversely proportional? Where did that come from? Do you have a proof?



--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376

Rao Dronamraju

Dec 30, 2009, 13:57:06
to cloud-c...@googlegroups.com

"A couple things. First, security with regard to vendor/client relationships
is not about trust, it's about consequences"

Jim, another way to spell SECURITY is TRUST!., Ask any security expert they
will tell you.
It does not matter whether it is client/vendor relationship or not. If you
have enough TRUST in the SECURITY that cloud provides in terms of
facilities, HW, SW, people and processes (which includes SLAs) then the
cloud is TRUSTWORTHY. Consequences are related to RISK which is a major
aspect of SECURITY/TRUST.

"I haven't made a study of SLAs, but the ones I have read shouldn't make an
enterprise user very happy."

This is a different issue. It all depends on what you move to the cloud. Eli
Lily ran their cluster in the cloud for $6.40, because they did not need a
whole lot of securty for their application.
That is why I said, non-critical (from a security perspective) workloads
will move to public clouds first. Do you need a lot of SLAs for moving
enterprise (content) websites to clouds to begin with. How many websites
have been created in the last 15+ years?...hundreds of thousands if not
millions! Most test and dev groups of enterprises can move to clouds without
the consequences as you put it. Clouds can host DR&BC sites for many
enterprises.

"Second, much of security is about sandboxes."

Poppycock!. Much of the security is about segregation, isolation,
confidentaility, integrity, authentication, auhorization, auditing,
non-repudiation, attestation, visibility, control, dynamic mutation,
artificial diversification etc etc....all fall under TRUST.

"A VM is a pretty good sandbox, but is quite expensive with regard to wasted
memory, wasted
cycles, and hypervisor overhead. Non-VM sandboxes can be just as good -- or
better -- and more efficient and easier to administer."

Yeah, try live migration of non-VM sandboxes?...why do you think Solaris
containers are not migratable?...In a cloud live migration is a very
efficient way to utilize resources. May be you can migrate your ACID
database in a JVM sandbox:-)

"Inversely proportional? Where did that come from? Do you have a proof?"

Sure, have you ever travelled through an airport?....The more the security
checks you go through the longer is your travel time and lesses is your
travelling efficeincy....
May be I should suggest a database security technique. Try encrypting your
database data and try without encryption and see which is more efficient to
process.
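A rough sketch of the trade-off Rao describes, timing a bulk write with and without encryption. It assumes the third-party "cryptography" package is installed, and the row contents are made up; the exact numbers are only illustrative.

    import time
    from cryptography.fernet import Fernet   # assumes the 'cryptography' package

    rows = [("customer-%d" % i).encode() * 10 for i in range(50000)]
    fernet = Fernet(Fernet.generate_key())

    start = time.perf_counter()
    plain = [row for row in rows]                      # baseline: store rows as-is
    plain_time = time.perf_counter() - start

    start = time.perf_counter()
    encrypted = [fernet.encrypt(row) for row in rows]  # same rows, encrypted before storage
    encrypted_time = time.perf_counter() - start

    print("plain:     %.3fs" % plain_time)
    print("encrypted: %.3fs" % encrypted_time)         # noticeably slower: security costs cycles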


Benmerar Tarik Zakaria

Dec 31, 2009, 11:49:22
to cloud-c...@googlegroups.com
So, if even VM hypervisors like Xen have breaches, is it possible today to host binary code compiled in C or Fortran (not .NET, Python or Java) on servers without security breaches? And if yes, how do you sandbox those applications (tools, techniques)?

   A really tough challenge, I think.

     
    "Perhaps I should stick to CORE"
     

     
    I wasn't suggesting that you should stick to the CORE principles. Your
    statement about virtualization came across a bit negative on the value of
    virtualization to CC, so I suggested that you focus on the business
    aspects of CC. As far as I am concerned, it doesn't matter what fuel (petrol,
    ethanol, battery) I use in my car as long as it gets me 100+ mpg and does
    not screw up the environment.
     

     
    "I thought we were farther along in Security, with all the financial
    transactions conducted daily, but the article clearly points out the new
    threats related to the Cloud."
     

     
    Yes and No. IMO, security in CC is 50% FUD and 50% real threats. The
    CIOs/businesses have to make a major leap in their psyche with respect to
    TRUST in the cloud. In addition, there might be some new, larger threat
    issues with respect to clouds being a single point of concentration of
    business/wealth due to multi-tenancy, hence an attacker (more likely a rogue
    organization or a country) can focus on this one entity to cause major
    damage. It is the cumulative, aggregated threat due to multi-tenancy on a
    large scale.
     

     
    "But are we truly maximizing the efficiency of that equipment? Isn't it
    more efficient to use the same code with configurations and customizations
    for each user or organization sitting on the same instance separated for
    security purposes through identity and organization management as opposed to
    having a completely separate virtual machine? Doesn't each VM need its own
    OS, etc?"
     

     
    It is a lot more insecure/risky if you use the same code to host multiple
    tenants. In separate virtual machines, you are isolated and more secure than
    sharing the same code with others. Yes, security and performance/efficiency
    are inversely proportional. By automating security, you can make it more
    efficient.
     

     

Rao Dronamraju

non lue,
31 déc. 2009, 12:58:5131/12/2009
à cloud-c...@googlegroups.com

Not sure if I understand your question correctly, but it shouldn’t matter whether the code is a compiled C or Fortran binary. Since the hypervisor runs at a privileged level, it can pretty much do what it wants. As we all know, the fundamental technique most hackers use is escalation of privileges as soon as they get access to a system; here, by hacking into a hypervisor (most probably via an internal threat), you do not even need to escalate: you are already in. So it doesn’t matter even if you have a sandbox; the hypervisor has a good enough shovel! :-)

 


--

Jim Starkey

non lue,
31 déc. 2009, 14:17:3131/12/2009
à cloud-c...@googlegroups.com
Benmerar Tarik Zakaria wrote:
> So, if even VMs hypervisors like Xen have breaches, Is it possible
> today to host binary code compiled in C or Fortran(Not .Net or Python
> or Java) on servers without having security breaches ? And if yes, how
> to sandbox those application(tools, techniques)?
Linux, Windows, or any modern Unix can be administered to run multiple
independent applications securely. But it requires a cost, skill set,
and experience that dwarfs the cost of another server. So it's cheaper
to throw hardware at the problem, and cheaper still to throw VMs at the
problem.

So, yes, it is possible to run unvetted binary apps on a shared server, but
nobody in his right mind would want to.
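
For what it's worth, here is a minimal sketch (in Python, with an invented binary name and invented limits) of one small piece of that kind of confinement: wrapping an untrusted compiled binary with hard resource limits. Real isolation would also need a chroot/jail, a dedicated unprivileged uid, and network policy, which is where the cost and skill mentioned above come in.

import resource
import subprocess

def run_confined(cmd, cpu_seconds=60, mem_bytes=512 * 1024 * 1024):
    # Apply hard CPU-time and address-space limits in the child before exec.
    # This is only one layer of confinement, not a complete sandbox.
    def limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))

    return subprocess.run(cmd, preexec_fn=limits, capture_output=True)

if __name__ == "__main__":
    # Hypothetical tenant-supplied binary compiled from C or Fortran.
    result = run_confined(["./untrusted_solver", "input.dat"])
    print(result.returncode)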

I wouldn't get hot and bothered about past bugs in Xen. All software
has bugs, and eventually they get fixed, hopefully before the software
wears out. There are certainly no theoretical problems with hypervisors
as a class.


Jan Klincewicz

non lue,
31 déc. 2009, 16:43:2831/12/2009
à cloud-c...@googlegroups.com
I would at least say that hypervisor-based servers are not inherently less secure than physical servers at this point.  Open source variants in fact (specifically Xen) have folks like the Department of Defense and National Security Agency contributing to the security.  I would not say they are 100% secure, but I would not say (nor expect) that about anything to do with IT.
Cheers,
Jan

Ray Nugent

non lue,
31 déc. 2009, 18:41:3031/12/2009
à cloud-c...@googlegroups.com
It's not the technology that is at play regarding security. The problem is that there tend to be a lot more virtual servers, and since they are ephemeral, admins don't secure them as well ("hey, if a VM is breached, just kill it and start a new one"...). As with all things security, it's the people factor that is the weak link.

Ray

Sent: Thu, December 31, 2009 1:43:28 PM
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?

Jan Klincewicz

non lue,
31 déc. 2009, 19:39:3731/12/2009
à cloud-c...@googlegroups.com
I don't like to generalize about admins. I run across some pretty competent and loyal ones from time to time.  Inasmuch as there is little difference between securing a VM and a physical box, once you have "hardened" a VM it is much easier to use that as a template and clone other VMs from it using a "locked down" image, unless an exception is required (and documented).
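
To make that concrete, a rough sketch of the template-and-clone workflow using libvirt's virt-clone; the template and guest names are invented, and a real setup would also reset host identity (hostname, keys) after cloning:

import subprocess

# Hypothetical hardened "golden" template and the guests to stamp out from it.
TEMPLATE = "centos5-hardened-template"
GUESTS = ["web01", "web02", "app01"]

for name in GUESTS:
    # virt-clone copies the locked-down disk image and defines a new libvirt
    # domain; --auto-clone lets it pick a path for the cloned disk.
    subprocess.run(
        ["virt-clone", "--original", TEMPLATE, "--name", name, "--auto-clone"],
        check=True,
    )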

I agree that humans are the weakest link, so any automation or procedure that diminishes the chance of human error should theoretically give VMs a POTENTIAL upside compared to physical boxes.

But, hey, screw all that .. it's New Years Eve !!  Let's knock back a few !!!

Happy 2010 everyone !!!

Ray Nugent

non lue,
31 déc. 2009, 21:47:1331/12/2009
à cloud-c...@googlegroups.com
So Jan, you're saying the Cloud is potentially MORE secure than a physical data center? :-)

Happy New Year!

Sent: Thu, December 31, 2009 4:39:37 PM

Benmerar Tarik Zakaria

non lue,
1 janv. 2010, 11:47:4101/01/2010
à cloud-c...@googlegroups.com
Thanks Rao and Jan for your replies. And let me ask another question to
make my standpoint clear:

If we don't put those applications in hypervisors (for performance
reasons), could we keep such a system secure? Or is it, on the contrary,
much easier to manage because it's just an application like any other?

This is very important in the high-performance field, where we still
have codes written in C or FORTRAN using libraries like MPI, and we
want to deploy them on the cloud to take advantage of the
infrastructure available to us.

Rao Dronamraju

non lue,
1 janv. 2010, 18:10:3501/01/2010
à cloud-c...@googlegroups.com

Benmerar,

I hope you have not misunderstood my postings about the hypervisor and its
security. I did not post them to promote any FUD. I posted them to create some
discussion about hypervisor security in particular and cloud security in
general. So you do not need to be worried that much about the possibility of
someone hacking into the hypervisor in a public cloud, etc. The probability of
someone hacking into a hypervisor in a cloud could be low. So I wouldn't
worry about everything I read about cloud insecurity.

If you do not use a virtualized (hypervised) environment, then you are back
to the regular data center environment. It appears your application is an
HPC application, so you are probably running it in a grid environment. If
you want to run it in today's clouds, you may have no choice but to use a
virtualized/hypervised environment. You cannot run some applications in a
virtualized environment and some outside a virtual environment in a cloud,
at least AFAIK.
Maybe there are some grid clouds available out there that you may want to
run your application in.

-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Benmerar Tarik
Zakaria

Sent: Friday, January 01, 2010 10:48 AM
To: cloud-c...@googlegroups.com

Subject: [ Cloud Computing ] what role did virtual machine play in Cloud
Computing?

Jan Klincewicz

non lue,
1 janv. 2010, 19:19:4101/01/2010
à cloud-c...@googlegroups.com

POTENTIALLY, yes... Chances are, running your apps in the Cloud, the admins will at least not personally HATE you like the ones still surviving and overworked because their company just terminated four or five of their lifelong colleagues in a "downsizing" while the CEO uses a private jet to shuttle his kids back and forth from college to Xmas break in Aspen.  I don't recall ever being on the side of in-house Data Centers being inherently more secure than a Cloud.

You must be mistaking me for Mark Hurd <g>.


<< So Jan, you're saying the Cloud is potentially MORE secure than a physical data center? :-) >>

Miha Ahronovitz

non lue,
1 janv. 2010, 19:21:1601/01/2010
à cloud-c...@googlegroups.com

Benmerar Tarik Zakaria <sniper...@gmail.com> writes

"If we don't put those applications in hypervisors (for performance
reasons)...  in the high performance field ...[may we] deploy
them on the cloud, to take advantage of the infrastructure available to us."

If you mean HPC with MPI, therefore parallel code, the answer may be some hybrid clouds. I take your meaning of the cloud here to be the ability to use instances of nodes on computers someone else owns. This means you need a local resource manager to create a parallel environment, like Sun Grid Engine. In a tightly integrated setup (like Sun Grid Engine with Cluster Tools), SGE will decide for each slave job how to start it, how to stop it, how to distribute the slaves, and what usage policy applies. It may be possible, but I am not 100% sure, that you can place some slave jobs on Amazon node instances, as long as SGE controls their placement and not the MPI application itself. SGE can connect to AWS EC2 via a cloud connector feature, and the parallel environment (which is a space exclusive to parallel applications) can include external nodes. This means we have a hybrid cloud, and the resource management is done from the private cloud with SGE.
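
As an illustration only, a minimal sketch of what the submission side might look like, assuming an SGE install with a hypothetical "mpi" parallel environment and a hypothetical "cloud.q" queue whose execution hosts include EC2 instances brought in by the cloud connector:

import subprocess

def submit_mpi_job(script, slots, queue="cloud.q", pe="mpi"):
    # qsub -pe <pe_name> <slots> asks SGE for a parallel environment; SGE, not
    # the MPI code, decides which execution hosts (local or EC2-backed) get the
    # slave tasks. "cloud.q" and "mpi" are placeholder names for this sketch.
    cmd = ["qsub", "-q", queue, "-pe", pe, str(slots), script]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(submit_mpi_job("run_solver.sh", 32))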

I am not aware of a 100% Amazon MPI environment.

But why would you want to run MPI apps on Amazon rented nodes in addition to the local nodes, and why don't you just buy more nodes? This is a business question. When someone asks "Can I use VMware for cloud?" the answer is "Yes." One can use any technology, including sign language or voodoo witchcraft, to achieve the final goal of a cloud business model.

The final goal of cloud computing is to deliver applications on a pay-per-use model. Whether the customers and users actually pay for the service is another decision. But as a business model, any run of the parallel application is available to the user at any time, and someone must pay for this privilege. The cloud owner will always HAVE THE RIGHT RESOURCES IN PLACE. And the cloud owner knows in terms of $ what everyone uses. Even if actual invoices are not sent out, the CIO has exact, detailed reports of what each user should have paid and can decide how much
income can be generated at what cost. I call this billings; whether actual invoices are sent out or not is not relevant. The infrastructure that we call "cloud" should have this function. Exactly as Amazon.com knows the $ for each user, every enterprise on earth should know the same.

Cloud is about a business model. Any cloud  technology should support this business model. Some solutions are more elegant than others, some clouds will be specialized, but the litmus test to qualify any technology is whether it serves the final cloud business model.

Back to the question "Why would you want to run MPI apps on Amazon rented nodes in addition to the local nodes, and why don't you buy more nodes?": if you have a cloud with the capabilities above, one can answer this question easily. The decision, yes or no, is now clear as daylight.

To go one step further, if the internal data centers, including grids, are not responsive to the cloud business model, they will be disintermediated and will eventually disappear as a stalwart of the Enterprise, following the same fate as the typing pools (large rooms with people typing letters), which disappeared for ever. See what they looked like here:
http://img.dailymail.co.uk/i/pix/2007/11_01/typistDM0411_600x444.jpg

Miha


Jan Klincewicz

non lue,
1 janv. 2010, 19:27:5701/01/2010
à cloud-c...@googlegroups.com
Benmerar:

As Rao states, many (not all) HPC apps tend to run on grids, and to my knowledge, grids are often composed of physical machines as opposed to VMs.  The nature of the HPC apps with which I was familiar (and it has probably been 5-6 years since I was close to those) was that they were compute intensive, and generally could consume a full CPU at a time (but that was before multi-cores were common).  I also suspect, as Rao says, that you will not find many Cloud providers that will give you physical servers anyway, as it is not very cost-effective.

Bear in mind, though, that running a paravirtualized OS (like Linux), written to understand it is running in a virtualized environment, on a virt platform that takes advantage of it will have a minuscule performance hit compared to bare metal: probably 8-12% for Linux vs. 12-15% for Windows (which is not highly paravirtualized, especially prior to 2008).

The cost / performance ratio weighs VERY heavily in favour of VMs no matter what.
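
A back-of-the-envelope illustration of that cost/performance claim; every figure here (utilization, host packing, server cost) is invented for the example:

import math

apps = 12                  # workloads, each averaging ~15% of one physical box
avg_utilization = 0.15
virt_overhead = 0.10       # roughly the 8-12% paravirtualized Linux penalty
server_cost = 5000.0       # purchase + maintenance per server (invented figure)

physical_cost = apps * server_cost                 # bare metal: one box per app

work = apps * avg_utilization * (1 + virt_overhead)
hosts = math.ceil(work / 0.70)                     # pack hosts to ~70% utilization
virtual_cost = hosts * server_cost

print(f"bare metal:  {apps} servers, ${physical_cost:,.0f}")
print(f"virtualized: {hosts} hosts,   ${virtual_cost:,.0f}")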





--
Cheers,
Jan

Rao Dronamraju

non lue,
1 janv. 2010, 19:29:0401/01/2010
à cloud-c...@googlegroups.com

Worse yet, the laid-off admins working as baggage handlers for the CEO's private jet :-)

 


Peglar, Robert

non lue,
1 janv. 2010, 19:46:0101/01/2010
à cloud-c...@googlegroups.com

There is also the well-known BOINC approach to running certain types of parallel (or embarrassingly so) codes.  It combines some cloud-like aspects of computing with some grid-like aspects (e.g. schedulers, batches of tasks, etc.)

 

Rob

 



Robert Peglar

Vice President, Technology, Storage Systems Group
Xiotech Corporation
Robert...@xiotech.com
952 983 2287 (Office)
314 308 6983 (Cell)

636 532 0828 (Fax)
www.xiotech.com : Toll-Free 866 472 6764

Xiotech Website



From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Miha Ahronovitz
Sent: Friday, January 01, 2010 6:21 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?

 

 

Benmerar Tarik Zakaria <sniper...@gmail.com> writes

 

--

Ray Nugent

non lue,
1 janv. 2010, 20:15:2601/01/2010
à cloud-c...@googlegroups.com
I don't think Cloud causes downsizing. I think it just takes the screwdriver out of your back pocket. It may lead you to learn a new language or two like Puppet or Chef. It might motivate you to train on Xen or VMware. It will certainly bring a whole new set of issues to deal with, as you have 8-10 times the number of servers to manage as you once did, and they can now disappear in an instant because one of your colleagues who did not get downsized clobbered your virtual datacenter full of VMs...

I don't think Mark was concerned about security so much as about cost. He could not find a significant difference between AWS and a hosted physical server dollar-wise.

Sent: Fri, January 1, 2010 4:19:41 PM

Ray DePena

non lue,
1 janv. 2010, 22:09:3201/01/2010
à cloud-c...@googlegroups.com
Maybe.  Isn't it something like 70% of security violations come from insiders?

With cloud, insiders couldn't simply copy a HDD and walk it out..... :-)
Best Regards,

Ray DePena, MBA, PMP
+1.916.941.5558
Ray.D...@gmail.com
Twitter: @RayDePena
LinkedIn: http://www.linkedin.com/in/raydepena

Ray Nugent

non lue,
4 janv. 2010, 01:58:1504/01/2010
à cloud-c...@googlegroups.com
Using different VMs does not buy you anything security-wise if the provider can access either your VMs or your storage or both (Hint: They Can).  So this is not a viable security solution. It should be pointed out that Salesforce, eBay, Google and the rest of the SaaS crowd can also access your data, and some even use it and sell it. Virtually all SaaS providers collect data about you and use or sell that.



--

Roland Rambau

non lue,
4 janv. 2010, 05:48:2604/01/2010
à cloud-c...@googlegroups.com
Ray,

Ray Nugent wrote:


> Using different VMs does not buy you anything security wise if the
> provider can access either your VMs or you storage or both (Hint: They Can).
> So this is not a viable security solution. It should be pointed out that
> Salesforce, eBay, Google and the rest of the SaaS crowd can also access
> your data and some even use it and sell it. Virtually all SaaS providers
> collect data about you and use or sell that.

yes, but some jurisdictions (not the USA AFAIK) do have legal recourse
against misuse of personal data, including a legal right to have incorrect data
corrected or deleted. That is why it is so important that any cloud framework
needs to have controls on where data physically resides (i.e. which jurisdiction
applies).

-- Roland


Jan Klincewicz

non lue,
4 janv. 2010, 10:23:0104/01/2010
à cloud-c...@googlegroups.com
There are technical issues and legal issues.  I think the technical issues are small stuff compared to the legal ones.




--
Cheers,
Jan

Frank D. Greco

non lue,
4 janv. 2010, 19:50:2104/01/2010
à cloud-c...@googlegroups.com
At 12:34 PM 12/30/2009, anthony...@gmail.com wrote:
>Frank et al
>
>I agree that you have addressed 1/2 of the equation. COTS apps are
>inherently not built for optimal performance or efficiency. The
>focus is usually on the logic or automation they are providing.
>
>Programming for multicores, parallel processing or even high
>concurrency is a strict discipline with few master craftsmen available.

Truer words were never spoken. :)  And that's why it's a very hard problem. But it's a problem that eventually needs to be solved. Perhaps the HotSpot runtime of the future would inspect code as it's running and make algorithmic alterations instead of just byte/opcode replacement.

>What I would challenge you and Jan on is that you're only providing
>answers to the supply management, i.e. abstract and stack apps by VM.
>
>What is missing in your solution formula is the demand management
>equation. To ensure both performance and maximum efficiency you must
>include with the VMs run-time managers that drive the workload to the core/VM.

But workloads vary.  If you assume workloads are equal or quantizable, perhaps you've got a shot. I had more luck using JavaSpaces without virtualization to drive work to workers based on workload affinity, i.e., machines with GPUs, fast caches, 4 cores, x86 performance vs. SPARC throughput, database connectivity, et al.

>There are mature managers from Appistry, Tibco and IBM that manage
>the runtime demand containers (app servers, web servers, event
>managers, rules engines, message queues, etc...) which COTS and
>custom apps can be controlled without code changes. By inserting
>such control you can optimally ensure QoS in terms of performance,
>cost and efficiency AND exploit the VM/core strategy.

While this can be effective for certain enterprise applications, I posit this can be done more effectively with tuple-spaces rather than coarse-grained "demand containers". But either way, it's still coarse-grained "QoS" optimization.

>Now perhaps you got stuck in the batch management days at Lehman and
>forgot to look outside the HPC/job space to general purpose computing! :)

The Cloud notion is a refinement of batch/services-grid/HPC/job-submission/parallelism/network-arch. Understanding these components gives you a deeper understanding of general purpose computing. Perhaps you should try enterprise services management and look inside to see how it's done? :)

Frank G.

anthony...@gmail.com

non lue,
5 janv. 2010, 07:54:4705/01/2010
à cloud-c...@googlegroups.com
Well said Master Craftsmen !

Tony Bishop
Sent via BlackBerry by AT&T

-----Original Message-----
From: "Frank D. Greco" <fgr...@javasig.com>
Date: Mon, 04 Jan 2010 19:50:21
To: <cloud-c...@googlegroups.com>
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud
Computing?

Rich Wellner

non lue,
12 janv. 2010, 12:46:0412/01/2010
à cloud-c...@googlegroups.com
Jim Starkey wrote:
>
> Second, much of security is about sandboxes. A VM is a pretty good
> sandbox, but is quite expensive with regard to wasted memory, wasted
> cycles, and hypervisor overhead.
As a data point: we've done extensive testing of OVM in support of our
HPC customers looking for cloud solutions and see 1-3% overhead in
critical use cases. That's well within the range of what they are
finding acceptable, considering the substantial benefits gained.

> A VM,
> on the other hand, has an host operating system that must be maintained
> like any other operating system. If you've ever run a server of any ilk,
> you will understand the number of security patches issued per month, and
> should understand that an unmaintained OS, on a hard server or in a
> cloud, is a very insecure beast indeed.
>
Or, said another way, one of the benefits of VMs is that they contain an
OS that can be managed according to the needs of the application contained.

When I worked at Fermilab I was amazed at the amount of time spent
managing applications due to the lack of an effective virtualization
mechanism. Different applications are going to require different OS and
library patches. There is simply no way around that. VMs allow those
requirements to be bundled together and tested as a unit rather than
having a quicksand of OS changes occurring underneath an application.

rw2

Stephen Fleece

non lue,
12 janv. 2010, 14:14:4412/01/2010
à cloud-c...@googlegroups.com
Though I don't have research to prove it, I expect the more expensive form
of overhead is human labor without computing virtualization. I suspect
the computing-cycle overhead of virtualization is inexpensive to most
businesses, compared to the incremental human labor costs to provision
and administer operating system software directly against physical
machines without virtualization.

It also enables key benefits in terms of machine image portability and
reuse.

I vote that computing virtualization has a big role in the IaaS model of
both public and private cloud computing.

Stephen

Khazret Sapenov

non lue,
12 janv. 2010, 15:47:5712/01/2010
à cloud-c...@googlegroups.com
Here's an interesting perspective on virtualisation pros and cons from Mr. Staimer and the Intelicloud guys:

Virtualization Approach Strengths
For an online services provider, the fundamental appeal of the virtualization approach comes from the perception that it significantly reduces both server and storage hardware costs. It can actually reduce some costs, albeit to a much lesser extent than touted in the hype.
Operationally, both server and hardware virtualization considerably reduce scheduled downtime for upgrades, moves, changes, data migration and additions. For virtualized servers, it is the hypervisor’s ability to move OS guests around on different physical servers live, online and even in mid-transaction with no application downtime. For virtual storage, the abstraction of the storage image from the actual storage (both SAN and NAS) allows maintenance, changes, moves and, most importantly, data migration to occur non-disruptively online. These capabilities greatly improve SLA (service level agreement) management and simplify data protection disaster recovery procedures.

Virtualization Approach Gotchas
The first weakness to the virtualization approach is that the infrastructure costs are always much higher than expected, usually exceeding the savings in hardware costs. Most server virtualization implementations require networked storage, with the preferred storage being SAN-based storage. SAN storage infrastructure means storage, switches, adapters, cables, interfaces, power, cooling, rack space and floor space. The costs are far from trivial.
Then there are the hidden hypervisor infrastructure costs. There is no such thing as a free lunch, and hypervisors are no exception. Hypervisors have overhead. The overhead commonly ranges from 10% to 30% of the server’s resources, depending on the number of guests. That means up to 25% of the physical server’s total cost of ownership produces nothing, and growth requires up to 25% more physical servers.
A much stickier issue is the lack of automatic integration between the virtual servers, virtual storage, SAN storage, NAS, infrastructure, networks, power, cooling, etc. Whenever there is a change or growth in one part of the total infrastructure, it most likely requires change or growth in other parts, meaning that there must be extraordinary planning, communication, coordination and cooperation between “human” administrators of applications, servers, storage, storage networks, TCP/IP networks, plant, cables, etc. It may sound difficult, and it is even more difficult than it sounds.
Issues and problems, especially about performance, crop up all the time. Because there is no automatic integration within the infrastructure, troubleshooting is a blood-chilling nightmare. Take, for example, the all too common issue of “too much” oversubscription within the infrastructure. Besides requiring unprecedented levels of multi-departmental cooperation, attempting to isolate the root cause of an application slow down or failure provides no easy way or guarantee of determining where the “too much” oversubscription is occurring.

Too much oversubscription is not just a storage phenomenon; it can and will occur in the TCP/IP network, leading to severe congestion events that decimate users’ application performance. So when an application begins to fall below SLA performance requirements, how will the admin know where to look first? Is the “too-much” oversubscription in the network? Is it in the physical server? Is it in the SAN? Is it in the virtualized storage? Is it in the volume? Is it in the storage system? Is it in all the above? This is a troubling problem with this type of discrete architectural (a.k.a. best-of-breed) approach that has no simple answers.
The integration problems get much worse and even more difficult as the virtualization approach scales. It reaches a point when there are just not enough service provider IT professionals or hours in the day to manage the ongoing integration issues. It is like squeezing a balloon – fix or squeeze something here, and it bulges out there.
What becomes all too apparent is that the previously discussed issue of greater than expected capital expenditures is just the tip of the iceberg. The operating expenditures in time, maintenance and human assets turn out to far beyond all expectations. It’s further exacerbated by power and cooling requirements of discrete systems not designed to cooperate in optimizing energy consumption.
Service providers with either of these approaches attempt to manage their problems by limiting them to what has become known as the traditional silo model. The silo model limits the size or scale of each silo so that any individual silo does not become unmanageable. The problem with the traditional silo model is that contrary to conventional wisdom, adding silos does not merely increase management requirements linearly. It actually increases management requirements exponentially. This is because each additional silo requires some level of load balancing for application access, data, networks and/or data protection between silos, usually requiring ongoing data migration. As the numbers of silos grows, so does the complexity. The formulas for load balancing and data migration become unwieldy, eventually becoming unreliable and unsustainable.
Service providers grossly underestimating cost and complexity will often mean the difference between profit and loss. There has to be a better way. 
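
As a quick sanity check on the overhead arithmetic above (a sketch; the loop simply evaluates the 10-30% endpoints quoted in the excerpt alongside the 1-3% figure reported earlier in the thread):

# If a hypervisor consumes a fraction f of each host, delivering N servers'
# worth of useful capacity takes N / (1 - f) hosts, i.e. f / (1 - f) extra
# machines beyond N.
for f in (0.02, 0.10, 0.20, 0.30):
    extra = f / (1 - f)
    print(f"overhead {f:4.0%} -> {extra:5.1%} additional physical servers")
# At the excerpt's 20% figure that is indeed ~25% growth; at the 1-3%
# overheads reported elsewhere in the thread it is closer to 2-3%.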



Miha Ahronovitz

non lue,
12 janv. 2010, 16:40:4112/01/2010
à cloud-c...@googlegroups.com
Khazret, thanks for posting. We have here a clear explanation of what we knew is happening in v12n, but could not express as clearly as Mr. Staimer did.

Miha

From: Khazret Sapenov <sap...@gmail.com>
To: cloud-c...@googlegroups.com
Sent: Tue, January 12, 2010 12:47:57 PM

Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?

Jeanne Morain

non lue,
12 janv. 2010, 14:40:3212/01/2010
à cloud-c...@googlegroups.com
http://www.ca.com/us/press/release.aspx?cid=225545

This will be an interesting and timely match.  Service Level Management is critical.  Tracking who has what throughout the life cycle, for users and vendors, will become more critical as virtualization and SaaS become more prevalent across the extended enterprise.

Jan Klincewicz

non lue,
12 janv. 2010, 17:50:3712/01/2010
à cloud-c...@googlegroups.com
Much of this is just plain wrong.....specifically hypervisor costs vs. physical servers.  Many "facts" are incorrect, and the suppositions even more so.  Simple arithmetic can bear this out, and in the absence of any specific examples, I would tend to discount much of what is proposed here.
--
Cheers,
Jan

Greg Pfister

non lue,
12 janv. 2010, 21:14:5412/01/2010
à Cloud Computing
Unfortunately, I completely agree with Jan.

Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.

Greg Pfister
http://perilsofparallel.blogspot.com/


Jan Klincewicz

non lue,
13 janv. 2010, 09:02:4913/01/2010
à cloud-c...@googlegroups.com
It would be possible to draw such a biased conclusion by taking the AGGREGATE overhead of, say, 20 Linux VMs running on Xen, each having 2% overhead compared to bare-metal servers.  Does that mean running in that fashion is equal to purchasing, installing, and paying maintenance on 19 PHYSICAL servers?  I think not.
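
In rough numbers (a sketch; the per-guest sizing and host count are invented assumptions, and only the 2% figure comes from the discussion above):

guests = 20                 # Linux VMs consolidated onto shared Xen hosts
per_guest_overhead = 0.02   # ~2% hypervisor overhead per paravirtualized guest

# Aggregate capacity lost to the hypervisor across all guests:
wasted = guests * per_guest_overhead        # 0.4 of one server's worth

# The alternative being argued against: one physical box per guest versus a
# handful of virtualization hosts (two here, an invented sizing assumption).
virtualization_hosts = 2
print(f"capacity lost to hypervisors: {wasted:.1f} server(s)")
print(f"boxes to buy and maintain:    {virtualization_hosts} virtualized vs {guests} physical")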

Whenever I see arguments with extreme data (in either direction), I think it important to see if the author has an agenda of any sort (i.e. a product which mitigates the downside).

Certainly, to get the most out of virtualization, shared storage is necessary (for high availability, failover, live migration, etc.), but it is not mandatory.  Additionally, most data centers I have encountered in the past decade already OWN some SAN or NAS device and are merely leveraging what they already own.  Sharing of infrastructure such as HBAs and high-end offloading NICs by hosting VMs is FAR more efficient than purchasing individual cards for physical servers, and providing port density to accommodate them.  Virtualization, by use of internal virtual switches, radically decreases the complexity and failure exposure of physical cabling.

As Greg states, why is it so popular??  Sure it is complex, and probably more so than constructing a purely physical counterpart from a software perspective, but I suspect this balances out by requiring fewer (but smarter) bodies to maintain the environment.

    So let's hear Intelicloud's "better way."





--
Cheers,
Jan

Ray DePena

non lue,
13 janv. 2010, 13:02:2413/01/2010
à cloud-c...@googlegroups.com
Greg,

Looks like you were referring to the statement below (in quotes).  Ok. Fair enough.

When I hear generic statements like "industry-wide server utilization is 10-30%", we know there are some good IT shops that are highly efficient, running at 80%, and many others that are highly inefficient, running at 5-10%.

It may or may not average out, but generally speaking for IT shops that have not looked at server consolidation, and are not exactly models of efficiency, that generic statement of 10-30% resonates with me.

What do you guys think is a reasonable estimate for virtualization overhead in the industry?  I'd be interested to know your perspectives.


Best Regards,

Ray DePena, MBA, PMP
+1.916.941.5558
Ray.D...@gmail.com
Twitter: @RayDePena
LinkedIn: http://www.linkedin.com/in/raydepena

"Hypervisors have overhead. The overhead commonly ranges from 10% to 30% of the server’s resources, depending on the number of guests. That means up to 25% of the physical server’s total cost of ownership produces nothing, and growth
requires up to 25% more physical servers."





--

Jim Starkey

non lue,
13 janv. 2010, 13:23:1913/01/2010
à cloud-c...@googlegroups.com
I found the post not only insightful, but one that changed my ideas about private clouds.

Sure, compute cycles are compute cycles whether on hard iron or VMs. But disk and network traffic is something else again. Yes, there is an overhead induced by the hypervisor, but there is also an effect, probably a great deal more significant, of contention among the VMs for disk and network resources. Combine disk-bound, network-bound, or CPU-bound applications on a single server, and everyone is going to suffer.

And yes, there are ways around those problems, but the solutions are different from the solutions on hard iron, there's a learning curve to pay for, and ultimately more, not less, administration may be required. Pre-VM, it was necessary to administer each of the applications and the dedicated server. Post-VM, those administration costs are still there, but now there are additional administration expenses due to contention among the servers, now virtual, as well as managing more sophisticated storage.

None of this should be surprising, since the world went to dedicated servers to save on administration costs in the first place.

So I guess the bottom line is to balance the gain from reducing the number of physical servers against the incremental cost of administering more complex servers.

Too close to eyeball for me.

(Jan, not everyone who disagrees with you has a hidden nefarious agenda.)

Jan Klincewicz wrote:
It would be possible to draw such a biased conclusion by taking the AGGREGATE overhead of say, 20 Linux VMs running on Xen each having 2% overhead compared to bare-metal servers.  Does that mean running in that fashion is equal to purchasing, installing, maintaining and paying maintenance on 19 PHYSICAL servers?  I think not.

Whenever I see arguments with extreme data (in either direction) I think it important to see if the author has an agenda of any sort (i.e. a product which mitigates the downside.)

Certainly, to get the most out of virtualization, shared storage is necessary (for high-availability, failover, live-migration etc.) but it is not mandatory.  Additionally, most data centers I have encountered in the past decade already OWN some SAN or NAS device and are merely leveraging what they already own.  Sharing of infrastructure such as HBAs and high-end offloading NICs by hosting VMs is FAR more efficient than purchasing individual cards for physical servers, and providing port density to accommodate them.  Virtualization, by use of internal virtual switches, radically decreases the complexity and failure exposure of physical cabling.

As Greg states, why would it be so popular??  Sure it is complex, and probably more so than constructing a purely physical counterpart, from a software perspective, but I suspect this balances out by requiring fewer (but smarter) bodies to maintain the environment.

So let's hear Intelicloud's "better way."


On Tue, Jan 12, 2010 at 9:14 PM, Greg Pfister <greg.p...@gmail.com> wrote:
Unfortunately, I completely agree with Jan.

Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.

Greg Pfister
http://perilsofparallel.blogspot.com/

On Jan 12, 3:50 pm, Jan Klincewicz <jan.klincew...@gmail.com> wrote:
> Much of this is just plain wrong.....specifically hypervisor costs vs.
> physical servers.  Many "facts" are incorrect, and the suppositions even
> more so.  Simple arithmetic can bear this out, and in the absence of any
> specific examples, I would tend to discount much of what is proposed here.
>
>
>
>
>
> On Tue, Jan 12, 2010 at 3:47 PM, Khazret Sapenov <sape...@gmail.com> wrote:
> > Here's an interesting perspective on virtualisation pro and cons from Mr.
> > Staimer and Intelicloud guys:
>
> > Virtualization Approach Strengths
> > For an online services provider, the fundamental appeal of the
> > virtualization approach comes from the perception that it significantly
> > reduces both server and storage hardware costs. It can actually reduce some
> > costs, albeit to a much lesser extent than touted in the hype.
> > Operationally, both server and hardware virtualization considerably reduce
> > scheduled downtime for upgrades, moves, changes, data migration and
> > additions. For virtualized servers, it is the hypervisor’s ability to move

> > OS guests around on different physical servers live, online and even in
> > mid-transaction with no application downtime. For virtual storage, the
> > abstraction of the storage image from the actual storage (both SAN and NAS)
> > allows maintenance, changes, moves and, most importantly, data migration to
> > occur non-disruptively online. These capabilities greatly improve SLA
> > (service level agreement) management and simplify data protection disaster
> > recovery procedures.
>
> > Virtualization Approach Gotchas
> > The first weakness to the virtualization approach is that the
> > infrastructure costs are always much higher than expected, usually exceeding
> > the savings in hardware costs. Most server virtualization implementations
> > require networked storage, with the preferred storage being SAN-based
> > storage. SAN storage infrastructure means storage, switches, adapters,
> > cables, interfaces, power, cooling, rack space and floor space. The costs
> > are far from trivial.
> > Then there are the hidden hypervisor infrastructure costs. There is no such
> > thing as a free lunch, and hypervisors are no exception. Hypervisors have
> > overhead. The overhead commonly ranges from 10% to 30% of the server’s
> > resources, depending on the number of guests. That means up to 25% of the
> > physical server’s total cost of ownership produces nothing, and growth
> > requires up to 25% more physical servers.
> > A much stickier issue is the lack of automatic integration between the
> > virtual servers, virtual storage, SAN storage, NAS, infrastructure,
> > networks, power, cooling, etc. Whenever there is a change or growth in one
> > part of the total infrastructure, it most likely requires change or growth
> > in other parts, meaning that there must be extraordinary planning,
> > communication, coordination and cooperation between “human” administrators

> > of applications, servers, storage, storage networks, TCP/IP networks, plant,
> > cables, etc. It may sound difficult, and it is even more difficult than it
> > sounds.
> > Issues and problems, especially about performance, crop up all the time.
> > Because there is no automatic integration within the infrastructure,
> > troubleshooting is a blood-chilling nightmare. Take, for example, the all
> > too common issue of “too much” oversubscription within the infrastructure.

> > Besides requiring unprecedented levels of multi-departmental cooperation,
> > attempting to isolate the root cause of an application slow down or failure
> > provides no easy way or guarantee of determining where the “too much”

> > oversubscription is occurring.
>
> > Too much oversubscription is not just a storage phenomenon; it can and will
> > occur in the TCP/IP network, leading to severe congestion events that
> > decimate users’ application performance. So when an application begins to

> > fall below SLA performance requirements, how will the admin know where to
> > look first? Is the “too-much” oversubscription in the network? Is it in the

> > physical server? Is it in the SAN? Is it in the virtualized storage? Is it
> > in the volume? Is it in the storage system? Is it in all the above? This is
> > a troubling problem with this type of discrete architectural (a.k.a.
> > best-of-breed) approach that has no simple answers.
> > The integration problems get much worse and even more difficult as the
> > virtualization approach scales. It reaches a point when there are just not
> > enough service provider IT professionals or hours in the day to manage the
> > ongoing integration issues. It is like squeezing a balloon – fix or squeeze

> > something here, and it bulges out there.
> > What becomes all too apparent is that the previously discussed issue of
> > greater than expected capital expenditures is just the tip of the iceberg.
> > The operating expenditures in time, maintenance and human assets turn out to
> > be far beyond all expectations. It’s further exacerbated by power and cooling

> > requirements of discrete systems not designed to cooperate in optimizing
> > energy consumption.
> > Service providers with either of these approaches attempt to manage their
> > problems by limiting them to what has become known as the traditional silo
> > model. The silo model limits the size or scale of each silo so that any
> > individual silo does not become unmanageable. The problem with the
> > traditional silo model is that contrary to conventional wisdom, adding silos
> > does not merely increase management requirements linearly. It actually
> > increases management requirements exponentially. This is because each
> > additional silo requires some level of load balancing for application
> > access, data, networks and/or data protection between silos, usually
> > requiring ongoing data migration. As the numbers of silos grows, so does the
> > complexity. The formulas for load balancing and data migration become
> > unwieldy, eventually becoming unreliable and unsustainable.
> > Service providers grossly underestimating cost and complexity will often
> > mean the difference between profit and loss. There has to be a better way.
>
> > Source:http://www.intelicloud.com/next-gen/white-papers.html
>
> > On Tue, Jan 12, 2010 at 2:14 PM, Stephen Fleece <sfle...@tmforum.org>wrote:
>
> >> Though I don't have research to prove it, I expect a more expensive form
> >> of overhead is human labor without computing virtualization.  I suspect
> >> the computing cycle overhead of virtualization is inexpensive to most
> >> businesses, compared to the incremental human labor costs to provision
> >> and administer operating system software directly against physical
> >> machines without virtualization.
>
> >> It also enables key benefits in terms of machine image portability and
> >> reuse.
>
> >> I vote that computing virtualization has a big role in the IaaS model of
> >> both public and private cloud computing.
>
> >> Stephen
>


-- 
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376
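
On the “too much” oversubscription passage quoted above: the per-layer arithmetic is trivial; the operational problem is that the numbers live in four different teams' tools. A hedged sketch of the kind of check one would want automated (Python, with made-up capacities; the layer names and figures are assumptions, not anyone's real topology):

# Made-up demand/capacity pairs; the point is the per-layer ratio, not the values.

layers = {
    "guest vNICs -> host uplinks":    (40_000, 20_000),   # Mbit/s
    "host HBAs -> SAN ISLs":          (32_000, 16_000),   # Mbit/s
    "SAN ISLs -> array front-end":    (16_000, 12_000),   # Mbit/s
    "array front-end -> spindles":    (90_000, 30_000),   # IOPS
}

for layer, (demand, capacity) in layers.items():
    ratio = demand / capacity
    note = "  <- likely place to look first" if ratio > 2.0 else ""
    print(f"{layer:32s} oversubscribed {ratio:4.1f}:1{note}")

None of this removes the need for the cross-team cooperation the white paper complains about, but it does turn "where is the too-much?" from a guessing game into a ranking.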

Peglar, Robert

non lue,
13 janv. 2010, 13:44:2213/01/2010
à cloud-c...@googlegroups.com

Completely agree with Jim on the effect of virtualization.  The effect on disk and network resources is palpable in large virtual farms.  This is why much R&D is being placed into highly efficient and intelligent disk elements to relieve the ‘squeezed balloon’ effect that hypervisors tend to exhibit.  Boot storms are one example of such, especially in virtual desktop environments, and this is highly germane to many cloud providers.

 

Rob

 


Xiotech Website
Robert Peglar

Vice President, Technology, Storage Systems Group
Xiotech Corporation | Toll-Free: 866.472.6764
o 952 983 2287   m 314 308 6983   f 636 532 0828
Robert...@xiotech.com | www.xiotech.com



Jan Klincewicz

non lue,
13 janv. 2010, 14:27:5313/01/2010
à cloud-c...@googlegroups.com

A lot of people can live with "palpable" compared to "irretrievable" or "devastating."  Again, I do not offer hypervisor-based virtualization as the ultimate answer, but in the absence of viable alternatives (and I do not see a plethora of these being offered) it seems pretty popular.

Boot Storms, to my knowledge, typically occur after disasters of some sort (in a well-designed DC.)  If disasters are a daily occurrence, I would suggest there is something amiss in the original architecture of such a DC.  Aside from that (post-Katrina FEMA notwithstanding) I think most organizations are cut a little slack after an act-of-god type occurrence. 

I will be happy to change my position 180 degrees when presented with a working alternative, but until then, I maintain the position that hypervisor-based virtualization is and will continue to be the primary means of deploying Cloud servers for the foreseeable future.



****************************************************************************************************************************************************************************************************************

Completely agree with Jim on the effect of virtualization.  The effect on disk and network resources is palpable in large virtual farms.  This is why much R&D is being placed into highly efficient and intelligent disk elements to relieve the ‘squeezed balloon’ effect that hypervisors tend to exhibit.  Boot storms are one example of such, especially in virtual desktop environments, and this is highly germane to many cloud providers.

 

Rob







--
Cheers,
Jan

Peglar, Robert

non lue,
13 janv. 2010, 14:35:4513/01/2010
à cloud-c...@googlegroups.com

As for boot storms, this is an interesting phenomenon.  Consider a cloud provider getting a request to stand up 1,500 VMs in the next 10 minutes.  That’s a boot storm.  Or, a VDI implementation where the clients all come in and either boot or login (I’ve seen VDI now that stands up/tears down the VD on login/logout, very interesting) and the network & storage get blasted with a billion (yes with a B) or more I/Os.  Of course, everyone wants their server or desktop to boot ASAP.  It’s a huge strain on legacy storage and/or cheap disk-in-server.

 

Rob
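
Rob's billion-I/O figure is easy to sanity-check with a back-of-envelope (Python; the per-boot I/O count and per-spindle IOPS below are assumptions for illustration, not vendor data):

# Boot-storm sizing, purely illustrative; every constant is an assumption.

vms              = 1_500
ios_per_boot     = 600_000        # assumed I/Os to boot one guest OS image
window_seconds   = 10 * 60        # "stand up 1,500 VMs in the next 10 minutes"
iops_per_spindle = 180            # assumed for one 15k RPM disk on random I/O

total_ios   = vms * ios_per_boot
needed_iops = total_ios / window_seconds
spindles    = needed_iops / iops_per_spindle

print(f"total I/Os during the storm : {total_ios:,}")
print(f"sustained IOPS required     : {needed_iops:,.0f}")
print(f"15k spindles to absorb it   : {spindles:,.0f} (ignoring cache, clones, SSD)")

Under those assumptions the storm needs on the order of a million and a half sustained IOPS, i.e. thousands of conventional spindles, which is exactly why cache tiers, linked clones and purpose-built disk elements get so much attention here.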

Jan Klincewicz

non lue,
13 janv. 2010, 14:45:1513/01/2010
à cloud-c...@googlegroups.com
Well, if a Cloud Provider cheaps out on storage (or any component, for that matter) and is not able to meet a customer's needs, it will lose customers.  1,500 VMs with 10 minutes' notice might indicate some poor planning on the part of a customer, though, don't you think ??   Then again, that IS the premise of CC <g>.

Ray Nugent

non lue,
13 janv. 2010, 14:54:5113/01/2010
à cloud-c...@googlegroups.com
Actually, 1,500 VMs within 10 minutes is a pretty common use case for things like automated test and failover. A well-known cloud test vendor recently tried to get 1,500 from AWS and could not get the request filled.

Ray


Jan Klincewicz

non lue,
13 janv. 2010, 15:02:5213/01/2010
à cloud-c...@googlegroups.com
And if AWS cannot fulfill such a request, might I inquire who can ??  My point is that it may be a typical REQUEST, but if a Cloud Vendor (as extreme as AWS) cannot fulfill it, then it is probably at the extreme end of reality, and thus not an example of what can currently be expected given today's state of the art.




Jim Starkey

non lue,
13 janv. 2010, 16:23:2913/01/2010
à cloud-c...@googlegroups.com
Jan Klincewicz wrote:
> Well, if a Cloud Provider cheaps out on storage (or any component for
> that matter) and are not able to provide a customer's needs, they will
> lose customers. 1500 VMs with 10 minute notice might indicate some
> poor planning on the part of a customer, though, don't you think ??
> Then again, that IS the premise of CC <g>.
Whether or not it's poor planning on the part of a few customers, aren't
all customers going to get hammered as the disk arms melt and the
network I/O backs up?
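
The intuition here is ordinary queueing: as a shared disk or uplink approaches saturation, response time for every tenant blows up, not just for the tenant causing the storm. A minimal M/M/1-style illustration (Python; the 5 ms service time is an assumption for the sketch):

# Mean response time in an M/M/1 queue: R = S / (1 - utilization).
# The 5 ms average service time is an assumption for illustration.

service_time_ms = 5.0

for utilization in (0.50, 0.70, 0.90, 0.95, 0.99):
    response_ms = service_time_ms / (1.0 - utilization)
    print(f"shared device {utilization:.0%} busy -> about {response_ms:6.1f} ms per I/O")

At 50% busy everyone sees about 10 ms; at 99% busy the same device is delivering half-second I/Os to all of its tenants, well before the disk arms actually melt.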


Ray Nugent

non lue,
13 janv. 2010, 16:46:0013/01/2010
à cloud-c...@googlegroups.com
The use case is the use case regardless of the provider's capability. In this particular case the 1,500-VM request had been routinely fulfilled before but was now not available. It's really less state of the art than it is capacity. As the cloud fills up, one becomes very aware that there are limitations.

Ray


Jan Klincewicz

non lue,
13 janv. 2010, 16:48:4013/01/2010
à cloud-c...@googlegroups.com
That is a good argument against multi-tenancy. Perhaps PRIVATE Clouds could provide dedicated storage and networking.



Miha Ahronovitz

non lue,
13 janv. 2010, 19:45:0313/01/2010
à cloud-c...@googlegroups.com
Khazret's post from Staimer and the Intelicloud folks is timely. It shows v12n is not all pink roses and heavenly bliss. It introduces complexities, which can be worse than the complexities it tries to eliminate.

This is true of any hyped technology, or even of deciding the new Healthcare program. Democrats say it's perfect, and Republicans say it's "evil". But one day, from this debate, we will have better Healthcare than today, and that is what counts.

This does not kill v12n. It just shows how to improve, where to improve and where it can be used. It's a tool and not a panacea. Sometimes we need it. Sometimes we don't...

Miha


From: Jim Starkey <jsta...@nimbusdb.com>
To: cloud-c...@googlegroups.com
Sent: Wed, January 13, 2010 10:23:19 AM

Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

I found the post not only insightful, but one that changed my ideas about private clouds.

Sure, compute cycles are compute cycles whether on hard iron or VMs.  But disk and network traffic is something else again.  Yes, there is an overhead induced by the hypervisor, but there is also an effect, probably a great deal most significant, of contention among the VMs for disk and network resources.  Combine disk bound, network bound, or CPU bound applications on a single server, and everyone is going to suffer.

And yes, there are ways around those problems, but the solutions are different from the solutions on hard iron, there's a learning curve to pay for, and ultimately more, not less, administration may be required.  Pre-VM, it was necessary to administer each of the applications and the dedicated server.  Post-VM, those administration costs are still there, but now there are additional administration expenses to contention among the servers, now virtual, as well as managing more sophisticated storage.


None of this should be surprising, since the world went to dedicated servers to save on administration costs in the first place.

So I guess the bottom line is the balance the gain by reducing the number of physical servers versus the incremental cost of administrating more complex servers.

Too close to eyeball for me.

(Jan, not everyone who disagrees with you has a hidden nefarious agenda.)

Jan Klincewicz wrote:
         It would be possible to draw such a biased conclusion by taking the AGGREGATE overhead of say, 20 Linux VMs running on Xen each having 2% overhead compared to bare-metal servers.  Does that mean running in that fashion is equal to purchasing, installing, maintaining and paying maintenance on 19 PHYSICAL servers ?  I think not.

          Whenever I see arguments with extreme data (in either direction) I think it important to see if the author has an agenda of any sort (ie. a product which mitigates the downside.) 

          Certainly, to get the most out of virtualization,shared storage is necessary (for high-availability, failover, live-migration etc.) but it is not mandatory.  Additionally, most data centers I have encountered in the past decade already OWN some SAN or NAS device and are merely leveraging what they already own.  Sharing of infrastructure such as HBAs and high-end offloading NICs by hosting VMs is FAR more efficient than purchasing individual cards for physical servers, and providing port density to accommodate them.  Virtualization, by use of internal virtual switches radically decreases the complexity and failure exposure of physical cabling.

           As Greg states, why is it it be so popular ??  Sure it is complex, and probably more so than constructing a purely physical counterpart, from a software perspective, but I suspect this balances out by requiring fewer (but smarter) bodies to maintain the environment.


    So let's hear Intelicloud's "better way."

***************************************************************************
Unfortunately, I completely agree with Jan.

Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.


On Tue, Jan 12, 2010 at 9:14 PM, Greg Pfister <greg.p...@gmail.com> wrote:
Unfortunately, I completely agree with Jan.

Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.

Greg Pfister
http://perilsofparallel.blogspot.com/

On Jan 12, 3:50 pm, Jan Klincewicz <jan.klincew...@gmail.com> wrote:
> Much of this is just plain wrong.....specifically hypervisor costs vs.
> physical servers.   Many "facts" are incorrect, and the suppositions even
> more so.  Simple arithmetic can bear this out, and in the absence of any

> specific examples, I would tend to discount much of that is proposed here.
>
>
>
>
>
> On Tue, Jan 12, 2010 at 3:47 PM, Khazret Sapenov <sape...@gmail.com> wrote:
> > Here's an interesting perspective on virtualisation pro and cons from Mr.
> > Staimer and Intelicloud guys:
>
> > Virtualization Approach Strengths
> > For an online services provider, the fundamental appeal of the
> > virtualization approach comes from the perception that it significantly
> > reduces both server and storage hardware costs. It can actually reduce some
> > costs, albeit to a much lesser extent than touted in the hype.
> > Operationally, both server and hardware virtualization considerably reduce
> > scheduled downtime for upgrades, moves, changes, data migration and
> > additions. For virtualized servers, it is the hypervisor’s ability to move

> > OS guests around on different physical servers live, online and even in
> > mid-transaction with no application downtime. For virtual storage, the
> > abstraction of the storage image from the actual storage (both SAN and NAS)
> > allows maintenance, changes, moves and, most importantly, data migration to
> > occur non-disruptively online. These capabilities greatly improve SLA
> > (service level agreement) management and simplify data protection disaster
> > recovery procedures.
>
> > Virtualization Approach Gotchas
> > The first weakness to the virtualization approach is that the
> > infrastructure costs are always much higher than expected, usually exceeding
> > the savings in hardware costs. Most server virtualization implementations
> > require networked storage, with the preferred storage being SAN-based
> > storage. SAN storage infrastructure means storage, switches, adapters,
> > cables, interfaces, power, cooling, rack space and floor space. The costs
> > are far from trivial.
> > Then there are the hidden hypervisor infrastructure costs. There is no such
> > thing as a free lunch, and hypervisors are no exception. Hypervisors have
> > overhead. The overhead commonly ranges from 10% to 30% of the server’s

> > resources, depending on the number of guests. That means up to 25% of the
> > physical server’s total cost of ownership produces nothing, and growth

> > requires up to 25% more physical servers.
> > A much stickier issue is the lack of automatic integration between the
> > virtual servers, virtual storage, SAN storage, NAS, infrastructure,
> > networks, power, cooling, etc. Whenever there is a change or growth in one
> > part of the total infrastructure, it most likely requires change or growth
> > in other parts, meaning that there must be extraordinary planning,
> > communication, coordination and cooperation between “human” administrators

> > of applications, servers, storage, storage networks, TCP/IP networks, plant,
> > cables, etc. It may sound difficult, and it is even more difficult than it
> > sounds.
> > Issues and problems, especially about performance, crop up all the time.
> > Because there is no automatic integration within the infrastructure,
> > troubleshooting is a blood-chilling nightmare. Take, for example, the all
> > too common issue of “too much” oversubscription within the infrastructure.

> > Besides requiring unprecedented levels of multi-departmental cooperation,
> > attempting to isolate the root cause of an application slow down or failure
> > provides no easy way or guarantee of determining where the “too much”

> > oversubscription is occurring.
>
> > Too much oversubscription is not just a storage phenomenon; it can and will
> > occur in the TCP/IP network, leading to severe congestion events that
> > decimate users’ application performance. So when an application begins to

> > fall below SLA performance requirements, how will the admin know where to
> > look first? Is the “too-much” oversubscription in the network? Is it in the

> > physical server? Is it in the SAN? Is it in the virtualized storage? Is it
> > in the volume? Is it in the storage system? Is it in all the above? This is
> > a troubling problem with this type of discrete architectural (a.k.a.
> > best-of-breed) approach that has no simple answers.
> > The integration problems get much worse and even more difficult as the
> > virtualization approach scales. It reaches a point when there are just not
> > enough service provider IT professionals or hours in the day to manage the
> > ongoing integration issues. It is like squeezing a balloon – fix or squeeze

> > something here, and it bulges out there.
> > What becomes all too apparent is that the previously discussed issue of
> > greater than expected capital expenditures is just the tip of the iceberg.
> > The operating expenditures in time, maintenance and human assets turn out to
> > be far beyond all expectations. It’s further exacerbated by power and cooling

> > requirements of discrete systems not designed to cooperate in optimizing
> > energy consumption.
> > Service providers with either of these approaches attempt to manage their
> > problems by limiting them to what has become known as the traditional silo
> > model. The silo model limits the size or scale of each silo so that any
> > individual silo does not become unmanageable. The problem with the
> > traditional silo model is that contrary to conventional wisdom, adding silos
> > does not merely increase management requirements linearly. It actually
> > increases management requirements exponentially. This is because each
> > additional silo requires some level of load balancing for application
> > access, data, networks and/or data protection between silos, usually
> > requiring ongoing data migration. As the numbers of silos grows, so does the
> > complexity. The formulas for load balancing and data migration become
> > unwieldy, eventually becoming unreliable and unsustainable.
> > For service providers, grossly underestimating cost and complexity will often
> > mean the difference between profit and loss. There has to be a better way.
>
> > Source:http://www.intelicloud.com/next-gen/white-papers.html
>
> > On Tue, Jan 12, 2010 at 2:14 PM, Stephen Fleece <sfle...@tmforum.org>wrote:
>
> >> Though I don't have research to prove it, I expect a more expensive form
> >> of overhead is human labor without computing virtualization.  I suspect

> >> the computing cycle overhead of virtualization is inexpensive to most
> >> businesses, compared to the incremental human labor costs to provision
> >> and administer operating system software directly against a physical
> >> machines without virtualization.
>
> >> It also enables key benefits in terms of machine image portability and
> >> reuse.
>
> >> I vote that computing virtualization has a big role in the IaaS model of
> >> both public and private cloud computing.
>
> >> Stephen
>


-- 
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376

Greg Pfister

non lue,
13 janv. 2010, 19:52:3813/01/2010
à Cloud Computing
What's reasonable? Close to zero is possible, but not with PCIe 2.x,
due to IO overhead. Mainframes -- with built-in hardware support for
not just CPU and memory, but IO virtualization -- have been running
less than 3% for decades.

But "the industry" means X86 and VMware or Xen or Hyper-V or
something, with PCIe busses, which can't get out of the way on
virtualization.

So, "it depends" primarily on the amount of IO done by the workload.
Industry average? I'd ***guess*** <10%. Total SWAG, though. I'd like
to hear if anybody has real data.

Gonna go to zero if PCIe virtualization takes off, which will be a
while, I'd guess.

Greg Pfister
http://perilsofparallel.blogspot.com/
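
A toy model of the dependence Greg describes may help; the 2% base cost and the per-I/O cost below are assumed numbers purely for illustration, not measurements, and the model itself is only a sketch of why an I/O-heavy guest sees far more overhead than a CPU-bound one:

# Toy model: x86 virtualization overhead as a function of I/O intensity.
# Both constants are assumptions for illustration only; real numbers depend
# entirely on the hypervisor, the hardware, and the workload.

def estimated_overhead(io_ops_per_sec: float,
                       base_cpu_overhead: float = 0.02,      # assumed CPU/memory cost
                       cost_per_io_us: float = 10.0) -> float:  # assumed software cost per I/O
    """Fraction of one core consumed by virtualization: a small fixed cost plus
    a per-I/O software cost (traps, copies) that hardware I/O virtualization
    such as SR-IOV would largely remove."""
    io_cpu_fraction = io_ops_per_sec * cost_per_io_us / 1_000_000
    return base_cpu_overhead + io_cpu_fraction

for iops in (100, 5_000, 50_000):
    print(f"{iops:>6} I/O ops per second -> ~{estimated_overhead(iops):.0%} of a core")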

On Jan 13, 11:02 am, Ray DePena <ray.dep...@gmail.com> wrote:
> Greg,
>
> Looks like you were referring to the statement below (in quotes).  Ok. Fair
> enough.
>
> When I hear generic statements like, "industry wide server utilization is
> 10-30%", we know there are some good IT shops that are highly efficient
> running at 80% and many others that are highly inefficient running at
> 5-10%.
>
> It may or may not average out, but generally speaking for IT shops that have
> not looked at server consolidation, and are not exactly models of
> efficiency, that generic statement of 10-30% resonates with me.
>
> What do you guys think is a reasonable estimate for virtualization overhead
> in the industry?  I'd be interested to know your perspectives.
>
> Best Regards,
>
> Ray DePena, MBA, PMP
> +1.916.941.5558

> Ray.DeP...@gmail.com


> Twitter: @RayDePena
> LinkedIn:http://www.linkedin.com/in/raydepena
>
> "Hypervisors have overhead. The overhead commonly ranges from 10% to 30% of
> the server’s resources, depending on the number of guests. That means up to
> 25% of the physical server’s total cost of ownership produces nothing, and
> growth
> requires up to 25% more physical servers."
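
For what it is worth, a back-of-the-envelope sketch of how an assumed overhead figure translates into extra hosts for a fixed workload; the percentages are simply the ones quoted above, and nothing here is measured:

# How a given hypervisor overhead translates into extra physical servers for a
# fixed amount of useful work. The overhead figures are the ones quoted in the
# thread; they are illustrative, not measured.

def hosts_needed(overhead: float, bare_metal_hosts: int = 100) -> float:
    # If a fraction `overhead` of each host is consumed by the hypervisor,
    # only (1 - overhead) of each host does useful work, so the same workload
    # needs bare_metal_hosts / (1 - overhead) virtualized hosts.
    return bare_metal_hosts / (1.0 - overhead)

for overhead in (0.10, 0.25, 0.30):
    print(f"overhead {overhead:.0%}: ~{hosts_needed(overhead):.1f} hosts "
          f"to do the work of 100 bare-metal hosts")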

Rao Dronamraju

non lue,
13 janv. 2010, 19:56:3813/01/2010
à cloud-c...@googlegroups.com

Seems to be a formidable partnership / strategy

 

http://tinyurl.com/yh76hbr

 

 

Jan Klincewicz

non lue,
13 janv. 2010, 20:43:1913/01/2010
à cloud-c...@googlegroups.com
http://en.community.dell.com/blogs/insideit/archive/2008/10/27/dell-powers-microsoft-azure.aspx

Funny what a couple of months can do ...   I was surprised to see Dell hook this up.  I guess some arms were twisted.

On Wed, Jan 13, 2010 at 7:56 PM, Rao Dronamraju <rao.dro...@sbcglobal.net> wrote:

Seems to be a formidable partnership / strategy

 

http://tinyurl.com/yh76hbr

 

 






--
Cheers,
Jan

Jan Klincewicz

non lue,
13 janv. 2010, 20:51:0013/01/2010
à cloud-c...@googlegroups.com

Ray DePena

non lue,
13 janv. 2010, 22:38:5713/01/2010
à cloud-c...@googlegroups.com
Greg,

Excluding mainframes what do you think that SWAG looks like?






--
Best Regards,

Ray DePena, MBA, PMP
+1.916.941.5558
Ray.D...@gmail.com

Rao Dronamraju

non lue,
13 janv. 2010, 21:57:2813/01/2010
à cloud-c...@googlegroups.com

It will be interesting to see how both HP and Dell are going to balance their Linux & Windows strategy.

Although they have done this all these years, Linux could have a far more advantageous position vis-a-vis Windows 200X (or 20XX?) in the clouds, purely from a pricing and volume position, unless of course MS adopts the GPL :-)

 


From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jan Klincewicz


Sent: Wednesday, January 13, 2010 7:43 PM
To: cloud-c...@googlegroups.com

Sassa

non lue,
15 janv. 2010, 13:07:2415/01/2010
à Cloud Computing
Somehow every time they say "20 Linux VMs" I seem to add in my mind
"(or Java VMs)".

You do need to host more than one JVM on the box if a single JVM can't max
out memory, CPU, disk, etc., even if you deploy all your applications into
one JVM.

On the other hand, all those SPECj results demonstrate that the amount of
administration needed to max out the resources with VMs smaller than the
horse they share the ride on is non-trivial: you start to play with CPU
affinity, assign one network interface to each JVM, ... and what do they do
about sharing disk?

So we've got to embrace virtualization; and yes, to utilize all the
resources, every datacenter becomes "a SPECj" of its own.
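
For readers who have not done this tuning, the CPU-affinity juggling Sassa mentions looks roughly like the following on Linux. This is a minimal sketch: os.sched_setaffinity is a real Python call, but the core assignment is made up, and pinning the current process is only to keep the example self-contained.

import os

# Sketch of the manual partitioning described above: pin a process to a subset
# of cores so co-hosted JVMs stop competing for the same ones (Linux only).
# In practice you would pass each JVM's PID instead of 0 (the current process).

print("affinity before:", sorted(os.sched_getaffinity(0)))
os.sched_setaffinity(0, {0})      # restrict this process to core 0
print("affinity after: ", sorted(os.sched_getaffinity(0)))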


Sassa

On Jan 13, 6:23 pm, Jim Starkey <jstar...@nimbusdb.com> wrote:
> I found the post not only insightful, but one that changed my ideas
> about private clouds.
>
> Sure, compute cycles are compute cycles whether on hard iron or VMs.
> But disk and network traffic is something else again. Yes, there is an
> overhead induced by the hypervisor, but there is also an effect,
> probably a great deal more significant, of contention among the VMs for
> disk and network resources. Combine disk-bound, network-bound, or
> CPU-bound applications on a single server, and everyone is going to suffer.
>
> And yes, there are ways around those problems, but the solutions are
> different from the solutions on hard iron, there's a learning curve to
> pay for, and ultimately more, not less, administration may be required.
> Pre-VM, it was necessary to administer each of the applications and the
> dedicated server. Post-VM, those administration costs are still there,
> but now there are additional administration expenses due to contention
> among the servers, now virtual, as well as managing more sophisticated
> storage.
>
> None of this should be surprising, since the world went to dedicated
> servers to save on administration costs in the first place.
>
> So I guess the bottom line is to balance the gain from reducing the
> number of physical servers against the incremental cost of administering
> more complex servers.
>
> Too close to eyeball for me.
>
> (Jan, not everyone who disagrees with you has a hidden nefarious agenda.)
>
> Jan Klincewicz wrote:
> > It would be possible to draw such a biased conclusion by
> > taking the AGGREGATE overhead of say, 20 Linux VMs running on Xen each
> > having 2% overhead compared to bare-metal servers. Does that mean
> > running in that fashion is equal to purchasing, installing,
> > maintaining and paying maintenance on 19 PHYSICAL servers? I think not.
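
Jan's point here is easy to make concrete with arithmetic; a small sketch, where the 2% overhead and the 20-VM count are just the figures from his example and the linear aggregation of per-VM overhead is an assumption:

# Jan's example as arithmetic: 20 VMs at ~2% overhead each is roughly 0.4
# servers' worth of capacity "lost" to virtualization on the consolidated
# host, versus buying, racking and maintaining 19 additional physical boxes.
# Figures are from the example above; aggregation is assumed to be linear.

vms = 20
per_vm_overhead = 0.02

capacity_lost = vms * per_vm_overhead      # in "servers' worth" of useful work
physical_boxes_avoided = vms - 1           # everything beyond the single host

print(f"capacity lost to virtualization: ~{capacity_lost:.1f} server(s)")
print(f"physical servers avoided:        {physical_boxes_avoided}")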



Greg Pfister

non lue,
15 janv. 2010, 17:49:2315/01/2010
à Cloud Computing
Hm, apparently my syntax got garbled. The SWAG was <10% per VM,
excluding mainframes.

Greg Pfister
http://perilsofparallel.blogspot.com/

On Jan 13, 8:38 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> Greg,
>

> Excluding mainframes what do you think that SWAG looks like?
>

> On Wed, Jan 13, 2010 at 4:52 PM, Greg Pfister <greg.pfis...@gmail.com>wrote:
>
>
>
> > What's reasonable? Close to zero is possible, but not with PCIe 2.x,

> > due to IO overhead. *Mainframes -- with built-in hardware support for


> > not just CPU and memory, but IO virtualization -- have been running

> > less than 3% for decades.*
>
> > *But "the industry" means X86 and VMware or Xen or Hyper-V or


> > something, with PCIe busses, which can't get out of the way on
> > virtualization.
>
> > So, "it depends" primarily on the amount of IO done by the workload.

> > Industry average? I'd ***guess*** <10%. Total SWAG, though.* I'd like


Ray DePena

non lue,
15 janv. 2010, 18:26:2115/01/2010
à cloud-c...@googlegroups.com
I have absolutely no data to back this up, but I would guess industry wide it's higher.  More companies virtualizing their compute resources than virtualization professionals who know how to do it efficiently.  Just a "gut feel".





--
Best Regards,

Ray DePena, MBA, PMP
+1.916.941.5558
Ray.D...@gmail.com

Peglar, Robert

non lue,
15 janv. 2010, 20:19:5715/01/2010
à cloud-c...@googlegroups.com
I concur with Ray. One look at the headhunter lists indicates same. 

Sent via my PDA.  Please forgive any typos, as thumbs are funny things. 

Jan Klincewicz

non lue,
15 janv. 2010, 20:38:4715/01/2010
à cloud-c...@googlegroups.com
I would like to know how everyone is thinking of "overhead" ??  Is it per VM vs. a physical machine, or per host (percentage utilized ?)  Either way, it looks like a pretty good deal to me versus NOT virtualizing, and doing things like it were 1982 ....

There is no rocket science to "tweaking" VMs ... you allocate the necessary resources and Bob's your Uncle.  Yes, you can spread different workloads more efficiently across hosts, but that can be calculated (and done) for you automatically.  CPU sharing is VERY efficient ... for some hypervisors (soon ALL) memory sharing is pretty efficient as well.

Again, I am still waiting for someone to propose alternatives (for bread-and-butter apps) ... I am not getting a lot of takers ....
Cheers,
Jan

Ray DePena

non lue,
15 janv. 2010, 21:55:1515/01/2010
à cloud-c...@googlegroups.com
Jan,

Please don't misunderstand.  You're taking for granted your skill set. 

Virtualization reminds me of the early days of networking routers at an enterprise level.  Excellent router folks did so efficiently with just the right backup design for redundancy, and then there were those that just tried to route every path through every router...

That's the way I view virtualization - good architectural design, planning, virtualization techniques etc. will yield a highly efficient environment.

Others will virtualize department A resources (inefficiently because it's Joe the "computer" guy) and Bob the other "computer" guy will virtualize department B, then comes a manager without expertise in this area from department C that wants in because he/she heard of some savings derived.  All 3 doing it their own way.  Then the division executive starts mandating it across the division... all division execs are calling on IT to "make it happen" with the true virtualization experts sitting somewhere in the IT division not even aware it's happening.

That's a quite different scenario than a well orchestrated top-down plan to virtualize the enterprise's resources.

And no, I have no desire to go back to 1982 technology....

-Ray D.

Ray Nugent

non lue,
15 janv. 2010, 22:11:4515/01/2010
à cloud-c...@googlegroups.com
Jan, I think there is a degree of rocket science involved at this point in the evolution of virtualization, particularly given there are no good underlying network orchestration tools yet. I agree the value of virtualization far outweighs not virtualizing but it's also a new problem and requires a new skill set.

2 cents,

Ray

Sent: Fri, January 15, 2010 5:38:47 PM

Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

Ray Nugent

non lue,
15 janv. 2010, 22:12:4315/01/2010
à cloud-c...@googlegroups.com
What makes you think network management tools have improved since 1982? :-)

Ray


From: Ray DePena <ray.d...@gmail.com>
To: cloud-c...@googlegroups.com
Sent: Fri, January 15, 2010 6:55:15 PM

Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

Jeanne Morain

non lue,
16 janv. 2010, 14:32:5916/01/2010
à cloud-c...@googlegroups.com
All bring up some pretty sound points.  Virtualization, like any industry in its infancy (some areas are more advanced, like Type 2 hypervisors, but others still have room to grow, such as application virtualization, I/O, Type 1 hypervisors - CVP, network, etc.), has more people jumping on the hype cycle who lack the full picture to understand not only how to efficiently deploy and optimize a virtual machine, but also what the overall impact of adding virtualization will be on all the layers of the stack, business continuity, end users, and compliance.

There is not enough "accurate" information on the overall impact of various types of virtualization in the market.  Many customers I have worked with at BMC, VMware, and in my current role believe it is just machine virtualization or type 2 hypervisors.  They start to apply what they know of virtualizing servers to other areas like desktop, applications, I/O etc. and hit many stumbling blocks.  In many cases it is because they don't know what they don't know.  Maintaining, patching, etc. a server farm requires a different set of skills and management tools than desktops, networks or other layers of the stack.  It also requires rethinking many other processes within an organization, such as service desk, change control, asset management and license compliance.

Oversimplifying the situation to say that optimizing a VM is all the skill required is a grave mistake that I have seen some customers make, only to have it bite them later.  The calculations in determining optimization, throughput, etc. are all impacted by the actual intent of the virtualization being deployed, usage statistics, etc.  What one would calculate for sizing and planning a VM for a server that is used as an application server would be very different from one that is being utilized for a hosted virtual desktop.  More importantly, the number of staff needed to be trained on all the various aspects - such as troubleshooting a virtual application versus a physical application on a VM, traversing a persistent versus non-persistent desktop, audit control - license tracking/usage, patch updates, remediation, packaging, etc. - will impact everything within the overall lifecycle of the applications.  How they are managed varies based on internal versus external clouds, multi-tenancy, chargeback and other various aspects.

Many new skills have to be obtained both in and outside of the cloud and more accurate information needs to be more readily available - versus continued oversimplification that machine virtualization is running a VM in the data center.  It could also be running a VM on USB stick, Thin Client, Laptop (Type 1 or Type 2 Hypervisor).

Cloud computing can be done without virtualization and virtualization can be done without cloud computing - one is not a pre-requisite for the other - but they do enable each other nicely and both require additional learning, skills, and information to be successful.

Cheers,
Jeanne Morain
www.universalclient.blogspot.com


From: Ray Nugent <rnu...@yahoo.com>
To: cloud-c...@googlegroups.com
Sent: Fri, January 15, 2010 8:11:45 PM

Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
Jan, I think there is a degree of rocket science involved at this point in the evolution of virtualization, particularly given there are no good underlying network orchestration tools yet. I agree the value of virtualization far outweighs not virtualizing but it's also a new problem and requires a new skill set.

2 cents,

Ray

From: Jan Klincewicz <jan.kli...@gmail.com>
To: cloud-c...@googlegroups.com
Sent: Fri, January 15, 2010 5:38:47 PM
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

I would like to know how everyone is thinking of "overhead" ??  Is it per VM vs. a physical machine, or per host (percentage utilized ?)  Either way, it looks like a pretty good deal to me versus NOT virtualizing, and doing things like it were 1982 ....

There is no rocket science to "tweaking" VMs ... you allocate the necessary resources and Bob's your Uncle.  Yes, you can spread different workloads more efficiently across hosts, but that can be calculated (and done) for you automatically.  CPU sharing is VERY efficient ... for some hypervisors (soon ALL) memory sharing is pretty efficient as well.

Again, I am am still waiting for someone to propose alternatives  (for bread-and-butter apps) .. I am not getting a lot of takers ....


Miha Ahronovitz

non lue,
16 janv. 2010, 15:25:2816/01/2010
à cloud-c...@googlegroups.com
"Cloud computing can be done without virtualization and virtualization can be done without cloud computing - one is not a pre-requisite for the other - but they do enable each other nicely and both require additional learning, skills, and information to be successful."

Memorable...

M

Rao Dronamraju

non lue,
16 janv. 2010, 15:50:1416/01/2010
à cloud-c...@googlegroups.com

I am not sure I  agree with folks who think virtualization has shifted the paradigm in terms of skill set and experience necessary to do the job.

 

Considering that virtualization is nothing but a re-invention of what has been done in the last 20 to 30 years....(virtual) desktop, (virtual) server, (virtual) network, (virtual) storage, (virtual) SMP, (virtual) HA, (virtual) FT, (virtual) security...why is it such a big deal for anyone, whether it is systems administrators, network administrators, developers, architects, or technical managers, to transfer their non-virtualization-based fundamental knowledge to a virtualized environment? The hypervisor itself is nothing but what existed as microkernels before (how many remember Mach from Carnegie Mellon in the late 80s and early 90s?) and nothing but 100% OS technology (OK, maybe not 100%, but at least 95%+; the OS did not have virtual networks to deal with). Even live migration is not new; process migration was tried in many environments like TCF and DCE. Even the management is very similar. It is just a matter of time before the SNMP/WBEM/CIM/DMTF/SNIA/IETF standards are enhanced to cover the virtual world. Will the fundamental abstractions change just because of the virtual entities? Not at all. It is the same-o, same-o in a different way.

 

 


From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Ray DePena
Sent: Friday, January 15, 2010 8:55 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

 

Jan,

Jan Klincewicz

non lue,
16 janv. 2010, 17:05:3316/01/2010
à cloud-c...@googlegroups.com
VMware ESX Server 1.0 was released in 2001.  That was about the same year Ian Pratt successfully deployed Xen at Cambridge (though it was not publicly available until 2003.)     With rounding, and averaging, that would make Type I hypervisors a decade old.  I would consider them "mature" products when counted in IT years.  This is not a new technology, and as evidenced by its success (especially VMware's) one which has been readily adopted by organizations worldwide as a standard practice for some time now. 

Server virtualization can certainly be MADE complex, by introducing Load Balancing, High Availability, Disaster Recovery etc. (though even these aspects are an order of magnitude simpler to achieve with VMs than with their physical counterparts.)   There are certainly no fewer skill sets required to do clustering of physical servers than of virtual ones. I've done them both.  The steps may be different, but I really don't think one requires any more sophisticated knowledge than the other.

Fundamentally, server virtualization (and the hypervisors on which it relies) has not significantly changed in the past 10 years.  Certainly CPU support for virtualization has probably outstripped the coders' abilities to take advantage of all the features, and full Paravirtualization of Windows OSs is still a ways off.  But at the end of the day, running n number of virtual machines on a physical host is what this stuff does, and if I can count myself as living proof, it does not take a genius to achieve this in a couple of hours without a manual.  A few taps of the F1 button here and there maybe, but just as a sanity check.

We can argue minutiae about what PERCENTAGE of utilization is achieved, yada yada, but the bottom line is that if you can run 40 VMs on a pretty standard off-the-shelf box in a production environment and keep them up for several years with no downtime, then this is ready for prime time AND it saves a boatload of cash (even after paying for the additional Virt software.) Whether you have tweaked every last MIP is immaterial.

A virtual NIC gets treated just like a physical one (unless you're running Hyper-V in Native mode where PXE is not supported) and a Vdisk formats and partitions and stores the same as a physical disk.  A virtual machine is configured with x amount of RAM and n number of cores just like a physical machine.  As Rao states, these are not new skill sets to learn.
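
To make Jan's point concrete, here is a minimal sketch using the libvirt Python bindings. The XML, names and disk path are illustrative assumptions rather than a recommended configuration, and it presumes a local libvirt/KVM setup; the point is only that RAM, cores, disk and NIC are specified the same way you would spec a physical box.

import libvirt  # libvirt-python bindings; requires a local libvirt daemon

# Minimal, illustrative domain definition: 2 GiB of RAM, 2 vCPUs, one virtio
# disk and one NIC on the default network. All names and paths are made up.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the VM definition
dom.create()                            # power it on
print("started", dom.name())
conn.close()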

Certainly there are more efficient ways to install an OS and Apps (and there have been physical means like Ghost, Altiris, etc. for years.)  But try CLONING a physical server in a rack.  Where cloning a VM takes minutes, just unpacking the boxes, screwing in rails, setting up power distribution units, etc. takes a lot of physical labor that is just not necessary in the virtual world, and that is why these are so popular for Cloud deployments.  To imitate the physical world in software is a lot easier on the back than actually doing things in the physical world.

Obviously, there are other ways (especially in the SaaS world) to run and deliver multiple server apps.  If you can run multiple instances of an app on one OS image, you are absolutely better off for it.   But go ahead and try this with Exchange, or SQL Server or any other standard Windows-oriented app.

@Ray:  I do not understand your statement that there are no good "network orchestration tools" yet.  What exactly IS a network orchestration tool, and are you sure that none exist (even good ones) ? 

P.S.   I will grant that DESKTOP Virtualization is a whole 'nother ballgame (mostly because it is so broadly defined, and there are so many approaches to delivering it).  Also, because Desktops reflect the personalities and emotional needs of human individuals, the technology must accommodate them.  A server doesn't GIVE a crap what its wallpaper looks like, and to the best of my knowledge, they don't play Solitaire when nobody is looking.  That's just my assumption though ..

Rao Dronamraju

non lue,
16 janv. 2010, 18:19:5216/01/2010
à cloud-c...@googlegroups.com

“A server doesn't GIVE a crap what its wallpaper looks like, and to the best of my knowledge, they don't play Solitaire when nobody is looking.  That's just my assumption though ..”

 

(with desktop virtualization) you can also play Solitaire on both Linux and Windows at the same time :-) (even better) without installing it on the client :-)

 

 


Greg Pfister

non lue,
16 janv. 2010, 22:38:2916/01/2010
à Cloud Computing
I have absolutely no data, either. That's the SWAG part.

What I'm thinking of in writing that, though, are all the middle-tier app
engines (10X the DB back end), all running Java or C# or the like, and
therefore all using gobs of compute cycles to do any IO. (Have to move
data out of GC space first.)

I'm assuming that IO inefficiency drowns out the VM/PCIe-based IO
inefficiency. Then Intel or AMD hardware eliminates the intrinsic
CPU / memory inefficiency.

All bets off, though, if (a) you include old hardware that requires
trap&emulate for CPU/memory; (b) bad stuff happens in the hypervisor
scheduler; or (c) there's not enough real memory to back all the VMs
without undue paging. Any of those could cause overhead to skyrocket.

Greg Pfister
http://perilsofparallel.blogspot.com/
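
Greg's case (c) is the one most often hit in practice, and the arithmetic behind it is worth spelling out; a sketch with entirely assumed figures:

# Sketch of case (c) above: if the guests' configured memory exceeds the
# host's physical RAM by more than page sharing / ballooning can reclaim, the
# hypervisor starts paging guest memory and overhead stops being a few
# percent. Every number below is an assumption for illustration.

host_ram_gb = 64
vm_ram_gb = [8] * 10                # ten 8 GB guests = 80 GB configured
expected_sharing = 0.15             # assume ~15% reclaimed by page sharing

configured = sum(vm_ram_gb)
effective_demand = configured * (1 - expected_sharing)

print(f"configured guest RAM: {configured} GB on a {host_ram_gb} GB host")
print(f"estimated demand after sharing: {effective_demand:.0f} GB")
if effective_demand > host_ram_gb:
    print("overcommitted beyond reclaim: expect hypervisor paging and a big overhead spike")
else:
    print("fits in physical RAM: paging unlikely")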

On Jan 15, 4:26 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> I have absolutely no data to back this up, but I would guess industry wide
> it's higher.  More companies virtualizing their compute resources than
> virtualization professionals who know how to do it efficiently.  Just a "gut
> feel".
>

> ...
>
> read more »

scottxu

non lue,
17 janv. 2010, 15:06:4517/01/2010
à Cloud Computing
Mach is still alive, even popular. People may use it without noticing
it.

Within Mac OS X, there is a Mach core, and a BSD kernel above it, with the
Mac OS X environment on top. A sophisticated structure, and a kind of thing
that makes one OS work on top of another OS, similar to virtualization
technology.

VMWare and Xen may go further to provide virtual machines, which
completely encapsulate the environment of a machine.

I haven't used Mac OS X so far, but I read a book about it. There is a good
historical story in it.

Scott

On Jan 16, 12:50 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:


> I am not sure I  agree with folks who think virtualization has shifted the
> paradigm in terms of skill set and experience necessary to do the job.
>
> Considering that virtualization is nothing but re-inventing of what has been
> done in the last 20 to 30 years....(virtual) desktop, (virtual)server,
> (virtual)network, (virtual)storage, (virtual) SMP, (virtual) HA, (virtual)
> FT, (virtual)security...why is it such a big deal for anyone whether it is a
> systems administrators, networks administrators, developers, architects,
> technical managers to transfer their non-virtualization based fundamental
> knowledge to virtualized environment. Hypervisor itself nothing but what
> existed as micro-kernels before (how many remember MACH from Carnagie Mellon
> in the late 80s and early 90s) & nothing but 100% OS technology (OK, may not
> be 100% but atleast 95%+, OS did not have virtual networks to deal with).
> Even live migration is not new, process migartion was tried in many
> environments like TCF and DCE etc. Even the management is very similar. Just
> a matter of time SNMP/WEBM/CIM/.DMTF/SNIA/IETF standards will be enhanced to
> cover the virtual world. Will the fundamental abstractions change just
> because of the virtual entities. Not at all. It is the sameO, sameO in a
> different way.
>

Sassa

non lue,
18 janv. 2010, 15:26:4018/01/2010
à Cloud Computing
Do you know what would be the point of running a VM on a VM? Is one
layer of virtualization not enough?

(i.e. run JVM on a host OS, NOT JVM on a host VM on a host OS/
hypervisor)


Sassa

On Jan 17, 3:38 am, Greg Pfister <greg.pfis...@gmail.com> wrote:
> I have absolutely no data, either. That's the SWAG part.
>
> What I'm thinking of writing it, though, are all the middle-tier app
> engines (10X the DB back end), all running Java or C# or the like,
> therefore all using gobs of compute cycles to do any IO. (Have to move
> data out of GC space first.)
>
> I'm assuming that IO inefficiency drowns out the VM/PCIe-based IO
> inefficiency. Then Intel or AMD hardware eliminates the intrinsic
> CPU / memory inefficiency.
>
> All bets off, though, if (a) you include old hardware that requires
> trap&emulate for CPU/memory; (b) bad stuff happens in the hypervisor
> scheduler; or (c) there's not enough real memory to back all the VMs
> without undue paging. Any of those could cause overhead to skyrocket.
>

> Greg Pfister
> http://perilsofparallel.blogspot.com/

Rao Dronamraju

non lue,
18 janv. 2010, 16:15:2718/01/2010
à cloud-c...@googlegroups.com

I think we are mixing up JVM and (OS/Hypervisor based) VMs as if they are
the same just because they are called virtual machines.

A JVM is primarily an interpreter of Java language/byte codes. Yes it does
create a sandbox to do this job, but it is a misnomer to call a JVM a VM in
the traditional VM sense. The reason a JVM is called a VM is because it
abstracts the underlying processor architecture - primarily instruction set
- in the context of an interpreter/compiler but not in the context of an OS.
A JVM does not do any OS-specific functions, primarily process management,
virtual memory management, file systems management, hardware abstraction and
I/O, networking etc.

Whereas a traditional VM does all the above but does NOT play the role of an
interpreter like the JVM, although some VMs like QEMU do emulation.

So when we are talking about JVM and VM they are two different beasts
entirely. So when you run Java applications in a VM, it is perfectly OK and
needed/required to run a VM (JVM) within a VM (traditional VM).


-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Sassa
Sent: Monday, January 18, 2010 2:27 PM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: what role did virtual machine play in Cloud
Computing?

Jeanne Morain

non lue,
18 janv. 2010, 16:21:2218/01/2010
à cloud-c...@googlegroups.com
Rao,

I do agree that fundamentally the protocols are the same for both but there are some things that have had to shift and will continue to need to shift how we manage the overall stack not just from a technologist point of view but also from a business perspective. Similar to how Cellular Technology changed Communications - Virtualization is a paradigm shift at many levels when you consider the following:

1) Systems Management Vendors have had to re-think and retool how they deal with virtual environments for the Data Center.  Everything from Monitoring, Capacity Planning, Discovery, Change, and Configuration Management.  One of the biggest early inhibitors to implementing BSM back between 2002-2007 was virtual sprawl and inability of systems to detect VMs if they were offline.  Thus efficiency in how patching is done, discovery is tracked etc had to be built in with tools like ESX, Run Book Automation, CMDBs, Discovery tools, etc.  Most of these were built out to fit the requirements of Machine Virtualization implemented in the datacenter. 

Question - What happens for Type 1 Hypervisors on a Desktop?  What about traditional tools for Workstation on a Desktop? What about virtual applications that don't show up in the registry for traditional discovery tools to pull?

2) DMTF and other standards all had to shift and be enhanced to address the added requirements for virtual formats.  The current OVF standard, just added in the fall, was driven by the President Winston Bumpus (also an architect in the office of the CTO of VMware) - if there were no paradigm shift, then why would the standards need to be updated?  Additional standards will also need to be taken into consideration - such as looking at the user data file for drift versus the application (because most virtual applications are read-only, user data is where plugins and other significant changes could occur for audit - like viral infections, wallpaper, other components mentioned).

3) Desktops are moving into the data center which requires different skills, processes, and updates.  New technology to enable the hybrid approach (CVP or type 1 hypervisors) will have duplicate applications, data, and information that is being tracked in 2 or more locations possibly.  What does this mean for IT?  A HUGE paradigm shift in the following areas that must be considered to succeed
  • ESX Expertise for their Support Desk for Desktops - Today when a call comes in the IT staff can ask a user for basic information like machine name or have it appear in their systems management directory to identify, remote into the machine and trouble shoot an application issue.  For Virtual Desktops - an additional step is needed - identifying what server the desktop is being manifested from (could be one of many from VMotion, or other broker solutions), determining is it an application issue (Virtual or Physical), Is it a connection issue?, Is it an ESX or other Virtual Server Host issue, Storage issue (capacity, throughput for access), etc.  If it is a VM issue - they will need someone that is authorized to troubleshoot, create, and work with the virtual machine.

  • Virtual Application Expertise - depending on the type of application virtualization used a myriad of different expertise and tools may be required to support it.  Meaning if it is an agent based tool with special packaging - you will need to acquire the skills and potentially relearn what you know about packaging all together.  OR train employees on new tools to discern and traverse virtual environment (OS, agents, etc) as well as the physical registry in the event the issue could be with local machine interaction.

  • Enhanced Discovery and Delivery - Additional changes will need to be implemented to enhance how Systems Management tools both deliver and audit applications to virtual environments.  For persistent implementations - for example - when a typical endpoint comes up, the agent runs and tells it to pull discovery and/or application updates that are set at the primary distribution point.  What happens when all the VMs spin up at a given time and go to check for an update and download (see the jitter sketch after this list)?  What about tools that do not provide the ability to vary their update behavior because they assume a distributed environment?  Or those that look at network detection and throttle accordingly, detect full capacity, and open up the connection full throttle?  Now multiply that by 40 or more.  Will offline patching be the cure?  Maybe - but what about those that need to be sent via ECO during blackout hours?  These and many more questions have cropped up with the early implementations and cause delays in large-scale production deployments. 

  • Advanced mechanism for Audit and Control to adhere to license compliance- most customers that are implementing some type of architecture around Virtual Desktops or a hybrid approach with Type 1 Hypervisors will need a way to 1)Audit that the "approved version is what is deployed", 2) that the user only has the number of licenses approved through the change control system in accordance with company policy, 3)Their back end systems are not double counting licenses for systems that are deployed via machine distribution versus user distribution (avoid paying for multiple licenses for a single user).  That Configuration Items in the CMDB have proper mapping to the various environments the user has Type 1 Hypervisor, Data center VM, Physical, etc.

  • Other process shifts will need to occur as well such as understanding the implications to things like test, production, patching (ESX versus Guest Machines), How they correlate with new regulations (EMR, PCI, CyberSecurity Act) and what are the hardening guidelines based on the implementation (not just with the Server host but are there any with guest hosts that check out a Type 1 Hypervisor on an Employee Owned PC), Impact on Application Virtualization to the normal packaging process - meaning single application is used across multiple environments - this enables finally shifting from Machine Based provisioning (which 90% of Companies do for Desktops) to user based provisioning.  A user based provisioning approach can significantly reduce license cost or in the case of shared environments could increase them.  More analysis would need to be reviewed and process changes for packaging updated accordingly. 
  • User Access Policies Enhanced - What are the policies around this?  Meaning where some companies force users to leverage company owned PCs, VPNs, etc will these requirements be obsoleted because nothing is installed?  The Desktop Environment is pristine and separated?  Or would there need to be more restrictions placed on the "personal environment" to adhere to CyberSecurity policies such as going after companies of Denial of Service Attacks?  Who is responsible for maintaining the "personal environment"?
There are many other considerations that will impact ITIL Implementations, Audit, and basic business continuity as these things roll out, government oversight (understanding of virtual environments by Audit), and mobility continues to increase. 
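
One of the questions above, the herd of freshly booted VMs all checking for updates at the same moment, has a generic mitigation worth sketching: randomized check-in jitter. This is an illustration only, not a feature of any particular systems-management product, and the intervals are made up.

import random

# Give each guest a randomized offset (jitter) on its check-in time so a pool
# of freshly booted clones does not hit the distribution point simultaneously.
# Intervals are arbitrary example values.

BASE_INTERVAL_S = 3600      # nominal check-in interval
MAX_JITTER_S = 900          # spread check-ins over an extra 15 minutes

def next_check_in_delay() -> float:
    return BASE_INTERVAL_S + random.uniform(0, MAX_JITTER_S)

# Ten cloned desktops booting at the same moment would check in at ten
# different times instead of all at once:
for vm in range(10):
    print(f"vm-{vm:02d} first check-in in {next_check_in_delay():.0f} s")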

Cheers,
Jeanne
www.universalclient.blogspot.com

On Sat, Jan 16, 2010 at 3:50 PM, Rao Dronamraju <rao.dro...@sbcglobal.net> wrote:

I am not sure I  agree with folks who think virtualization has shifted the paradigm in terms of skill set and experience necessary to do the job.

 

Considering that virtualization is nothing but re-inventing of what has been done in the last 20 to 30 years....(virtual) desktop, (virtual)server, (virtual)network, (virtual)storage, (virtual) SMP, (virtual) HA, (virtual) FT, (virtual)security...why is it such a big deal for anyone whether it is a systems administrators, networks administrators, developers, architects, technical managers to transfer their non-virtualization based fundamental knowledge to virtualized environment. Hypervisor itself nothing but what existed as micro-kernels before (how many remember MACH from Carnagie Mellon in the late 80s and early 90s) & nothing but 100% OS technology (OK, may not be 100% but atleast 95%+, OS did not have virtual networks to deal with). Even live migration is not new, process migartion was tried in many environments like TCF and DCE etc. Even the management is very similar. Just a matter of time SNMP/WEBM/CIM/.DMTF/SNIA/IETF standards will be enhanced to cover the virtual world. Will the fundamental abstractions change just because of the virtual entities. Not at all. It is the sameO, sameO in a different way.

 

 


scottxu

non lue,
18 janv. 2010, 16:40:2218/01/2010
à Cloud Computing
JVMs are VMs which run bytecode. The instruction set and logical
architecture of JVMs may be very different from the underlying
physical CPUs. But JVMs are still VMs, just not the type of VMs in the
sense of VMware or Xen products: para-virtualization or pure
virtualization.

Not sure if there are some architectures which run Java bytecode
directly. If yes, JVMs on those architectures would be VMs in the
same sense as VMware or Xen products: para-virtualization or pure
virtualization.

Scott

On Jan 18, 1:15 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:

> ...
>
> read more »

Jeff Darcy

non lue,
18 janv. 2010, 17:07:3218/01/2010
à cloud-c...@googlegroups.com
On 01/18/2010 04:40 PM, scottxu wrote:
> Not sure if there are some architectures which run Java bytecode
> directly. If yes, JVMs on these architecture would be the VMs with the
> same sense as VMware or Xen products: para-virtualization or pure-
> virtualization.

The only vendor I know of in this space is Azul Systems, and I don't
think virtualization as the term is discussed here really figures much
into what they do. In a way it's the opposite - aggregating many
resources to solve a single problem instead of splitting a resource up
to solve multiple problems. I certainly wouldn't compare them to VMware
or Xen.

scottxu

non lue,
18 janv. 2010, 18:59:5518/01/2010
à Cloud Computing
I saw people mention Azul here before. I just wasn't sure if they run
Java bytecode directly, so I asked this question. Thank you for the
answer.

I saw in some book that people even talk of aggregating
several physical units into one logical unit as a virtual machine. By
virtual, they mean something logical, not identical to the physical one.
Some may reserve the phrase for slicing one physical machine into
many logical machines.

Scott

Greg Pfister

non lue,
18 janv. 2010, 21:46:1218/01/2010
à Cloud Computing
On Jan 18, 1:26 pm, Sassa <sassa...@gmail.com> wrote:
> Do you know what would be the point of running a VM on a VM? Is one
> layer of virtualization not enough?
>
> (i.e. run JVM on a host OS, NOT JVM on a host VM on a host OS/
> hypervisor)

As Rao points out below, a JVM is a different level/kind of
virtualization than providing a virtual "physical" machine.

That said:

1) Running a virtual physical machine inside another virtual physical
machine is useful for debugging the hypervisor itself. An old friend
of mine who developed VM/370 did that regularly. He used to say that
you actually have to do three levels to get everything right; once you
have 3, you can do any number. (No, I never quite understood why, but
it had to do with paging & virtual memory.)

2) Many virtual physical machines run a JVM - think of consolidating a
bunch of middle-tier application systems all written in Java. (Or C#,
or whatever.) Why not just run multiple JVMs on one OS? Different apps
on different JVMs may require different OS tuning that the OS isn't
able to isolate to just its applications.
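
A concrete example of Greg's second point: many OS tunables exist once per kernel, not once per application, so two JVMs that want different settings can only both be satisfied if each gets its own guest kernel. A small sketch follows; the application names and desired values are made up, while /proc/sys/vm/swappiness is a real Linux knob.

# There is one vm.swappiness value for the whole OS image, so two applications
# that want different settings cannot both be satisfied on a shared OS, but
# can be in two guests with their own kernels. Names and values are illustrative.

def read_swappiness() -> int:
    with open("/proc/sys/vm/swappiness") as f:   # Linux only
        return int(f.read().strip())

desired = {"batch-analytics-jvm": 60, "low-latency-jvm": 1}
current = read_swappiness()

for app, want in desired.items():
    status = "ok" if want == current else "conflicts with the shared kernel setting"
    print(f"{app}: wants vm.swappiness={want}, kernel has {current} -> {status}")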

3) There have been efforts to do the opposite: Run the JVM directly on
the hardware, hoping to reap efficiency benefits by eliminating a
level of (sort of) simulation. I know there was one in IBM Research,
and maybe there are others. Not sure what Azul does, for example;
probably they run the JVM on the iron.

Greg Pfister
http://perilsofparallel.blogspot.com/
http://randomgorp.blogspot.com/


Ignacio Martin Llorente

non lue,
19 janv. 2010, 03:03:0119/01/2010
à cloud-c...@googlegroups.com
Hi

This is a good real life use case on the role of virtualization in cloud computing.

http://www.computing.co.uk/computing/analysis/2256208/cern-takes-virtual-server-turn

Cheers

Jan Klincewicz

non lue,
19 janv. 2010, 10:39:1619/01/2010
à cloud-c...@googlegroups.com
I think Desktop Virtualization certainly represents a true paradigm shift, and with that, provides an opportunity to throw out a lot of old thinking that was necessary to support physical end-devices. 

The homogeneity of "virtual machines" certainly eliminates the horrendous issues of driver management which have plagued organizations since even the DOS days.  Even organizations with a single-vendor strategy face the fact that hardware is rarely standardized 100%, and that lifecycles are shorter and change with less notice all the time.

As Jeanne states, its adoption WILL shift expertise from the skill sets required to support multiple devices to a more centralized one where Virtual Server expertise will absolutely be necessary.   However, in ADDITION to the skill sets needed to maintain a highly available farm of VM hosts, the personalization required by Client computing will still necessitate a thorough knowledge of Active Directory, Group Policies, etc.  Unless we intend to go back to the days of dumb terminals (highly unlikely end-users will accept THAT) it will be necessary for IT to make the desktop and server teams share knowledge and JOINTLY design DV architectures which will accommodate the needs of disparate user communities while maintaining as much standardization and simplification as possible.

The current crop of solutions (and I will grant DESKTOP Virt is in its infancy, compared to the relative maturity of SERVER versions) leverages a lot of the tools and techniques learned in traditional client management, and in fact many can be leveraged across both physical and virtual clients.

I do disagree that there is such a huge gap between the characteristics of a virtual client versus a physical counterpart.  Virtual clients still have IP addresses, and though they may or may not be tied to a specific user, share many characteristics (though greatly simplified) with traditional end-user devices.

So yes, a different approach will be necessary to cope with the changes brought about in a paradigm shift, but does this surprise anyone?  I think that many of the products and approaches which have evolved over the years are very easily adapted to this new approach, and may even find larger markets than ever as more focus is being leveled at end-device management than ever before.






--
Cheers,
Jan

Rao Dronamraju

non lue,
19 janv. 2010, 11:44:5819/01/2010
à cloud-c...@googlegroups.com

Jeanne,

 

Good write up.

 

You and I are approaching this issue from two diametrically opposite ends. I am primarily a data center guy and your expertise seems to be desktop.

 

I will answer your questions but let me explain why I think it is not a paradigm shift when we talk about the skills and competencies necessary to do a job in the virtual world.

 

If you look at a data center and its components (applications, middleware, OS, HW, networks, storage, security, management, etc.), what has really changed RADICALLY to call it a paradigm shift?... It is the same components in a CONTAINER called a Virtual Machine. Yes, the container is fluid and moves around a bit, so what? Has the administration of AD in a VM changed?... Have DNS, DHCP, .NET, Java, Linux, Windows, the entire network, the storage components, security and management changed FUNDAMENTALLY?... NO!!! I think the most affected components are security and management. Even here, a firewall works the same way, and so does the IDS; management of a VM is no different from a physical machine. Have they invented new protocols to manage virtual machines, or are they being managed with the same SNMP, WBEM, etc.?... A Virtual Machine is a CONTAINER! But what is CONTAINED, which makes up 90%+ of the data center components, is the SAME. So why would administering, developing, deploying and operating be so different?...

 

For instance, take the job of a network administrator in a data center. Have the network topologies changed radically?... NO! Have the network interfaces, switches, protocols, routers, links, software/algorithms, the technology itself (layer 2, layer 3, etc.), any of it changed because of virtualization?... NO! If you have configured a HW network interface, is it radically different from configuring a software network interface in the VM/hypervisor?... Has the VLAN concept changed in virtual networks?... NO. Same with any software entity that is basically a representation of a HW entity, whether it is a bridge, switch, router, etc. So how has the network admin's job changed?... In fact, the virtual networks are a subset of physical networks with a lot fewer capabilities, so the job is even easier. OK, it is a little harder because now you have to deal with the physical networks plus the virtual networks. But it is the application of the same principles in software instead of HW.

 

Same with systems admins: they have to do backups, configuration, administration, etc. the same way for a virtual machine as they have done for physical machines. User account administration, security administration, NFS, Samba, DNS, DHCP, etc. are all the same. Yes, their job has some more work added to it: now they have to administer a bunch of hypervisors in addition to physical machines and VMs. So I do not see the job functions themselves changing radically enough to call it a paradigm shift. Hypervisor administration and migration may be one major change/addition, but not a paradigm shift!

 

 “1) Systems Management Vendors have had to re-think and retool how they deal with virtual environments for the Data Center.  Everything from Monitoring, Capacity Planning, Discovery, Change, and Configuration Management.  One of the biggest early inhibitors to implementing BSM back between 2002-2007 was virtual sprawl and inability of systems to detect VMs if they were offline.  Thus efficiency in how patching is done, discovery is tracked etc had to be built in with tools like ESX, Run Book Automation, CMDBs, Discovery tools, etc.  Most of these were built out to fit the requirements of Machine Virtualization implemented in the datacenter.”

 

Sure, you will have to make some changes/modifications when you have created an entire infrastructure in software. But my point is: have capacity planning, monitoring, discovery, change management, etc. changed in a FUNDAMENTAL way in the virtual world?... No! In systems management, if you discover a physical device by its IP address, you discover a VM by its IP address too. If you keep track of the availability of a physical device with a periodic ping, you also ping a VM periodically. Similarly, capacity planning algorithms have not changed in a FUNDAMENTAL way. Yes, they have to take virtual sprawl into consideration, which is a numbers and scalability issue. So yes, the shift is in the sheer numbers you deal with and the associated scalability of the solution. But monitoring, capacity planning, discovery, configuration management, etc. have not changed FUNDAMENTALLY. For change management, do folks continue to use a CMDB?... Yes. You will have a larger number of CIs in the CMDB, more relationships, etc., but the very architecture of the CMDB has not been changed radically for the virtual entities.
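(To make that concrete: a minimal, hypothetical Java sketch of an availability check. The hostnames are made up, and the call is exactly the same whether the target is a physical box or a VM.)

    import java.net.InetAddress;

    // Hypothetical availability poller: the reachability check is identical
    // whether the address belongs to a physical server or a VM.
    public class PingMonitor {
        public static void main(String[] args) throws Exception {
            String[] hosts = { "app01.example.com", "app01-vm.example.com" }; // made-up names
            for (String host : hosts) {
                boolean up = InetAddress.getByName(host).isReachable(2000);   // 2 s timeout
                System.out.println(host + " -> " + (up ? "UP" : "DOWN"));
            }
        }
    }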

 

“Question - What happens for Type 1 Hypervisors on a Desktop?  What about traditional tools for Workstation on a Desktop? What about virtual applications that don't show up in the registry for traditional discovery tools to pull?”

 

I am not sure what you mean by "What happens for Type 1 Hypervisors on a Desktop?"... Workstation on a desktop?... Are you talking about VMware Workstation?... Virtual applications?... You are talking about desktop virtualization, is it not?... Again, I do not take desktop virtualization seriously :-) ... just kidding!

 
2) DMTF and other standards all had to shift and be enhanced to address added requirements for virtual formats.  The current OVF standard just added in the Fall - was driven by the President Winston Bumpus (also architect in office of CTO of VMware) - if there were no paradigm shift - then why would the standards need to be updated?  Additional standards will also need to be taken into consideration - such as looking at the User Data file for drift versus application (because most virtual applications are read only - user data is where plugins, and other significant changes could occur for audit - like viral infections, wall paper, other components mentioned).

Yes, I have met Winston before. He is a very nice guy. Before he moved to VMware he was at Dell, and he has been working with DMTF and OASIS standards for a long time. Since you brought up the topic of systems management and standards: I was an architect on a highly successful systems management product, HP Insight Manager, and have worked with DMTF standards and the folks there. Just because the OVF standard is being worked on does not mean the paradigm has shifted. Anytime new things evolve, you have to come up with new standards or extend existing ones. OVF is just a representation of VMs for portability, packaging and distribution... just a standardization of how a VM is represented.

 

“Desktops are moving into the data center which requires different skills, processes, and updates”

 

I do not know what you mean by desktops moving into data centers?... Are all the employees of a company who use desktops and laptops, say for accessing their email and ERP applications, now going to be relocated inside the data center perimeter, and so there is a paradigm shift?...

“ESX Expertise for their Support Desk for Desktops - Today when a call comes in the IT staff can ask a user for basic information like machine name or have it appear in their systems management directory to identify, remote into the machine and trouble shoot an application issue.  For Virtual Desktops - an additional step is needed - identifying what server the desktop is being manifested from (could be one of many from VMotion, or other broker solutions), determining is it an application issue (Virtual or Physical), Is it a connection issue?, Is it an ESX or other Virtual Server Host issue, Storage issue (capacity, throughput for access), etc.  If it is a VM issue - they will need someone that is authorized to troubleshoot, create, and work with the virtual machine.”

As you said yourself, "an additional step is needed". An additional step is NOT a paradigm shift!

Enhanced Discovery and Delivery - Additional changes will need to be implemented to enhanced how Systems Management tools both deliver and audit applications to virtual environments.  For persistent implementation - for example - when a typical endpoint comes up the agent runs and tells it to pull discovery and/or application updates that are set at the primary distribution point.  What happens when all the VMs spin up at a given time and go to check for an update and download?  What about tools that do not provide the ability to vary their update functionality because they assume a distributed environment?  Or those that look for network detection and throttle accordingly that detect full capacity and open up the connection full throttle?  Now multiply that by 40 or more?  Will offline patching be the cure?  Maybe - but what about those that need to be sent via ECO during blackout hours?  These and many more questions have cropped up with the early implementations and cause delay in large scale production deployments. 

Again, you said it yourself: "Additional changes will be needed...". Additional changes are by no means a paradigm shift!

 

Now you are bringing up scalability issues. Yes, with virtualization and VM sprawl, as I mentioned before, there will be some scalability issues, but none of paradigm-shift magnitude. How many VMs do you expect a desktop/laptop user to have?... Hundreds?... Probably a couple on each. And how many of them pull updates simultaneously, all at the same time? Most times a few at a time. Sometimes, like email being checked at 8:00am by all employees at once, updates can be pulled simultaneously. So this is a scalability problem, and there are many ways scalability is already addressed, unless it is of internet/web scale, in which case it is a paradigm shift. But most SMBs and enterprises do not have internet/web-scale systems unless they are hybrid (consumer/enterprise) environments like, say, amazon.com, not employee/enterprise environments.
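(One common way that 8:00am thundering-herd concern is already handled, sketched in Java under assumed names; pullUpdates() is a placeholder, not any real agent API: each client adds a random offset before its first poll of the distribution point, then polls on a fixed interval.)

    import java.util.Random;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch: stagger update checks with random jitter so a rack of VMs that
    // all boot at 8:00am does not hit the distribution point at the same instant.
    public class JitteredUpdateCheck {
        private static void pullUpdates() {
            System.out.println("checking distribution point for updates..."); // placeholder
        }

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            long jitterSeconds = new Random().nextInt(15 * 60);     // spread first check over 15 min
            scheduler.scheduleAtFixedRate(JitteredUpdateCheck::pullUpdates,
                    jitterSeconds, 60 * 60, TimeUnit.SECONDS);      // then hourly
        }
    }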

 

Even scalability has been blown out of proportion unless you are dealing with an internet/web-scale application. If you have a data center with 1000 machines and have consolidated 10:1, you end up with 100 HW systems and 1000 VMs, so instead of managing 1000 HW entities you are now managing 1100 entities (1000 VMs + 100 HW). Your management solution was managing 1000 systems before anyway, so how big a paradigm shift in scaling is it to manage 1100 systems?...

 

So in summary, I do not think it is a paradigm shift. I think it is just a re-invention of the HW wheel in SW. So if you have worked with the wheel before, it should be very similar in the virtual world too.

 


Miha Ahronovitz

non lue,
19 janv. 2010, 12:09:4819/01/2010
à cloud-c...@googlegroups.com
The CERN case is impressive. V12n (virtualization) on a grid that
"......has 5,000 physical systems each with 16 cores, which can potentially support 80,000 virtual machines (VMs). But we need something to help us manage and understand what is going on in that virtual environment...."

...without performance degradation except storage... Wow

That "something to help" is Platform ISF, plus LSF, and this is still not enough; they are thinking of using the open source OpenNebula. Two questions:

1. Is the usage of OpenNebula in addition to, or instead of, the Platform commercial products?
2. How can anyone else manage this complexity in a commercial environment?

Miha


From: Ignacio Martin Llorente <llor...@dacya.ucm.es>
To: cloud-c...@googlegroups.com
Sent: Tue, January 19, 2010 12:03:01 AM

Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?

Rao Dronamraju

non lue,
19 janv. 2010, 12:43:1719/01/2010
à cloud-c...@googlegroups.com

“But we need something to help us manage and understand what is going on in that virtual environment...."”

 

There aren’t many management solutions that scale to 80,000 entities with a single instance.

 

OpenNMS claims that an installation somewhere in Europe (Switzerland??) manages 50,000 devices.

 

But OpenNMS folks are physical world management folks.

 

Can OpenNebula manage 80,000 VMs?... I think this question is wide OPEN!! :-) Is it not?...

 

Actually, managing a VM is a lot easier than managing an OS/HW. The number of operations/objects supported by libvirt is quite limited at this time.
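(For illustration only, a rough sketch against libvirt's Java bindings (org.libvirt); exact method names can differ between binding versions, but it shows how small the management surface of a VM really is: enumerate, inspect, start, stop.)

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.DomainInfo;

    // Illustrative only: list the running domains on a local hypervisor and
    // print a few facts about each. Binding APIs may vary by version.
    public class VmInventory {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system", true);   // read-only connection
            for (int id : conn.listDomains()) {                   // IDs of running domains
                Domain dom = conn.domainLookupByID(id);
                DomainInfo info = dom.getInfo();
                System.out.println(dom.getName() + "  vcpus=" + info.nrVirtCpu
                        + "  mem=" + info.memory + " KiB  state=" + info.state);
            }
            conn.close();
        }
    }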


Sassa

non lue,
19 janv. 2010, 16:23:0419/01/2010
à Cloud Computing
On Jan 19, 2:46 am, Greg Pfister <greg.pfis...@gmail.com> wrote:
> On Jan 18, 1:26 pm, Sassa <sassa...@gmail.com> wrote:
>
> > Do you know what would be the point of running a VM on a VM? Is one
> > layer of virtualization not enough?
>
> > (i.e. run JVM on a host OS, NOT JVM on a host VM on a host OS/
> > hypervisor)
>
> As Rao points out below, a JVM is a different level/kind of
> virtualization than providing a virtual "physical" machine.
>
> That said:
>
> 1) Running a virtual physical machine inside another virtual physical
> machine is useful for debugging the hypervisor itself. An old friend
> of mine who developed VM/370 did that regularly. He used to say that
> you actually have to do three levels to get everything right; once you
> have 3, you can do any number. (No, I never quite understood why, but
> it had to do with paging & virtual memory.)
>
> 2) Many virtual physical machines run a JVM - think of consolidating a
> bunch of middle-tier application systems all written in Java. (Or C#,
> or whatever.) Why not just run multiple JVMs on one OS? Different apps
> on different JVMs may require different OS tuning that the OS isn't
> able to isolate to just its applications.

That's a problem of the OS, just like two guest OSes might want
different hypervisor tuning; not a conceptual problem.

> 3) There have been efforts to do the opposite: Run the JVM directly on
> the hardware, hoping to reap efficiency benefits by eliminating a
> level of (sort of) simulation. I know there was one in IBM Research,
> and maybe there are others. Not sure what Azure does, for example;
> probably they run JVM on the iron.

Yes, there's JRockit VE, too, running on bare metal.


Sassa


> Greg Pfister
> http://perilsofparallel.blogspot.com/
> http://randomgorp.blogspot.com/
>
> Sassa

Sassa

non lue,
19 janv. 2010, 16:48:1619/01/2010
à Cloud Computing
I disagree. The reason a JVM is called a VM is because it abstracts
*everything*, including the underlying processor architecture, for
portability and resource sharing. It doesn't matter what the JVM does and
what it asks the OS to do.

I don't care what the "traditional" VM does, and what it asks the BIOS
to do. I don't care if my program is being run on a physical CPU
shared with other VMs or being interpreted by a VM. All of that
doesn't make the VM less of a VM.

Besides, the JVMs don't interpret much these days. They will often
compile CPU-specific code that calls OS-specific routines to interact
with the outside world.

let's not start the flame about the definition of the cloud, but if
you run the code in the cloud, do you care if it uses a grid
underneath? ;-)


I think there are more similarities than discrepancies, if you look
from the app designer, developer, or deployer point of view.


Sassa


On Jan 18, 9:15 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:


> I think we are mixing up JVM and (OS/Hypervisor based) VMs as if they are
> the same just because they are called virtual machines.
>
> A JVM is primarily an interpreter of Java language/byte codes. Yes it does
> create a sandbox to do this job, but it is a misnomer to call a JVM a VM in
> the traditional VM sense. The reason a JVM is called a VM is because it
> abstracts the underlying processor architecture - primarily instruction set
> - in the context of an interpreter/compiler but not in the context of an OS.
> A JVM does not do any OS specific functions primarily process management,
> virual memory management, file systems management, hardware abstraction and
> I/O, networking etc.
>
> Whereas a traditional VM does all the above but does NOT play the role of an
> interpreter like the JVM, although some VMs like QEMU do emulation.
>
> So when we are talking about JVM and VM they are two different beasts
> entirely. So when you run Java applications in a VM, it is perfectly OK and

> needed/required to run a VM (JVM) within a VM (traditional VM).-----Original Message-----

Sassa

non lue,
19 janv. 2010, 17:05:1919/01/2010
à Cloud Computing
On Jan 18, 9:15 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> ...The reason a JVM is called a VM is because it

> abstracts the underlying processor architecture - primarily instruction set
> - in the context of an interpreter/compiler but not in the context of an OS.
> A JVM does not do any OS specific functions primarily process management,
> virual memory management, file systems management, hardware abstraction and
> I/O, networking etc.

Oh yes, it does:

process management: java.lang.Thread - create new processes, interrupt
processes, assign priorities; suspend and stop are deemed unsafe (as is
any old "kill"), but they are still implemented

virtual memory management: don't care if it is virtual; "new <class
name>" allocates more memory; in fact, it is so virtualized that you
shouldn't worry about managing memory at all. Garbage collection is a
feature that allows memory that can be proven to be no longer used to be
reused, on machines with a finite amount of memory. The GC will even
defragment memory for you. Does malloc or the OS do that?

file systems management: java.io.File, java.io.* - create, list,
permissions, read, write, delete files in a virtualized filesystem
with hierarchical namespace

hardware abstraction: what hardware do you access directly from any
given C program? If you do, your app is extremely platform-specific. I
would say a program normally interacts with the hardware via a driver
API. You can have a hardware- and OS-specific JNI plug-in installed for
the hardware that you need to access from a Java program, too.

networking: yes, it is abstracted, too. The network interfaces can
have no direct relationship to the hardware interfaces - they don't
have to, even if they normally do.

I/O: what is it? :-)


The fact that JVMs are extremely ubiquitous is another proof of just how
well they do the job of abstracting and virtualizing.
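
(A minimal Java sketch touching each abstraction listed above: threads, heap allocation, the filesystem view, and network interfaces, all through portable JVM APIs with nothing OS-specific in the code.)

    import java.io.File;
    import java.net.NetworkInterface;
    import java.util.Collections;

    // Each block below exercises one of the abstractions listed above,
    // using only portable JVM APIs.
    public class JvmAbstractions {
        public static void main(String[] args) throws Exception {
            Thread worker = new Thread(() -> System.out.println("scheduled by the JVM"));
            worker.start();                                    // "process" management
            worker.join();

            byte[] block = new byte[1024 * 1024];              // memory management: GC reclaims it
            System.out.println("allocated " + block.length + " bytes");

            for (File root : File.listRoots())                 // virtualized filesystem namespace
                System.out.println("fs root: " + root);

            for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces()))
                System.out.println("nic: " + nic.getName());   // abstracted network interfaces
        }
    }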


Sassa
